This publication can be found online at http://pubpub.ito.com/pub/setwrite-2016-03-16-emoting-machines-brief.
expressive black-boxes
in the future, people will see emotions in complex machines. since these future machines will grow out of the (simpler) machines of today, inventors should lay sound foundations for such machine expressiveness. moreover, making machines look like people isn’t the only way to enable this: the effective attribution of emotions hinges more on behaviour than appearance.
hashed-out thoughts, to accompany a project around humans, future machines, black-boxes, interactions and emotions.

intent

in the future, we will live and interact with numerous and complex machines. present research suggests that machines will learn as we do, mimic human emotions, empathise with us, let us feel comfortable confiding in them, and so on.
through my projects, i am communicating how machines will gain our trust without needing to look like us, and how we shall understand complex machines as having emotions. this piece concerns the latter point.
machines will be capable of expressing their (complex) states, and people will see emotions in such machines. since these future machines will grow out of the (simpler) machines of today, inventors should lay sound foundations for such machine expressiveness. moreover, making machines look like people isn’t the only way to enable this: the effective attribution of emotions hinges more on behaviour than appearance.
to communicate this, we intend to create non-simple objects which, without looking like people, are perceived to be emotional.

background

emotions in things, and not just in people

when the computer doesn’t behave as we want it to, we tell others that it is angry at us that day. some motorcycle riders are deeply attached to their bikes: stroking them, giving them names and personalities, and responding to each sound and quirk in the machine affectionately. they can sense the bike getting tired at the end of a journey, or feel like the bike is happy after a tune-up. a candle’s flame may appear to be excited; or seem to be struggling, urging me to shield it from a draft. trees seem to be happy, sad or tired; and a howling wind is clearly expressing anger. at times, i even consider a country’s bureaucratic setup to be a lethargic or spiteful object, and adopt a more calculated approach while dealing with it.
these objects are emotive even though they aren’t consciously trying to express anything to us. emotions are expressed and perceived. since perception completes an emotion, people can perceive emotions in several things: in other people, or in objects that move or make sounds or animate in some way, or (perhaps) even in inanimate objects.

emotions are perceived abstractions

when we observe someone’s expressions (i.e., appearance and behaviour), we can use abstractions—i.e., emotions—to describe what is actually a complex set of internal processes. the quality of sound, heart-rate, movement or direction of gaze can be measured, but the complex set of things that causes these to vary cannot be empirically described (by us, so far at least). when we observe these things together, we can bundle them and understand their source as being ‘emotional’ in a certain way. since emotions are abstractions, we also assign them to other (non-human) things which express themselves like emoting people would. while each person’s understanding of emotions varies with quality of observation, circumstances, past experience, semantics, context, etc., using different emotions to understand different situations is still sufficiently unambiguous and informative. that is why, when someone tells me that joe seems ‘angry’, i get a good idea of how joe must be behaving.
one may argue that emotions lie in the emotional person, i.e., one can say that a person first feels angry, and then behaves like an angry person would, and that it is that which is seen by us to be anger. while such an argument can be contested (i.e., i can argue how a person is an intelligent system with motives and limitations[1], and how this complex machine has certain external constraints preventing it from achieving its goals, and how different processes within the machine function in different ways in this constrained state, and that these processes are expressed and thereafter observed by us, and since we learned the word ‘anger’ to describe the collection of such observations, we now ‘see’ the person as being angry), it is enough to know that we perceive a set of expressions as being an emotion.
if one set of expressions makes people perceive one emotion, then an actor or a robot can make those same expressions and be thought to have that emotion; or, if some machine happens to work in a certain way, it is possible that we see that machine as having an emotion.

a short note on ‘emotions’

i recognise that there is no consensus on what ‘emotion’ is. different theories explain it differently, and some differentiate emotions from moods, feelings, etc. i, however, use ‘emotions’ as a general term for any and all such abstractions.

emotions in non-decomputible machines

we shall perceive emotions in future machines

experiencing a loss of mastery and control over one’s environment can lead to depression, anxiety, and an overall pattern of learned helplessness ... unpredictable and unexpected behavior activates the motivation to understand and explain the behavior ... anthropomorphism may help counteract these consequences by providing a sense of understanding, predictability, and control ... anthropomorphism should be operationalized as the attribution of humanlike mental states—a mind—to nonhuman agents ... unpredictability alone (rather than some other correlated feature of the stimulus) can stimulate anthropomorphism ... anthropomorphizing nonhuman agents seems to satisfy the basic motivation to make sense of an otherwise uncertain environment.
"making sense by making sentient: effectance motivation increases anthropomorphism". journal of personality and social psychology, vol. 99, no. 3 (2010): 410–435. [http://www.careymorewedge.com/papers/SensebySentience.pdf]
in a (future-)world full of complex machines, where people aren’t always in complete control of everything, they will try to make sense of things through anthropomorphisation[2]. this will include attributing emotions to machines.
as our daily lives involve ever more sophisticated computers, we will find that ascribing little thoughts to machines will be increasingly useful in understanding how to get the most good out of them.
john mccarthy. "the little thoughts of thinking machines". psychology today. (1983). [http://www-formal.stanford.edu/jmc/little.pdf]
while one can argue that machines will inherently be emotional, all i want to argue at this point is that future machines will be perceived as having emotions, and that this may be the only way we understand them.
the need to cope with a changing and partly unpredictable world makes it very likely that any intelligent system with multiple motives and limited powers will have emotions.
aaron sloman & monica croucher. "why robots will have emotions". proceedings of the 7th international joint conference on artificial intelligence. (1981).

what kind of machines should we perceive emotions in

simple machines:
  1. they can be programmed by people;
  2. they can communicate their state precisely (through suitable interfaces); and
  3. their state and working can be empirically grasped by people.
we do not need to use emotions to explain how simple objects and machines perform[3]. we may still choose to do so, but using emotions in such cases does not add anything we cannot otherwise know, i.e., it is uninformative.
Anthropomorphism is the ascription of human characteristics to things not human. When is it a good idea to do this? When it says something that cannot as conveniently be said some other way.
john mccarthy. "the little thoughts of thinking machines". psychology today. (1983). [http://www-formal.stanford.edu/jmc/little.pdf]
however: when we are unable to empirically describe how and why something works, we need to look for other ways to describe it, because:
  1. people cannot fully understand, quickly debug or reprogramme such a machine (or cannot even hope to do so).
  2. the machine cannot communicate its state precisely, because such information is either too complex, too voluminous and/or too rapidly changing.
such a machine is like a locked black-box, and may be termed non-decomputible.

moving beyond superficial appearance

the present obsession with exo-anthropomorphism

cinematic depictions of machines-of-the-future are usually humanoid, like cars that speak to people with voices, or robots that have eyes and lips and skin and arms and are essentially human-like in appearance. there is much ongoing research in making robots that are human-like in appearance, with eyes, cheeks and voiced words. i call such attempts at recreating human appearance in machines exo-anthropomorphism, i.e., making things externally (or superficially) human-like.
while there are reasons for and benefits in doing so—after all, machines that people interact with must function in a world full of human-affordances and also be accepted by people, and a human-like form can be useful in both cases—i sense an obsession where exo-anthropomorphic machines are first built, and then their technology re-appropriated for whatever real-world use it fits best into. even in those places where machines mimic human behaviour well, that behaviour is enclosed in an exo-anthropomorphic form. i am against the notion of making every machine that we interact with exo-anthropomorphic.

behaviour versus appearance

people are perfectly capable of anthropomorphising completely non-human-looking objects and assigning emotions to them. philippa mothersill argues that different objects can be seen to have different emotions based on their form, but form is a limited conveyor of emotional information: it offers a limited and fixed range of expressions. on the other hand, heider & simmel’s 1944 study of apparent behavior showed us how movements are much stronger conveyors of emotional information.
we should urgently appreciate that most machines in the future will look nothing like people. inventors today should build the foundations that will grow into expressive systems in future machines. we may not fully understand those machines: but sensing a machine’s emotions—whether it likes us, or is wearing out and will soon be unable to work, or whether it is trying to figure something out but is confused—is something we need to allow ourselves to do.
so, since emotions are so important:
  1. people need to warm up to the notion of emotions in any complex machine, no matter what it looks like; and
  2. inventors need to build systems, today, that will help any machine be more expressive. this may mean inventing new technologies, or even finding new ways to express with existing materials.

behaviours: patterns of movements

basing our discussion on literal black-boxes

we do not know how future machines may be constituted, or even how they may appear, but i’m trying to construct a theory about how they will express themselves (output/reaction). since a black-box is an abstract representation of any opaque object—i.e., one whose composition is not known—that can only be understood in terms of its input (stimulus) and output (reaction), building the theory on a literal black-box has the following advantages:
  1. a black-box can represent anything, including a complex mechanism.
  2. a black-box, clearly, looks nothing like a person.
  3. it may be possible to apply such a theory to anything, by simply replacing the black-box with a machine (or anything else!).
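to make the black-box abstraction concrete, here is a minimal python sketch (all names here are illustrative assumptions, not drawn from the project): the box exposes only a stimulus → reaction interface, while its composition stays hidden from the observer.

```python
class BlackBox:
    """an opaque object: understood only via input (stimulus) -> output (reaction).

    the internal mechanism (_rule and _state here) stands in for any
    composition, simple or complex, that the observer cannot inspect.
    """

    def __init__(self, rule, state=0.0):
        self._rule = rule      # hidden mechanism
        self._state = state    # hidden internal state

    def stimulate(self, stimulus):
        """apply a stimulus; return only the observable reaction."""
        self._state, reaction = self._rule(self._state, stimulus)
        return reaction

# the observer only ever sees stimulus -> reaction pairs:
box = BlackBox(rule=lambda s, x: (s + x, (s + x) % 2))
reactions = [box.stimulate(x) for x in (1, 1, 1)]
```

an observer who logs stimulus/reaction pairs can try to characterise the box, but nothing about `_rule` or `_state` is visible to them; that opacity is the point of the abstraction.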

movements, patterns, behaviour

an object can be understood as a collection of parts, each of which can be described by their qualities (such as: colour, size, texture, form, sound, intensity, softness, beat, etc). we observe these qualities and also how some of them change with time, and assign emotions to the object.
notes:
  1. an object is not a straightforward sum of its parts and qualities. we can perceive a whole to be greater or less than the sum of its parts.
  2. our imagination also lets us perceive emotions in non-existent or inanimate things: but that is beyond the scope of this discussion.

movements are not merely spatial

when using their bodies, performance-artists use ‘movements’ (changes in direction, speed, weight and flow) to produce ‘efforts’ like flicks, glides, etc. this is described in a system called ‘laban efforts’. artists can recreate emotions by using efforts in patterns or combinations.
like dancers, anything with a body—say, a machine—may also express with movements. simple movements can be expressive, like in a stick which rotates about a fixed end. when patterns of movement differ, the stick appears frightened or tired or excited to us. a movement, however, needn’t be a spatial displacement: any time-measured change (Δ) in a set quality (position, colour, amplitude, texture, shape, temperature, etc) is a movement. for example: movement in the intensity of light can also be expressive.
it is convenient to use the symbol Δ to represent movement.
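as a sketch of this definition (python; the function and parameter names are my own, illustrative choices): a movement is just a sampled, time-measured change in any one quality, spatial or not.

```python
import math

def movement(quality_at, duration=2.0, steps=8):
    """sample a movement: a time-measured change (Δ) in one quality.

    quality_at maps time t (seconds) to a quality value, which could be
    a position, a brightness, a temperature, a pitch, and so on.
    """
    return [quality_at(duration * i / (steps - 1)) for i in range(steps)]

# a spatial movement: the angle of a stick rotating about a fixed end
angle = movement(lambda t: 30 * math.sin(2 * math.pi * t))

# a non-spatial movement: light intensity 'breathing' between 0 and 1
intensity = movement(lambda t: 0.5 + 0.5 * math.sin(math.pi * t))
```

both are the same kind of object under this definition: a Δ over time in one quality.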

1D movements

when only one quality in an object ‘moves’, we use terms like 1D Δ, 1D pattern or 1D behaviour to describe such a simple object. 1D behaviours, alone, may or may not be informative enough to be perceived as emotions; and combining Δs in multiple qualities can help reduce ambiguity. for performance artists: simple efforts are not sufficient to let the audience see an emotion clearly; and combining multiple efforts helps them express specific emotions and other complex ideas. a light getting brighter might appear threatening if it moves toward you, and might appear to be helpful if it points toward something else. however, overacting is as much a sign of ineptitude as an inability to express; and movements can also lead to over-expression and information-overload if overused, or be confusing or obfuscatory. in any case, it is important to understand 1D movements first, while framing a theory about movements, patterns and behaviour.

1D movements in a black-box

a black-box has several qualities: its colour, form, size, surface-texture, temperature, etc. the box’s appearance is expressive, to some extent. for example, we perceive a red box differently than a green one, or a box with spikes appears more aggressive than a box with a smooth surface. however, appearance is static and conveys a limited set of expressions. movements allow the box a broader range of expressions.
a black-box can have 1D Δs in size, form, colour or texture, or even temperature or sound or position. when these movements occur in specific patterns and combinations, a black-box may appear to be aggressive or timid, eager or exhausted, etc.
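a hedged sketch of such 1D patterns (python; the pattern shapes, names and parameter values are assumptions for illustration, not findings): one brightness channel, sampled at different speeds and depths, yields visibly different candidate behaviours.

```python
import math

def pattern(freq_hz, depth, duration=3.0, steps=40):
    """sample a 1D Δ: brightness pulsing around 0.5 at a given speed and depth."""
    return [
        0.5 + depth * math.sin(2 * math.pi * freq_hz * duration * i / steps)
        for i in range(steps)
    ]

# the same single quality (brightness), two different 1D behaviours:
eager = pattern(freq_hz=2.0, depth=0.45)  # fast, deep pulsing
tired = pattern(freq_hz=0.3, depth=0.15)  # slow, shallow breathing
```

whether an audience actually reads these as ‘eager’ or ‘tired’ is exactly what the exhibited boxes are meant to test.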

exploring more channels for communication

today, we have simple machines, whose state can be clearly communicated using screens or beeps. but when machines are more complex and numerous, these channels of communication will quickly get overloaded. also, each channel has its limitations: for example, we may not have the presence-of-mind to look at a screen in an emergency, and voices and words cannot cross language barriers. we need to look at other channels of communication—touch, smell, taste, form, texture, etc—to distribute information and prevent overloading. moreover, instead of attaching screens or speakers to every object, we should explore using what it already has to help an object express itself.

what are these ‘machines’?

since this write-up supports a project to create an exhibited artwork, i talk about objects with physical bodies. however: because understanding movements, patterns and behaviours helps us explore several other channels for communication, the idea of an expressive black-box can apply to any system (including computer programmes that may be entirely intangible or even clothing made by fashion designers).

what we’re building

we intend to communicate the notion of ‘expressive black-boxes’ to:
  1. people, so that they learn to see emotions in complex machines.
  2. inventors, so that they invent/explore different ways for objects to express to us.
for this, we should exhibit machines (objects) that:
  1. appear non-simple;
  2. do not look like people (i.e., are non-exo-anthropomorphic);
  3. can demonstrate behaviour; and
  4. are perceived to be emotional.
so we’re building black-boxes—yes, literally—which express themselves using simple movements—usually, 1D. if people can look at the boxes and unambiguously assign different emotions to them, then we will have succeeded in our communication.

References

[1] aaron sloman & monica croucher. "why robots will have emotions". proceedings of the 7th international joint conference on artificial intelligence. (1981).
[2] "making sense by making sentient: effectance motivation increases anthropomorphism". journal of personality and social psychology, vol. 99, no. 3 (2010): 410–435. [http://www.careymorewedge.com/papers/SensebySentience.pdf]
[3] john mccarthy. "the little thoughts of thinking machines". psychology today. (1983). [http://www-formal.stanford.edu/jmc/little.pdf]
comments

rohit sharma (4/10/2016), on "non-simple objects": are you referring to something broader than "complex"?
han sh (4/25/2016): we lack the technical ability to make something that’s truly complex, and we don’t want to make something that the audience can look at and instantly figure out. we want to make something that could pass off as complex to an audience who doesn’t have more than a minute or two to spend with it. hence: non-simple.

rohit sharma (4/10/2016), on "perception completes an emotion": not sure what you meant by this. perhaps the empathy loop? another thing you could mention is that perception is seldom devoid of emotion.

rohit sharma (4/10/2016), on "emotions are expressed and perceived": and felt?
han sh (4/25/2016): expressed and perceived, outward... and internally too. the body is a complex machine, and complex processes manifest in many ways, and we ‘feel’ them. i would like to be able to say that an emotion is basically first a bunch of subconscious responses that the mind perceives... but i don’t really have a clue.

rohit sharma (4/10/2016), on the paragraph beginning "when the computer doesn’t behave as we want it to": i think you’re mixing up the different ways we attribute or perceive emotions from objects, and perhaps you should elucidate these tendencies.

rohit sharma (3/22/2016): i think you might be skipping an argument which you might like to clarify; i think you’re taking the stand that the machines won’t have emotions of their own, or that those emotions would be trivial in the wake of emotions which would be interpreted/assigned onto them by us, particularly when they would look nothing like us?
han sh (4/8/2016): initially, i thought that machines would have emotions, as if emotions were an intrinsic property of the expressive object. but i am not so sure about this now. if a child were to see a machine ‘M’, he might not understand it precisely and hence only get a general idea, i.e., be likely to understand it through an abstraction (like an emotion). if a scientist saw the same machine ‘M’, then she might not need to resort to emotions or such abstractions. i think emotions are only recognised at the moment of perception, and that is why, instead of discussing whether an object, person or complex system can have an emotion, we need to discuss when an observer perceives something to be an emotion.

rohit sharma (3/22/2016), on "hence, assigning emotions to such machines is not greatly useful" (version 8): seems to be a jump. why do you take here that emotions are only useful for "transmitting state/information"? how about the efforts of the car industry or other makers of "simple machines"?
han sh (4/8/2016): could you please share some examples of these ‘efforts’? i’m not sure what you were referring to.

all discussions are licensed under a creative commons attribution 4.0 international license.