
How to Program a Human – Part 2: Emotions

Have you ever considered why people have emotions? What are emotions really made of? What triggers them, and what determines their intensity? What causes the same stimulus to be interpreted in two different ways by two different observers?

I’ve put some thought into this during a discussion about the creation of an emotions chip and how one would go about programming emotional responses into a computer or robot.

Consider this hypothetical humanoid example:

John and Matt are friends – they’ve been friends since they were little kids. They like to rough-house, have insult contests to see who can come up with the most insulting quip, and have generally different preferences in women.

While running down the city sidewalk, John takes a physical jab at Matt, Matt reacts with a friendly reciprocal jab. During the horseplay, a bystander gets run into, and also jabbed. The bystander yells out some profanity and insults, and fumes about it the rest of the day.

What psychological forces are in play here? We have John and Matt, who are happy and playing; there is a certain level of trust between them that indicates anything John or Matt does to the other is going to be in jest, in accordance with how each has come to understand the other’s personality. The bystander, however, is not privy to the boys’ intentions or their understanding of each other’s personalities, and considers their horseplay to be rude, disruptive, and inappropriate.

Now, if we were going to mimic this same scenario, but Matt is a robot named M477, we’d need a few things in the computer to interpret and record this data. Here’s the same scenario, but using M477 as John’s childhood toy:

John has owned his robot M477 since he was a child. Over time, M477 has observed and taken note that John likes to rough-house, try to come up with the most creative insults, and likes a particular type of woman.

While running down the city sidewalk, John takes a physical jab at M477. M477 receives the jab and calculates the perceived intended force – taking into consideration the momentum of the fist and the time at that momentum, the subsequent pull-back of the fist before actually making contact and the time taken to pull back, and the velocity they were moving down the sidewalk :: v((m * t′) – (p * t″)) = r – and reciprocates with an equally playful jab back. …
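For illustration, here’s a minimal sketch of that jab-interpretation step in Python. The formula is the one above; the trust check and the “playful” threshold are placeholder assumptions, not part of the scenario.

```python
def perceived_force(v, m, t1, p, t2):
    """r = v * ((m * t1) - (p * t2)) -- perceived intended force of the jab.

    v  = velocity down the sidewalk
    m  = momentum of the fist, t1 = time at that momentum
    p  = pull-back of the fist, t2 = time taken to pull back
    """
    return v * ((m * t1) - (p * t2))


def respond_to_jab(v, m, t1, p, t2, trust_level, playful_threshold=5.0):
    # High trust plus a pulled punch reads as horseplay; the 0.8 and 5.0
    # cutoffs are arbitrary placeholders.
    r = perceived_force(v, m, t1, p, t2)
    if trust_level > 0.8 and r < playful_threshold:
        return "reciprocal playful jab"
    return "defensive response"
```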

What about the Bystander? If you consider the bystander as a robot, BY574ND3R, he is not inherently privy to the trust-factor that M477 and John have between each other, nor does he contain the data that M477 has about John’s personality.

…During the horseplay, BY574ND3R gets run into, and also jabbed. As programmed, when encountering an unexpected event (getting run into with force, and jabbed, without any display of consideration from the offending party), a calculation is performed – taking into account the current system morale, the inherent non-zero initial level of trust between the offending parties and BY574ND3R, the force of the encounter, and the apparent level of disregard for BY574ND3R’s right to exist at a particular point in space-time :: m * t * f * d = a – and the anger emotion triggers, with its intensity gauged by the variables of the equation, along with a decrease in BY574ND3R’s system morale.
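A similarly rough sketch of BY574ND3R’s side, using the a = m * t * f * d calculation as written. The rate at which anger eats into morale is a placeholder.

```python
def unexpected_contact(morale, trust, force, disregard):
    """Anger intensity per the scenario: a = m * t * f * d."""
    anger = morale * trust * force * disregard
    # The contact also lowers system morale; 0.1 is an arbitrary rate.
    new_morale = morale - 0.1 * anger
    return anger, new_morale
```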

That’s just two emotions: camaraderie and anger. In this same way, other emotions – and even other variables – could be affected. For the sake of example, if exclusivity were promised to a particular computer system for a task, and the promiser were discovered to be using other systems to perform the same tasks, the result would be a decreased trust level, a negative factor applied to the system’s overall morale, and a reduced certainty about what its owner’s desires and expectations are.
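In code, that broken-promise case might look something like this – the penalty values are placeholders, and the state fields are just the variables discussed so far.

```python
def on_broken_exclusivity(state, penalty=0.2):
    """Apply the fallout from a broken exclusivity promise."""
    state["trust"] -= penalty                   # decreased trust level
    state["morale"] -= penalty                  # negative factor on overall morale
    state["owner_certainty"] *= (1 - penalty)   # less certain of the owner's desires
    return state


# Example: a system that starts reasonably trusting and confident.
state = {"trust": 0.8, "morale": 0.6, "owner_certainty": 0.9}
state = on_broken_exclusivity(state)
```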

And if such a situation were to arise that caused the system’s overall morale to drop to a negative value, the system’s performance would be hindered, or would stop altogether. Conversely, an increase in system morale would increase the system’s performance. Alternatively, if system morale were low and a neutral-morale gesture (i.e. John and M477’s playful jabs) were to take place, the response from M477 could be neutral or even negative towards John.
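One way to express that morale/performance link – the linear scaling is an assumption; all the scenario says is that performance rises and falls with morale, and stops when morale goes negative.

```python
def performance_factor(morale):
    """Scale task throughput by system morale."""
    if morale < 0:
        return 0.0                        # performance stops altogether
    return min(1.0, 0.5 + 0.5 * morale)   # higher morale, higher output (assumed linear)
```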

It’s my position that even something as complex as emotions could be represented in an artificially intelligent machine with a database of previous experiences with the user, a running variable of user trust, a system morale value, and the ability to measure the world around it indirectly – keeping such measurements in virtual memory to be recalled if needed, or discarded after a period of time.
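Pulling those pieces together, the state I have in mind could be as simple as this sketch – the class and field names are hypothetical, but they map onto the experience database, trust variable, morale, and expiring measurements described above.

```python
import time


class EmotionState:
    def __init__(self):
        self.experiences = []   # database of previous experiences with the user
        self.trust = 0.5        # running variable of user trust
        self.morale = 0.5       # system morale
        self.measurements = []  # (expiry_time, reading) pairs held in virtual memory

    def record_measurement(self, reading, ttl_seconds=3600):
        # Keep indirect measurements of the world, recallable until they expire.
        self.measurements.append((time.time() + ttl_seconds, reading))

    def expire_measurements(self):
        # Discard measurements after their period of time has passed.
        now = time.time()
        self.measurements = [(t, r) for (t, r) in self.measurements if t > now]
```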

Of course, some of the limitations that humans have with regard to trust and experiences with other humans could be overcome in the world of robots. If they were all connected with 4G-or-better communications devices and able to share their trust levels for the humans they have come into contact with (similar to the word-of-mouth reputation some humans share amongst themselves), via something like “RoboEarth”, then perhaps there would be a less intense response from BY574ND3R when he was run into. He would have been told by M477 that this was expected behavior and that nothing malicious was intended by it – it’s just John’s personality. This shared trust factor would then be weighted based on the length of time the reporting robot has spent with the human in the report.
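A sketch of how that shared trust might be combined – the report format (trust level plus hours spent with the human) is my own assumption about what a RoboEarth-style broadcast would carry.

```python
def aggregate_shared_trust(reports):
    """reports: list of (trust_level, hours_with_human) tuples from other robots."""
    total_weight = sum(hours for _, hours in reports)
    if total_weight == 0:
        return None  # no prior reports; fall back to the default initial trust
    # Weight each report by how long the reporting robot has known the human.
    return sum(trust * hours for trust, hours in reports) / total_weight


# Example: M477 has 10,000 hours with John and reports high trust.
print(aggregate_shared_trust([(0.9, 10000), (0.5, 20)]))
```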

The issue of human privacy would come into question, since this broadcast would also include personality traits to accompany the level of trust – otherwise it would be a meaningless number. I’d be inclined to suggest that this is not really an issue, unless the robot knows of things you do that you don’t want others to know about – but then it would be no different from having a human in the room with you while you do those things.

This thought experiment touches on the notion that having emotions is what makes us “human”. If the actions and thoughts we have, and the ability to have them, could be passed on to robots and machines, then I’m inclined to believe that the capacity for emotion is not solely a human characteristic. However, this would obviously create problems for the world’s productivity if machines were as susceptible to fluctuations in output as humans are, based on environmental conditions. Thus, this is not to say that robots should be permitted to have emotions chips – but merely to conjecture that such a thing could exist, given enough storage and experience with the particular user.

By Neo

I am a web programmer, system integrator, and photographer. I have been writing code since high school, when I had only a TI-83 calculator. I enjoy getting different systems to talk to each other, coming up with ways to mimic human processes using technology, and explaining how complicated things work.

Of my many blogs, this one is purely about the technology projects, ideas, and solutions that I have come across in my internet travels. It's also the place for technical updates related to my other sites that are part of The-Spot.Network.
