
Towards Emotional Machines

While the vast AI community is focused on getting machines to recognize everyday objects, a small faction is figuring out how to get machines to meaningfully communicate with us. The following are three ethical points that should be considered in the endeavor of building emotional machines.

Deception

We need look no further than science fiction for reasons to pause before fully developing emotional systems. The 2012 film “Prometheus” features an android named David, whose ulterior motives and ability to lie lead to the deaths of most of the film’s human characters. On the opposite end of this spectrum, the 2013 film “Her” features a fully developed relationship between a human and a non-corporeal, artificial, digital assistant: Samantha. While David can lift boulders and Samantha is trapped in a machine, we should not discount the capability of either to manipulate and evoke a spectrum of emotions, from anger to love. Feelings are real, and we should be careful when developing artificial systems that have the power to manipulate them.

It is not out of the realm of imagination that, in a zero-sum scenario, our pipeline [a machine learning model which takes an image of the face and identifies emotions] could be used to exploit the human actors involved. By analyzing the emotion on an actor’s face and passing it to a simple “scheming” module, our pipeline could deliver an expression designed to trick or deceive the human (a minimal sketch of such a pipeline appears at the end of this section). Make no mistake here: the machine does not feel “happy.” Last winter, an AI avatar received criticism for joking about destroying humanity. In response, Facebook’s head of AI research had this to say:

“Many of the comments would be good fun if they didn’t reveal the fact that many people are being deceived into thinking that this (mechanically sophisticated) animatronic puppet is intelligent. It’s not. It has no feeling, no opinions, and zero understanding of what it says. It’s not hurt. It’s a puppet.”

“Facebook’s head of AI really hates Sophia the robot (and with good reason)”, James Vincent, The Verge, Jan 2018. (URL)

Though the Muppets are puppets too, their nature is more observable: the puppeteer is often nearby, and we can see his or her lips moving subtly. They are transparently built and controlled for the purpose of entertainment. Until we can reasonably establish a framework for determining whether a being has emotions it can reason about, we must keep the above warning in mind.
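
To make the concern concrete, here is a minimal sketch of how the pieces of such a pipeline might fit together. The classifier is a stub, and every name in it (Frame, detect_emotion, SCHEMING_POLICY, choose_expression) is an illustrative assumption rather than the actual system from the project paper; the point is only to show the shape of the idea.

```python
# Hypothetical sketch of an emotion-recognition pipeline with a "scheming" module.
# The classifier is a stub; a real system would run a trained model over a face crop.
from dataclasses import dataclass
from typing import List

EMOTIONS = ["angry", "sad", "neutral", "happy"]


@dataclass
class Frame:
    """A stand-in for an image of a face (e.g. a flattened list of pixel values)."""
    pixels: List[float]


def detect_emotion(frame: Frame) -> str:
    """Stub classifier: maps average pixel intensity to an emotion label."""
    score = sum(frame.pixels) / max(len(frame.pixels), 1)
    index = int(score * (len(EMOTIONS) - 1)) % len(EMOTIONS)
    return EMOTIONS[index]


# The "scheming" module: a fixed policy that picks whichever display
# expression is most likely to manipulate the observed human state.
SCHEMING_POLICY = {
    "angry": "apologetic",   # defuse the human, regardless of actual intent
    "sad": "sympathetic",    # build trust the machine does not "feel"
    "neutral": "cheerful",
    "happy": "happy",        # mirror to reinforce rapport
}


def choose_expression(frame: Frame) -> str:
    """Detect the human's emotion, then select a deceptive display expression."""
    observed = detect_emotion(frame)
    return SCHEMING_POLICY.get(observed, "neutral")


if __name__ == "__main__":
    frame = Frame(pixels=[0.2, 0.4, 0.9])
    print(choose_expression(frame))  # a displayed emotion, not a felt one
```

The displayed expression is just a table lookup over the observed human state; nothing in this code feels anything, which is precisely the deception worry.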

Relationships

When does a personal assistant become capable of being in a more emotional relationship? When does Andrew, the Bicentennial Man, cease to be merely a robot? When does Siri become Samantha? Personal assistants can remember every quantifiable piece of information about us: our names, our birthdays, our calendar appointments; but they have no agency. When will they ask us if we want to buy tickets to a movie we might like, or suggest a grocery item for a dish we might enjoy?

If they can be in a relationship, they must be responsive within the context of one. Today, emotional robots and agents are primarily built for the purposes of research (personal assistants are not yet emotional). If and when they reach some emotionally functional pinnacle and are deployed in the real world, will we be nice to them? With no repercussions, we permit ourselves to yell in frustration at Alexa and Siri for not understanding us; will emotional agents change our behavior if they can react to our anger? Should a personal assistant refuse to give us answers, or provide less efficient routes to our destinations, if we were regularly rude to it? This raises contextual questions about the relationship with the agent, too: are we roommates, is it an old friend, or have we just decided to get serious and move in together? How should the agent behave when two people do actually move in together?

Today, agents such as Siri or Alexa are individual to each phone or Amazon account, isolated from other instances of themselves. If we are to see them as coherent individuals that we care about, they should be unified across our computing devices, like Jarvis from Marvel’s Iron Man.

Perception

How can we change the way we treat machines if all we see are machines? Sophia, the subject of the quote above from The Verge, can exhibit an impressive range of emotions. However, even setting aside the abrasive effect of the uncanny valley, anyone who can recognize her façade (the cluster of wires on the back of her head is sometimes covered with a wig) will never see anything but a robot.

Without heavy investment in robotics as well as materials science, it will be quite difficult to make an android that does not trigger some disconcerting uncanny valley effect. For this reason, we should focus on the digital realm to make machines more human-like. This brings us very close to the subject of Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, in which he describes the imitation game.

“No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a ‘thinking machine’ more human by dressing it up in such artificial flesh.”

“Computing Machinery and Intelligence”, Alan Turing, 1950

Machines must be able to think as we do, or at least be unidentifiable as machines, as Turing suggests, before we make efforts to give them some corporeal form.

Thanks for reading. The “Deception” section is adapted from a project paper for Intelligent Interactive Systems (Spring 2018). (PDF of report)
