Towards Emotional Machines

While the vast AI community is focused on getting machines to recognize everyday objects, there is a small faction figuring out how to get machines to meaningfully communicate with us. The following are three ethical points that should be considered in the endeavor to build emotional machines.

Deception

We need look no further than science fiction for reasons to pause before fully developing emotional systems. The 2012 film “Prometheus” features an android named David, whose ulterior motives and ability to lie lead to the deaths of most of the film’s human characters. On the opposite end of this spectrum, the 2013 film “Her” features a fully developed relationship between a human and a non-corporeal, artificial, digital assistant: Samantha. While David can lift boulders and Samantha is trapped in a machine, we should not discount the capability of either to manipulate and evoke a spectrum of emotions, from anger to love. Feelings are real, and we should be careful in developing artificial systems which have the power to manipulate them.

It is not out of the realm of imagination that, in a zero-sum scenario, our pipeline [a machine learning model which takes an image of a face and identifies emotions] could be used to exploit the human actors involved. By analyzing the emotion on an actor’s face, our pipeline, with a simple “scheming” module (sketched below), could deliver an expression that tricks or deceives the human. Make no mistake here: the machine does not feel “happy.” Last winter, a humanoid robot received criticism for joking about destroying humanity. In response, Facebook’s head of AI research had this to say:

“Many of the comments would be good fun if they didn’t reveal the fact that many people are being deceived into thinking that this (mechanically sophisticated) animatronic puppet is intelligent. It’s not. It has no feeling, no opinions, and zero understanding of what it says. It’s not hurt. It’s a puppet.”

“Facebook’s head of AI really hates Sophia the robot (and with good reason)”, James Vincent, The Verge, Jan 2018. (URL)

Though the Muppets are puppets too, their nature is more observable: the puppeteer is often nearby, and we can see his or her lips moving subtly. They are transparently built and controlled for the purpose of entertainment. Until we can reasonably establish a framework for determining whether a being has emotions it can reason about, we must keep the above warning in mind.
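To make the pipeline above concrete, here is a minimal sketch (in Python) of how such an emotion-driven response loop might be wired together. The classifier, emotion labels, and “scheming” policy below are illustrative assumptions only, not the actual project code:

    # Minimal, hypothetical sketch of the emotion pipeline discussed above.
    # The classifier, labels, and "scheming" policy are illustrative
    # assumptions, not the project's actual code.

    from dataclasses import dataclass

    # Label set a real facial-expression classifier might predict from.
    EMOTIONS = ["anger", "fear", "happiness", "sadness", "surprise", "neutral"]

    @dataclass
    class PerceivedState:
        emotion: str       # label predicted from the face image
        confidence: float  # classifier confidence in [0, 1]

    def classify_emotion(face_image) -> PerceivedState:
        """Stand-in for a trained facial-expression classifier (e.g. a CNN)."""
        # A real pipeline would run the image through a model and return the
        # argmax label with its score; here we return a fixed placeholder.
        return PerceivedState(emotion="sadness", confidence=0.9)

    def choose_expression(state: PerceivedState) -> str:
        """Toy "scheming" module: pick whichever expression is most likely to
        steer the person, regardless of what the machine "feels" (nothing)."""
        if state.confidence < 0.5:
            return "neutral"                 # unsure what the person feels
        if state.emotion in ("sadness", "fear"):
            return "warm_smile"              # appear sympathetic to gain trust
        if state.emotion == "anger":
            return "apologetic_frown"        # defuse, then redirect
        return "mirrored_" + state.emotion   # mirroring builds rapport

    def respond(face_image) -> str:
        """End-to-end loop: perceive an emotion, emit a calculated expression."""
        return choose_expression(classify_emotion(face_image))

Even this toy version makes the concern plain: the expression is chosen to steer the human, and nothing in the loop corresponds to the machine actually feeling anything.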

Relationships

When does a personal assistant become capable of being in a more emotional relationship? When does Andrew, the Bicentennial Man, cease to be merely a robot? When does Siri become Samantha? Personal assistants can remember every quantifiable piece of information about us: our names, our birthdays, our calendar appointments; but they have no agency. When will they ask us if we want to buy tickets to a movie we might like, or recommend a grocery item for a dish we might enjoy?

If they can be in a relationship, they must be responsive within the context of one. Today, emotional robots and agents are built primarily for research (personal assistants are not yet emotional). If and when they reach some emotionally functional pinnacle and are deployed in the real world, will we be nice to them? Without any repercussions, we permit ourselves to yell in frustration at Alexa and Siri for not understanding us; will emotional agents change our behavior if they can react to our anger? Should a personal assistant refuse to give us answers, or provide less efficient routes to our destinations, if we were regularly rude to it? This raises some contextual questions about the relationship with the agent, too: are we roommates, are they an old friend, or have we just decided to get serious and move in together? How should they behave when two people actually do move in together?

Today, agents such as Siri or Alexa exist separately per phone or Amazon account, isolated from other instances of themselves. If we are to see them as coherent individuals that we care about, they should be unified across our computing devices, like Jarvis from Marvel’s Iron Man.

Perception

How can we change the way we treat machines if all we see are machines? Sophia, the subject of the quote above from The Verge, can exhibit an impressive range of emotions. However, even setting aside the abrasive effect of the Uncanny Valley, anyone who can recognize her façade (the cluster of wires on the back of her head is sometimes covered with a wig) will never see anything but a robot.

Without heavy investment in robotics as well as the material sciences, it will be quite difficult to make an android which does not trigger some disconcerting uncanny valley effect. For this reason, we should focus on the digital realm to make machines more human-like. We now come very close to the subject of Alan Turing’s 1950 paper “Computing Machinery and Intelligence,” which introduced the imitation game.

“No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a ‘thinking machine’ more human by dressing it up in such artificial flesh.”

“Computing Machinery and Intelligence”, Alan Turing, 1950

Machines must be able to think as we do, unidentifiably machine, as Turing suggests, before we make any effort to give them some corporeal form.

Thanks for reading. The “Deception” section is adapted from a project paper for Intelligent Interactive Systems (Spring 2018). (PDF of report)

AI Could Be the Social Atomic Bomb of the 21st Century

I’ve seen some impressive AI demos across the web where researchers teach an AI to reproduce human faces or lip sync audio over pre-existing videos. Impressive, and unimaginable several years ago, for sure. Similarly unimaginable today is the way in which social networks and other tools have been used to undermine democratic systems through the propagation of misinformation. I fear that these two things have a shared future, one where fake news generated by AIs is indistinguishable from the real thing.

One of my first classes in graduate school was artificial intelligence. After the course summary, the instructor and the TAs held a bit of a Q&A session. One of the questions that stuck out to me was along the lines of: “People like Elon Musk and Stephen Hawking are giving us really big warnings about AI. Could their predictions be accurate? Should we be worried?” The panel looked at one another for a moment and gave several answers which, in so many words, said “they’re not in the field and don’t know what they’re talking about,” concluding with “there is always an off switch.”

In college, an article published in Wired magazine in 2000 was assigned as reading in my introductory CS class: “Why the Future Doesn’t Need Us” by Bill Joy. In it, Joy stresses that we could be on the verge of creating the social or existential equivalent of the atomic bomb for the 21st century if scientists (computer scientists and bio-scientists in particular) forgo ethics or relax their standards in any way. I make it a point to reread it every couple of years; it rang true when I first read it in 2009, and today that ring is deafening.

NRA slogans are simple, logical-sounding blanket statements, but they do not directly refute the arguments for gun control. A recent one, “the only thing that can stop a bad guy with a gun is a good guy with a gun,” rests on a lot of assumptions, such as that a good guy can never become a bad guy. Heavily restricting guns nationally, as Australia did in 1996, also stops bad guys with guns. An older slogan, which I don’t see very often anymore, is “guns don’t kill people, people kill people.” This is true: guns, inanimate objects, don’t just walk around autonomously going off. But it can be phrased more accurately as “guns don’t kill people, people with guns kill people.”

I think the TAs of that AI class are largely right, in the way that the NRA slogans are right. There is always an off switch, and AIs do not possess any physical autonomy no matter how much digital autonomy we give them. Yet the larger point about the dangers of over-developing AI is being missed. “AI doesn’t harm people; bad actors using AI harm people.” Social networks don’t spread propaganda; bad actors, AI algorithms designed to make us click as much as possible, and unsuspecting users on social networks spread propaganda. When it comes to these complex social systems that AI can run on top of, the off switch is difficult to find, and the on switch is often tripped without our fully recognizing it.

In another 20 years, will AI be a tool for empowerment, justice, and productivity, or a tool for suppression, discrimination, and distraction? I believe it’s critically important that AI researchers think critically and ethically about what they are building. It’s no longer a question of what AI can do, but of what someone can do with it. In the context of some tool or model: what malicious actions can be performed with it, how do we recognize them, and finally, how do we turn it off?

Web 3.0 and Beyond

This post was initially drafted in January 2013. I stumbled on it in my drafts folder, and five years on I think it still offers a good retrospective and, today, a good summary. From the original version I’ve only added a few links and revised the wording. For today, I’ve appended a section predicting the attitudes of Web 4.0.

Like it or not, I think Web 3.0 is right around the corner. Within years, the new web will be staring right into our eyes, and unless we change something about how we work and interact, we won’t be able to look away. What does Web 3.0 mean, exactly? Essentially, it is the next step in the behavior of the internet, and this prediction is based solely on the trends of the time. To see why I’m predicting this new nature of the web, and what exactly it is, a brief history of what we know as “the Web” must be examined.

Web 0.0: “Did you (singular) get that?”

  • Very small communities
  • Most likely you knew a decent subset of everyone on there.

Web 1.0: “Did you (plural) get that?”

  • Scaling and everyday incorporation of email
  • Small scale sharing
  • Internet spreading into ubiquity (eternal September)

Web 2.0: “Did you make that?”

  • User created content
  • The ubiquitous internet
  • Blogging/vlogging
  • Democratic video and music production and consumption

Web 3.0: “Did you see that?”

  • Share everything
  • The ubiquity of (the) social networks.
  • All about you
  • Sometimes indistinguishable from the desktop experience
  • The end of privacy. [1]

Web 4.0: “Did you feel that?”

  • Either more empathetic or more tribal and toxic. Everyone is certainly aiming for the former, but social networks need to take a firmer stance against behavior in the latter category.
  • Emphasis on privacy; the EU has been making the best moves in this direction, though not always properly. For example, the cookie law is a bit misguided, while the General Data Protection Regulation is the right one.
  • Emphasis on security; companies which leak sensitive information need to be penalized more heavily. Again, the EU is taking the right steps here. This goes beyond securing just private information; with the prevalence of the Internet of Things, it extends to systems of all sizes.
  • More personal applications. The line where the Web begins and ends blurs on this point: are devices like Alexa and Google Home the Web? This is not to say more personal devices, but that the nature of applications on our devices will be more focused around us.

  1. This is the only thing from the original draft that now reads as hyperbolic. Privacy hasn’t ended, but the way we understand it has changed considerably. Keep in mind, too, that this was written before the Snowden leaks in the summer of 2013.