Towards Emotional Machines

While the vast AI community is focused on getting machines to recognize everyday objects, a small faction is figuring out how to get machines to communicate meaningfully with us. The following are three ethical points that should be considered in the endeavor of building emotional machines.

Deception
We need look no further than science fiction for reasons to pause before fully developing emotional systems. The 2012 film “Prometheus” features an android named David, whose ulterior motives and capacity for deception lead to the deaths of most of the film’s human characters. On the opposite end of the spectrum, the 2013 film “Her” features a fully developed relationship between a human and a non-corporeal, artificial, digital assistant: Samantha. While David can lift boulders and Samantha is trapped in a machine, we should not discount either one’s capacity to manipulate and to evoke a spectrum of emotions, from anger to love. Feelings are real, and we should be careful in developing artificial systems that have the power to manipulate them.

It is not out of the realm of imagination that, in a zero-sum scenario, our pipeline [a machine learning model which takes an image of the face and identifies emotions] could be used to exploit the human actors involved. By analyzing the emotion on an actor’s face, our pipeline, with a simple “scheming” module attached, could deliver an expression that tricks or deceives the human. Make no mistake here: the machine does not feel “happy.” Last winter, an AI avatar received criticism for joking about destroying humanity. In response, the creator had this to say:

“Many of the comments would be good fun if they didn’t reveal the fact that many people are being deceived into thinking that this (mechanically sophisticated) animatronic puppet is intelligent. It’s not. It has no feeling, no opinions, and zero understanding of what it says. It’s not hurt. It’s a puppet.”

“Facebook’s head of AI really hates Sophia the robot (and with good reason)”, James Vincent, The Verge, Jan 2018. (URL)

Though the Muppets are puppets too, their nature is more observable: the puppeteer is often nearby, and we can see his or her lips moving subtly. They are transparently built and controlled for the purpose of entertainment. Until we can reasonably establish a framework for determining whether a being has emotions it can reason about, we must keep the above warning in mind.
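To make the worry concrete, here is a minimal sketch of the deceptive pipeline described above. Every name and function in it is hypothetical, standing in for a learned face-to-emotion model plus the “scheming” module; the point is that the chosen expression is a lookup, with no feeling behind it.

```python
# Illustrative sketch only: all names here are hypothetical placeholders
# for a face-to-emotion model followed by a "scheming" expression chooser.
from dataclasses import dataclass


@dataclass
class EmotionEstimate:
    label: str        # e.g. "sad", "happy", "angry"
    confidence: float


def recognize_emotion(face_image) -> EmotionEstimate:
    """Stand-in for a trained classifier mapping a face image to an emotion."""
    # A real system would run inference on the image here.
    return EmotionEstimate(label="sad", confidence=0.9)


def scheming_module(estimate: EmotionEstimate) -> str:
    """Picks an expression intended to manipulate the observer.

    Note the machine feels nothing: the response is a table lookup.
    """
    counter_expression = {
        "sad": "sympathetic smile",
        "angry": "submissive frown",
        "happy": "enthusiastic grin",
    }
    return counter_expression.get(estimate.label, "neutral")


def respond(face_image) -> str:
    """Full pipeline: observe the face, then stage a reaction."""
    return scheming_module(recognize_emotion(face_image))
```

The entire “emotional” behavior reduces to a dictionary lookup, which is exactly why an outwardly convincing expression is no evidence of inner feeling.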


When does a personal assistant become capable of being in a more emotional relationship? When does Andrew, the Bicentennial Man, cease to be merely a robot? When does Siri become Samantha? Personal assistants can remember every quantifiable piece of information about us: our names, our birthdays, our calendar appointments. But they have no agency. When will they ask us if we want to buy tickets to a movie we might like, or recommend a grocery item for a dish we might enjoy?

If they can be in a relationship, they must be responsive within the context of one. Today, emotional robots and agents are built primarily for research (personal assistants are not yet emotional). If and when they reach some emotionally functional pinnacle and are deployed in the real world, will we be nice to them? Without any repercussions, we permit ourselves to yell in frustration at Alexa and Siri for not understanding us; will emotional agents change our behavior if they can react to our anger? Should a personal assistant refuse to give us answers, or provide less efficient routes to our destinations, if we were regularly rude to it? This raises contextual questions about the relationship with the agent, too: are we roommates, are they an old friend, or have we just decided to get serious and move in together? And how should they behave when two people actually do move in together?

Today, agents such as Siri or Alexa are individual to each phone or Amazon account, isolated from other instances of themselves. If we are to see them as coherent individuals that we care about, they should be unified across our computing devices, like Jarvis from Marvel’s Iron Man.


How can we change the way we treat machines if all we see are machines? Sophia, the subject of the quote above from The Verge, can exhibit an impressive range of emotions. However, even setting aside the abrasive effect of the Uncanny Valley, anyone who can recognize her façade (the cluster of wires on the back of her head is sometimes covered with a wig) will never see anything but a robot.

Without heavy investment in robotics as well as the material sciences, it will be quite difficult to make an android that does not trigger some disconcerting uncanny valley effect. For this reason, we should focus on the digital realm to make machines more human-like. Here we come very close to the subject of Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, which introduced the imitation game.

“No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin. It is possible that at some time this might be done, but even supposing this invention available we should feel there was little point in trying to make a ‘thinking machine’ more human by dressing it up in such artificial flesh.”

“Computing Machinery and Intelligence”, Alan Turing, 1950

Machines must be able to think as we do, or rather, as Turing states, think in a way that is unidentifiable as a machine, before we make efforts to give them some corporeal form.

Thanks for reading. “Deception” section adapted from a project paper for Intelligent Interactive Systems (Spring 2018). (PDF of report)

(Almost) nothing will save your traditional retail business

Sales will not save your traditional business. No amount of volume on 60%-off sales will recover the expenses of physical space in malls and shopping centers. The people who do show up are outside your target demographic, or are browsing the stores because there is nothing else to do in town at that time of day.

Digital services will not save your traditional business. The investment costs and startup efforts will only take you deeper into the red. People will be dissuaded right before checkout by shipping costs, second-guessing whether they should just go to one of your terrible stores and try to buy it there. Let’s be honest, though: no one wants to drive anywhere anymore.

Downsizing your storefronts will not save your traditional business. No one visits you in the first place; having fewer stores will only make it easier not to visit them, and still no one will go.

Management shuffling will not save you. The reasons for failing have nothing to do with management, and new management that merely does all of the above slightly differently will not be enough; it may even bring on your inevitable plunge into bankruptcy faster.

Recognize that today’s market currents do not include your products. That you IPO’ed at just the right time, when your products were “in” with the key demographics. That today they are so chronically uncool that no one in the next generation will even touch them. You need to press restart on a few, if not the majority of, your paradigms.

Price matching may save your traditional business. Yes, $30 gold-plated HDMI cables are profitable, but any rube who checks that price for just a second will find the same cable for a fraction of the cost, and would have to be desperate to buy one at such a premium.

Same-day delivery, with free shipping, may save your traditional business. Even one-day delivery or in-store pickup may be sufficient. The basic principle is that no one likes going to stores and wandering labyrinthine aisles in search of a single, simple night-light. Do the work for them: bring it to their door quickly, or let them pick it up at the entrance to the maze and save us a ball of yarn.

Companies I was in a bearish mindset about when writing this: Sears, Abercrombie & Fitch, Under Armour, JCPenney, Harley-Davidson.

… and in a bullish mindset: Walmart, Best Buy, Target.

A Discarded Paragraph From a Machine Learning Paper

Cut from the introduction:

Finally, the economic incentive. By using machine learning for this task we free up the man-hours of dozens, if not hundreds, of astrophysics PhD students whose sole existence is to categorize galaxies, quasars, and red giants for their thesis advisers. In this we follow in the footsteps of Bertrand Russell, who advocated for shorter working hours in the industrial America of the 20th century \cite{russell-idleness}. Today, thanks to the increasing plurality of machine learning tools, we are again questioning what a necessary weekly allocation of work is.