AI Could Be the Social Atomic Bomb of the 21st Century

I’ve seen some impressive AI demos across the web in which researchers teach an AI to reproduce human faces or lip-sync audio over pre-existing videos. Impressive, and unimaginable just a few years ago. Similarly unimaginable until recently is the way social networks and other tools have been used to undermine democratic systems through the propagation of misinformation. I fear that these two things have a shared future, one where fake news generated by AIs is indistinguishable from the real thing.

One of my first classes in graduate school was artificial intelligence. After the course overview, the instructor and the TAs held a bit of a Q&A session. One of the questions that stuck out to me was along the lines of: “People like Elon Musk and Stephen Hawking are giving us really big warnings about AI. Could their predictions be accurate? Should we be worried?” The panel looked at one another for a moment, and their several answers amounted, in so many words, to “they’re not in the field and don’t know what they’re talking about,” concluding with “there is always an off switch.”

In college, an article was assigned as reading in my introductory CS class: “Why the Future Doesn’t Need Us” by Bill Joy, published in Wired magazine in 2000. In it, Joy stresses that we could be on the verge of creating the social or existential equivalent of the atomic bomb for the 21st century if scientists (computer and bio-scientists in particular) forgo ethics or if standards are relaxed in any way. I make it a point to reread it every couple of years. It rang true when I first read it in 2009; today that ring is deafening.

NRA slogans are simple, blanket statements, but they do not directly refute criticisms of gun control. A recent one, “the only thing that can stop a bad guy with a gun is a good guy with a gun,” rests on a lot of assumptions, such as the assumption that a good guy can never become a bad guy. Restricting guns nationally, as Australia did in 1996, also stops bad guys with guns. An older slogan, which I don’t see very often anymore, is “guns don’t kill people, people kill people.” This is true: guns, being inanimate objects, don’t just walk around autonomously going off. But it can be more accurately phrased as “guns don’t kill people, people with guns kill people.”

I think the TAs of that graduate AI class are largely right in the way that the NRA slogans are right. There is always an off switch, and AIs do not possess any physical autonomy, no matter how much digital autonomy we give them. Yet this misses the larger point about the dangers of over-developing AI: “AI doesn’t harm people, bad actors using AI harm people.” Social networks don’t spread propaganda; bad actors, AI algorithms designed to make us click as much as possible, and unsuspecting users on social networks spread propaganda. When it comes to the complex social systems that AI can run on top of, the off switch is difficult to find, and the on switch is often tripped without anyone fully recognizing it.

In another 20 years, will AI be a tool for empowerment, justice, and productivity, or a tool for suppression, discrimination, and distraction? I believe it’s critically important that AI researchers think carefully and ethically about what they are building. It’s no longer a question of what AI can do, but of what someone can do with it: for a given tool or model, what malicious actions can be performed using it, how to recognize them, and finally, how to turn it off.
