AI Could Be the Social Atomic Bomb of the 21st Century

I’ve seen some impressive AI demos across the web in which researchers teach an AI to reproduce human faces or lip-sync audio over pre-existing videos. Impressive, and certainly unimaginable several years ago. Similarly unimaginable today is the way in which social networks and other tools have been used to undermine democratic systems through the propagation of misinformation. I fear that these two things have a shared future, one where fake news generated by AIs is indistinguishable from the real thing.

One of my first classes in graduate school was artificial intelligence. After the course summary, the instructor and the TAs held a bit of a Q and A session. One of the questions that stuck out to me was along the lines of: “People like Elon Musk and Stephen Hawking are giving us really big warnings about AI. Could their predictions be accurate? Should we be worried?” The panel looked at one another for a moment and gave several answers that, in so many words, said “they’re not in the field and don’t know what they’re talking about,” concluding with “there is always an off switch.”

In college, an article was assigned as reading in my introductory CS class: “Why the Future Doesn’t Need Us” by Bill Joy, published in Wired magazine in 2000. In it, Joy stresses that we could be on the verge of creating the social or existential equivalent of the atomic bomb for the 21st century if scientists (computer scientists and bio-scientists in particular) forego ethics or relax their standards in any way. I make it a point to reread it every couple of years. It rang true when I first read it in 2009; today that ring is deafening.

NRA slogans are simple, logical blanket statements, but they do not directly refute criticisms of gun control. A recent one, “the only thing that can stop a bad guy with a gun is a good guy with a gun,” rests on a lot of assumptions, such as that a good guy can never become a bad guy. Restricting guns nationally, as Australia did in 1996, also stops bad guys with guns. An older slogan which I don’t see very often anymore is “guns don’t kill people, people kill people.” This is true: guns, being inanimate objects, don’t just walk around autonomously going off. But the statement can be more accurately phrased “guns don’t kill people, people with guns kill people.”

I think the TAs of that graduate AI class are largely right, in the way that the NRA slogans are right. There is always an off switch, and AIs do not possess any physical autonomy no matter how much digital autonomy we give them. Yet this misses the larger point about the dangers of over-developing AI: “AI doesn’t harm people, bad actors using AI harm people.” Social networks don’t spread propaganda; bad actors, AI algorithms designed to make us click as much as possible, and unsuspecting users on social networks spread propaganda. When it comes to the complex social systems that AI can run on top of, the off switch is difficult to find, and the on switch is often tripped without anyone fully recognizing it.

In another 20 years, will AI be a tool for empowerment, justice, and productivity? Or a tool for suppression, discrimination, and distraction? I believe it’s critically important that AI researchers think carefully and ethically about what they are building. It’s no longer a question of what AI can do but of what someone can do with it. In the context of some tool or model: what malicious actions can be performed with it, how can they be recognized, and finally, how can it be turned off?

Web 3.0 and Beyond

This post was initially drafted in January 2013. I stumbled on it in my drafts folder, and five years on I think it still holds up, both as a retrospective and as a summary. From the original version I’ve only added a few links and revised the wording. For today, I’ve appended a section predicting the attitudes of Web 4.0.

Like it or not, I think it (Web 3.0) is right around the corner. Within years the new web will be staring right into our eyes, and unless we change something about how we work and interact, we won’t be able to look away. What does Web 3.0 mean exactly? Essentially, it is the next stage in the behavior of the internet, and this prediction is based solely on the trends of the time. To see why I’m predicting this new nature of the web, and what exactly it is, a brief history of what we know as “the Web” must be examined.

Web 0.0: “Did you (singular) get that?”

  • Very small communities
  • Most likely you knew a decent subset of everyone on there.

Web 1.0: “Did you (plural) get that?”

  • Scaling and everyday incorporation of email
  • Small scale sharing
  • Internet spreading into ubiquity (eternal September)

Web 2.0: “Did you make that?”

  • User created content
  • The ubiquitous internet
  • Blogging/vlogging
  • Democratic video and music production and consumption

Web 3.0: “Did you see that?”

  • Share everything
  • The ubiquity of (the) social networks.
  • All about you
  • Sometimes indistinguishable from desktop experience
  • The end of privacy. [1]

Web 4.0: “Did you feel that?”

  • Either more empathetic or more tribal and toxic. Everyone is certainly aiming for the former, but social networks need to take a firmer stance against behavior in the latter category.
  • Emphasis on privacy. The EU has been making the best moves in this direction, though not always properly: the cookie law is a bit misguided, while the General Data Protection Regulation is the right one.
  • Emphasis on security. Companies which leak sensitive information need to be penalized more, and again the EU is making the right steps here. This goes beyond securing just private information; with the prevalence of the internet of things, it extends to systems of all sizes.
  • More personal applications. The line where the Web begins and ends blurs on this point: are devices like Alexa and Google Home the Web? This is not to say the devices themselves will be more personal, but that the nature of the applications on our devices will be more focused around us.

  1. This is the only claim from the original draft that reads as hyperbolic. Privacy hasn’t ended, but the way we understand it has changed considerably. Keep in mind, too, that this was written before the Snowden leaks in the summer of 2013.

How to Run a Crowdfunding Scam

Despite only ever funding one crowdfunding campaign in my entire life, the targeted-ad powers that be have determined that I really like crowdfunding. A specific category always gets advertised to me: technology that can do something truly amazing which no one else is pursuing. A year or so later I catch an article reporting that these same projects have been removed from Kickstarter, moved over to Indiegogo, or vanished indefinitely into development hell. I’ll absolutely give the benefit of the doubt to the majority of these creators; unexpected snags can pop up when you’re trying to scale hardware products. Though sometimes I wonder if people are just cashing in on technolust. In this brief post I’ll lay out some steps to running a crowdfunding scam around a technology product.

  1. Come up with a tech product that:
    1. Fills some niche. Good categories include smartphone accessories, drones, and VR gadgets. For the appearance of novelty you can also take a regular everyday item, bolt on some Bluetooth, and call it “smart.” Some ideas for free:
      • Smart ring (addendum December 6, 2018: this)
      • Smart post-it note
      • Smart drawers
    2. Claims to do something way better than anything out there, especially since you can connect it to your phone!
    3. Is technically impossible, but not so obviously so that customers realize it. Some real examples:
      • A hub that transcribes lectures which is also a portable battery and wireless speaker. (Titan Note)
      • A drone that follows you autonomously and takes selfies. (Lily drone)
      • A bracelet which projects a smartphone interface on your wrist. (Cicret)
  2. Hire a prop shop to make a mock-up of the product.
  3. Make an amazing marketing video featuring the prop. Using the magic of editing and special effects, make it look like a functional product that just needs a couple thousand dollars to bring to market. You could skip step 2 and composite a digital model into the video instead, but you’ll want something physical for later steps.
  4. Post it to Kickstarter or Indiegogo, with a preference for the latter because even if you don’t meet the goal you still get the money.
  5. While the campaign is ongoing, take the time to develop a crude but actually functioning prototype to post as an update to the backers. You don’t want to deceive people too much. This step is optional.
  6. After the campaign is over, post updates every couple of months about the challenges and the “progress” you’re making. Do this less frequently as time goes on, a sort of fade-away to make people forget you still have their money.
  7. Finally, after several months (or years), make a post about how you’ve run out of funding, how there were too many problems with the manufacturers, but how you’re proud of your team and the work you accomplished.
  8. Disappear with all that money. You can also do this as early as step 5.