(Almost) nothing will save your traditional retail business

Sales will not save your traditional business. No amount of volume on 60%-off sales will recover the expenses of physical space in malls and shopping centers. The shoppers who do show up are outside your target demographic, or are browsing the stores only because there is nothing else to do in town at that time of day.

Digital services will not save your traditional business. The up-front investment and startup effort will only take you deeper into the red. People will be dissuaded right before checkout by shipping costs, second-guessing whether they should just go to one of your terrible stores and try to buy it there. Let's be honest, though: no one wants to drive anywhere anymore.

Downsizing your storefronts will not save your traditional business. No one visits you in the first place; having fewer stores will only make it easier not to visit them, and still no one will go.

Management shuffling will not save you. The reasons you are failing have nothing to do with management, and new management who do all of the above things only slightly differently will not be enough, and may ultimately bring on your inevitable plunge into bankruptcy even faster.

Recognize that today’s market currents do not carry your products. That you IPO’ed at just the right time, when your products were “in” with the key demographics. That today your products are so chronically uncool that no one in the next generation will even touch them. You need to press restart on a few, if not the majority of, your paradigms.

Price matching may save your traditional business. Yes, $30 gold-plated HDMI cables are profitable, but any rube who checks that price for just a second will find the same cable for a fraction of the cost, and would have to be desperate to buy one at such a premium.

Same-day delivery, with free shipping, may save your traditional business. Even one-day delivery or in-store pickup may be sufficient. The basic principle is that no one likes going to stores and wandering labyrinthine aisles in search of a single, simple night-light. Do the work for them: bring it to their door quickly, or let them pick it up at the entrance to the maze. Save us a ball of yarn.

Companies I was in a bearish mindset about when writing this: Sears, Abercrombie & Fitch, Under Armour, JCPenney, Harley-Davidson.

… and in a bullish mindset: Walmart, Best Buy, Target.

A Discarded Paragraph From a Machine Learning Paper

Cut from the introduction:

Finally, the economic incentive. By using machine learning for this task we free up the man-hours of dozens if not hundreds of astrophysics PhD students whose sole existence is to categorize galaxies, quasars, and red giants for their thesis advisers. In this manner we follow in the footsteps of Bertrand Russell, who advocated for shorter working hours during the American industrial expansion of the 20th century \cite{russell-idleness}. Today, thanks to the increasing plurality of machine learning tools, we are again questioning how many weekly hours of work are truly necessary.

AI Could be the Social Atomic Bomb of the 21st Century

I’ve seen some impressive AI demos across the web where researchers teach an AI to reproduce human faces or lip-sync audio over pre-existing videos. Impressive, and certainly unimaginable several years ago. Similarly unimaginable today is the way social networks and other tools have been used to undermine democratic systems through the propagation of misinformation. I fear that these two things have a shared future, one in which fake news generated by AIs is indistinguishable from the real thing.

One of my first classes in graduate school was artificial intelligence. After the course summary, the instructor and the TAs held a bit of a Q&A session. One of the questions that stuck out to me was along the lines of: “People like Elon Musk and Stephen Hawking are giving us really big warnings about AI. Could their predictions be accurate? Should we be worried?” The panel looked at one another for a moment, then gave several answers that in so many words said “they’re not in the field and don’t know what they’re talking about,” concluding with “there is always an off switch.”

In college, an article published in Wired magazine in 2000 was assigned as reading in my introductory CS class: “Why the Future Doesn’t Need Us” by Bill Joy. In it, Joy stresses that we could be on the verge of creating the social or existential equivalent of the atomic bomb for the 21st century if scientists (computer and bio-scientists in particular) forego ethics or relax their standards in any way. I make it a point to reread it every couple of years; it rang true when I first read it in 2009, and today that ring is deafening.

NRA slogans are simple, logical blanket statements, but they do not directly refute criticisms of gun control. A recent one, “the only thing that can stop a bad guy with a gun is a good guy with a gun,” rests on a lot of assumptions, such as the assumption that a good guy can never become a bad guy. Sharply restricting guns nationally, as Australia did in 1996, also stops bad guys with guns. An older slogan I don’t see very often anymore is “guns don’t kill people, people kill people.” This is true; guns, being inanimate objects, don’t just walk around autonomously going off. But it can be more accurately phrased: “Guns don’t kill people, people with guns kill people.”

I think the TAs of that graduate AI class are largely right in the way that the NRA slogans are right. There is always an off switch, and AIs do not possess any physical autonomy no matter how much digital autonomy we give them. Yet the larger point about the dangers of over-developing AI is being missed. “AI doesn’t harm people, bad actors using AI harm people.” Social networks don’t spread propaganda; bad actors, AI algorithms designed to make us click as much as possible, and unsuspecting users on social networks spread propaganda. When it comes to the complex social systems that AI can run on top of, the off switch is difficult to find, and the on switch is often tripped without anyone fully recognizing it.

In another 20 years, will AI be a tool for empowerment, justice, and productivity? Or a tool for suppression, discrimination, and distraction? I believe it is critically important that AI researchers think critically and ethically about what they are building. It is no longer a question of what AI can do, but of what someone can do with it. For any given tool or model: what malicious actions can be performed with it, how do we recognize them, and finally, how do we turn it off?