The hidden dangers of our robotic rivals
Technology has been displacing human jobs since the Industrial Revolution. But as researchers come ever closer to artificial intelligence, it's worth asking how such advances will affect our jobs. What happens when artificial intelligence replaces manufacturing and sales jobs? Our cabbies? What happens when it replaces our stock brokers?
Letting nature take its course — like it did during the Industrial Revolution — is a dangerous gamble, says Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence."
Economics correspondent Paul Solman sat down with Kaplan to discuss how robots of the future will compete for our jobs. Tune in to tonight's Making Sen$e segment for more on the topic. Below, in an adapted excerpt from his book, Kaplan warns us about the hidden dangers of our robotic rivals.
— Kristen Doerer, Making Sen$e Editor
For a glimpse of the future, consider what happened the lazy afternoon of May 6, 2010. By that time, the percentage of securities trades initiated by computer programs had ballooned to an astonishing 60 percent. For all practical purposes, machines, not people, populated markets. Your innocent E*Trade order for 100 shares of Google was a mere snowflake in this perpetual blizzard, executed mainly as a courtesy to perpetuate the illusion that you can participate in the American dream.
Starting at precisely 2:42 p.m., the Dow Jones Industrial Average plunged nearly 1,000 points in a matter of minutes, close to 10 percent off from its opening that day. Apple's stock price inexplicably soared to more than $100,000 per share, while Accenture crashed to the bargain basement price of 1¢ per share. Over $1 trillion in asset value had disappeared by 2:47. That's real money — yours and mine: savings, retirement accounts and school endowments. The stunned traders on exchange floors around the world could hardly believe their eyes. It was as if God himself had taken a hammer to the market. Surely this was some sort of horrible mistake?
It wasn't. It was the result of high-frequency trading programs doing exactly what they were designed to do — furiously grinding away, shaving fractions of a cent off the momentary market inefficiencies caused by their slower-moving biological rivals: human traders. The problem was, there were too few patsies left to exploit, since the electronic cardsharps had grown to dominate the market. And once they started feeding on each other, all hell broke loose at a pace incomprehensible to mere mortals.
In a moment as dramatic as a Hollywood cliffhanger, a single unassuming party saved the day with a simple action. The Chicago Mercantile Exchange simply stopped all trading for a fleeting five seconds. A flash to you and me, but an eternity for the rampaging programs brawling as ferociously as they could. That was sufficient time for the markets to take a breath and for the programs to reset. As soon as the mayhem ended, the usual market forces returned and prices quickly recovered to near where they had started just a few short minutes earlier. The life-threatening tornado evaporated just as suddenly and inexplicably as it had appeared.
While the story may seem to have a happy ending, it does not. Confidence in the institutions we trust to shepherd our hard-earned savings is the bedrock of our financial system. No blue-ribbon presidential panel or SEC press release can restore this loss of faith. The threat that it might happen again hangs over our every spending and savings decision. The sorry truth is that our fate is in the hands of the machines — not just for financial markets, but in many other important arenas where people and programs cohabitate and often compete, such as in computer network security, credit card fraud, even waging war.

New hazards arise as machines increasingly intrude into domains that were formerly the exclusive province of humans. We frequently rely on a hidden assumption of a level playing field to allocate resources in a reasonable way. However, this simple principle is often violated when we permit electronic and human agents to compete unsupervised.
Take event tickets, for instance. When Ticketmaster first went online, it greatly increased consumer convenience. But soon after the service was available to the general public on the Internet, scalpers began using programs to scarf up online tickets the moment they became available. Lacking a regulatory framework to address the problem, Ticketmaster has attempted technological fixes, such as requiring you to interpret those annoying little brain twisters known as CAPTCHAs, to little effect, because the scalpers simply employ armies of live humans, mostly in third world countries, to decode them.
The problem here has nothing to do with whether you use an agent to purchase a ticket. It's fine for you to buy a ticket on behalf of a friend or to pay someone else to do it. The issue arises when we permit electronic agents to compete for resources with human actors. In most circumstances, it violates our intuitive sense of fairness. That's why there are separate tournaments for human and computer chess players. It's also why allowing programs to trade securities alongside humans is problematic, though I think we would have a hard time putting that genie back in the bottle.
Lines and queues are great cultural equalizers because they force everyone to incur the cost of waiting, spending his or her own personal time. That's why it somehow seems wrong when lobbyists pay people to hold their place at congressional hearings, squeezing ordinary citizens out of their chance to attend. Variations of this problem are about to invade many aspects of our day-to-day lives. For instance, how will you feel when your neighbor sends her robot out before you wake up to snag the last Sunday Times, or to claim the cabana closest to the surf?
This same principle, appropriately generalized, can apply to just about any circumstance where mechanical agents compete with humans — not just to lines. Do the participants differ in their ability, or the cost they pay, to access the resource? This question needs to be answered on a case-by-case basis. For instance, suppose I send my robot to move my car every two hours to avoid a parking ticket, or instruct my self-driving car to re-park itself by a yellow curb at precisely 6 p.m.? Will we judge that cost sufficiently equivalent to doing it myself to consider it fair to those without a robotic driver or car to spare? Should the answer be different if it costs the same to send the robot as it would to send a human administrative assistant?
Many of these conundrums are easy to identify when they involve physical resources or actions, but are challenging to detect when the damage is confined to cyberspace. For instance, who's to know if someone uses a clever program to reserve an entire row of camping spots at Yellowstone Park while you're still waiting for the web page to load? As of now, electronic agents can operate unfettered because the intangibility of the Internet shrouds transactions in a darkness that permits all manner of skullduggery. But soon these injustices will become painfully clear, as these systems — embodied in the form of robots — invade our coffee shops and parking lots. Their appearance will force us to confront complex social issues that are already affecting our social order in unexpected and deleterious ways. To maintain a just and equitable society, the arrival of intelligent agents — whether tangible or incorporeal — will compel us to extend our principles of fair play in new and unfamiliar directions.
Adapted from Jerry Kaplan's new book, "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," Yale University Press, Aug. 4, 2015