The economic logic of robotics and automation is irresistible. Just like every other type of
information technology, the price of robotics has decreased steadily while quality has improved.
Now, automated systems like Kiva often pay for themselves within the first year of deployment, sometimes faster.
Once a unit of robot labor grows cheaper than a unit of human labor, additional benefits for
employers kick in: for many tasks it is safer, more efficient, and more reliable to use a machine than a human being. Furthermore, companies have a huge financial incentive to find ways to eliminate human employees and thereby reduce such associated costs as payroll taxes, disability payments, insurance costs, fringe benefits, plant costs, training expenses, and worker safety claims. The market rewards
companies that minimize the number of employees. That is exactly why Facebook and Google, information technology companies that don’t create many jobs, are so highly valued by investors: they create more with less.
The future of companies is lean and leaner. As more industries get vaporized, more firms will begin to resemble info-tech companies, both in terms of their output and their workforce. They will find ways to operate with lower headcount and higher profit. Information technology ensures this outcome, because manufacturing—like every part of society—is increasingly defined by software.
And just like everything else defined by software, innovation in this sector is accelerating. No matter what industry your business is in, automation probably already plays a growing role in your
operations. And if not, it will soon—competition demands it.
Until 1990, layoffs generally occurred at one company at a time. When a worker was laid off, he or she could find a similar job at a plant across the street or on the other side of town, or wait six months and then get rehired when the economy recovered. Today, entire job categories are being permanently erased by robotics. There will be no rehiring in these categories. If a job at one company is
eliminated, the displaced worker will be unlikely to find a similar job at the rival shop across the street because it will have upgraded to the latest robots too.
In this scenario, displaced workers may have no choice but to compete for another job in an entirely different field, and as robots acquire more capabilities, workers may find themselves
competing for a dwindling number of positions. Some people are freaking out about this possibility.
TWO VIEWS: THE TECHNO-OPTIMISTS VS THE TECHNO-PESSIMISTS
A long-simmering debate among the techno-elite burst into full boil in June 2014 when venture capitalist Marc Andreessen posted a volley of comments in his signature style, a Twitter “tweet storm.” Andreessen’s tweets consisted of one side of an argument about robotics and automation. He dismissed fears about worker displacement as “textbook Luddism, relying on a ‘lump-of-labor’
fallacy—the idea that there is a fixed amount of work to be done.”
Andreessen wasn’t tweeting into the void. His Twitter account is followed by 333,000 people, including many of the leaders in the venture capital industry. The argumentative nature of
Andreessen’s remarks drew several responses, many of them taking up the other side of the debate. What ensued was the closest thing to soul-searching that happens in Silicon Valley. Here is a radically oversimplified synopsis of the longstanding feud.
Techno-optimists like Andreessen believe that new technology will do more to improve lives than to harm them, by increasing productivity and expanding the economy. Technology gives all of us more for less, delivering valuable goods at lower prices, which raises the standard of living for all. That’s especially true of Silicon Valley’s bread-and-butter line of products, networked computing
technologies. In a grandiose moment, enthusiasts might also argue that technology expands individual freedom, enhances communication, builds understanding and global empathy. It’s a pretty appealing vision.
On the other side are techno-pessimists who believe that, left unchecked, new technology (and especially networked computing technologies) will diminish the quality of human life, erode communities and civic institutions, and wreak havoc on the economy by introducing instability, eroding wages, and killing jobs through automation. The pessimists also argue that flaws in
technology leave us vulnerable to massive security breaches from malicious hackers, cyber criminals,
and foreign spies. In their most extreme moments, the pessimists argue that technology enslaves us, shortens our attention spans, and diminishes interpersonal connection, leaving us vulnerable to manipulation and control. The techno-pessimist view is a grim one, inspired by dystopian science fiction and underscored by a steady tattoo of breaking news stories about data breaches, identity theft, societal dysfunction, filter bubbles, and a growing sense of isolation and alienation in our techno-cottages.
Lately the focus of both groups has turned specifically to the possibility of massive unemployment as the by-product of mass deployments of robots and automation.
To the optimists, robots are the source of liberation and abundance. Robots will displace some workers, but only temporarily. Robot workers will save humanity from the drudgery of repetitive tasks and low-wage jobs so that individuals can fulfill their destiny in creative pursuits,
entrepreneurialism, teaching, charity, social work, and other projects. For the optimists, the rise of the robot worker signals the dawn of a brilliant age of prosperity, freedom, and choice. A new
Renaissance lies just ahead.
To the pessimists, what the optimists miss is that machines are getting smarter, faster. In the pessimist view, robotics will render half of humanity permanently unemployable, mired in poverty, and trapped in a cycle of ever-diminishing opportunity for personal advancement. It’s far worse than just the end of the American Dream; it’s the Robot Apocalypse and the rise of superintelligent AI that will exterminate the human race. Yikes!
So who’s right? There’s no doubt that robots destroy jobs. That’s a fact, and an economic necessity for growth. The debate is whether robots will destroy more jobs than the economy can replace, and thereby render a growing segment of the human population unemployable. Techno-optimists dismiss that claim as the Luddite fallacy, and economists treat its falsity as an article of faith: if the claim ever came true, it would undermine the logic of the consumer economy.
The most extreme pessimists conclude with the observation that machine intelligence is improving much faster than humans can adapt or learn new skills. They envision job destruction that accelerates beyond our capacity for invention. In this formula, the human economy will be stuck creating jobs at an arithmetic rate while technology is destroying them at a geometric rate. That’s the scary nightmare scenario. If the economy can’t generate enough high-wage jobs—or if the human workers can’t upskill fast enough to stay ahead of the machines—then the Luddite fallacy may become an economic fact.
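The arithmetic-versus-geometric worry can be made concrete with a toy model. All of the numbers below are hypothetical assumptions chosen only to show the shape of the argument, not estimates from the text: job creation grows by a constant amount each year (arithmetic), while job destruction compounds at a fixed percentage (geometric).

```python
def jobs_created(year, per_year=100_000):
    """Arithmetic growth: a constant number of new jobs every year."""
    return per_year * year

def jobs_destroyed(year, initial=50_000, rate=0.25):
    """Geometric growth: annual destruction compounds at `rate` per year."""
    total = 0
    destroyed = initial
    for _ in range(year):
        total += destroyed
        destroyed *= 1 + rate
    return int(total)

def crossover_year(max_years=50):
    """First year cumulative destruction exceeds cumulative creation."""
    for y in range(1, max_years + 1):
        if jobs_destroyed(y) > jobs_created(y):
            return y
    return None
```

Even though destruction starts at half the rate of creation in this sketch, compounding overtakes the steady linear gain within a decade. That crossover, not the starting rates, is what the nightmare scenario turns on.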
PREDICTIONS FOR THE FUTURE: WHAT IS CERTAIN/PROBABLE/POSSIBLE
The optimists and pessimists locked in debate tend to talk past each other. Both sides make partially valid points, even if the pessimists deliver theirs with more apocalyptic flair. Both parties do each other a disservice by ignoring the opposing case or setting up a straw man argument. This is not a case of either/or. It’s a case of both/and.
What is happening is technologically driven demand destruction for human labor. Thanks to
advances in computing and robotics, this process is happening on a much larger scale than in previous eras. We are at the beginning of this process: as robots improve, it may accelerate. By considering both viewpoints, we can speculate about certainties, probabilities, and possibilities for the near future.
Certainly: Robots and automation will continue to replace human workers in every role possible, first in jobs that consist of routine work within a specifically defined domain, and then gradually
expanding to broader roles that require more versatility, dexterity, and intelligence. The trajectory moves from narrow expertise to broader versatility: from the checkout kiosk to the robot manager;
from the quick-serve restaurant order-taker to the fully automated restaurant; from a single manual task on the assembly line to robotizing the entire factory; from cruise control to self-driving
autonomous vehicles.
Probably: Mass displacement of labor won’t happen overnight. It will occur gradually, but it may well remain a persistent feature of the economic landscape for decades as the old industrial economy is redefined by software. Technological unemployment will be a chronic condition in every industrial economy rather than an acute crisis. But that’s no cause for complacency if large numbers of jobs are destroyed faster than the economy can generate new ones.
Probably: Large groups of workers will be displaced en masse as soon as cheap machines can substitute for human labor across entire job categories. New job creation will remain slower than the displacement. The interval between job destruction and new job creation will continue to grow. The result could be a growing backlog of displaced workers. If a growing number of unemployed workers compete for new jobs, wages will stagnate or decline. That will increase income inequality.
Probably: Most governments will find themselves ill equipped to cope with the growing ranks of the unemployed. Large-scale retraining programs will be necessary. Unemployment benefits and income-assistance programs will need to be topped up in the event that no new jobs are immediately available.
Probably: Robots will continue to learn, improve, and further displace more workers. This process won’t stop with low-level employees. Any job that can be automated will be, including middle management and even senior management positions.
Probably: Prices for goods and services produced by robots will decline, thanks to competitive pressure. Rivals hungry for market share will undercut any company that seeks to extract excess profit. The cost of robot labor will continue to decline, perhaps in line with Moore’s law, driving down the cost of production in every field that can be automated. Companies that ride the cost curve down will pass savings on to customers. As a result, standards of living will rise for everyone.
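The cost curve described above is easy to sketch. The starting cost and halving period here are illustrative assumptions, not figures from the text; the point is what a Moore’s-law-like decline does to a price over a decade.

```python
def robot_cost(years, start=20.0, halving_period=2.0):
    """Effective hourly cost of robot labor after `years`,
    assuming the cost halves every `halving_period` years."""
    return start * 0.5 ** (years / halving_period)
```

Under these assumed numbers, a $20-per-hour robot falls to $10 in two years and under a dollar in ten. A human wage cannot ride that curve, which is why competitive pressure keeps pushing automation into every field that permits it.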
Probably: Automated workforces will yield windfall profits for employers. Investment in robots will bring great returns to investors, some of which will be channeled towards new businesses, which may help some people find new careers in startup ventures funded by the profit from demolishing old jobs.
Possibly: The benefits of automation will not be evenly distributed in society, leading to unrest, political turmoil, and continuing conflict. Some citizens will call for rollbacks or limits on robot technology, but such measures will likely backfire, causing robot owners to relocate their factories to friendlier jurisdictions. Citizens will pressure their elected officials to pass measures to redistribute wealth via confiscatory taxation of profits on robot labor. Already some groups in the US and Europe are agitating for a guaranteed basic income for all citizens, presumably paid for by taxes on gains from investments in robots and automation. The distribution of wealth will remain a divisive political issue for decades. If gains flowing back to investors are reinvested in even more robotics, income inequality will persist and may worsen. This feedback loop will accelerate job destruction while bringing increasing returns to capital.
Possibly: Human ingenuity will devise new ways to work with and around the robots. Artificial intelligence and robotics will make many products cheaper and more ubiquitous, which will
paradoxically drive up the perceived value of objects handmade by humans. Entirely new job categories will be devised to serve entirely new categories of wants and needs. These might include a resurgence of handmade craft goods untouched by robot hands, live performances by human/robot hybrid troupes, new forms of education and personal care, unique experiences such as human-guided adventures, and thousands of other new services that we can’t yet imagine.
LIVING WITH HUMAN 2.0
If we think of mechanical robots as non-biological “brawn,” then artificial intelligence can be likened to a non-biological brain. Machine intelligence affects human workers in two ways: sometimes as a substitution for human labor and cognition, and sometimes as an enhancement that augments human capabilities. Artificial intelligence is not evolving entirely on its own; the computers must be trained by human users. In that sense, it’s not really artificial intelligence; it’s more like vaporized human intelligence distilled into a computer. It’s a reflection of humanity, refracted through the prism of sensor networks and machine intelligence. And we’re making it more knowledgeable than any single human being could ever be.
As Kevin Kelly wrote in Wired magazine: “It’s human-robot symbiosis. Our human assignment will be to keep making jobs for robots—and that is a task that will never be finished. So we will always have that one ‘job.’” Artificial intelligence is already in a lot more things than we realize. For example, virtual assistants like Apple’s Siri, Google Now, Microsoft’s Cortana; gaming systems like Microsoft Kinect and most console games; and mobile apps that translate speech in real time.
Financial institutions use AI to manage properties, trade stocks, and detect fraud. Hospitals use AI to improve diagnosis and to monitor life-support equipment. Air traffic control systems rely upon AI to monitor all aircraft in flight. Customer support centers use AI to process natural language in order to interpret and understand human callers. We don’t always recognize AI as such because it is invisible to us, but we deal with it every day. And through our interaction and online behaviors, we are
teaching these artificial neural networks to mimic the neural networks in human minds.
The field of artificial intelligence was stuck in a permanent dawn for sixty years, but a series of technological advances is now making it possible to bring AI out of the lab into the real world. “We believe we’ve finally turned the corner because of new algorithms (e.g., see deep neural nets), new hardware (massive improvements in parallelism, throughput and interconnect, all at continuously plummeting prices), and great seas of data to feed to the hardware and algorithms. The second half of this decade will see many of the early dreams of AI theorists finally coming true,” said Tom Austin, a Gartner vice president who was quoted in TechRepublic in March 2014.
Mechanical robots don’t benefit directly from Moore’s law, but software robots do. That’s why artificial intelligence is progressing faster. Citing “a perfect storm of parallel computation, bigger data, and deeper algorithms,” Kevin Kelly predicted that we will soon see AI everywhere.
“Everything we formerly electrified we will now cognitize,” he says.
The newest AI systems exist in the cloud, offering “intelligence on demand” just like other cloud-based services such as storage and hosting. That’s an opportunity to launch a new platform business.
Just as Amazon Web Services eliminated a huge barrier to dot-com startups by providing scalable infrastructure on demand, companies that offer intelligence on demand will enable another round of disruptive startup ventures to reinvent existing industries. As Kelly put it, “The business plans of the next 10,000 startups are easy to forecast: take ‘X’ and add AI.”
The first big entrant in the AI platform wars is IBM. Two years after trouncing the top human trivia champions on Jeopardy!, IBM’s Watson AI system has been retooled to assist doctors in diagnosing and treating cancer. The best doctors in the world will teach the AI, and then through
intelligence on demand, their collective wisdom will be made available to physicians all over the world. Already the Bumrungrad International Hospital in Thailand is using the IBM Watson For Oncology platform to improve diagnoses. “The power of the technology is that it has the ability to take the information about a specific patient and match it to a huge knowledge base and history of treatment of similar patients,” wrote Dr. Mark Kris, the former chief of the Thoracic Oncologic Service at the Memorial Sloan Kettering Cancer Center, in The Atlantic. “This process can help medical professionals gain important insights so that they can make more informed decisions, evidence-based decisions, about what treatment to follow . . . Watson’s ability to mine massive quantities of data means that it can also keep up—at record speeds—with the latest medical breakthroughs reported in scientific journals and meetings.”
The amount of medical research published in journals has been increasing to the point where it would take more than 100 hours each week to consume it all. No doctor can read everything; few, in fact, can carve out even one hour a day for reading. At that rate it’s impossible to keep up with the millions of new research documents published each year. But Watson is a
prodigious learner and a voracious reader. Watson reads everything and can recall it instantly, which makes the AI a useful adjunct to a clinical diagnostician. And institutions are beginning to take
advantage of that prodigious knowledge.
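A back-of-envelope calculation shows why the 100-hours-a-week figure is plausible. The inputs below are hypothetical assumptions, not figures from the text: suppose a specialty produces about 30,000 relevant papers a year and a careful read takes about twelve minutes each.

```python
def weekly_reading_hours(papers_per_year=30_000, minutes_per_paper=12):
    """Hours per week needed to read every new paper in a field,
    spreading the load evenly across 52 weeks."""
    total_minutes = papers_per_year * minutes_per_paper
    return total_minutes / 60 / 52
```

Even these modest assumed inputs yield well over 100 hours a week, roughly triple a full-time workweek spent doing nothing but reading. A machine that ingests the literature continuously faces no such ceiling.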
By 2014 the IBM Watson program had expanded to other cancer research institutions, including the Cleveland Clinic and the University of Texas. Watson now helps clinicians develop, observe, and adjust treatments for patients with breast and lung cancer and a dozen other common cancers,
including leukemia, colon, prostate, ovarian, cervical, pancreas, kidney, liver, and uterine cancers.
According to Samuel Nussbaum, the chief medical officer at health care provider WellPoint, doctors who confer with Watson correctly diagnose disease nearly 90 percent of the time, compared to
50 percent for human doctors unaided by AI.
You don’t have to be a brain surgeon to use AI, and Watson is not limited to oncology and other medical research. This first iteration of cloud-based intelligence on demand is designed to be a versatile platform for enhancing human capability across many fields. As a result, IBM is spending
$100 million to make Watson available as a platform for app developers who are training the system to find answers in fields ranging from cooking to shopping, travel, security, and financial planning.
An app called Sofie, developed by LifeLearn, enables veterinarians to use their mobile phones to ask Watson a question just as they would a colleague.
The possibilities of AI are intriguing other industries too. In March 2015 Bridgewater Associates, the world’s largest hedge fund manager, recruited David Ferrucci, the scientist who led the IBM team that created Watson’s computer system. Why would a hedge fund with $165 billion in assets need an AI scientist on staff to create an artificial intelligence unit? Simple: to help fund managers make better decisions.
Increasingly, in white-collar professions, AI will provide tools to augment and assist human workers as they assess complex data sets, identify patterns, and attempt to make smarter decisions. The robots are here to help. And of course, as they help us, we will also teach them. Eventually AIs will know more about each profession than any one practitioner, even the best human in the field. The cloud never stops learning and never forgets. It never retires or takes its knowledge to the grave. And that