Technopessimism Is Bunk

BY Paul Solman  July 26, 2013 at 11:04 AM EDT

By Joel Mokyr


In a 2011 Making Sen$e report, we explored the slowing of innovation with “The Great Stagnation” author Tyler Cowen.

The developed world enjoys the benefits of running hot and cold water, refrigerators and electric stoves, all of which have been around for generations. But can we innovate beyond that? Can today’s inventions — the iPod, for example — even compete with those “low-hanging fruit” that dramatically altered our homes and daily lives? In a 2011 Making Sen$e report, which you can watch above, George Mason University economist Tyler Cowen, author of “The Great Stagnation,” argued that we’re in an “innovation drought” where the rate of progress has slowed.

That’s just not so, economic historian Joel Mokyr of Northwestern University argues Friday on the Business Desk: innovation has not peaked. Inventors may have already plucked the “low-hanging fruit” — big inventions in everyday use — but the thing about technology is that it’s ever-evolving, allowing us to constantly climb even higher.

In 2011, we also visited the MIT Media Lab for some tangible evidence that innovation is still booming. You can see some of what we saw here.

But with innovation come concerns that human workers will become redundant, with the profits from new inventions going only to the high-skilled few. In our 2012 report “Man vs. Machine,” which you can watch at the end of this post, Singularity University’s Vivek Wadhwa, who’s made a reappearance on the Business Desk this week, predicted that “the convergence of these technologies will create jobs in areas we can’t even think of.” The workforce, he has argued on the Business Desk, has to keep adapting.

Responding to “technopessimists” (also the subject of a story in this weekend’s New York Magazine), Mokyr picks up on Wadhwa’s prediction. He too foresees new technologies creating new jobs, the nature of which we cannot yet even imagine. After all, technology’s double-edged sword — that new inventions create new problems, such as labor force disruption — is what constantly pushes us to further innovate.

Joel Mokyr: One of the most unjust misattributions is the famous statement “everything that can be invented already has been,” supposedly uttered in 1899 by Charles Holland Duell, commissioner of the U.S. Patent Office.

In fact, Duell wrote the very opposite: “In my opinion, all previous advances in the various lines of invention will appear totally insignificant when compared with those which the present century will witness. I almost wish that I might live my life over again to see the wonders which are at the threshold.”

The true Duell turned out to be correct: the 20th century was indeed a century of huge technological progress. Only the growth of science and technology can explain how the industrialized world in the 20th century was able to survive a seemingly endless chain of man-made disasters: two world wars, economic depressions, financial panics, inflation and the rise of totalitarianism. Many observers expected the Western world to sink into barbarism and darkness, much as had happened in Europe after the decline of the Roman Empire. Instead, despite its many economic woes, material life in 2013 is immeasurably better than ever before, not just in the rich industrialized parts of the planet, but in most economies.

Yet today, once again, we hear concerns that innovation has peaked. Some claim that “the low-hanging fruits have all been picked.” The big inventions that made daily life so much more comfortable — air conditioning, running cold and hot water, antibiotics, ready-made food, the washing machine — have all been made and cannot be matched, so the thinking goes.

Entrepreneur Peter Thiel’s widely quoted line “we wanted flying cars, instead we got 140 characters” reflects a sense of disappointment. Others feel that the regulatory state reflects a change in culture: we are too afraid to take chances; we have become complacent, lazy and conservative.

Still others, on the contrary, want to stop technology from going much further because they worry that it will render people redundant, as more and more work is done by machines that can see, hear, read and (in their own fashion) think. What we gained as consumers, viewers, patients and citizens, they fear, we may be about to lose as workers. Technology, while it may have saved the world in the past century, has done what it was supposed to do. Now we need to focus on other things, they say.

This view is wrong and dangerous. Technology has not finished its work; it has barely started. Some lessons from history may show why. For one thing, technological progress has an unusual dynamic: it solves problems, but in doing so, more often than not, it creates new ones as unintended side effects of earlier breakthroughs, and these in turn have to be solved, and so on.

A historical example is coal. The Industrial Revolution of the 18th century increased the use of coal enormously, with steam power replacing water mills, windmills and horses. Steam was more powerful and efficient, but in turn it created new environmental problems such as London’s famous smog. In our own time, the burning of hydrocarbons of all kinds has been shown to be a factor in climate change. The hope is that technological progress will make renewable energy available at costs low enough to slow planetary warming.

Or consider our war against harmful species, from malaria-carrying mosquitoes to TB mycobacteria. Science and technology came up with means to poison them, but nature has a way of pushing back, and many harmful species developed resistance.

Does that mean that technology is powerless to fight these plagues? No, only that its progress is an ongoing project, with two steps forward and one step back, and that we cannot rest on our laurels. As we know more, we can push back against the pushback. And so on. The history of technology is to a large extent the history of unintended consequences.

For thousands of years, people dreamed of having sex without worrying about pregnancy. The unintended consequence of widely used contraception, however, is the relentless aging of societies. With fewer new births and higher life expectancies, there are now fewer people of working age to support a rising percentage of retirees. But technology is now responding to that consequence, developing ways to make older people more productive (think knee replacements and bypass surgery). Aging is not what it used to be.

Another peculiar dynamic of technology is its complex relation to science. Can one build a nuclear reactor without understanding nuclear physics? Of course not. But in many cases, the technology came first, the science later. Physics learned more from the steam engine than the steam engine from physics. We don’t always realize, however, how much the tools and instruments devised by inventors have affected science. The great scientific revolution of Galileo, Robert Boyle and Isaac Newton was to a large extent made possible by the invention of new gadgets such as telescopes, microscopes and vacuum pumps. Nature did not intend us to see microbes or the moons of Jupiter any more than it intended us to perform petaflops of calculations. But we saw more of nature through increasingly clever tools and better and better laboratories. Once scientific insights improved our understanding of why things worked the way they did, it was possible to improve them further, thus creating a vast virtuous circle, in which science strengthened technology and technology helped create more science.

If this kind of story applies at all to technological progress in the future, the instinctive line that comes to mind is “you ain’t seen nothin’ yet.” Modern scientific research relies on tools and instruments that no one could have dreamed of in 1914. There are so many examples that any short list would be arbitrary. But, for instance, in 1953, the discovery of the structure of DNA would not have been possible without the X-ray crystallography provided by Rosalind Franklin. Biologists today can use automatic gene-sequencing machines and cell-sorting machines called flow cytometers (one of the many applications of laser technology). Physicists can experiment with gigantic synchrotrons, which allow them to analyze the molecular structure of almost anything. Astronomers can now choose among the images beamed back by the Hubble Space Telescope, the instruments on board unmanned spacecraft touring the planetary system and the adaptive-optics telescopes that automatically correct the distortions introduced by the Earth’s atmosphere while looking at the stars.

Above all, no scientific research today, from English literature to economics to nanochemistry, is even thinkable without computers. The question scientists most frequently ask about computers is not “what do they do,” but “how did we ever do anything without them?” The advances in science will make it possible (among other things) to make even more sophisticated instruments, some of them foreseeable just by extrapolating what we already have, some as unimaginable as the Large Hadron Collider would be to Archimedes.

There is one more aspect of modern research and development that makes it different from anything that came before. In the age of Aristotle, it was still possible for an exceptionally bright individual to know (almost) anything worth knowing. As the body of knowledge expanded, this became impossible given the finite capacity of even the best brains. Scientists began to practice specialization, a division of knowledge, similar in principle to the division of labor so beloved by economists. But the division of knowledge, much like the division of labor, requires organization.

If society is going to make use of the expert knowledge that has accumulated, it needs to ensure that this knowledge can be stored at low cost and that it is accessible. Pieces of knowledge should be retrievable not just by other scientists building on them, but by engineers, industrial chemists and entrepreneurs trying to put the science to practical use. The art of finding ever-smaller needles in ever-larger haystacks is itself a critical technology that determines how fast both science and technology can move. Search technology took a huge step forward in the 18th century, when the alphabetical organization of knowledge became widespread with the emergence of encyclopedias, technical dictionaries and lexicons, as well as well-organized compilations of classified facts (think of the “Father of Taxonomy,” Carl Linnaeus).
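
Why finding the needle matters can be shown in miniature. What follows is only a toy sketch, written in Python with entries invented for illustration: an index, like an alphabetized encyclopedia or a modern search engine, lets a reader jump straight to the wanted entry, while scanning an unorganized pile takes longer the larger the pile grows.

    # A hypothetical "haystack" of a million entries, purely for illustration.
    haystack = ["entry-%d" % i for i in range(1_000_000)]
    needle = "entry-987654"

    # Without an index: walk through the entries until the needle turns up.
    def linear_search(items, target):
        for position, item in enumerate(items):
            if item == target:
                return position
        return None

    # With an index: build it once, then every lookup is a single step.
    index = {item: position for position, item in enumerate(haystack)}

    print(linear_search(haystack, needle))  # examines nearly a million entries
    print(index[needle])                    # jumps straight to position 987654

The second lookup takes essentially the same time however large the haystack becomes; that, in essence, is what Linnaeus’s classifications then and search engines now buy us.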

All of these wonderful developments of the past are dwarfed by the storage and search capabilities of our own age. Throughout history, humans had to struggle with costly and perishable information storage. Some storage technologies, such as clay tablets, were durable but costly; others, like papyrus, did not last. Paper and movable type, both originating in China, were huge advances, but books and articles were still expensive.

Today, copying, storing and searching vast amounts of information is, for all practical purposes, free. We no longer deal with kilobytes or megabytes, and even gigabytes seem small potatoes. Instead, terms like petabytes (a million gigabytes) and zettabytes (a million petabytes) are bandied about. Scientists can find needles in data haystacks as large as Montana in a fraction of a second. And if science sometimes still proceeds by “trying every bottle on the shelf” — as it does in some areas — it can search many bottles, perhaps even petabottles.
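
For readers who want the arithmetic behind those prefixes spelled out, here is a minimal sketch in Python; the figures are simply the standard decimal definitions of the units, not anything particular to the examples above.

    # Standard decimal (SI) prefixes for bytes.
    gigabyte = 10**9     # a billion bytes
    petabyte = 10**15    # a million gigabytes
    zettabyte = 10**21   # a million petabytes

    print(petabyte // gigabyte)    # 1000000 gigabytes in a petabyte
    print(zettabyte // petabyte)   # 1000000 petabytes in a zettabyte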

But as we’ve seen, technology is a double-edged sword: it solves problems while creating new ones. This is especially true for a technology that controls knowledge. As we are constantly reminded these days, a large body of knowledge can be abused by paranoid or totalitarian governments, overzealous law enforcement agencies and aggressively greedy commercial interests. Equally worrisome: how can users of data tell the wheat from the chaff? How can we distinguish between best-practice (peer-reviewed) science and crackpot pseudoscience, flat-earthers and climate-change deniers? Who will review billions of sites and sources, including many zettabytes of data, for veracity? And who will review the reviewers?

Many of these issues, it will be said, do not have a “technological fix” — they need intangibles like human trust. Fair enough — and yet it appears that without better technology that allows us to discriminate between what is plausible (and perhaps even “true”) and what is blatantly misleading and tendentious, such trust will be hard to establish. Much like the war against insects and our efforts to keep the planet’s environment people-friendly, this is an ongoing process, in which we may have to run in order to stay in place.

Unless something goes terribly wrong, then, the human race is still in its technological infancy, and we may be moving toward some very different kind of life. Whether we will become “singular” and see our minds merge somehow with megacomputers, as Google’s director of engineering Ray Kurzweil and his followers predict, is hard to say. (See more of Making Sen$e’s coverage of Kurzweil here and here.)

As an economist, I am especially interested in what will happen to the nature of work. Technology leads to machines replacing people, and the more capabilities these machines (whether we call them robots or not) have, the less there is for us to do. Some jobs still seem beyond mechanization as of now, but the same was said 50 years ago about many tasks carried out by machines today. If machines make certain jobs redundant, what will people do?

Part of the answer is that new types of work will emerge that we cannot foresee. A hundred years ago, people were wondering what would happen to the workers who would no longer be able to find work farming. Nobody at the time would have been able to imagine the jobs of today, such as video game programmer or transportation security employee.

But if the bulk of unpleasant, boring, unhealthy and dangerous work can be done by machines, most people will work only if they want to. In the past, that kind of leisurely life was confined largely to those born into wealth, such as aristocrats. Not all of them lived boring and vapid lives. Some of them wrote novels and music; many others read the novels and listened to the music. Some were even engaged in scientific research, such as the great Robert Boyle (one of the richest men in 17th-century England) and, a century later, Henry Cavendish, the English chemist and physicist who identified hydrogen gas.

Aristocratic life in the past depended on servants, and the servants of the future may be robots — but so what? More worrisome, the aristocratic life depended on a flow of income from usually hard-working and impoverished farmers paying rich landowners. The economic organization and distribution in a future leisure society may need a radical re-thinking. As John Maynard Keynes wrote in 1931 in his “Economic Possibilities for our Grandchildren,” “With a little more experience we shall use the new-found bounty of nature quite differently from the way in which the rich use it today, and will map out for ourselves a plan of life quite otherwise than theirs.”

But leisure opportunities in the more remote past were largely limited to a few activities, and most working people rarely enjoyed them. In the 20th century, with the shortening of the labor year, early retirement and long weekends, people have more free time, and modern technology has responded by producing a bewildering menu of enjoyable things to do.

The other sea change that new technology is eventually going to bring about is the demise of the “factory system” that emerged during the Industrial Revolution. For intelligent contemporaries, the rise of “dark, satanic mills,” bleak, ugly, noisy buildings in which people worked over 60 hours a week subject to the harsh discipline of the clock, was the most egregious consequence of the Industrial Revolution. Before 1750, there were few places in which people congregated to work together: farmers, artisans, doctors and clergymen worked mostly from home, when they felt like it. The factory introduced stringent controls on the time and space of labor.

Modern technology is well on the way to liberating more and more work from the tyranny of the rush-hour commute. We may not go back to the days in which people worked from home; instead they may work from wherever they happen to be, as anyone can observe during an hour in an airline lounge. Distance may not be quite dead, as Exeter College’s Frances Cairncross announced a decade and a half ago, but it is quite ill and its tyranny, one should hope, is near its end.

What will a future generation think of our technological efforts? During the Middle Ages, nobody knew they were living in the Middle Ages (the term emerged centuries later), and they would have resented the notion that it was an age of unbridled barbarism (it was not). During the early stages of the Industrial Revolution in the 18th century, few had a notion that a new technological dawn was breaking. So it is hard for someone alive today to imagine what future generations will make of our age. But to judge from progress in the past decades, it seems that the Digital Age may become to the Analog Age what the Iron Age was to the Stone Age. It will not last as long, and there is no way of knowing what will come after. But experience suggests that the metaphor of low-hanging fruit is misleading. Technology creates taller and taller ladders, and the higher-hanging fruits are within reach and may be just as juicy.

None of this is guaranteed. Lots of things can go wrong. Human history is always the result of a combination of deep impersonal forces, accidents and contingencies. Unintended consequences, stupidity, fear and selfishness often get in the way of making life better for more and more people. Technology alone cannot provide material progress; it’s just that without it, all the other sources of economic progress soon tend to fizzle out. Technological progress is perhaps not a cure-all for human ills, but it beats the alternative.



In a 2012 Making Sen$e report, Paul Solman examined the future of American workers.


This entry is cross-posted on the Rundown — NewsHour’s blog of news and insight.