“People first began thinking about technological progress as a force in history in the latter half of the eighteenth century, when the scientific discoveries of the Enlightenment began to be translated into the practical machinery of the Industrial Revolution. That was also, and not coincidentally, a time of political upheaval. The democratic, humanitarian ideals of the Enlightenment culminated in the revolutions in America and France, and those ideals also infused society’s view of science and technology. Technical advances were valued — by intellectuals, if not always by workers — as means to political reform. Progress was defined in social terms, with technology playing a supporting role. Enlightenment thinkers such as Voltaire, Joseph Priestley, and Thomas Jefferson saw, in the words of the cultural historian Leo Marx, ‘the new sciences and technologies not as ends in themselves, but as instruments for carrying out a comprehensive transformation of society.’

“By the middle of the nineteenth century, however, the reformist view had, at least in the United States, been eclipsed by a new and very different concept of progress in which technology itself played the starring role. ‘With the further development of industrial capitalism,’ writes [Leo] Marx, ‘Americans celebrated the advance of science and technology with increasing fervor, but they began to detach the idea from the goal of social and political liberation.’ Instead, they embraced ‘the now familiar view that innovations in science-based technologies are in themselves a sufficient and reliable basis for progress.’ New technology, once valued as a means to a greater good, came to be revered as a good in itself. [The quotes above are from an article by Leo Marx, “Does Improved Technology Mean Progress?” published in Technology Review, January 1987. Marx is best known for his 1964 book, The Machine in the Garden: Technology and the Pastoral Ideal in America.]

“It’s hardly a surprise, then, that in our own time the capabilities of computers have, as Bainbridge suggested, determined the division of labor in complex automated systems. To boost productivity, reduce labor costs, and avoid human error — to further progress — you simply allocate control over as many activities as possible to software, and as software’s capabilities advance, you extend the scope of its authority even further. The more technology, the better. The flesh-and-blood operators are left with responsibility only for those tasks that the designers can’t figure out how to automate, such as watching for anomalies or providing an emergency backup in the event of a system failure. People are pushed further and further out of what engineers term ‘the loop’ — the cycle of action, feedback, and decision making that controls a system’s moment-by-moment operations.

“Ergonomists call the prevailing approach technology-centered automation. Reflecting an almost religious faith in technology, and an equally fervent distrust of human beings, it substitutes misanthropic goals for humanistic ones. It turns the glib ‘who needs humans?’ attitude of the technophilic dreamer into a design ethic. As the resulting machines and software tools make their way into workplaces and homes, they carry that misanthropic ideal into our lives. ‘Society,’ writes Donald Norman, a cognitive scientist and author of several influential books about product design, ‘has unwittingly fallen into a machine-centered orientation to life, one that emphasizes the needs of technology over those of people, thereby forcing people into a supporting role, one for which we are most unsuited. Worse, the machine-centered viewpoint compares people to machines and finds us wanting, incapable of precise, repetitive, accurate actions.’ Although it now ‘pervades society,’ this view warps our sense of ourselves. ‘It emphasizes tasks and activities that we should not be performing and ignores our primary skills and attributes — activities that are done poorly, if at all, by machines. When we take the machine-centered point of view, we judge things on artificial, mechanical merits.’ [from Donald Norman, Things That Make Us Smart: Defending Human Attributes in the Age of Machines (Perseus, 1993)]

“It’s entirely logical that those with a mechanical bent would take a mechanical view of life. The impetus behind invention is often, as Norbert Wiener put it, ‘the desires of the gadgeteer to see the wheels go round.’ And it’s equally logical that such people would come to control the design and construction of the intricate systems and software programs that now govern or mediate society’s workings. They’re the ones who know the code. As society becomes ever more computerized, the programmer becomes its unacknowledged legislator. By defining the human factor as a peripheral concern, the technologist also removes the main impediment to the fulfillment of his desires; the unbridled pursuit of technological progress becomes self-justifying. To judge technology primarily on its technological merits is to give the gadgeteer carte blanche.

“In addition to fitting the dominant ideology of progress, the bias toward letting technology guide decisions about automation has practical advantages. It greatly simplifies the work of the system builders. Engineers and programmers need only take into account what computers and machines can do. That allows them to narrow their focus and winnow a project’s specifications. It relieves them of having to wrestle with the complexities, vagaries, and frailties of the human body and psyche. But however compelling as a design tactic, the simplicity of technology-centered automation is a mirage. Ignoring the human factor does not remove the human factor.”

— from Nicholas G. Carr, The Glass Cage: Automation and Us (W. W. Norton, 2014)