In an essay in 1984—at the dawn of the personal computer era—the novelist Thomas Pynchon wondered if it was “O.K. to be a Luddite,” meaning someone who opposes technological progress. A better question today is whether it’s even possible. Technology is everywhere, and a recent headline at an Internet humor site perfectly captured how difficult it is to resist: “Luddite invents machine to destroy technology quicker.”

Like all good satire, the mock headline comes perilously close to the truth. Modern Luddites do indeed invent “machines”—in the form of computer viruses, cyberworms and other malware—to disrupt the technologies that trouble them. (Recent targets of suspected sabotage include the London Stock Exchange and a nuclear power plant in Iran.) Even off-the-grid extremists find technology irresistible. The Unabomber, Ted Kaczynski, attacked what he called the “industrial-technological system” with increasingly sophisticated mail bombs. Likewise, the cave-dwelling terrorist sometimes derided as “Osama bin Luddite” hijacked aviation technology to bring down skyscrapers.

For the rest of us, our uneasy protests against technology almost inevitably take technological form. We worry about whether violent computer games are warping our children, then decry them by tweet, text or Facebook post. We try to simplify our lives by shopping at the local farmers market—then haul our organic arugula home in a Prius. College students take out their earbuds to discuss how technology dominates their lives. But when a class ends, Loyola University of Chicago professor Steven E. Jones notes, their cellphones all come to life, screens glowing in front of their faces, “and they migrate across the lawns like giant schools of cyborg jellyfish.”

That’s when he turns on his phone, too.

The word “Luddite,” handed down from a British industrial protest that began 200 years ago this month, turns up in our daily language in ways that suggest we’re confused not just about technology, but also about who the original Luddites were and what being a modern one actually means.

Blogger Amanda Cobra, for instance, worries about being “a drinking Luddite” because she hasn’t yet mastered “infused” drinks. (Sorry, Amanda, real Luddites were clueless when it came to steeping vanilla beans in vodka. They drank—and sang about—“good ale that’s brown.”) And on Twitter, Wolfwhistle Amy thinks she’s a Luddite because she “cannot deal with heel heights” given in centimeters instead of inches. (Hmm. Some of the original Luddites were cross-dressers—more about that later—so maybe they would empathize.) People use the word now even to describe someone who is merely clumsy or forgetful about technology. (A British woman locked outside her house tweets her husband: “You stupid Luddite, turn on your bloody phone, i can’t get in!”)

The word “Luddite” is simultaneously a declaration of ineptitude and a badge of honor. So you can hurl Luddite curses at your cellphone or your spouse, but you can also sip a wine named Luddite (which has its own Web site: www.luddite.co.za). You can buy a guitar named the Super Luddite, which is electric and costs $7,400. Meanwhile, back at Twitter, SupermanHotMale Tim is understandably puzzled; he grunts to ninatypewriter, “What is Luddite?”

Almost certainly not what you think, Tim.

Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them. In truth, the secret of their enduring reputation depends less on what they did than on the name under which they did it. You could say they were good at branding.

The Luddite disturbances started in circumstances at least superficially similar to our own. British working families at the start of the 19th century were enduring economic upheaval and widespread unemployment. A seemingly endless war against Napoleon’s France had brought “the hard pinch of poverty,” wrote Yorkshire historian Frank Peel, to homes “where it had hitherto been a stranger.” Food was scarce and rapidly becoming more costly. Then, on March 11, 1811, in Nottingham, a textile manufacturing center, British troops broke up a crowd of protesters demanding more work and better wages.

That night, angry workers smashed textile machinery in a nearby village. Similar attacks occurred nightly at first, then sporadically, and then in waves, eventually spreading across a 70-mile swath of northern England from Loughborough in the south to Wakefield in the north. Fearing a national movement, the government soon positioned thousands of soldiers to defend factories. Parliament passed a measure to make machine-breaking a capital offense.

But the Luddites were neither as organized nor as dangerous as authorities believed. They set some factories on fire, but mainly they confined themselves to breaking machines. In truth, they inflicted less violence than they encountered. In one of the bloodiest incidents, in April 1812, some 2,000 protesters mobbed a mill near Manchester. The owner ordered his men to fire into the crowd, killing at least 3 and wounding 18. Soldiers killed at least 5 more the next day.

Earlier that month, a crowd of about 150 protesters had exchanged gunfire with the defenders of a mill in Yorkshire, and two Luddites died. Soon, Luddites there retaliated by killing a mill owner, who in the thick of the protests had supposedly boasted that he would ride up to his britches in Luddite blood. Three Luddites were hanged for the murder; other courts, often under political pressure, sent many more to the gallows or to exile in Australia before the last such disturbance, in 1816.

One technology the Luddites commonly attacked was the stocking frame, a knitting machine first developed more than 200 years earlier by an Englishman named William Lee. Right from the start, concern that it would displace traditional hand-knitters had led Queen Elizabeth I to deny Lee a patent. Lee’s invention, with gradual improvements, helped the textile industry grow—and created many new jobs. But labor disputes caused sporadic outbreaks of violent resistance. Episodes of machine-breaking occurred in Britain from the 1760s onward, and in France during the 1789 revolution.

As the Industrial Revolution began, workers naturally worried about being displaced by increasingly efficient machines. But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”

So if the Luddites weren’t attacking the technological foundations of industry, what made them so frightening to manufacturers? And what makes them so memorable even now? Credit on both counts goes largely to a phantom.

Ned Ludd, also known as Captain, General or even King Ludd, first turned up as part of a Nottingham protest in November 1811, and was soon on the move from one industrial center to the next. This elusive leader clearly inspired the protesters. And his apparent command of unseen armies, drilling by night, also spooked the forces of law and order. Government agents made finding him a consuming goal. In one case, a militiaman reported spotting the dreaded general with “a pike in his hand, like a serjeant’s halbert,” and a face that was a ghostly unnatural white.

In fact, no such person existed. Ludd was a fiction concocted from an incident that supposedly had taken place 22 years earlier in the city of Leicester. According to the story, a young apprentice named Ludd or Ludham was working at a stocking frame when a superior admonished him for knitting too loosely. Ordered to “square his needles,” the enraged apprentice instead grabbed a hammer and flattened the entire mechanism. The story eventually made its way to Nottingham, where protesters turned Ned Ludd into their symbolic leader.

The Luddites, as they soon became known, were dead serious about their protests. But they were also making fun, dispatching officious-sounding letters that began, “Whereas by the Charter” ... and ended “Ned Lud’s Office, Sherwood Forest.” Invoking the sly banditry of Nottinghamshire’s own Robin Hood suited their sense of social justice. The taunting, world-turned-upside-down character of their protests also led them to march in women’s clothes as “General Ludd’s wives.”

They did not invent a machine to destroy technology, but they knew how to use one. In Yorkshire, they attacked frames with massive sledgehammers they called “Great Enoch,” after a local blacksmith who had manufactured both the hammers and many of the machines they intended to destroy. “Enoch made them,” they declared, “Enoch shall break them.”

This knack for expressing anger with style and even swagger gave their cause a personality. Luddism stuck in the collective memory because it seemed larger than life. And their timing was right, coming at the start of what the Scottish essayist Thomas Carlyle later called “a mechanical age.”

People of the time recognized all the astonishing new benefits the Industrial Revolution conferred, but they also worried, as Carlyle put it in 1829, that technology was causing a “mighty change” in their “modes of thought and feeling. Men are grown mechanical in head and in heart, as well as in hand.” Over time, worry about that kind of change led people to transform the original Luddites into the heroic defenders of a pretechnological way of life. “The indignation of nineteenth-century producers,” the historian Edward Tenner has written, “has yielded to the irritation of late-twentieth-century consumers.”

The original Luddites lived in an era of “reassuringly clear-cut targets—machines one could still destroy with a sledgehammer,” Loyola’s Jones writes in his 2006 book Against Technology, making them easy to romanticize. By contrast, our technology is as nebulous as “the cloud,” that Web-based limbo where our digital thoughts increasingly go to spend eternity. It’s as liquid as the chemical contaminants our infants suck down with their mothers’ milk and as ubiquitous as the genetically modified crops in our gas tanks and on our dinner plates. Technology is everywhere, knows all our thoughts and, in the words of the technology utopian Kevin Kelly, is even “a divine phenomenon that is a reflection of God.” Who are we to resist?

The original Luddites would answer that we are human. Getting past the myth and seeing their protest more clearly is a reminder that it’s possible to live well with technology—but only if we continually question the ways it shapes our lives. It’s about small things, like now and then cutting the cord, shutting down the smartphone and going out for a walk. But it needs to be about big things, too, like standing up against technologies that put money or convenience above other human values. If we don’t want to become, as Carlyle warned, “mechanical in head and in heart,” it may help, every now and then, to ask which of our modern machines General and Eliza Ludd would choose to break. And which they would use to break them.

Richard Conniff, a frequent contributor to Smithsonian, is the author, most recently, of The Species Seekers.

Technology will save us! Technology sucks! Where today’s techno-utopians cheer, our modern-day Luddites, from survivalists to iPhone skeptics to that couple that dresses in Victorian clothing and winds its own clock, grumble.

Understanding the former urge is pretty easy: It’s a fantasy of a perfect world. The Luddite impulse, however, isn’t so clear—and we shouldn’t automatically dismiss it as one that scapegoats technology for society’s ills or pines for a simpler past free of irritating gadgets. Rather, today’s Luddites are scared that technology will reveal that humans are no different from technology—that it will eliminate what it means to be human. And frankly, I don’t blame them. Humanity has had such a particular and privileged conception of itself for so long that altering it, as technology must inevitably do, will indeed change the very nature of who we are.

To understand the appeal of being a Luddite, you need only read these words of Leon Trotsky:

To produce a new, “improved version” of man—that is the future task of Communism. Man must see himself as a raw material, or at best as a semi-manufactured product, and say: “At last, my dear homo sapiens, I will work on you.”

This vision, promptly disposed of by Stalin, is so intuitively unappealing that even with the return of authoritarianism to Russia, neither Vladimir Putin nor any of his associates have revived the idea of scientifically perfecting man. Such language brings back bad memories of eugenics, Nazi experiments, Tuskegee, and worse. Yet those fears don’t stop us from using technology to become those new, improved versions of ourselves—from buying up iPads and smartphones and storing the digital residue of our lives in the cloud. In reaction, modern-day Luddism arrives in a variety of forms, many of them vulgar. Writers from Neil Postman to Jerry Mander, in books like Amusing Ourselves to Death and Four Arguments for the Elimination of Television, blame technology for making us brain-dead sheep; the solution, of course, is to eliminate it. (Spoiler: That’s not going to happen.) But more thoughtful writers, like philosophers Wilfrid Sellars and Willem deVries, recognize that squaring our conceptions of ourselves with what science and technology tell us entails some pretty unsettling revelations.

At its origin in the early 19th century, Luddism was not about technology’s evils—it was about worker rights and a fear of job losses. Angry English workers marched together and destroyed machinery in what was essentially a vigilante labor movement, only for many of them to be tried and executed by their government. (The history is covered well by E. P. Thompson’s classic The Making of the English Working Class.) Over time Luddism’s definition became more puritanical, boiling down to a worldview that was anti-technological in general. But ruminating on nature (as in the beautiful, evocative books of Robert Macfarlane and Tim Robinson), hating nuclear weapons, or preferring birdwatching to Clash of Clans doesn’t make you a Luddite. While contemporary Luddism fixates on the evils of technology, it’s not driven by the threat of technology supplanting or replacing humanity. Rather, Trotsky’s quote reminds us of the possibility that we will come to see ourselves as no different from machines. Technology doesn’t dehumanize us; it’s the knowledge behind it that does. Fighting the machine, then, is fighting a vision of the future in which the human is the machine.

Luddism is not nostalgia for the past. There is so much wrong with the past that it’s practically an argument against Luddism in itself. Even the supposed evils of technology too often turn out to be evils baked into the soul of humanity. Hannah Arendt dwelt on the mechanized totalitarianism enabled by the industrial revolution, claiming it had made the Nazis possible. Yet a host of low-tech atrocities, from the Armenian genocide to the Rwandan genocide and countless third-world dictatorships, have shown that organized terror and slaughter can be achieved with no more technology than a radio and a machete. If you truly wish to go backward, you must go back beyond the invention of what a friend termed the most harmful human invention of all time: agriculture. Agriculture not only enabled the exponential growth of the population of suffering souls, but also set the scene for tyranny, slavery, and every other atrocity that recurs throughout human history. But unless you’re willing to go that far, to be a Luddite you only have to advocate against future technology, not for a return to the past.

Consequently, the Luddite impulse is to embrace a certain distinction between human and machine. Thomas Pynchon put his finger on it in 1984 (“Is It O.K. To Be a Luddite?”) when he wrote that the midcentury Luddite impulse, embodied particularly in science fiction, embraced “a definition of ‘human’ as particularly distinguished from ‘machine.’ ” “Humanity” was held up as an incommensurable yardstick: You either had it or you wanted it. In Star Trek, the android Data rose to gain humanity, while those who were assimilated by the Borg ceased to be human.

Sometimes this distinction did not favor the human; in the 1960s, gloomy science-fiction writers started to point the finger of blame not at technology but at humans themselves. But whether humans were better or worse than technology, they were always different from it. It was rare that humans were superseded and made redundant, as in the science fiction of Olaf Stapledon and Stanislaw Lem, or that they ran toward their own negation, as with the J. G. Ballard heroes who deny the human, abet the apocalypse, and have sex with cars. (Ballard is the anti-Luddite par excellence.) Otherwise, the human (or an alien surrogate for humans) tended to remain at the center of the picture. “Feelings,” “values,” “creativity,” “culture,” and other such “human” qualities serve as a barometer to separate us from everything else, whether it’s animals, robots, or textile machinery. And membership has its privileges: Humans are entitled to freedom and equality and fraternity, not to be used as a means to an end. We “dehumanize” those we consider inferior, whether they’re slaves or Jews or those-jerks-across-the-river. The original Luddites objected to capitalists treating them as interchangeable labor; today’s Luddites see technology as threatening the value we assign to individual lives, collapsing them into utilitarian statistics.

This human exceptionalism has its roots far back in Renaissance humanism, when the cosmos was separated into three clear tiers: nature at the bottom, man in the middle, and God at the top. Man’s position was unique, rooted in the possession of a soul and certain unique qualities that flowed from it. Nature might be a clockwork machine, but humans most certainly were not. With the exception of a couple of radicals like 18th-century philosopher La Mettrie, author of L’homme machine (Machine Man), thinkers were generally hesitant to erase that line; even atheist-materialist Karl Marx described humans as an absolutely singular species.

When you start talking about fine-tuning and improving the human, you move toward treating humans as tools and raw materials—in other words, how we treat machines and animals. Unfortunately, once you start down the road of medicine and transplants and heart monitors and antidepressants and biotechnology, it becomes very hard to stop, even with strict ethical guidelines. Tools can be fixed, and damned if most of us aren’t broken, so things like designer babies and Clones-R-Us become simultaneously appealing and horrifying. This unease is where Luddism begins to have its pull on most of us, because it eats at our previously robust sense of the human.

There aren’t too many human universals, but those that exist are powerfully resilient: family and parenting, emotions, language, some kind of morality, taboos, art, some version of gender roles. While the specifics of each vary across time and cultures, the abstract categories have remained fairly robust. Those who strove to aggressively “improve” the human, from Plato to Campanella to Trotsky, often advocated the breakup of the nuclear family. The parent-child bond has remained mostly intact through generations of sweeping changes. Single-motherhood, same-sex marriage, and polyamory are pretty mild changes to the general model. But while you can argue that evolution has aligned us with certain conceptions of family, it’s a naturalistic fallacy to say that they’re necessarily better for us. Rather, these conceptions are markers of humanity. So is gender: Many argue for changing our ideas around gender roles, but few argue for eliminating the very concept of gender. To hold on to family, gender, and other human signifiers, simply because they’re human signifiers, is to be a Luddite. Giving up any one of them—probably with technological assistance—is taking a step toward seeing the human as an empty container for whatever you want to pour into it, no different from a computer running software.

There is one special human universal that deserves attention: death. The process of aging and death is one of the very few absolute constants of human existence and its eradication would in itself spell the end of “the human” as we know it. At the hypothesized point of the Singularity, when we upload our minds to the cloud and leave our bodies behind (or create new ones), the entire notion of an indivisible and distinct human individual will disappear, and we’ll become indistinguishable from software (or hardware). We’ll turn, as the science-fiction writer John Sladek suggested, from oranges into orange juice: “We [will] become ‘etherized,’ in both of Eliot’s senses of the word: numb and unreal.” If that makes you nervous—if it is more threatening than the prospect of inevitable death—then you are a Luddite. (I wouldn’t mind a few extra centuries, but the whole uploading thing spooks me.) Cordwainer Smith took this idea a step further in his own science-fiction stories of the 1950s, in which “The nightmare of perfection had taken our forefathers to the edge of suicide,” and so humanity brings back politics, money, multiple languages, diseases, and death in the ultimate Luddite movement, termed the Rediscovery of Man.

For the thinking Luddite, technology becomes a serious threat only to the extent that it threatens to collapse the boundary between human and machine. I think it very unlikely that this boundary will collapse in practice any time soon, despite the predictions of the transhumanists who cheer on the Singularity. These Luddite fears, however, are going to inform every step that we take toward “improving” the human. Already, psychology has taken a beating from neuroscience and pharmacology, making us wonder how rational and free we actually are. Our latitude to hold on to our traditional ideas—to be an intelligent Luddite, in other words—may lie in the extent to which we remain ignorant of scientific realities about the world even as we gain the capacity to manipulate them. A philosopher like Patricia Churchland aggressively and convincingly argues for the total replacement of “folk psychological” statements like “I’m stressed out” with a sentence like “My amygdala is overexerting control over my cerebral cortex,” but abandoning our current language of emotions is something many, including myself, see as a real and even dangerous loss. We’re dumb enough to believe that humans are still special—for the time being.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.
