Tech Panics, Generative AI, and the Need for Regulatory Caution
Summary: Generative artificial intelligence (AI)—AI systems that produce novel text, images, and music from simple user prompts—has important applications in many fields, including entertainment, education, health care, and retail. However, exaggerated and misleading concerns about the tool's potential to cause harm have crowded out reasonable discussion about the technology, generating a familiar, yet unfortunate, "tech panic." Until the hysteria dissipates, policymakers should hit pause on any new legislation or regulations directly targeting generative AI.
Significant technological changes inevitably disrupt the economy and society, and the potential for major change induces both inflated fears and expectations. Recent developments in artificial intelligence (AI)—a branch of computer science that studies computer systems that perform operations previously requiring human intelligence—have heightened imaginations about what the future holds. AI doomsayers predict job destruction, declining human intelligence, loss of privacy, algorithmic manipulation, and, sometimes, the end of humanity.1
Fears about AI have reached new levels because of the emergence of generative AI. Generative AI—a novel tool that can produce complex text, images, and videos from simple inputs—promises to democratize the creative sector and enable entirely new forms of creativity. This novelty has impressed technology enthusiasts but alarmed many others—especially those who see AI as encroaching on creativity, which many regard as an essential trait separating humans from machines.
Yet, technology and human creativity have long been intertwined, and fears about the negative impact of new innovations have been overstated in the past. For example, prior innovations in the music sector led to fears that record albums would make live shows redundant, that radio would destroy the record industry, or that sampling and other means of digital editing would undermine musical artistry. But these concerns never materialized. Over time, these and other tech panics fizzled out as the public embraced the new technology, markets adapted, and the initial concerns proved overblown.
The fears around new technologies follow a predictable trajectory called "the Tech Panic Cycle."2 Fears increase, peak, then decline over time as the public becomes familiar with the technology and its benefits. Indeed, previous "generative" technologies in the creative sector, such as the printing press, the phonograph, and the Cinématographe, followed this same course. But unlike today, policymakers did little to regulate or restrict those technologies. As the panic over generative AI enters its most volatile stage, policymakers should take a deep breath, recognize the predictable cycle we are in, and put any regulation efforts aimed directly at generative AI temporarily on hold.
The merits of encouraging or curbing any new technology depend on the available use cases and potential harm. While many accept this premise, alarmists imagine only catastrophic risks or prefer the state of technology as it is. Many alarmists have an incentive to find or exaggerate a reason for alarm because doing so attracts funding for their advocacy. These actors begin to seed panic when a new technology arrives, setting off a chain reaction that soon erupts into a frenzy.
As the public begins to use and become familiar with a new tool, it soon becomes clear that the alarmists exaggerated the risks or misled the public about their concerns. Panic starts subsiding, and the media slowly lose interest (though they rarely correct the record). As the innovation becomes mainstream, only the alarmists are left dispensing sporadic and less compelling concerns before eventually moving on to new technologies. This pattern constitutes the Tech Panic Cycle (see figure 1).
Figure 1: The Tech Panic Cycle3
The cycle charts four stages: Trusting Beginnings, Rising Panic, Deflating Fears, and Moving On.
At the beginning of the cycle, knowledge of the new creative tool is limited to those who invented it, innovators in the field, commentators, and domain experts. Engineers are still figuring out its potential and innovators are considering commercial use cases. Fears remain low because the tool is neither well known nor widely used.
But doomsayers soon catch wind of the new tool and raise the alarm. Since alarmists cannot yet pinpoint where the tool is misused, they target imaginary harms rather than real ones. For example, AccessNow recently claimed that Microsoft trained Vall-E, a generative AI tool not yet public, by secretly listening to Teams users.4 If true, this would justify anxieties about the power and willingness of AI companies to deceive their users. But the claim was false: Vall-E used Libri-Light, a publicly available collection of audio archives, as its training data.5
When the impact of a new generative technology is more tangible, such as its place in the workforce, alarmists often mislead by using needlessly emotional rhetoric.6 This rhetoric inevitably proves false but serves to frighten many. Since the public's understanding of a new technology at the Trusting Beginnings stage is so limited, the public and media often accept claims of the technology's destructive potential. This moment marks the point of panic.
Fears spread quickly among alarmist networks and those who have the ear of policymakers. At the Rising Panic stage, policymakers susceptible to hot-button issues legitimize the fears by repeating them in legislative drafts, hearings, and public speeches and statements. Legacy industry, which feels threatened, often leads the charge. Unable to resist an opportunity for sensationalist content, journalists pile in with well-rewarded clickbait coverage.
At the Rising Panic stage, dystopian rhetoric attracts greater attention and sweeps away consumers' initial optimism about a new tool. The media ecosystem becomes so saturated with overblown fears that only the most outrageous claims remain. Fears eventually reach an apex at the end of the Rising Panic stage: the height of hysteria.
The Deflating Fears stage dawns when the public embraces the new tool and accepts its merits. By this stage, it is clear that many fears will never materialize. Disturbed by the growing popularity of the new technology, alarmists continue to incite panic but fail to gain the traction they once did. Occasional scandals and new features cause micropanics, but the public is now less easily fooled. The point of practicality marks the end of this stage. Society integrates the new technology, and people no longer believe the doomsayers.
The tech apocalypse never arrives. At the Moving On stage, previous fears are exposed and ridiculed (in some cases by the same people who first raised the alarm). Wired's alarmist article in 2000, "Why the Future Doesn't Need Us," was followed eight years later by the more measured "Why the Future Still Needs Us a While Longer."7 Once-feared tools are normalized, and cooler heads lead policy conversations. By this stage, alarmists have turned their attention to the latest shiny technology hype. New panics crowd out the old. And the cycle repeats.
Advances in technology have resulted in tech panics for printed books, recorded sound, and motion pictures. The invention of the printing press and advances in paper technology created the tech panic for printed books; the invention of the phonograph and of portable means to store sound, such as the record, created the tech panic over recorded sound; and innovations in photography and film materials created the tech panic for motion pictures. And so it is with generative AI: Advances in machine learning algorithms and computing capacity have created a tech panic for generative AI.
These creative tools—printed books, recorded sound, motion pictures, and generative AI—share three traits. First, each has a range of functions. Recorded sound, for example, is used to broadcast news, to signal instructions, or as music to entertain. Printed books contain everything from scientific treatises and classic literature to pornographic works and vile calls for genocide. Second, each presents new types of content to the public. Motion pictures, for example, brought to life scenes and settings unavailable through still photographs or written accounts. Third, each greatly broadens the availability of content by driving down its cost of production. New, cheaper forms of literary content, for example, emerged as the price of printing books plummeted.
In each panic, an innovation in the creative sector makes it much easier to produce new content. Some people, especially incumbents and elites, fear the implications of this new content, and concern reaches a boiling point as policymakers and alarmists work together to slow its progress, with news media unable to resist the drama. Eventually, however, the public embraces the tools and moves on.
Printed books changed the way people share information. Indeed, the printing press facilitated the mass production of all kinds of written materials, including books, newspapers, and pamphlets. Before its invention, written material had to be copied by hand: a fraught, laborious process that limited the distribution of knowledge to the minority that could afford it. Johannes Gutenberg mechanized the printing process in the 15th century, making it possible to produce copies of written material quickly and cheaply.8
In the Trusting Beginnings stage, when the printing press was prohibitively expensive, the small literate class welcomed the tool as a means to share and receive knowledge—and books became a status symbol.9 But monks, who held an effective monopoly on copying books by hand, fretted. "He who ceases from zeal for writing because of printing is no true lover of the Scriptures," declared the 15th-century abbot Johannes Trithemius, defending the work of scribes against those who used the new technology.10 Still, fears stayed low while printing remained rare.
But advances in printing and paper technology led to more books. Europe, for example, printed more in the 18th century than in the three previous centuries combined.11 The 19th century saw yet greater leaps in innovation—the number of pages that could be printed per hour increased from 480 at the beginning of the century to 2,400 just decades later (and to 90,000 by the century's end).12 These advances sent the price of books plummeting and sparked concerns about the effect of the printing press on society. As English poet and literary critic Samuel Taylor Coleridge lamented in his influential Biographia Literaria (1817), "[T]he multitude of books and the general diffusion of literature, have produced other and more lamentable effects in the world of letters;" books once respected as "religious oracles" had been "degraded into culprits to hold up their hands at the bar of every self-elected … judge."13 The fear that those without status or formal training could gain power and influence through writing—a fear familiar in the Internet era—spread quickly and marked the point of panic.
As the use of the printing press proliferated, many became particularly worried about the reading of novels. "In the creation of fiction, we could lose the bitterness and barrenness of truth!" lamented an early 19th-century author, Sinclair Hamilton, while the Augusta Herald warned that novels led people into "an enchanted country … They corrupt all principles."14 The ability to generate content so freely disturbed alarmists.
During the Rising Panic stage, Saint James's Chronicle (1822) deplored that "profligate writers" were seducing "a greater number of Book-sellers into the publication of books of an evil tendency," and the Leicestershire Mercury warned against the easy distribution of novels in 1847: "The multitude of books is a great evil. There is no measure or limit to this fever for writing; every one must be an author; some out of vanity, to acquire celebrity, and raise up a name; others for the sake of lucre and gain."15 In 1889, academic and generative tech alarmist John Meiklejohn delivered a speech, "Literature versus Books," in which he proclaimed that "the disease of the age was distraction, hurry, interest in far too many things, with the consequent result of mental indigestion and muddle-headedness."16 The concern that advances in technology had made too much content too widely available foreshadows each subsequent tech panic.
And as with many tech panics, alarmists made fanciful proclamations about the subversion of the youth. Technological advances spawned a new genre of cheap novels—"dime novels," "flash literature," or "pulp fiction"—which ramped up hysteria. In his essay Concerning Printed Poison (1885), prominent writer Josiah W. Leeds decried the "evil effect of ‘flash literature’" on the youth, the "dreadful and pernicious influence of the cheap novels which abound in our midst," and the public libraries that "have weakly succumbed to the craving for fiction, even to the extent of supplying trashy, vapid, and often immoral works."17
At this moment, only outrageous claims could survive. As figure 2 shows, critics held novels responsible for homicides, suicides, and "depraved sentiments."18 The Courier-Post, for example, described a 16-year-old girl, Cecile Guimaraes, whose father forbade her the attention of young men and who was driven to suicide. But, the paper claimed, it was not her father's austerity that caused her anguish, nor a mental illness, but rather "sentimental novels." Meanwhile, the Boston Globe ran a story in 1884 about two 14-year-old boys who had deserted their homes after reading dime novels, and the Saint Paul Globe published an article on "Dime Novel Victims" describing an 11-year-old sent to the "asylum for the insane" for repeating lines from a novel he was reading.19 "The paralyzing effect of the false notions instilled in the plastic and easily wounded minds of the boys is little realized," the paper warned, asking, "What is your boy reading?"20 This era of concern marked the height of hysteria.
Figure 2: The height of hysteria, 1863–189721
However, as literacy rates grew and the prices of books continued to fall, hysteria surrounding novels began to subside. Many of the fears, helped by the spread of scientific knowledge, were understood to be false, and authorities eventually conceded that books were not to blame for social ills.22 In the early 20th century, alarmists had to resort to less-interesting concerns, such as the dangers of "reading in bed" rather than of novels in and of themselves.23 Eventually, even alarmists conceded that novels had many more positive effects than negative. By 1952, The Vancouver Sun was lamenting that a third of UK children could not read, blaming "movies, TV, radio."24
Although the Moving On stage had dawned, lingering fears persisted and aversions resurfaced with later innovations in written materials. For example, in the 21st century, the legacy publishing industry denounced e-books as "a stupid product," complaining about their lack of creativity.25
Recorded sound forever changed the enjoyment of music, as it became cheaper to listen to, produce, distribute, and share. Devices that could record sound emerged in the middle of the 19th century, and, after Thomas Edison's invention of the phonograph in 1877, commercially viable phonographs were being sold to the public at large by the century's end.26 Foundations produced "talking books" for those who were illiterate or blind.27
This meant music entered people's homes in ways unimagined before—and, at the turn of the century, alarmists started to take note. The New York Times warned in 1878 that such recording devices would have a chilling effect in the home—"Who will be willing even in the bosom of his family to express any but the most innocuous and colorless views?"—and gruesomely recommended that "something ought to be done to Mr. Edison… there is a growing conviction that it ought to be done with a hemp rope."28 Because the new technology was initially restricted by technical shortcomings, live music was unthreatened and remained the preferred means of consuming music.
But improvements in recording techniques soon changed this, and a new breed of artists emerged that earned income entirely from records. These changes brought about the point of panic. In his 1908 essay The Menace of Mechanical Music, American composer John Philip Sousa lamented recorded sound for degrading musical skill—"Singing will no longer be a fine accomplishment"—and for damaging romance, discouraging study, and even numbing the entrance to war.29 These overblown concerns unfortunately distracted from the more legitimate issues Sousa raised in the essay, such as whether reproducing an artist's composition "a thousandfold on their machines" violates the artist's intellectual property rights.30
During the Rising Panic stage, the legacy music industry began to fret. In 1930, the American Federation of Musicians (AFM), a union, unsuccessfully complained to the Federal Radio Commission to limit the playing of records on air.31 James Petrillo, then leader of AFM, struck fear into musicians with colorful rhetoric, once stating that nowhere "in the mechanical age does the workman create the machine which destroys him, but that's what happens to the musician when he plays for a recording."32 The introduction of sound into movies when The Jazz Singer premiered in 1927, followed by the invention of the jukebox around 1932—capable of filling hotels, restaurants, and bars with cheaper music—sent fears soaring. The New York Times reported in 1928 that "organized musicians the world over are endeavoring to erect barriers against the epidemic of unemployment."33 Letters to the paper that year also relayed concerns that recordings were degrading the quality of music.34
A more pernicious anxiety during this period was the impact of recorded sound on society's morals. Since music discovery was no longer limited to what was playing on the radio, the public could seek out and share music—listening became a creative endeavor. Consumers, especially the young, discovered entirely new genres and new forms of content that concerned their parents—in ways that remain familiar today. Jazz took particular blame. A series of newspaper clippings during the early years of the phonograph reveals an environment of outrageous claims made about jazz, a newly accessible genre of music: "Jazz Music Blamed for Delinquency of Girls Today" (1922), "Jazz Blamed For Large Number Of Deaths By Suicide" (1924), "Jazz Blamed For Murder" (1926), and "Jazz Blamed for Bodily Ailments" (1927).35 In 1927, a physician at the University of Heidelberg ludicrously inferred that "this modern jazz age" was responsible for tooth decay.36 No doubt this tech panic worked in tandem with and fueled a race panic about Black music corrupting society.37 The fearmongering around music content aggravated concerns about job losses and skill degradation, and marked the height of hysteria.
Over the years, however, the public continued assimilating recorded sound into their daily lives. A 1942 study of 796 radio stations in the United States showed that, of radio time devoted to music, 55.9 percent was recorded.38 Although the fears of job displacement were clearly unfounded by this time—recordings did not replace live shows—exaggerated concerns and overstated anxieties remained. AFM in the United States contended that the "unrestricted commercial use of records" remained a threat to the employment of musicians.39 And in 1942, the union banned its artists and engineers from recording music, sending shockwaves throughout the United States.40 In the United Kingdom, the musicians' union detailed plans in its Report of the 1945 Delegate Conference to "limit the extent to which gramophone records may be used for public entertainment."41
By now, however, the public had embraced the technology: A poll found 73 percent of Americans wanted legal action taken against the union.42 So, following a hearing with the Senate Committee on Interstate Commerce, the union agreed to wind down the recording ban in return for royalties for members.43 Thereafter, fears deflated, and the Moving On stage dawned.
New innovations in recorded sound reliably resurrect similar panics. The rise of electronically produced music in the 1970s, and especially disco—one of the first pop genres designed for club venues—led to familiar fears that live musicians would soon be out of a job.44 Music technology, such as sequencers (machines that edit and play back music) and drum machines, is intrinsic to disco's repetitive characteristics—consider the classic disco record "I Feel Love" by Donna Summer. But disco's reliance on technology over live music, and its mechanical and industrial characteristics, disturbed classical musicians.45 Worst of all, disco was popular. Discotheques and dance venues were seen as a threat to those who played live. But campaigns that responded to the perceived threat of disco records, such as the Keep Music Live campaign, look misguided now.46 Revenues from live music in the United States now dwarf those of the 1970s.47 Instead of undermining the industry, the new technology spawned new categories of music, such as electronic dance music (EDM), and a new type of live performer: the "disc jockey" (DJ). For the most part, recorded music has become a widely accepted and celebrated art form, proving that initial fears about new technologies are often exaggerated.
At the turn of the 20th century, a new technological marvel was sweeping across Europe: the invention of motion pictures (film). Moving images projected onto a large screen allowed people to experience visual storytelling like never before, and movies became a medium with a mass audience, appealing to the literate and illiterate, adults and children.48 In the United Kingdom, weekly crowds at movie theaters jumped from 7 million in 1914 to 21 million in 1917, dwarfing any other form of spectator entertainment.49 In the United States, the number of nickelodeons—simple theaters that charged attendees 5 cents each—doubled in 1908, and by 1910, around 26 million Americans attended them weekly.50 Between 1911 and 1918, a third of New Yorkers went to the movies once a week; in some cities, residents attended on average more than once a week.51
Technological advances in movie cameras, film stock, and projectors meant films could look better and run longer. The Cinématographe, for example, was a portable camera-projector that transformed movie production, as scenes could be shot in a greater variety of locations and ways, and distribution, as films could be projected in rooms of all sizes to audiences of all sizes, enabling them to become popular around the world.52 The Latham Loop, invented at the end of the 19th century and still used today, carefully threaded film and meant that films once limited to a matter of seconds could become feature-length stories.53 But the realism of films irked alarmists. In 1896, a myth spread that a Parisian audience was so convinced that a black-and-white train was coming toward them that the crowd panicked and a stampede ensued.54
The growing concern was that these new, technologically enabled long narrative films would unduly influence the audience and corrupt their values. Purists who once worried about reading now feared that giant, realistic motion pictures encouraged immoral behavior. In a letter to the UK's Home Office in 1916, the wife of Bramwell Booth, author and then-General of the Salvation Army, warned that films were "more powerful" than "undesirable literature" and their influence "more durable and lasting."55 French officials were similarly concerned: "Scenes of murder, homicide, suicide, theft, sabotage, criminal activities and attacks, is [sic] too often marked by a desire for realism which has led to the non-exclusion of any detail, however shocking."56
As panic rose in Europe, the Danish minister of justice legitimized these fears in 1907 by instructing local police chiefs: "Cinemas, cosmoramas and similar establishments including variety theatres, (showing) pictures which may be considered offensive either morally or through the way in which the carrying out of crime is shown or which by their nature are apt to corrupt their audience and especially the young people who are present in great numbers."57
John Collier, a member of the U.S. National Board of Censorship in the 1910s, declared of small movie theaters, "It is an evil pure and simple, destructive of social interchange, and of artistic effect."58 In 1921, officials in the Var region of France issued an edict on film, which included "some actors of these scenes [that] appear as a special kind of hero which gives to the performance the character of a veritable justification of criminal acts; considering that the cinemas are much frequented by young people; considering that public order and tranquility cannot be maintained, any more than can morality, with this continual instigation of young people to unhealthy exploits."59
The effect of this content on the youth was widely overblown and served to disparage and patronize young people. This prefigured modern panics over violence in video games—panics where studies show only extremely small effects (averaging 0.4 percent to 3.2 percent) linking violent video games to minor aggressive behavior, all while distracting from the main causes of youth violence: educational disparities, mental illness, and poverty.60 Nevertheless, concerns over appropriate age ratings and warnings for content have merit and remain relevant today. Unfortunately, less-legitimate concerns soon emerged.
Echoing perceptions of "dime novels," moving pictures were initially considered "lowbrow"—a form of cheap entertainment for the working class rather than for the sophisticated or artistic.61 In 1916, the Guardian lamented that "street urchins and vacuous boys and girls" fill cinemas and just "sit and are amused," while the Church Times likewise warned of the laziness of the new generation: "If Waterloo was won on the playing-fields of Eton—what success in future battles will be due to picture palace performances?"62 Doctors and social workers in the United States warned that theaters caused "a sort of dazed ‘good-for-nothing’ feeling, lack of energy, or appetite."63
Figure 3: Rising panic, 1912–192064
The concerns about moving pictures reached a fever pitch in the 1920s. At the height of hysteria, politicians were so convinced that moving pictures threatened society that they began heavily regulating their content. In the United States, the Motion Picture Production Code mandated that movies must promote good behavior, respect the state, and uphold "Christian values."65 The British Board of Film Censors (BBFC), set up in 1913, had two rules—no nudity and no personification of Christ—but, by 1926, had seven, including "questions of sex" and "crime."66 BBFC infamously banned the gangster film The Public Enemy (1931), while local authorities such as those in Birmingham and Kent went beyond BBFC policy by banning Scarface in 1932.67
Eventually, the tide started to turn. As with "dime novels," the influence of this new content was greatly overstated. Social ills were understood to be a consequence of society, not technology. An influential study of film's effect on children, Our Movie Made Children (1935), concluded:
Motion pictures, scarcely a generation old in our experience, have proved themselves to be one of those necessary inventions of mankind whose absence or deletion from our civilization is by now virtually unthinkable. At their best they carry a high potential of value and quality in entertainment, in instruction, in desirable effects upon mental attitudes and ideals, second, perhaps to no medium now known to us. That at their worst they carry the opposite possibilities follows as a natural corollary.68
Rather than corrupting the youth, some alarmists-turned-enthusiasts came to see the new form of entertainment as a means for the youth to resist the delinquency of the streets. Britain's Home Secretary Herbert Samuel said in 1916 that "the recent increase in juvenile delinquency is, to a considerable extent, due to demoralising cinematograph films," but, by 1932, Samuel, once again Home Secretary, told the British Parliament that "on the whole the cinema conduces more to the prevention of crime than to its commission … In general, the Home Office's opinion is that if the cinema had never existed there would probably be more crime than there is rather than less."69
Though society had moved on from this panic, fears about films' impact on society have resurfaced with new innovations. For example, there was the "video nasty" panic in the United Kingdom in the 1980s, when the proliferation of home videos, enabled by the video cassette and low-cost filming equipment, was seen by many as a threat to the social order.70 At the height of the panic, when the Daily Mail ran the headline "Ban video sadism now" and described the "Rape of our children's minds," the UK parliament made it illegal to supply a video that the British Board of Film Classification had not approved. The censorship laws have since been relaxed, and many of the so-called "nasties" appear tame today.71 Despite subsequent micropanics, motion pictures are commonplace, and films are accepted by the masses. Indeed, much of the world now carries a motion picture player in its pocket.
Four elements remain influential throughout all tech panics: elitism, legacy industries, antitech crusaders, and news media.
What is often so outrageous about new technology is its accessibility to the broader public. In traditional creative industries, only certain elites can produce and create. Innovation disrupts the status quo and democratizes the field, invoking outrage and disdain among the elite. The American "dime novels," French "feuilletons," or British "penny dreadfuls"—terms for a range of affordable literature and magazines—often told working-class stories, featured working-class protagonists, and were popular among the working class.72 The Wild Boys of London was a classic working-class serial of the age, following the "adventures of poor outcast children."73 Elites sniffed that such literature caused the "demoralization" of the working class.74
Figure 4: The "Wild Boys of London," 1864–186675
Similarly, nickelodeons in the United States were said to occupy "the physical and psychic space of the urban street life."76 As The Cinema, a journal from the UK's Cinematograph Exhibitors Association, put it, the main mission of the film industry was to be the "poor man's place of amusement."77 "London working-classes as espousing the vulgar and glorying in the detestable!" is how one Methodist clergyman described his first visit to the cinema, while another minister disparagingly compared cinemas to "tons of filthy literature."78 Elite consumers consistently struggle to contend with democratizing technologies.
New technology marks an opportunity for professional tech critics—those whose craft relies on a perception of danger—to ramp up fears. The English Review, a literary magazine in the 1920s, played to its audience by stoking fires about the impact of motion pictures. In 1922, it declared, "It is perhaps the greatest propagandist power ever invented. It has practically brought America into war."79
Many modern crusaders are vested in attracting funding for alarmist advocacy or selling books with provocative titles such as Who Owns the Future?, Weapons of Math Destruction, Algorithms of Oppression, and The Age of Surveillance Capitalism. Other crusaders are so-called "Prodigal Tech Bros": "tech executives who experience a sort of religious awakening. They suddenly see their former employers as toxic and reinvent themselves as experts on taming the tech giants."80 And alarmism is a lucrative business too. AI doomer Eliezer Yudkowsky, who predicted the Singularity—the end of humanity owing to the arrival of superhuman machine intelligence—by 2021, set up a nonprofit that received nearly $15 million in grants from Open Philanthropy from 2016 through 2020.81 Indeed, there is a thriving antitech industry that must keep itself in business by latching onto the newest and greatest technologies and peddling narratives of fear.
Tech alarmists and news media share an affection for dystopian imagery. For both, it furthers the objective of garnering more attention. For news media outlets, it also fulfills a writer's artistic yearning. Once news media get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better. The Daily Star's "Humans ‘could go extinct’ when evil ‘superhuman’ robots rise up like The Terminator" was as plain as it was unbeatable.82
Disappointingly, even broadsheets such as The New York Times succumb to the thrill of tech panics. For example, its headlines used moral language to describe the technology of the day: novels ("The Evils of Dime Novel Literature" [1879]) and motion pictures ("Censors Destroyed Evil Picture Films" [1911]).83 More recently, the paper informed its readership in 2023 that Bing AI—a chatbot powered by generative AI—was alive and in love with its reporter.84 Given the news media's influence on the public's attitude toward technology, they play a crucial role in a panic reaching the height of hysteria. Much of the media coverage of technology is unfavorable, sometimes driven by explicit top-down editorial decisions.85 Indeed, headlines critical of technology have become common in the last few decades as overall media coverage of technology has grown more negative.86
In the past few years, a new technology has emerged that has begun to change the way people create content: generative AI. New machine learning models can produce text, images, and even music from simple human input. These tools offer novel and productive ways for consumers and businesses to create, exchange ideas, and have fun. They are also low cost and widely available, which is creating a democratizing effect in industries with high barriers to entry.
First proposed by Ian Goodfellow in 2014, generative adversarial networks (GANs) used dueling neural networks to generate images.87 At the Trusting Beginnings stage, the early risks of these tools were apparent, with many irked by the ability of models in 2017 to create deepfakes.88 Later, deepfakes of Russian President Vladimir Putin and North Korean leader Kim Jong-Un went viral, stoking worries about disinformation.89
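For readers curious what "dueling" neural networks look like in practice, the sketch below is a minimal, illustrative example written in PyTorch (a framework chosen here purely for illustration; the report names none). It shows the generator-versus-discriminator training loop that defines a GAN, with arbitrary layer sizes and placeholder data.

```python
# Minimal GAN sketch (illustrative only): a generator maps random noise to fake
# images while a discriminator learns to separate real images from fakes; each
# network's progress makes the other's task harder, hence the "duel."
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # arbitrary sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One round of the duel on a batch of real images shaped (batch, image_dim)."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: push real images toward label 1, generated images toward 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: adjust weights so the discriminator labels its fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```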
But fears generally remained low, and many were excited about the possibilities of generative AI. VentureBeat wrote optimistically in 2018 about Google's music generator and the potential of generative AI.90 And Forbes in 2020 wrote about the technology's potential to close skill gaps by, for example, helping junior engineers quickly create designs that would otherwise have taken years of trial and error.91
Then came diffusion models, first introduced in 2015 and made widely available by the end of 2022, which generate images by learning to reverse a process that gradually corrupts images with noise.92 Diffusion models surpassed GANs in cost efficiency and sophistication and could generate novel images from simple text prompts. When an artist used such a tool to win a state fair art competition in 2022, anxieties surfaced and the point of panic arrived. Never mind the lowly prize; The New York Times declared, "AI-Generated Art Won an Art Prize. Artists Aren't Happy." Those with privileged access to the tool warned of its "disturbing output," with the Spectator running the headline "I’ve seen the future of AI art—and it's terrifying."93 Some artists fretted that AI art would devalue human creativity and expression and that people would lose interest in their work. Artists launched a protest movement with slogans such as "Artists Against AI" and "No to AI art."94 Others embraced the tools, acknowledging that they—just like photography and digital art software before them—will become the "new normal."95
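As a rough sketch of the corrupt-and-resynthesize idea (again in PyTorch, with a noise schedule chosen arbitrarily for the example), the forward "noising" step at the heart of a diffusion model fits in a few lines; training teaches a network to predict and remove that noise, and generation runs the learned denoiser in reverse starting from pure noise.

```python
# Illustrative forward (corruption) process of a diffusion model: blend an image
# with Gaussian noise according to how many noising steps t have elapsed.
import torch

T = 1000                                  # total noising steps (a common choice)
betas = torch.linspace(1e-4, 0.02, T)     # example noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def corrupt(image: torch.Tensor, t: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Return the image after t noising steps, plus the noise that was added."""
    noise = torch.randn_like(image)
    a_bar = alphas_cumprod[t]
    noisy = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * noise
    return noisy, noise

# Training would ask a neural network to predict `noise` given `noisy` and `t`;
# sampling then starts from pure noise and applies the learned denoiser step by step.
```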
The year 2022 also saw the emergence of the newest generation of chatbots, built on large language models (LLMs). LLMs are machine learning models that generate text based on massive datasets. Technologists and researchers noted risks with LLMs, including misinformation, bias, and harmful content, and possible approaches to mitigate them, including data filtering and automating the discovery of harm by "red teaming" (generating test cases to find and evaluate instances where the model misfires).96 But then a Google employee claimed that a chatbot he spoke with—powered by generative AI—had become sentient.97 This outrageous claim soon crowded out serious discussion of legitimate concerns. Thomas Dietterich, former president of the Association for the Advancement of Artificial Intelligence, proposed redefining sentience to better include machines.98 And, before the Google employee was eventually fired, The Economist invited a different Google engineer to explain why "Artificial neural networks are making strides towards consciousness."99
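The "red teaming" mentioned above can be automated as a simple loop. The sketch below is hypothetical: generate_test_prompts, target_model, and harm_classifier are placeholder functions rather than real APIs, and production red-teaming pipelines are considerably more elaborate.

```python
# Hypothetical automated red-teaming loop: generate adversarial test prompts,
# collect the target model's replies, and flag the ones a classifier scores as
# potentially harmful so humans can review and address them.
from typing import Callable, Iterable

def red_team(
    generate_test_prompts: Callable[[int], Iterable[str]],  # placeholder prompt source
    target_model: Callable[[str], str],                      # placeholder model under test
    harm_classifier: Callable[[str, str], float],            # placeholder harm scorer (0 to 1)
    n_cases: int = 100,
    threshold: float = 0.5,
) -> list[dict]:
    """Return the test cases where the model's reply scores at or above the threshold."""
    failures = []
    for prompt in generate_test_prompts(n_cases):
        reply = target_model(prompt)
        score = harm_classifier(prompt, reply)
        if score >= threshold:
            failures.append({"prompt": prompt, "reply": reply, "harm_score": score})
    return failures
```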
As with previous panics, the fervor around the generative AI panic was a function of the new tool's popularity and availability. By reaching 100 million users in two months, ChatGPT—a public-facing generative AI chatbot—became the fastest-growing consumer application in history.100 The ability of generative AI to produce an almost unlimited variety of content sent alarmists into a frenzy, and news media piled in with a slew of sensationalist coverage.
Some outlets that had previously been levelheaded could not risk missing out on profiting from the fear economy. MIT Technology Review had gushed that GPT-3, an LLM, was "shockingly good" and "can generate amazing human-like text on demand."106 But, amid all the mania at the end of 2022, the same magazine published an article titled "How AI-generated text is poisoning the internet."107 In other cases, gloomy outlooks appear to be duplicated from previous panics. The Brussels Times reported that a man committed suicide after speaking to a chatbot, dangerously implying a causal link between generative AI and suicide and echoing claims made during the 19th-century mania around novel reading.108
At the Rising Panic stage, misleading claims about where technologists were using generative AI—and whether anyone could even tell—fed anxieties. For example, alarmists claimed Bing AI produced harmful content in cases where Bing AI was not actually in use.109 To further fan the flames, alarmists nudged the tools into producing output such as "I want to be alive."110 Although such probing no doubt attracts attention and striking responses, it misrepresents how these tools work—LLMs do not represent conscious thought but instead parrot data they have been exposed to.111
At this moment, professional technology critics cannot miss their chance. In an extravagant joint op-ed for The New York Times, author Yuval Noah Harari and ex-technologist Tristan Harris offered proclamations such as, "By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers," and predictions such as, "By 2028, the U.S. presidential race might no longer be run by humans."112 Such claims serve their purpose: to spread fear in society. Reflecting on new generative AI tools, well-known linguist Noam Chomsky warned that "machine learning will degrade our science and debase our ethics," but used misleading examples of the technology.113 For instance, he wrote that the predictions of generative AI tools will "always be superficial and dubious" because they cannot understand syntax. To evidence this claim, he stated that AI chatbots will interpret a premise such as "John is too stubborn to talk to" to mean that John refuses to talk to others—and will fail to see the alternative interpretation: that John himself is too stubborn for others to talk to. If true, he argued, this failure to grasp the syntax would render the tools' comprehension "superficial." But, in fact, ChatGPT acknowledges both readings:
The phrase "John is too stubborn" means that John is unwilling to change his mind or behavior, even when there may be good reasons to do so. Stubbornness can be seen as a negative trait when it prevents a person from being flexible, compromising, or adapting to new situations. It can also make it difficult for others to work or communicate with that person, especially if they are not willing to consider alternative perspectives or solutions.114
It turns out that arguments that AI will "degrade our science and debase our ethics" relied on false claims about the technology. The spread of misinformation about these tools fuels speculation about their potential and draws attention away from concerns based on actual rather than imagined risks, such as new cybersecurity threats, including deepfakes, and new intellectual property considerations.115 This confusion has spread among policymakers too. U.S. Senator Christopher Murphy claimed in 2023 that "ChatGPT taught itself to do advanced chemistry. It wasn't built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked. Something is coming. We aren't ready."116 But ChatGPT did not—and cannot—choose to teach itself something, and it did not learn the rules of chemistry; it merely parrots preexisting writing about the subject.
Where legitimate risks exist, such as the potential for misinformation, fears are nevertheless overblown. Gordon Crovitz, co-chief executive of NewsGuard, said of ChatGPT, "This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet … it's like having A.I. agents contributing to disinformation."117 Whether or not ChatGPT is a powerful tool for nefarious humans, it will not be "A.I. agents" themselves producing disinformation, even if the bot sometimes says incorrect things. Meanwhile, fears of imminent mass unemployment are hyperbolic. In March 2023, Vice Media ran the headline "OpenAI Research Says 80% of U.S. Workers' Jobs Will Be Impacted by GPT."118 But the research actually said that "around 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected."119 The eye-catching headline served as a dog whistle to alarmists.120
At the time of writing, the panic about generative AI appears to be at the Rising Panic stage, not yet at the height of hysteria. As is typical at this stage, policymakers keen to stay relevant have started legitimizing the fears. In early 2023, for instance, EU lawmakers responded to the panic by drafting bespoke amendments to the EU's AI Act for generative AI, in effect proposing a new category to deem text-to-text generators "high risk." This betrays the bill's original approach of allocating risk according to use cases, not technologies. That generative AI was placed in an "other" category alongside established categories such as employment, education, and public services is a telling indictment of how prevalent ad hoc policymaking becomes amid a panic.121 Then in March, Italy's data protection authority took the unprecedented step of banning ChatGPT, making Italy the first Western country to do so.122 Although policymakers are not fully on board—Italy's own government called the ban "unnecessary," while the German government officially said a ban there would be unnecessary—regulators across the bloc are considering similar steps.123
As the generative AI panic heads toward hysteria, more than 25,000 alarmists—including technologists Elon Musk, Gary Marcus, and Steve Wozniak—have signed a letter calling for a pause on the development of AI. (Within a month, Elon Musk had created a rival AI lab to produce its own LLM, calling into question whether his signature was merely an attempt to slow down the competition.)124 The letter echoes previous panics in the creative sector by asking, "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?"125 Still more ambitious alarmists say the letter does not go far enough. For instance, Eliezer Yudkowsky compared the risk of AI to nuclear war and said that "governments should be willing to destroy a rogue data center by airstrike."126 Fears that AI poses as much of a risk to humanity as nuclear war were then echoed by Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA).127 Even if this analogy were valid, which it is not, at least not anytime soon, a pause on AI development would only allow adversaries to push ahead.
The generative AI panic is just the latest in a long series of tech panics, including many in the creative sector. Just as with previous innovations in the creative sector, generative AI is being adopted quickly, offers a range of functions, and allows people to produce new content. And just as with previous technologies, it is causing angst and ire among alarmists. Previous panics reached a boiling point—spurred on by the symbiotic relationship between alarmists and news media—and sometimes spilled over into the policy arena. As the panic over generative AI enters its most volatile stage, past tech panics offer policymakers three important lessons.
Uncertainty and fear can lead to the mistaken belief that disaster is imminent. The point is not that all concerns are invalid. Indeed, many people in the past had legitimate concerns about new technologies, and policymakers should encourage reasonable debate about the risks of new technologies among the private sector, civil society, and academia. But the history of tech panics over printed books, recorded sound, and motion pictures reveals that many fears never materialized. Just because these concerns never came to pass does not mean there were no risks in the first place; rather, society and markets often adapted to mitigate those risks. It would therefore behoove policymakers to recognize when they are in the midst of a tech panic and to use caution when digesting hypothetical or exaggerated concerns about generative AI that crowd out discussion of more immediate and valid ones.
When challenged about failed doomsday predictions from the past, alarmists often defend themselves on the grounds of exceptionalism, arguing that this new technology is unique and extraordinary. Indeed, doomsayers often claim that "this time is different" to avoid being depicted as another Chicken Little.128 But as these past examples show, the claims about generative AI are anything but new. Critics often forget about the past. In The Social Dilemma, AI alarmist Tristan Harris compared social media algorithms to the invention of the bicycle: "No one got upset when bicycles showed up; everyone went round on bicycles. No one said, ‘Oh my god, we’ve just ruined society. Bicycles are affecting people, pulling them away from their kids. They’re ruining the fabric of democracy. We can't tell what's true.’ We never said any of that stuff about the bicycle."129
But Harris was wrong. Remarkably, people did make similarly outlandish claims in the 1800s and early 1900s about bicycles, with newspapers accusing bicycles of turning people insane, producing bodily ailments, and deranging women.130
Policymakers should avoid overreacting to nascent fears when formulating policy, lest they unduly harm generative AI with misguided laws and regulations. To that end, policymakers should hit pause on any new legislation or regulations directly targeting generative AI until the final stage of the tech panic cycle. Waiting until that point will prevent unwarranted fears from dominating policy debates. Where new laws and regulations are necessary, they should be targeted toward actual harms, not imaginary ones, striking a balance that protects the technology's benefits while addressing legitimate concerns and thereby ensuring that generative AI continues to be a valuable tool for society.
To that end, regulatory caution is needed. Some countries, such as the United Kingdom, are already treading lightly. Its proposed framework for regulating AI acknowledges that creating new legislation for generative AI is premature.131 In contrast, the EU and China have proposed more sweeping measures. In the EU, members of the European Parliament have proposed last-minute amendments to the AI Act to treat generative AI as a high-risk technology, even though it has been around for nearly a decade and did not appear in the European Commission's impact assessment for the AI Act.132 Similarly, in China, the government has proposed specific rules for generative AI to address fears about the technology.133 And some lawmakers in the United States have argued that the country needs to urgently pass new laws to regulate this emerging technology.134 But targeting generative AI in new legislation amidst a panic would be misguided and likely lead to poorly crafted rules.
Generative AI has made tremendous advances in recent months, and with those advancements come reasonable hopes and fears about the future. While this technology has enormous power and potential, it is neither perfect nor omnipotent. It is still just a collection of code and data without emotions or consciousness. Novel in many regards, but not scary. Policymakers should remember the history of past tech panics, recognize where generative AI is in the current panic cycle, and remain calm. That means not succumbing to the rush to regulate AI before anyone else does. Doing so would likely bode ill for society and lead to missed opportunities.
Patrick Grady is a policy analyst at the Center for Data Innovation, focusing on AI and content moderation. Previously, he was the project lead at the Internet Commission and worked in strategy at the European Institute of Innovation and Technology. Patrick holds master's degrees in philosophy and political science.
Daniel Castro is the director of the Center for Data Innovation and vice president of the Information Technology and Innovation Foundation. Mr. Castro writes and speaks on a variety of issues related to information technology and internet policy, including data, privacy, security, intellectual property, internet governance, e-government, and accessibility for people with disabilities. His work has been quoted and cited in numerous media outlets, including The Washington Post, The Wall Street Journal, NPR, USA Today, Bloomberg News, and Businessweek. In 2013, Mr. Castro was named to FedScoop's list of "Top 25 most influential people under 40 in government and tech." In 2015, U.S. Secretary of Commerce Penny Pritzker appointed Mr. Castro to the Commerce Data Advisory Council. Mr. Castro previously worked as an IT analyst at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He contributed to GAO reports on the state of information security at a variety of federal agencies, including the Securities and Exchange Commission (SEC) and the Federal Deposit Insurance Corporation (FDIC). In addition, Mr. Castro was a Visiting Scientist at the Software Engineering Institute (SEI) in Pittsburgh, Pennsylvania where he developed virtual training simulations to provide clients with hands-on training of the latest information security tools. He has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University.