Americans feel more concerned than excited about new Generative AI technologies
Four times as many Americans feel concerned as excited about Generative AI, but some remain optimistic.
This article is part of my ongoing coverage of the 2023 Generative AI & American Society Survey, a nationally representative survey of Americans conducted by the National Opinion Research Center (NORC) at the University of Chicago. Known for producing some of the most scientifically rigorous surveys in the United States, NORC was my chosen partner for this project, which I both wrote and funded. For more details on the survey, you can check out the project FAQ.
Four times as many Americans feel concerned as excited about Generative AI
A November 2021 survey from the Pew Research Center found that 37% of adults were more concerned than excited about AI in daily life. However, this survey was conducted before the release of new Generative AI technologies like ChatGPT and Midjourney.
The 2023 Generative AI & American Society Survey revisited the question posed by Pew, but with an emphasis on Generative AI. The results were strikingly similar: where Pew found 37% of Americans more concerned than excited about AI, the Generative AI survey found 36% more concerned than excited about Generative AI.
Just 8% feel more excited than concerned, while 28% feel equally concerned and excited. More than a quarter (27%) don’t know, likely because not everyone has been fully exposed to new Generative AI technologies.
Given the extensive media coverage and discussion surrounding the potential risks of AI, it is not surprising that more Americans express concern than excitement about these technologies. For instance, a widely circulated open letter endorsed by nearly 34,000 signatories, including prominent tech figures like Elon Musk, raised alarms about the varied threats posed by new Generative AI technologies, ranging from job displacement and the proliferation of AI-generated misinformation to the existential risk of humanity’s destruction. This concern was echoed in a survey at the well-publicized Yale CEO Summit, where 42% of CEOs agreed that AI has the potential to destroy humanity within the next decade.
Reasons for concern are diverse
To get a sense of why some Americans feel more concerned than excited, the 36% who expressed concern were asked to select their primary reason for concern from a list of 10 commonly cited fears (there was also an option to select “other”).1 See the project FAQ for why this was a forced-choice selection.
While no single fear dominated public concern about AI, the most cited worry was that the technology could become too powerful and pose a risk to humanity. This theme has deep roots in American culture, from Isaac Asimov's Three Laws of Robotics to iconic films like 2001: A Space Odyssey and The Terminator.
More recently, fears about Generative AI’s risk to humanity have been articulated by leading figures in the AI industry. A statement from the Center for AI Safety called for prioritizing the existential risks of AI on a global scale, akin to addressing pandemics or nuclear threats. The statement was signed by high-profile experts, including OpenAI co-founder Sam Altman and Geoffrey Hinton, often dubbed one of the three “Godfathers of AI,” both of whom have emphasized the urgent need for action, as reported by multiple media outlets. The upcoming global AI Safety Summit hosted by UK Prime Minister Rishi Sunak is reportedly expected to focus almost entirely on Generative AI’s potential risk to humanity.
Others in the AI community have criticized the focus on existential risk, worrying it diverts attention from more pressing and grounded harms. “AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype,” reads the title of a recent op-ed in Scientific American by AI researchers Emily Bender and Alex Hanna.
While AI’s risk to humanity was the most frequently cited concern, it made up only 16% of the total. Other, more immediate worries took precedence for many respondents, including AI misuse (both intentional and accidental), reduced human interaction, job losses, and AI-generated misinformation.2
For example, 15% were primarily concerned about the irresponsible use of AI causing unintended harm, while 14% feared its deliberate misuse. Follow-up research will be needed to explore what harms and misuse most concern Americans.
A further 12% were worried that AI could diminish human interaction and dilute the uniquely human qualities in art, music, and writing, while 9% were primarily worried about job losses. These apprehensions have taken center stage in the ongoing strikes by the writers’ and actors’ unions. As production studios intensify their focus on AI-related job openings, a major sticking point in the negotiations is the worry that studios will use Generative AI technologies to automate script-writing tasks.3 Some actors fear that future iterations of Generative AI could eventually replace actors altogether.4
Rounding out the concerns are fears about AI-generated misinformation (9%), risks to privacy (6%), bias within AI systems (2%), and the impact of AI on the environment (1%). (I’ve previously written about both AI-generated misinformation and the carbon emissions of AI data centers.)
Some Americans remain optimistic
While more than a third of Americans are more concerned than excited about AI, 28% are equally concerned and excited, and 8% are more excited than concerned. To understand why some Americans feel more excited than concerned, the 8% who expressed excitement were asked to select their primary reason for excitement from a list of six commonly cited areas of promise (there was also an option to select “other”). See the project FAQ for why this was a forced-choice selection.
Before discussing these results, it’s important to note that not all AI experts share the belief that AI has catastrophic potential. Yann LeCun, Meta’s Chief AI Scientist and another of the three “Godfathers of AI,” has stated that fears of AI’s risk to humanity are overblown. He believes that while more immediate harms like job losses are possible, the long-term benefits of Generative AI outweigh the risks.
In his testimony before the U.S. Senate Intelligence Committee, LeCun stressed the importance of safety, but also of access to transformative Generative AI technology, arguing that:
Having access to state of the art AI will be an increasingly important driver of opportunity in the future for individuals, for companies, and for economies as a whole.
Even OpenAI co-founder Sam Altman, who, as discussed earlier, has expressed significant concerns about the negative potential of Generative AI, ultimately believes that it will be a net positive. In March 2023, he explained this optimism in a conversation with noted tech journalist Kara Swisher.
[OpenAI] will be a participant in this technological revolution that I believe will be far greater in terms of impact and benefit than any before…We will be one of several in this moment, and that is going to be really wonderful. This is going to elevate humanity in ways we still can’t fully envision. And our children, our children’s children, are going to be far better off than the best of anyone from this time. And we’re just going to be in a radically improved world. We will live healthier, more interesting, more fulfilling lives; we’ll have material abundance for people…
The opportunities for an improved material standard of living and increased fulfillment were echoed by the 8% of Americans expressing optimism. Topping the reasons for excitement were various dimensions of Generative AI’s ability to improve productivity: saving time by automating mundane tasks (26%), improving productivity at one’s job (18%), and enhancing personal creativity (13%).
Early research indeed suggests that Generative AI has the potential to significantly boost individual productivity and creativity across various professions:
One study involving 450 college-educated professionals, including marketers, found that using ChatGPT reduced the time needed to complete work-related writing tasks by about 40% and improved the quality of the output.
Another experiment involving 100 software developers showed that those using GitHub’s Copilot completed their tasks in half the time compared to those coding without it, and a subsequent survey of 2,000 developers indicated improvements in both productivity and job satisfaction.
An August 2023 preprint presenting the results of a study involving over 200 writers found that narratives created with AI assistance scored higher on creativity metrics than those crafted without AI, suggesting that AI can enhance human creativity.5
Just a week ago, Ethan Mollick and colleagues released the results of their experiment using GPT-4 at Boston Consulting Group. “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without,” Mollick reported. See the full write-up on Mollick’s Substack.
Consumers and workers won’t have to wait long to see these productivity improvements. Just this month, Microsoft announced that it would expand its Bing Chat copilot into a unified Generative AI experience spanning its widely used suite of M365 tools, such as Word and Excel, as well as the Windows operating system. Google’s Duet AI, a rebranding of its Generative AI offering, is already available to customers of its paid Google Workspace suite, which includes Gmail, a market-leading email client. Some features from Adobe’s Firefly Generative AI image platform have already been incorporated into Adobe Photoshop, with more likely on the way.
Among those excited about Generative AI, 23% primarily anticipate improved educational tools as a key benefit. Several major EdTech platforms, such as Khan Academy, edX, Coursera, and Pearson, have already announced initiatives to integrate Generative AI language technologies into their services.6 Mobile applications are also getting in on the action; an early entry is the Hello History app, which uses an AI language model to let users engage in conversations with historical figures.
Improving accessibility to digital tools for people with disabilities (10%) and providing unique entertainment content (10%) rounded out reasons for excitement.
How were these figures determined?
Results from the 2023 Generative AI & American Society Survey came from the National Opinion Research Center’s probability-based AmeriSpeak panel of adults ages 18 and over. The sample size was 1,147, and responses were weighted to ensure national representativeness. The exact questions posed to respondents about AI excitement and concern are shown below. For more information, please refer to the FAQ page.
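Since I’m making the raw data available (see the note at the end of this article), readers can reproduce the topline percentages themselves. Below is a minimal sketch of how the weighted shares could be computed in Python with pandas; the file name and column names (response, weight) are placeholders I’ve assumed for illustration, not the actual layout of the data export.

```python
import pandas as pd

# Load the raw survey export (hypothetical file and column names).
df = pd.read_csv("genai_survey_2023.csv")

# Weighted share of each response option: sum the survey weights for
# each answer and divide by the total weight, so that the panel's
# weights translate responses into nationally representative percentages.
weighted_counts = df.groupby("response")["weight"].sum()
weighted_shares = 100 * weighted_counts / df["weight"].sum()

print(weighted_shares.round(1).sort_values(ascending=False))
```

Weighting each response by its survey weight, rather than simply counting respondents, is what allows a 1,147-person panel to approximate the national adult population.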
Concern or excitement question
The concern question was randomized so that about half of respondents saw “More excited than concerned” as the first option and the other half saw “More concerned than excited” first; a sketch of this kind of randomization appears after the response options below.
As new Generative AI technologies are becoming more common in our everyday lives, how does this make you feel?
More excited than concerned
More concerned than excited
Equally concerned and excited
I don’t know
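For readers curious how this kind of order randomization is typically done, here is a minimal sketch in Python. It is not NORC’s actual survey code (which I have not seen); it simply illustrates the logic described above, where the two directional options swap positions for roughly half of respondents while the remaining options stay put.

```python
import random

def concern_question_options(rng: random.Random) -> list[str]:
    """Return the response options shown to one respondent, with the two
    directional options presented in a random order (illustrative only)."""
    directional = ["More excited than concerned", "More concerned than excited"]
    rng.shuffle(directional)  # about half of respondents see each ordering
    return directional + ["Equally concerned and excited", "I don't know"]

# Example: the ordering one simulated respondent would see.
print(concern_question_options(random.Random()))
```

Randomizing which directional option appears first helps guard against primacy effects, where respondents disproportionately choose whichever option they read first; the same logic applies to the fully randomized option order used in the two follow-up questions below.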
Reasons for excitement question
This question was shown to those respondents who selected “More excited than concerned.” Response options were shown to respondents in a random order.
What is the main reason that you are more excited than concerned about the increased use of Generative AI in daily life?
It could improve my productivity at my job (for example, writing emails, creating presentations, doing research).
It could enhance my personal creativity (for example, in music, writing, or visual arts).
It could provide unique entertainment content (for example, stories, music, visual art).
It could increase accessibility of digital tools for people with disabilities.
It could serve as a valuable educational tool, assisting in learning new subjects or concepts.
It could save me time by automating mundane tasks.
Other (please specify)
Reasons for concern question
This question was shown to those respondents who selected “More concerned than excited.” Response options were shown to respondents in a random order.
What is the main reason that you are more concerned than excited about the increased use of Generative AI in daily life?
Job losses due to AI.
Infringements on digital privacy, increased risk of surveillance or hacking.
Reduced human interaction or loss of uniquely human aspects in art, music, and writing.
AI becoming too powerful and becoming a potential risk to humanity.
The irresponsible use of AI leading to unintended harm.
AI being misused to cause intentional harm.
The potential for bias within AI systems.
The energy and resource consumption of AI systems and their potential impact on the environment.
The spread of misinformation or false content created by AI.
The lack of adequate oversight and regulation of AI.
Other (please specify)
If you have additional questions, comments, or suggestions, please do leave a comment below or email me at james@96layers.ai. To help advance the understanding of public attitudes about Generative AI, I’m making all raw data behind the 2023 Generative AI & American Society Survey available free of charge. Please email me if you’re interested.
Those selecting “Other” had the option of entering specific concerns in a text box. Those who chose to enter a response almost universally wrote some version of “All of these possibilities concern me.”
The dichotomy between AI-based threats to humanity and more mundane but immediate risks was highlighted in a recent Nature editorial.
For example, see this article in USA Today.
For example, see “AI is a concern for writers. But actors could have even more to fear” from CNN.
For a more detailed discussion see the section “Generative AI: The dual-edged sword in social media” in my article “Dynamics of Generative AI-driven misinformation: Social media's role.”
For sample media coverage see, “Edtech companies jump on generative AI bandwagon” from EdScoop and “New A.I. Chatbot Tutors Could Upend Student Learning” from The New York Times.