Americans broadly favor Responsible AI policies
Even when presented with tradeoffs, the majority of Americans favor common Responsible AI policies
This article is part of my ongoing coverage of the 2023 Generative AI & American Society Survey, a nationally representative survey of Americans conducted by the National Opinion Research Center (NORC) at the University of Chicago. Known for producing some of the most scientifically rigorous surveys in the United States, NORC was my chosen partner for this project, which I both wrote and funded. For more details on the survey, you can check out the project FAQ.
Americans broadly favor Responsible AI policies
With the rise of Generative AI, the desire for more transparent and responsible use of AI has also increased. Many in the AI community have called for greater accountability, including the labeling of AI-generated content, enhanced checks for bias and fairness, strengthened data security and privacy, and more explainable AI systems (see examples later in the article).
Data from the 2023 Generative AI & American Society Survey show that Americans broadly support Responsible AI policies, with agreement ranging from 41% to 66%, depending on the specific policy. The proportion of respondents who are neutral — neither agreeing nor disagreeing — varies between 20% and 31%, while explicit disagreement remains low across all policies.
Measures to encourage Responsible AI are not without tradeoffs. These policies could increase the cost of using AI tools or slow AI innovation, blunting potential benefits. Moreover, building safeguards into AI tools to prevent the generation of harmful or offensive content implies that AI companies would have the authority to define what is considered “harmful” or “offensive,” potentially raising censorship concerns.
These tradeoffs were considered when gauging the public’s level of agreement with each facet of Responsible AI. For instance, the wording of the question about content safeguards is presented below. The complete phrasing of each Responsible AI question can be found at the end of this article or on this page.
Idea: AI tools generating content should have built-in safeguards to avoid creating harmful or offensive content.
Benefit: This could result in content that is more respectful and less likely to cause emotional harm.
Tradeoff: This could mean that companies define what is considered “harmful” or “offensive”. This might lead to censorship issues.
Responsible AI is already making its way into various U.S. policy proposals. Senator Chuck Schumer recently proposed his SAFE Innovation Framework for Generative AI in a speech at the Center for Strategic and International Studies. Schumer hopes to sponsor and pass his framework as Congressional legislation.
One proposal from the SAFE Innovation Framework involves measures to address bias, a topic that has been widely reported on. Schumer’s home state of New York has already passed legislation requiring AI systems used in hiring decisions to undergo annual audits for bias. Even when presented with tradeoffs like the delayed release of AI systems or less creative AI, 58% of Americans agreed that AI systems should undergo thorough checks for bias and fairness before launch, with just 8% in disagreement.
However, New York’s law does not mandate so-called “AI explainability,” the concept of decoding the specific inner workings of AI systems to understand how they make decisions. The idea of AI systems that explain their decision making received the support of 55% of Americans, even if it makes AI systems harder to build and increases costs to use them. Only 6% disagreed with the policy idea.
In June of 2023, U.S. Senator Michael Bennet, a Democrat who is active in artificial intelligence issues, urged major tech firms to impose labels on AI-generated content to help curb the dissemination of material designed to mislead users. The 2023 Generative AI & American Society Survey reveals that 63% of Americans agree with labeling AI-generated content, even considering potential tradeoffs like unnecessarily stigmatizing educational or useful AI content. Public disagreement with labeling is at a mere 4%.
Others in the AI community have called out security vulnerabilities of new AI language systems, such as prompt injections. Nearly two-thirds of Americans (63%) support AI systems prioritizing user security and data privacy, even if it makes the AI system harder to use. Only 4% disagree.
Tech companies are promoting their Responsible AI practices
Major technology companies have already begun to tout their increased focus on Responsible AI. Microsoft CEO Satya Nadella and a group of Microsoft executives underscored the company’s Responsible AI practices in their M365 Copilot announcement in May of this year. Just a week later, Google CEO Sundar Pichai authored an op-ed in the Financial Times titled “Building AI responsibly is the only race that really matters.” That same month, OpenAI co-founder and CEO Sam Altman urged the U.S. Congress to increase AI regulation in his Senate testimony.
In July, Microsoft, Google, OpenAI, and four other major AI companies agreed to a series of Responsible AI practices proposed by the Biden administration. Earlier this month, the White House announced that eight more companies had signed on.
Some remain skeptical that companies’ voluntary commitments will provide the necessary measures to reduce AI harms. Emily Bender, an AI researcher and outspoken critic of Big Tech, argued in July that effective legislation must be imposed from outside the tech industry. Another critic, AI expert and tech founder Gary Marcus, penned an op-ed in Time titled, “In the Rush to AI, We Can’t Afford to Trust Big Tech.”
Meanwhile, Senator Schumer’s SAFE Innovation Framework is moving forward. Elsewhere in Congress, a bipartisan group led by Representative Ted Lieu of California has proposed additional regulation through a National Commission on Artificial Intelligence. Whether critics support the approach or not, Big Tech companies will likely be part of the regulation decision-making process.
Higher levels of education correlate with agreement on Responsible AI; some remain neutral
There is moderate demographic variation in support for Responsible AI policies when agreement is averaged across the eight distinct areas of Responsible AI. Individuals with higher educational attainment tend to exhibit more support. Specifically, 68% of those holding a bachelor’s degree or higher express support for Responsible AI policies, contrasting with the 40% support among those without a high school diploma. However, this disparity largely stems from individuals with lower education levels expressing more neutrality toward the policies rather than outright disagreement. Those with less educational attainment are also less familiar with Generative AI technologies and so may not fully appreciate the implications of Responsible AI policies.
Similar trends are observed in other demographic categories. Older Americans and those with higher incomes are more inclined to support Responsible AI policies. Regarding ethnic differences, Black individuals are less likely to agree with the policies, but again this is mostly due to higher rates of neutrality rather than substantially higher rates of disagreement.
In fact, disagreement with Responsible AI is consistently low across all demographic groups, ranging between 6% and 11% depending on the specific group considered.
A majority of Americans support AI antitrust regulation
The majority of Americans also support regulation to prevent AI market concentration, even if it slows the pace of AI innovation.
Market concentration has been a concern for some in the AI community. For instance, the AI Now Institute submitted a public comment to the U.S. Federal Trade Commission after the agency invited input on regulating cloud computing, which serves as the infrastructure for most AI technology. The Institute wrote:
[The large datasets needed to create AI] deeply entrench the infrastructural and economic power of the few firms that retain control over the key components to building AI, with detrimental effects on competition in the AI industry. This also contributes to consumer injury in many forms, including harms to privacy and security, encouraging the spread of false and misleading information, perpetuating patterns of inequality and discrimination, harmful effects on workers, and environmental harms.
How were these figures determined?
Results from the 2023 Generative AI & American Society Survey came from the National Opinion Research Center’s probability-based AmeriSpeak panel of adults ages 18 and over. The sample size was 1,147, and responses were weighted to ensure national representation. The exact questions posed to respondents about Responsible AI are shown below. For more information, please refer to the FAQ page.
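For readers who plan to work with the raw data (available on request; see the note at the end of this article), here is a minimal sketch of how a weighted agreement share can be computed in Python with pandas. It is illustrative only, not the actual analysis code: the file name, column names, and the assumption of a five-point agreement scale are hypothetical stand-ins for whatever appears in the released dataset.

```python
# Minimal, hypothetical sketch: compute weighted agree/neutral/disagree shares
# for one survey question. Column names ("weight", "label_ai_content") and the
# five-point scale are assumptions, not the survey's actual variable names.
import pandas as pd

def weighted_shares(df: pd.DataFrame, question: str, weight_col: str = "weight") -> dict:
    """Return the weighted share of agree, neutral, and disagree responses."""
    buckets = {
        "Strongly agree": "agree",
        "Somewhat agree": "agree",
        "Neither agree nor disagree": "neutral",
        "Somewhat disagree": "disagree",
        "Strongly disagree": "disagree",
    }
    collapsed = df[question].map(buckets)              # collapse the scale to three buckets
    totals = df.groupby(collapsed)[weight_col].sum()   # sum survey weights within each bucket
    return (totals / totals.sum()).round(3).to_dict()  # normalize weights to shares

# Hypothetical usage:
# df = pd.read_csv("genai_survey_raw.csv")
# print(weighted_shares(df, "label_ai_content"))
# Example output: {'agree': 0.63, 'disagree': 0.04, 'neutral': 0.33}
```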
Responsible AI questions
A total of 13 policy questions were presented in a grid format, grouped into three thematic categories, with the order of the questions randomized for each respondent. Each question paired an idea with a potential benefit and tradeoff, and respondents indicated their level of agreement with the idea. The eight Responsible AI questions were among this broader set of 13 policy questions.
Theme 1 - Openness and responsibility
The following are some ideas to make AI systems more open and responsible. Each idea has benefits, but there could also be downsides, or "tradeoffs". We've included potential benefits and tradeoffs for context, but there might be more that we haven’t mentioned.
Indicate how much you agree or disagree with each idea. Keep in mind, your opinion is only about the idea itself. The benefit and tradeoff might help guide your opinion.
Question 1
Idea: AI systems should clearly explain how they make decisions.
Benefit: Clear AI explanations could help users make smarter choices.
Tradeoff: This could make AI systems harder to build so they might cost more to use.
Question 2
Idea: Companies should tell us how their AI models work and what data they use.
Benefit: This approach could foster trust and allow users to pick the AI systems they feel comfortable using.
Tradeoff: This could give away AI company secrets and let AI creators in other countries get ahead. In addition, knowing they have to disclose details might make companies less eager to innovate.
Question 3
Idea: AI companies should tell users when they're talking to an AI, not a person.
Benefit: Knowing they're talking to an AI could help users set clear expectations about the interaction.
Tradeoff: This could break the feel of a real chat and make AI systems less enjoyable or easy to use.
Question 4
Idea: Content created by AI should have a clear label saying it's AI-generated. AI-generated content includes images, videos, stories, and music.
Benefit: This could enable users to understand content origins and make informed decisions.
Tradeoff: People might engage with content less if they know it's made by AI even if the content is useful or entertaining. That's because it could seem less genuine or appealing.
Theme 2 - Changes to operations
The following are some ideas for potential changes to how Generative AI systems and companies operate. Each idea has benefits, but there could also be downsides, or "tradeoffs". We've included potential benefits and tradeoffs for context, but there might be more that we haven’t mentioned.
Indicate how much you agree or disagree with each idea. Keep in mind, your opinion is only about the idea itself. The benefit and tradeoff might help guide your opinion.
Question 5
Idea: AI systems should undergo thorough checks for bias and fairness before their launch.
Benefit: If AI is fair, it could mean everyone gets an equal shot, especially when big decisions are made.
Tradeoff: This could delay the release of the AI system or make the system less creative.
Question 6
Idea: AI systems should be designed to prioritize user privacy and data security.
Benefit: This could help keep personal data out of the wrong hands.
Tradeoff: This might make the AI less useful or harder to use.
Question 7
Idea: AI tools generating content should have built-in safeguards to avoid creating harmful or offensive content.
Benefit: This could result in content that is more respectful and less likely to cause emotional harm.
Tradeoff: This could mean that companies define what is considered “harmful” or “offensive”. This might lead to censorship issues.
Theme 3 - Government regulation
The following are some ideas for how governments might handle Generative AI. Each idea has benefits, but there could also be downsides, or "tradeoffs". We've included potential benefits and tradeoffs for context, but there might be more that we haven’t mentioned.
Indicate how much you agree or disagree with each idea. Keep in mind, your opinion is only about the idea itself. The benefit and tradeoff might help guide your opinion.
Question 8
Idea: The government shouldn't let AI assist in running crucial systems. These include power grids, transit, telephone lines, or the internet.
Benefit: By excluding AI, we might reduce risks linked to AI errors or cyber attacks in these systems.
Tradeoff: Lack of AI in these systems might reduce automation and increase user costs. For example, your power bill could be higher than if AI were used.
Market concentration question
Idea: Regulations should stop a few technology companies from controlling the entire AI market.
Benefit: This could boost competition. It could allow for more diverse AI solutions. It could also prevent one company from getting too powerful.
Tradeoff: This could slow down the pace of AI development and innovation.
If you have additional questions, comments, or suggestions, please do leave a comment below or email me at james@96layers.ai. To help advance the understanding of public attitudes about Generative AI, I’m making all raw data behind the 2023 Generative AI & American Society Survey available free of charge. Please email me if you’re interested.