The Algorithmic Knowledge Gap Within and Between Countries: Implications for Combatting Misinformation
While understanding how social media algorithms operate is essential to protecting oneself from misinformation, such understanding is often unevenly distributed. This study explores the algorithmic knowledge gap both within and between countries, using national surveys in the United States (N = 1,415), the United Kingdom (N = 1,435), South Korea (N = 1,798), and Mexico (N = 784). In every country, algorithmic knowledge varied across sociodemographic factors, albeit in different ways. Countries also differed in their overall levels of algorithmic knowledge: respondents in the United States reported the greatest algorithmic knowledge, followed by respondents in the United Kingdom, Mexico, and South Korea. Additionally, individuals with greater algorithmic knowledge were more inclined to take action against misinformation.
The Origin of Public Concerns Over AI-Supercharging Misinformation in the 2024 US Presidential Election
Researchers and the media have highlighted the potential adverse effects of artificial intelligence (AI) on the 2024 US presidential election. To measure baseline public perceptions of this issue going into the election, this study surveyed 1,001 Americans and found that four out of five expressed concern about the use of AI to spread election misinformation. Further analysis shows that direct interactions with generative AI tools such as ChatGPT and DALL-E have a negligible impact on alleviating these concerns. Education levels and work experience in STEM fields also showed no significant association with concern levels. In contrast, learning about AI through news, particularly TV programs, significantly correlates with increased concern. The results suggest that more widespread use of these tools will not necessarily make the public more critical of AI misinformation risks; rather, the data point to the vital role news will play in shaping public understanding of AI risks.
Poll: How Americans see AI: Caution, skepticism, and hope
In the year since the release of ChatGPT, the volume of news and information about artificial intelligence has skyrocketed. AI dominates headlines in traditional media and fills social media chatter. It’s the subject of podcasts, TikToks, and YouTube videos, highlighting deep dangers and broad promises. What are people taking away from these stories, and how does that connect with reality?
The AI Literacy Lab at Northeastern University was launched to advance the public’s understanding of artificial intelligence. Our goal is to help people transcend the hype, exuberance, and fear and embark on meaningful discussions about managing and integrating this new technology into society. To encourage responsible conversations about AI in the future, we need to understand the current state of the conversation.
We conducted a poll of 1,000 Americans 18 and older to gauge their feelings and attitudes about AI. We found that AI has caught the public's attention — more than three-quarters of Americans consume news about it at least weekly — and created a deep sense of caution and skepticism. Our survey also pinpointed demographic factors that shape attitudes toward AI. People with STEM experience are more likely to feel optimistic about the technology, suggesting that more technical knowledge could diminish anxiety and prompt more nuanced discussions. Women are more likely than men to be skeptical of AI and its stewards, raising questions about whether some uses of AI are perceived as disproportionately harmful to children and families.
This poll of 1,000 adults 18 and older was conducted online by the research firm Dynata between August 15 and 29, 2023. The panel was weighted to reflect U.S. demographics in age, gender, household income, and race. The poll has a margin of error of ±3%. Survey analysis by Garrett Morrow and John Wihbey, Northeastern University.