Illuminating AI

Promoting a society informed about the uses and consequences of artificial intelligence. We study the intersection of media and AI.

Communication, understanding, and impact in the age of AI

The AI Literacy Lab at Northeastern University, affiliated with both the School of Journalism & Media Innovation and the Internet Democracy Initiative, is a project focused on media and public knowledge relating to artificial intelligence.

Our goal is to foster a well-informed global society that can grapple with the promise, pitfalls, and consequences of emerging technology.

If you’re a technologist in search of mission-driven projects, a journalist looking for guidance about AI, or a global citizen interested in using technology to enhance communication and democracy, please reach out — we want to work with you.


Recent work from the Lab:

The 2024 Computation + Journalism Symposium at Northeastern University

Our faculty are hosting this important annual gathering of technologists, researchers, and media practitioners. The conference features a variety of panels and workshops on the intersection of AI and media.

The Origin of Public Concerns Over AI-Supercharging Misinformation in the 2024 US Presidential Election

In a 2024 preprint paper, lab members Harry Yaojun Yan, Garrett Morrow, Kai-Cheng Yang, and John Wihbey explore the origin of public concerns over AI-generated misinformation in the 2024 US presidential election. A survey of more than 1,000 Americans found that four out of five expressed concern about the use of AI to spread election misinformation. Further analysis shows that direct interactions with generative AI tools such as ChatGPT and DALL-E have a negligible impact on alleviating these concerns. Education level and work experience in STEM fields also showed no significant association with concern levels. In contrast, learning about AI through news, particularly TV programs, significantly correlates with increased concern. The results suggest that ubiquitous use of these tools will not necessarily make the public more critical of AI misinformation risks; rather, the data point to the vital role that news will play in shaping the public’s understanding of AI risks.

Epistemic Risk for Democracy

In a 2024 conference paper, Prof. John Wihbey examines the concept of “epistemic capture,” or lock-in, of public knowledge and a corresponding loss of autonomy as AI mediates the information environment. The paper analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. It argues that any pathway toward ethical and responsible AI in the context of democracy requires an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values.

Listen to the Tech Policy Press podcast about the paper.

Social Media’s New Referees? Public Attitudes Toward AI Content Moderation Bots Across Three Countries

Based on representative national samples of roughly 1,000 respondents per country, we assess how people in the United Kingdom, the United States, and Canada view the use by social media companies of new artificial intelligence (AI) technologies, such as large language models, for content moderation. About half of respondents across the three countries indicate that it would be acceptable for company chatbots to start public conversations with users who appear to violate platform rules or community guidelines. People with more regular experience using consumer-facing chatbots are less likely to be worried in general about the use of these technologies on social media. However, the vast majority of respondents (more than 80%) across all three countries worry that if companies deploy chatbots supported by generative AI to engage in conversations with users, the chatbots may not understand context, may ruin the social experience of connecting with other humans, and may make flawed decisions.

US Survey Report: AI and Trust

In August 2023, we conducted a poll of 1,000 Americans ages 18 and older to gauge their feelings and attitudes about AI. We found that AI has caught the public’s attention and created a deep sense of caution and skepticism. Americans doubt that the people who developed generative AI will be the best stewards of the technology. They favor regulation and ethical guidelines, even if they are not certain who should be making the rules. And they are thinking deeply about which potential uses of AI are appropriate or helpful to society, and which seem to cross a moral or ethical line.

Read our report

Read the topline data


Past programming

In the news