Illuminating AI

Promoting a society informed about the uses and consequences of artificial intelligence. We study the intersection of media and AI.

Communication, understanding, and impact in the age of AI

The AI Literacy Lab at Northeastern University, part of the Internet Democracy Initiative, is a project focused on communication and public knowledge relating to artificial intelligence.

Our goal is to foster a well-informed global society that can grapple with the promise, pitfalls, and consequences of emerging technology.

If you’re a technologist in search of mission-driven projects, a journalist looking for guidance about AI, or a global citizen interested in using technology to enhance communication and democracy, please reach out — we want to work with you.

_____

Recent work from the Lab:

Epistemic Risk for Democracy

In a 2024 conference paper, Prof. John Wihbey examines the concept of “epistemic capture,” a lock-in of public knowledge and a corresponding loss of autonomy as AI increasingly mediates the information environment. The paper analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. It argues that achieving any vision of ethical and responsible AI in the context of democracy requires an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values.

Listen to the Tech Policy Press podcast about the paper.

Social Media’s New Referees? Public Attitudes Toward AI Content Moderation Bots Across Three Countries

Based on representative national samples of ~1,000 respondents per country, we assess how people in the United Kingdom, the United States, and Canada view the use of new artificial intelligence (AI) technologies, such as large language models, by social media companies for content moderation. About half of respondents across the three countries say it would be acceptable for company chatbots to start public conversations with users who appear to violate platform rules or community guidelines. People with more regular experience using consumer-facing chatbots are less worried in general about the use of these technologies on social media. However, the vast majority of respondents (more than 80%) across all three countries worry that if companies deploy generative AI chatbots to converse with users, the chatbots may not understand context, may ruin the social experience of connecting with other humans, and may make flawed decisions.

US Survey Report: AI and Trust

In August 2023, we conducted a poll of 1,000 Americans aged 18 and older to gauge their feelings and attitudes about AI. We found that AI has caught the public’s attention and created a deep sense of caution and skepticism. Americans doubt that the people who developed generative AI will be the best stewards of the technology. They favor regulation and ethical guidelines, even if they are not certain who should be making the rules. And they are thinking deeply about which potential uses of AI are appropriate or helpful to society, and which cross a moral or ethical line.

Read our report

Read the topline data


Past programming

In the news