The AI Literacy Lab’s poll about American perceptions of AI unearthed a lack of public trust in AI and the companies that create it. On October 17, the AI Literacy Lab hosted John Wihbey, a Northeastern University professor of media innovation; Cansu Canca, director of responsible AI practice at the Institute for Experiential AI; and Nikita Roy, host of the podcast “Newsroom Robots,” for a wide-ranging conversation about the poll, moderated by executive director Joanna Weiss. What follows is an edited and condensed transcript of the conversation.

Joanna Weiss: I have always suspected that concerns about AI are partly motivated by what happened with social media, which developed without a whole lot of oversight. The algorithms came, and they changed our society in fundamental ways that we are still grappling with. And I think some of the anxiety we see is driven by a desire not to do that again.

John Wihbey: Machine learning technologies have been commonly used in social media for the past decade. And I don’t think the public has quite grasped what’s going on. We started to hear about an algorithm or the power of the algorithm. I’m not sure anyone really knew what that meant. The advent of generative models and LLMs has suddenly put in the mind’s eye something that was seemingly magical and incredible, but also scary. Now more polling data suggests that people are putting two and two together. They’re realizing that in some respects, we’ve been the proverbial frog in the boiling pot.

Joanna Weiss: Six months ago, there was an open call from a lot of technologists to pause AI development. There was no pause. There has been, instead, a rush to integrate AI into a ton of products. I logged onto LinkedIn today and it asked me whether I wanted to use AI to generate my post. It’s everywhere. Our poll suggests some discomfort with the speed of that. Cansu, is there one thing that these companies could do to convince the public that they are rolling out this technology not just with speed in mind, but with ethics in mind?

Cansu Canca: I’m going to give a philosophical answer. I don’t think they should be convincing the public. The thing they should do is actually make sure they develop the systems responsibly. We have this conversation a lot: “trustworthy AI,” “responsible AI,” what is the right word to use? And I always find this focus on trust very worrisome. I don’t think the goal of our work should be fostering trust. It should be about structuring AI development in a way that trust comes naturally. They shouldn’t try to convince anyone. And I would be worried if they tried, because probably they’re going to circumvent ethics and go for PR.

Joanna Weiss: I was surprised, from the poll, at how few people have actually used generative AI programs: 68% haven’t used a large language model like ChatGPT, and 85% haven’t used an image-generating program like Midjourney. I’ve played with both, but I’ve also started to feel precious about it — like, I shouldn’t be using this to supersede my human creativity. Nikita, you have trained journalists on how to use ChatGPT and other programs. How do you encourage people to both respect their own humanity and be open-minded toward this technology?

Nikita Roy: The poll is completely in line with what I’ve been seeing at workshops I’ve been doing. People know about it and they see the news articles and the hype around AI, but they have never physically touched it. Technology companies are very good at creating products, but [not at] helping people understand what the exact use case is. It’s about thinking of AI more as a tool that could help them in their work. I use it. It is a dependency for me. I use it throughout my day every day, [with] multiple different apps. But I think it starts with taking baby steps.

Leaders of organizations need to have somebody who is dedicated to AI operations: thinking about AI, constantly keeping on top of what’s happening in the industry and what the use cases are for people in their own company, and then helping educate everyone in the newsroom or the company. Jobs are going to change, but it’s up to the leaders in the industry to make sure that they’re helping upskill their own employees and bringing them along on this journey, too.

Joanna Weiss: Our poll results show that people with training and backgrounds in STEM fields are generally inclined to be more positive toward the technology. What implication does that have for training the public not to fear technology, and not to see technology as the enemy?

John Wihbey: I’m obsessed with the information environment and social media, but from what I understand, many of the important use cases are in biotech and health sciences. So if we end up with a public that is really polarized on AI generically, I think we’re going to lose out on a lot of interesting innovation and possibilities. 

Cansu Canca: There are a lot of benefits of AI systems, and we want people to understand that we are not trying to sort of hinder this innovation. We are not trying to roll things back. But it also worries me to think that if you come from engineering, you’re more confident. That has been historically the case: Let us first create this, then put it in the market, then get the response from the market, and then we’ll think about the ethics. And of course, by that time, it is very difficult to think about the ethics, very difficult to create the system in a way that is ethical.

Joanna Weiss: John, you’ve studied how social media companies are trying to mitigate misinformation. Our poll found that 83% of Americans are worried about misinformation leading into the 2024 presidential campaign, but 62% of Americans support the use of AI to find false information on social media sites. So is that a viable solution to a very broad problem? Can good AI help root out the bad AI?

John Wihbey: It suggests that the logical conclusion is just a bunch of bots — adversarial bots being yelled at by company bots, and the humans just standing on the sidelines, which is maybe a possible future. It all depends on creating different incentives for companies, and I think we need some kind of sensible regulation. What they’re doing in Europe is the beginning of something important, with the Digital Services Act. 

Generative AI creates a lot of interesting possibilities for detection and classification and governance, and it could be used in ways that are positive. But you need a huge number of humans in the loop, and I think the only way to get that is to force the companies to start really resourcing this — in a way that Europe is trying to get American technology companies to step up and do. 

Joanna Weiss: Nikita, you’re a journalist and also confident in technology as a data scientist. Not all journalists can say that. Are there tools you have taken from journalism that computer scientists could use to help communicate the technology they’re creating and the work they’re doing?

Nikita Roy: It’s very interesting when I go from my journalism bubble to my tech bubble. The conversation in journalism always starts with ethics. Everywhere we go, we are keeping in mind: What consequence would this have directly on the end user? If we report this, how is it going to affect our sources? How is it going to affect the people we’re reporting to? How do we serve our community better? 

We’ve always been so focused in technology on creating, building, and shipping things out quickly. It’s competition, it’s hardcore, right? You want to have the best product, and have it quickly. Something that would be great for people in tech programs is a deeper understanding of ethics, and of questions like: How is what they build going to affect the end user? Should we be doing this? We probably wouldn’t have gotten to this position if those questions had been asked early on.

Joanna Weiss: John, your career has come in the opposite direction: You started in journalism and discovered data. So I’m curious what you’ve learned from data that informs journalism.

John Wihbey: There’s a really robust community, based here and in other places like Northwestern and Stanford, around computation and journalism. And that intersection is extremely rich. We have some of the leading researchers in the world who study algorithms, study platforms like Amazon and Uber, and are doing watchdog, accountability-style research. 

As algorithmically powered platforms start to govern more and more of our world, whether it’s pricing or how goods and services and products are distributed, the black box of algorithmic behavior needs to be interrogated more and more critically. Ethical computer scientists, of whom there are many here, are interested in providing that kind of accountability function, which is very much a journalistic function. And so I am keen to figure out how we can unite journalists and [computer science] and data science folks to provide that kind of watchdog function collaboratively. I think we can find some common ground holding power to account.

Joanna Weiss: In the poll, people showed particular concern about AI’s effect on jobs in the arts. People think it’s OK to use AI to illustrate a magazine cover — which it can do, I think, beautifully with some of these new programs. And that does wipe out a certain line of work for a certain number of people. Should companies be drawing any ethical lines around artistic uses of AI? That feels like maybe where the rubber meets the road most explicitly in humanity meeting technology.

Cansu Canca: I think there are, again, different uses that require different approaches. So when we think about drawing or writing, you can think of it as the AI aiding the writer or the painter, the artist. And you can also think about other technologies we have had in the history of art, like photography, [and] what it did to painting. And one could imagine that this is going to change the artistic landscape, but it is not going to wipe out a certain type of art form.

But I don’t think this applies to all types of art. For example, in terms of acting, what does it mean for AI systems to recreate the image and the voice and basically copy an actor? Where does that leave acting, or where does it leave the actor? There might be others where, no, this is a real shift, maybe even an extermination of certain types of art forms. And if that’s the case, we should ask the question: Is this an art form that we think we must protect? Is this an art form that we think should change drastically?

Nikita Roy: I was having a conversation with a lawyer recently about this issue, and we were thinking about [how] generative AI is also just mimicking the human brain. When you [read] different writers that you like, you start to mimic their writing styles. How does that differ from generative AI that is trained on all of these different works? One interesting point that I took away was that when you ask generative AI to do something in the likeness of, say, van Gogh, and it goes specifically to all of the paintings of van Gogh and trains, that’s an infringement of copyright [if it were still in effect for van Gogh’s work]. Whereas if you tell it to, hey, just draw a picture, do an oil painting, something very general, that could possibly avoid copyright infringement, even if it was trained on, say, van Gogh’s paintings. How do we decide where the line is between mimicking something and generating something?

Joanna Weiss: Another thing the poll showed was a statistically significant gap in trust between women and men. Women were more skeptical of AI. They were more likely to say that they did not understand the benefits of AI, or didn’t see its underpinnings. Anyone want to wager a guess as to what’s going on there?

Cansu Canca: Usually when you ask, “Are you confident about your math skills?,” even if men and women are equally good or bad, women tend to say, “Well, maybe not.” And men are like, “Yes, I got it, I got this.” It sounds like a similar trend: “Oh yeah, of course I understand AI, and I trust AI because I understand it.” That’s what I thought when I saw the results.

Nikita Roy: I’m totally in agreement. You have all of these polls that come out showing that women don’t apply for a job unless they meet 100 percent of the requirements. Maybe it’s just that women, unless they know everything about AI, wouldn’t have said they were sure they understood it.