The artificial intelligence research organization OpenAI unveiled a stunningly realistic text-to-video tool on Thursday. It’s difficult to overstate the reaction from AI enthusiasts, researchers and journalists. A few representative headlines:
CBS News: “OpenAI’s new text-to-video tool, Sora, has one artificial intelligence expert ‘terrified’.”
ABC News: “OpenAI video-generator Sora risks fueling propaganda and bias, experts say.”
The New York Times: “OpenAI Unveils A.I. That Instantly Generates Eye-Popping Videos.”
On Monday, I called up Tony Elkins, Poynter faculty and a founding member of the News Product Alliance, and Alex Mahadevan, director of MediaWise at Poynter, to get their takes on the development. Elkins and Mahadevan both meticulously track the evolution of AI and test new models in their roles at Poynter. This conversation has been edited for brevity and clarity.
Ren LaForme: We’ve seen the breathless reports about OpenAI’s new text-to-video tool, Sora. There are a lot of unknowns. But I thought I’d start by asking you if you could tell me what we do know about it.
Tony Elkins: It is a fairly significant out-of-the-box demo. It looks really good for a first try. Compared with where AI video was a year ago, and even with some tools I just started testing, like Pika, the jump is just ridiculous.
Did you see the video with the woman in bed with the cat? It’s very realistic at first glance, but when she rolls over there’s no arm there, and then the cat has an arm that comes out of nowhere. But it wasn’t super jarring. You had to really pay attention to know it was AI.
To me, the most significant part is that this is a demo. What’s the second release going to look like? It took several versions of DALL-E and Midjourney to produce realistic images.
Alex Mahadevan: I agree. I was very impressed. I saw the cat one as well and the physics of the cat batting the woman’s face and the comforter rolling over. There’s another video I saw of the grandmother who’s showing her hands and then preparing some gnocchi. Her hand turns into a spoon.
Clearly in a lot of these videos, there are absurdities that are comical and quite scary. And that highlights major weaknesses in this technology.
So I am very impressed, but I’m not totally buying the hype. I want to wait until regular people can use the tool because right now it’s a very curated look we’re getting at Sora. The biggest videos have been shared by OpenAI CEO Sam Altman himself and OpenAI in its press releases. They gave a small group of, again, curated users access to it. We don’t know if these videos will be as good when we have the tool in hand.
But, what worries me — from the media literacy and misinformation standpoint — is that it looks to be easy to just generate plausible shaky cellphone footage. It really complicates things for reporting on war zones or verifying any user-generated content because now users can generate anything they want.
So the UGC that we’ve relied on as digital journalists is now going to require an extra step of verification. We already spent a lot of time verifying war zone user-generated content. Now we have to figure out a new way to do that with the release of Sora and text-to-video.
Elkins: I’m glad you brought that up, the user-generated content. The whole world we live in now is post-truth. You could fabricate a false narrative in a story but images were harder because you had to have a working knowledge of Photoshop or similar software. There was a barrier to entry with it.
Video is a whole other production level above that. It took a lot of time, expertise and money to create fake videos. Now you can just type it in and get it.
I love the point you made about cellphones, too. We’re usually looking at a small screen while we’re doing three other things. Are we taking the time to look at something and see if it’s real? I wonder how many people have the training to do that. How do you know what you’re looking at? It used to take Lucasfilm budgets to create fake videos that look this real. With these types of tools, type a prompt in and you’re done.
LaForme: Even the fact that this exists makes it easier to spread misinformation because of this thing called the liar’s dividend, where even if a video is real and you have no reason to suspect otherwise, you still kind of have to question if it’s real. I’m thinking about whether the “Access Hollywood” tape would have made as big a splash as it did — even though I guess it ultimately didn’t make a huge one — if it was released a couple of months from now, after this tool becomes public. What’s your take on that?
Mahadevan: We are already seeing a liar’s dividend-esque spread of misinformation online. It’s meme-y and it’s jokey right now — like sharing clips from old movies and the prompt that you quote-unquote used to create it. For example, I saw one that showed the classic Rick Astley “Never Gonna Give You Up” video and the prompt was something like: “Young man with cool haircut sings in a trench coat.”
“A young man, ginger hair, sings a song in front of various urban backgrounds, 80s hairstyle and outfit, wild dancing gyrations, background dancers, 80s video resolution, photorealistic, pop video.” pic.twitter.com/RzGVbSpzty
— Bojan Tunguz (@tunguz) February 16, 2024
These are all jokes but that is essentially how the liar’s dividend is going to work. People are going to say, “Oh, that is actually AI-generated.”
The other meme we saw right off the bat: “Me in trial watching video evidence of me committing a crime I didn’t commit.” And so you’ve got major concerns about the ability to put people in places where they might not have been.
The liar’s dividend is already happening in other countries with audio deepfakes. There are politicians in India who have real audio out there and they’re saying, “No, this is deepfaked audio. I didn’t actually say that.”
The researcher Claire Wardle has been saying this for years, since before the deepfake craze took off in 2019: The biggest threat is going to be the liar’s dividend and people saying, “I didn’t actually say that.”
Elkins: I remember when everyone was blown away by the Obama deepfake. That took a whole team of researchers to create in 2017. If the AI platforms allowed it, you could create that instantly.
Now we have to ask what’s real. We have to do that for photos. We have to do that for text. We’re gonna have to do it for videos. And it places so much responsibility on the consumer that was never there before.
LaForme: We jumped into misinformation — which I think makes sense since, Alex, you run MediaWise and Poynter is also home to PolitiFact and the International Fact-Checking Network — but when we talked earlier, you mentioned a couple of other ways that this could impact the journalism world. Can you share some of your thoughts again?
Elkins: I think we have to discuss the ethics of how we’re going to use these tools. What policies, ethics and guidelines need to be in place as we start experimenting?
We have to invest in our own skills as an industry about how to judge where content comes from, how to determine if content is real, how these models are created and deployed.
I don’t know if I can say how it’s going to change journalism because I don’t know what these tools are going to be capable of and I’m afraid it’s going to catch us all off guard.
Mahadevan: I’ve been thinking about the worst-case scenarios this has put into my head. You have these AI “blue checks” online, and all of them are fantasizing about a world in which they can insert anyone they want into the movies they watch, or put themselves in the movies. It’s the death of creativity if some of this stuff comes to pass.
Movie watching experience:
2005: go to a movie theater
2015: stream Netflix
2025: ask LLM + text-to-video to create a new season of Narcos to watch tonight, but have it take place in Syria with Brad Pitt, Mr Beast and Travis Kelce in the leading roles
— Matt Turck (@mattturck) February 15, 2024
I was thinking about how this could change journalism. This will give influencers the ability to generate more content than they’ve ever created in the past. I just think you’re gonna see this explosion in AI-generated influencer content that is going to completely crowd out legitimate news and media.
News organizations are going to be basically competing with this AI slime, whether it’s text via pink slime news sites, or the slime from AI-generated videos that we’re already seeing on YouTube and TikTok.
Right now, news organizations need to be figuring out how to put this to use so they can compete with what is to come.
Elkins: I want to interject and add a real-life example of what Alex was talking about. This exists now. There’s consumer-grade software available where I could put on six or seven outfits, stand in front of a green screen and upload 10 seconds of video for each outfit, and suddenly a digital avatar of me exists that I can make say anything in a video. So instead of being in front of a camera, writing scripts, and putting in the time to record, edit and upload, I can do all of this once and I’ve got Video Tony on demand.
I can just start cranking out videos to upload to any of the video services or social media services. I could sit here and produce 10 videos in an hour, without ever touching a camera or editing software.
LaForme: That’s entirely too much Tony.
Elkins: Yeah, it is way too much Tony. No one needs that in their life.
LaForme: Knowing all of this, what does the average journalist or even the average news consumer — I guess it doesn’t even matter if you’re a news consumer, it’s just everybody — what do you do right now to prepare for this potential oncoming onslaught?
Elkins: I don’t think you can. So here’s the thing, and maybe Alex will back me up on this, but I don’t think a lot of people are necessarily paying attention the way we are.
Also, it’s hard because the technology is changing so fast.
Mahadevan: I hate the question, “How should you prepare?” I hate that we are in a situation where we have to put the onus on consumers to prepare for something that these companies should be building safeguards against.
Right now, news organizations need to demand better of AI companies and demand that they put safeguards in place. For example, Anthropic added election safeguards so that when you try to create election content or ask election questions, it pushes you to legitimate election information. That should be standard across all of these tools. There should also be safeguards, as some of these companies are attempting, against generating other people’s likenesses: celebrity likenesses, but really any likeness.
TV adoption followed a gradual, 45-degree line in terms of the number of people who adopted color TV. Then the internet went from 20% of people to suddenly 90% of people, and the line was much more vertical. With AI, we’re going to go from zero to 100 very, very quickly.
News organizations need to start experimenting with these tools before they get left behind. They need to figure out how these tools can fit into their workflows, make them more productive, and amplify and enhance their reporting, not replace it.
And news consumers themselves need to double down on being active consumers of information, because AI is going to be so dangerous precisely because everyone scrolls. That’s what you do. Stopping to check someone’s bio can probably catch 99% of the AI content you’re going to see. All you have to do on TikTok is click the account’s name and check its bio, and you can find out whether it’s legitimate. It’s very quick to go from a passive consumer to an active one.
The other thing I want to stress for news organizations is to just keep doing good work, because the most important thing is going to be trust. And I think you kind of nailed it, Tony, when you said that nobody’s going to know what or who to trust anymore; everyone is going to look at content they see online with skepticism because anything could be AI-generated. So it is really important for news organizations to continue being very transparent, engage their audiences and make sure they remain relevant as a trustworthy source of information. That’s the only way to survive.
Or, eventually, when we get to the point where you can license Mr. Beast’s likeness, news organizations could license influencer likenesses and deliver AI-generated Mr. Beast news to people. Someone in Brooklyn could get news about development from an AI-generated Mr. Beast or Charlamagne Tha God.
Elkins: I want to get to something that you pointed out, Alex: We absolutely cannot just turn over control and power to the tech companies again. We went through this whole thing with social media. I don’t think we understood our own value or the value they were getting from it.
I feel like we’re already behind because we don’t understand how they trained the models or what content was used. Just the other day, I saw that Reddit signed a deal where its content, user content, is all going to train AI. So everything you’ve posted to Reddit is going to be used to train an AI. You’re part of it already. You have no leverage. You have no say over how your content is used.
And it scares me that we have already been backed into that corner. I don’t see a way out of that.
But I think the better question is, how do we allow these tech companies to use all this content, train on all this content, without licensing deals or consent? Who created these things? What are the biases that are built into them? Should something that’s going to just drastically change society be more transparent? These are all questions that everyone should be asking right now.
LaForme: I was planning to close by asking if there was a chance that all of those headlines were a bit too doomsday and maybe things weren’t so bad. But I think I’m going to skip that one and ask you guys how you sleep at night — assuming that you do and I’m not just already speaking to AI avatars of you.
Mahadevan: I sleep at night. OK, so I do not think it is doomsday because, as I said, we don’t know how good Sora is yet.
I do think that we have to be careful of getting caught up in the hype in either direction, whether it’s pro-AI hype, like AI is going to save journalism, or pure doom.
It’s not going to save journalism. It’s also not going to completely decimate the information ecosystem and lead to a post-truth world.
I think it’s somewhere in between.
And what if news organizations are really diligent about holding these companies accountable and reporting on what they’re doing, doing at the front end all the reporting we did on Facebook too late? I think there’s already really good journalism coming out that does that. Tony, you mentioned 404 Media before we started recording. I think people can emulate 404’s reporting. They do a lot of red teaming of these models and products so people know how dangerous they are.
I see a lot of good use of AI in newsrooms. I think it can really enhance under-resourced local news outlets.
I talked to a guy at a really small paper in North Carolina. He was able to report on basically every local government meeting because he could download the transcripts of all of the meetings, run them through ChatGPT and produce news articles based on the local happenings. It’s a very small town, and obviously the ethics of what you should be showing audiences are still being worked out.
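(A workflow like the one Mahadevan describes might look roughly like the minimal Python sketch below. It assumes the official OpenAI Python client and an API key in the environment; the model name, prompt and file layout are illustrative, not the reporter’s actual setup, and any draft would still need a human editor’s review.)

```python
# Minimal sketch of the transcript-to-draft workflow described above.
# Assumes the official OpenAI Python client ("pip install openai") and an
# OPENAI_API_KEY environment variable. Model, prompt and paths are
# illustrative; every draft still needs review by a human editor.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an assistant for a small local newsroom. Summarize the key "
    "decisions, votes and public comments in this government meeting "
    "transcript as a draft news brief, and flag anything uncertain for "
    "a human editor."
)


def draft_from_transcript(path: Path) -> str:
    """Turn one meeting transcript into a draft brief for editor review."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whatever model the newsroom has
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            # Very long transcripts may exceed the context window and
            # need chunking before they are sent.
            {"role": "user", "content": path.read_text()},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # One draft per transcript file dropped into a local "transcripts" folder.
    for transcript in sorted(Path("transcripts").glob("*.txt")):
        print(f"--- Draft from {transcript.name} ---")
        print(draft_from_transcript(transcript))
```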
But I do think it’s going to allow local news organizations and small local nonprofits to compete a little bit more by making it cheaper to do the reporting they want to do. These days, a reporter has to cover five school boards across five counties, and that’s just not possible. I think it can be really helpful in expanding coverage areas.
So I want to say I’m bullish on AI and the news. Tony probably has a different view, which is good. We need that.
Elkins: I think I agree with all the points Alex makes. I sleep at night knowing there are a lot of smart people in journalism doing some very deep, heavy thinking about the subject. So I don’t feel like we’re going to get caught off guard again by this onslaught from Silicon Valley. We are more prepared for it.
One of the ways that we address it is what Alex mentioned earlier: You need to be experimenting. I don’t know if I’d be publishing anything right now, but there needs to be someone in every news organization tasked with understanding AI.
Where I do depart a little bit, and am less bullish and more doom-and-gloom, is in the effects it has had, not on the media industry, but on society. There are already stories out there about AI being heavily weaponized against women. The Washington Post has written about software being used to harm teenagers in schools. That is deeply scary.
I’ve seen software that animates people from uploaded images. You can upload a photo of someone and have it create a video. I’m more scared about how stuff like that can be weaponized against people.
There are a lot of smart people thinking about whether it’s good or bad for journalism. But I worry that, on the larger scale, we’re already falling behind on how to stop this massive wave of content.
And that’s what scares me more than anything. It’s the massive wave of content and how it can be used to the detriment of society.
LaForme: Thanks for both of your time. I’m sure we’ll have continued thoughts on AI, especially as Sora is released or whatever eventually happens with it. Is there a good place for folks to follow you? To hear those continued thoughts? Perhaps right here on Poynter.org?
Elkins: Poynter.org and LinkedIn. I’m posting way more there.
Mahadevan: Yeah, Poynter. I mean, I’m still on Twitter, unfortunately, but I’m still holding out. You can find my thoughts there.
LaForme: You and the blue checks.
Mahadevan: Hey, you’ve got to be among them because they are the ones who are using these tools, like it or not.