SEOUL, South Korea — Uncharted territory.
Again and again, as Yoel Roth, Twitter’s former head of trust and safety, talks about his work at the social media platform, both before and after Elon Musk arrived, that’s the idea that comes to mind.
Determining whether to restrict the tweets of a sitting president of the United States. Making a decision, later reversed, to ban a news story about Hunter Biden’s laptop. Even figuring out whether he reported to Musk and whether he still had a job.
Roth spoke about the unique challenges of Donald Trump, efforts to deal with misinformation and online harassment, and the whirlwind first two weeks after Musk’s arrival to 400 in-person and 300 online attendees on Friday, the last day of the GlobalFact 10 fact-checking summit in Seoul, South Korea. GlobalFact was hosted by the International Fact-Checking Network of the Poynter Institute and SNUFactCheck of South Korea.
At Twitter, Roth led a 220-person organization responsible for Twitter’s content moderation, integrity and platform security efforts, and designed the strategy for the platform’s work to combat harmful misinformation. After leaving Twitter, he became a technology policy fellow at the University of California, Berkeley’s Goldman School of Public Policy.
Roth took GlobalFact 10 participants behind the scenes and spoke candidly about how social media companies are faring in their fight against mis- and disinformation. And he explained why he no longer drives a Tesla.
Here is his conversation with Aaron Sharockman, executive director of PolitiFact and vice president for sales and strategic partnerships at Poynter. It has been edited for length and clarity:
Aaron Sharockman: There’s so much to talk about in regards to Twitter these days, and I think Yoel Roth has plenty to say. I think we now recognize as a community that its current owner, Elon Musk, has actively spread misinformation and used false information to attack many of the fact-checking organizations in this very room. Community Notes is now the only means to stop or slow false attacks or misinformation on the platform. And I think this community would say it’s not going great.
On October 27, 2022, Elon Musk completes his $44 billion purchase of Twitter. The next day comes a series of key firings, including the chief executive, the chief financial officer and the policy head. Not only do you stay, but Elon publicly supports you, kind of comes out and says that he supports the work you’re doing. You last two weeks. You quit. Tell me what those 13 days were like.
Yoel Roth: Whirlwind doesn’t really begin to describe it. If you haven’t been through a $44 billion acquisition of a once public company, it’s not pleasant. There’s all sorts of stuff that happens in connection with that, from different security procedures to even figuring out who your boss actually is that nobody prepares you for. I feel like I spent those two weeks mostly figuring out, like, is Elon Musk my boss? If so, what does that mean? What does he want me to do? Am I okay with doing what he wants me to do? And then, by the way, there’s thousands of employees at this company who are all trying to figure out who their boss is. And what is the company that they are working at going to be doing as things move forward? In a word, it’s ambiguity.
And I think in the face of that ambiguity, Twitter also had to deal with all of the challenges that go along with social media and that had come up over 15 years: trolling campaigns that were posting sort of crazy volumes of racist content, major elections in the United States and in Brazil. All of that was ongoing amid this tremendous corporate ambiguity.
I told myself going into the acquisition that, of all the things I wanted to make sure I survived through if I had the chance, there were the U.S. midterms, which obviously were a big deal, but more than that, I was really worried about the elections in Brazil. Looking back at what we had learned over years about election security and about the possibility of violence created by social media, I worried about what things would look like in Brazil. I worried about the runoffs. I worried about what would happen if people called for protests, and what those protests would look like given (then-president Jair) Bolsonaro’s use of Twitter and other social media channels to attack the electoral courts. All of those were factors. And all of this is happening as Elon Musk is buying the company.
An interesting thing that happened, actually, in my first conversation with Elon Musk was he understood some of those challenges. Even after everything that’s happened and what he’s done to me personally, I don’t want to undermine the fact that when it came to some of the risks of electoral violence in Brazil, in particular, even Elon understood what might happen. And he articulated readily that he didn’t want Twitter to be a cause of violence in Brazil, which completely caught me off guard and sort of underscored this really ambivalent relationship that he seems to have with misinformation, with security work, and with trust and safety more generally.
Sharockman: So was he your boss?
Roth: I’m pretty sure. I haven’t been able to get anybody who works at Twitter to answer this yet, but immediately after the acquisition, Elon instructed Twitter employees to turn off the corporate directory. And part of this was a technical reason, because they just fired all of the executives. And so at that point, the system stopped working, but then they never turned it back on. And the best explanation I was able to get was, so many people were coming and going that they didn’t want information about that to be readily available to employees who might leak it to the public and leak it to the press.
Sharockman: The trust and safety team was about 200 people?
Roth: Yeah. Calculating how many people are doing trust and safety work is complicated, because there’s people who write the rules and the policies, and that’s a tiny fraction of the overall universe of people who do this work. So there’s product managers, engineers, designers, researchers, and of course, operations staff, some of whom work for Twitter directly, but many of whom work for contracting firms around the world who actually do the incredibly difficult, painful work of frontline content moderation. And those folks at Twitter number in the thousands. And so by different estimates, you could hear anything from 200 to several thousand, depending on who you ask and how expansive they are in that.
Who is left from trust and safety?
Sharockman: You said you had 12 direct reports. You believe one person is still there?
Roth: Yeah. A vanishingly small number of those 200 folks who either reported to me directly or indirectly are still at the company. Of my sort of core team of policy folks, investigators, only one is still at the company. So the cuts have been pretty severe.
Sharockman: So if you had to describe what Twitter’s trust and safety program or activities look like today, what would you say? What is it?
Roth: Nonexistent. And that’s painful to say. I joined Twitter for the first time in 2014 while I was in graduate school, and then rejoined full time in 2015. And I saw the painful progress that the company made in addressing, imperfectly, issues of disinformation, of harassment, of abuse. We didn’t solve these problems, but we made painful small amounts of progress against them. And to then see those teams eviscerated and that work rolled back so comprehensively in such a short period of time is really a stunning transformation.
Sharockman: Show of hands, how many followed the news reports of the would-be Russian coup through Twitter? Was that your primary news source, would you say? Wow. Very few. I think what Twitter was great at was covering breaking news events. And I think probably for a lot of journalists and fact-checkers in this room, the Twitter that we knew, the one you worked at for seven or eight years, is kind of all washed away.
Roth: It’s hard to imagine a breaking news event not happening on Twitter. When Jack Dorsey came back to Twitter as CEO in 2015, one of the first things that he did, and a decision that I really admired at the time, was moving Twitter from the social networking category in the app store to the news category. And he got up at a Twitter all-hands and did sort of a thoughtful, beard stroking thing, and he explained this decision by saying, the role Twitter occupies in the world is not where you talk to your grandparents, and it’s not where you post what you had for lunch. Although that happens too. Twitter is what’s happening now. It is the breaking news platform, and that’s right. But it’s also built on an ecosystem of people who contribute the news and help make it happen.
It’s built on open APIs that make data about the news available to people who want to understand it. And so much of that has been eroded so quickly that you see a major global development, an aborted coup in Russia, hardly play out, if at all, on the breaking news platform. I think Twitter’s role in the world is fundamentally different now than it has been for the last 15 years.
Sharockman: So in the conversations or the communications you had with Elon in those 13 days and what you’ve seen and read after, how does it succeed as a business?
Roth: Allegedly the latest is I think they’re trying to pivot to video, which we’ve seen a few times already in this industry, and it didn’t work any of those other times, but apparently they’re trying to do video now, so we’ll see how that goes.
Sharockman: I want to talk about the decision to block or ban Donald Trump’s account. So Facebook, I think, went first and blocked his account on January 6. I think you followed on the 8th. I think for a lot of people, probably that was four years too late or five years too late or two years too late. Did Twitter have what amounts to a Trump war room or a tweet room? You’re a team of referees in some ways, and everyone wants to work the ref. So I’m curious what that process is in a really kind of unprecedented situation.
Roth: It’s worth noting that Donald Trump stymied Twitter in a lot of ways. There’s been some speculation that was for financial reasons, that the company believed that he was responsible for so much revenue and user growth that it couldn’t ban him and it couldn’t moderate him. That wasn’t my experience. My experience was that for Donald Trump, Twitter was caught between its vision of being a platform that is where news and current events happen and where there’s a compelling public interest in access to content and the realities of how somebody that powerful speaking in public can do great harm. And the company was stuck in the middle of that dynamic, trying to juggle its desire to mitigate harm against its desire to protect this content in service of some vision of public interest.
That was really hard for the company to reconcile. For several years of the Trump presidency, the company did next to nothing to moderate his content, in part because there wasn’t a suitable approach for moderating things that took into account this notion of the public interest. In 2018 or 19, the company introduced a public interest policy that said, basically, in certain situations, we won’t remove this content, but we’ll put it behind a warning message that makes clear that it’s bad, but leaves it up for accountability purposes. And then the company still didn’t use this public interest capability because frankly, we were terrified. We didn’t know what would happen, and some of that fear was justified.
So the first time Twitter moderated Donald Trump was in May 2020, when he posted a series of messages attacking Governor Newsom of California and the decision to send out mail-in ballots to all eligible voters in California. The former president attacked this decision, saying that it would be sort of the gateway to electoral fraud. And for the first time, Twitter applied a fact-checking label to one of his tweets. I did that myself. And the response to that was a sort of immediate attack, not just on the company, but on me personally. I was on the cover of the New York Post, which is not fun.
The former president held a press conference in the Oval Office, where he signed an executive order decrying social media censorship, held up the New York Post cover with me on it, and said that I was emblematic of Silicon Valley bias and the attempts by Silicon Valley to undermine his presidency and his campaign.
And I think there was a strategy behind that. The strategy was to try to make it so that the company thought twice before it ever moderated him again, because the consequences were not just the company suffering, but its employees suffering as well. Turns out that wasn’t effective. And really, that first fact check was like a dam breaking within the company.
And between Election Day 2020 and January the 6th, we moderated more than 140 posts just from the @RealDonaldTrump account that violated the company’s civic integrity policy. And so there was a lot leading up to the decision, first to restrict the president’s account on January the 6th, and then ultimately to ban him on the 8th. But it’s worth putting it in this broader context: It took a long time for Twitter to figure out what its moderation approach here would be, and it was painful. Every single step of the way was.
Decision-making ‘never Twitter’s strong suit’
Sharockman: On the restriction and the ultimate suspension, unanimous? So what does that look like?
Roth: Making decisions was never Twitter’s strong suit. But the point of these content moderation decisions is that they should never be one person’s call. There should be a process that informs them. And for months, we’d had a decision-making process related to Donald Trump in particular. It was not just me making a call or my boss making a call, but a team of folks who would review the posts against our written policies and standards and make a recommendation about that in writing. That recommendation would be reviewed by multiple different people, and then ultimately that decision would get implemented. When you’re talking about moderating a sitting president, obviously you take those decisions very seriously.
But on the 6th, for example, there was debate about whether to suspend the president’s account immediately or to take a milder step. My personal viewpoint was that we should have banned the president’s account on January the 6th on the basis of his content. There was disagreement, and I was overruled. And ultimately, we restricted three of the president’s posts and required him to delete them. That was the first time we’d ever done that. And then, after the president served a timeout, returned on January the 8th and continued to post violative content, ultimately we made the decision to ban him. But each of those decisions involved a lot of debate and a lot of ambiguity.
Sharockman: I think a lot of the folks in this room have political leaders, presidents, prime ministers, key members of parliament who have done just as much to agitate (as Trump). I think there was a report asking, why not ban the president of Ethiopia or India or Nigeria, or the … accounts coming out of Iran? Bolsonaro is another example. How do you think about, by doing this, aren’t we now obligated to look at all these other accounts? Or how does that work?
Roth: I mean, a lot of it comes back to not scrutinizing these accounts as special cases, but as having a set of written, established policies that you stick to. And the upside of that is it makes a lot of these decisions more straightforward because you just evaluate them against a rubric. But on the other hand, if your policies aren’t perfect and Twitter’s were not, you can end up in situations where there’s clearly harmful conduct on your platform that you’re left unable to deal with.
This is a constant struggle, especially when you’re talking about elected officials, and especially when you’re talking about heads of state. When Twitter banned President Trump’s account, Angela Merkel came out and said that it was a terrifying amount of power for a social media company to have. And I agree with her. I agree that these decisions can’t be taken lightly, and that you have to weigh newsworthiness and public interest against harm reduction. But as we saw in the case of Trump … sometimes you have to promote public safety and prevent the incitement of violence and treat that as a more important value than public interest or newsworthiness.
Sharockman: I want to ask about some of your content moderation policies. Could you talk a little bit about the approach and how you thought about using fact-checkers? Ultimately, you did bring the AP in, and you were bringing AFP in until you weren’t, until Elon bought it.
Roth: I think there were a couple of factors here. The first one is financial. When we talk about social media, we generally are talking about Facebook, or to a lesser extent, Google or YouTube and the financial realities for those companies are vastly different than the rest of the industry. Twitter was probably the biggest of the small companies, and it certainly punched above its weight in terms of public impact, especially for journalists and for politicians. But Twitter never had the resources that Meta does or that Google does. And so when in 2020, we started developing sort of a broad strategy for how to address misinformation, it wasn’t feasible to implement it the same way that Facebook did. It simply was not practical for the company. I couldn’t get the budget to do it. Nick Pickles couldn’t get the budget to do it. It wasn’t workable.
The other consideration beyond finance was who is responsible for making these decisions? Something about Facebook’s decision-making structure that I’ve always found interesting is that a lot of the decisions about when a post gets labeled as misinformation or fact-checked are made by the fact-checkers. And Meta occupied a very comfortable role as a platform. They say, these aren’t our decisions, we’re just hosting them. And they can sort of wash their hands of responsibility for the application of those labels. That has a lot of benefits when you’re a company in Silicon Valley, because you’re insulated from criticism about those decisions, and you can sort of pass the buck to some of your partners. I can understand why if you could do that, you would do that.
And I know there’s folks from Meta in the room, and so I’ll say, like, I’m not attacking the company’s decision to do this. I just think it’s worth noting why, from a company’s perspective, that’s desirable and worth spending money on. Twitter took a different stance. We wanted to own those decisions. We were approaching misinformation as a part of our terms of service and said, if we are going to intervene on something, if we are going to apply a label to it or remove it, we’re going to be the owners of that decision. If people are going to criticize that decision, they can criticize Twitter, not our partners.
And the result of that was that we took a lot of criticism for some of those decisions, but we didn’t feel that it was appropriate for us to pass the buck on that responsibility to others who might not be as well protected from some of the consequences of those choices.
A subpoena from Congress
Sharockman: What is it like to receive a subpoena from Congress? Does that just come in the mail? Did you open it and it was like, electric bill, news catalog, news magazine … oh, a subpoena?
Roth: The whole process involves a lot of lawyers and is very expensive because all of them bill by the hour, including when they receive your mail. But it’s amazing how so much of this happens without you being involved in it. You sort of find counsel to represent you. And an interesting challenge of having worked at a big tech company like Twitter is all of the big law firms work with all of the big tech companies, which means they have a conflict of interest if you ever need them to represent you individually. And so I quit Twitter, and I spent the next two months desperately trying to find an attorney to represent me.
And once you do, and they start billing you – by the hour at sort of these eye watering figures – then word sort of trickles out amongst the lawyers that they are representing you. And so my subpoena never arrived to me at all. It just went to my lawyer and it was all handled by them. I think somebody emailed me a PDF at some point, but it was just a formality by that point.
Sharockman: What was the purpose of them calling you to Washington?
Roth: I believe it was to “hold me to account for having run the censorship team at Twitter.” The first communique was not the subpoena. The first was a letter that I received from a staffer on the House Oversight Committee. And the staffer said, you played a central role in running the censorship team. We have questions for you about Hunter Biden and will you appear at a hearing voluntarily? And my very expensive lawyers advised me, no, wait, don’t respond to that. And then everything sort of played out from there.
Sharockman: In your testimony, you said that the decision to restrict the New York Post story related to Hunter Biden was a mistake. You said, “I think the decisions here aren’t straightforward and hindsight is 20/20. It isn’t obvious what the right response is to a suspected but not confirmed cyberattack by another government on a presidential election.” You added that Twitter erred in this case because “we wanted to avoid repeating the mistakes of 2016.” The case is the removal of a New York Post story with the headline “Smoking gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad.” But most people know that story as the Hunter Biden laptop story. The key question, I think, in the original Post story was whether the Ukrainian businessman, a man named Vadym Pozharskyi, met with Joe Biden. And then, even if they met, we had to figure out what that meant. But did they meet? The email didn’t prove that. The New York Post story didn’t prove that. And Biden at the time, through his campaign, denied it. Fact-checkers, like my colleagues at PolitiFact, reported that Twitter initially made the decision to restrict the story based on its policies regarding hacked materials. I wanted you to talk a little bit about that decision and your role in it.
Roth: Yeah. So to tell the story of the Hunter Biden laptop, you have to go back to 2015 and the hack of the DNC and of John Podesta’s email by Russian military intelligence, a thing that without a doubt happened. The intelligence about this has been declassified by the U.S. intelligence community and we now know that there was a robust operation to first, hack John Podesta’s emails. Second, obtain those emails. Third, leak them, first through several social media accounts and then through WikiLeaks in a way that was consistent with the interests of the Russian government as part of their campaign to interfere in the 2016 election. We know that all of that happened.
We also know that the entire social media industry, Twitter included, completely screwed that one up. In 2015 nobody took action on this. Not Facebook, not Twitter, not Google, nobody. Because we didn’t know what to do when a government hack and leak operation targeted the campaign of a candidate for president of the United States. And so this content circulated freely. And while there’s a lot of ambiguity about whether other Russian efforts actually influenced the 2016 election, most of the empirical studies suggest that if voter behavior changed because of what Russia did, it was because of the hack and leak. And so 2016 happens, and within Twitter, we start asking ourselves, okay, what did we get wrong here? And what do we do next? So we roll out a series of policies about troll farms and the Internet Research Agency. I build a team that’s focused on combating that type of disinformation.
But we also look at the hack and leak, and we ask, what do we need to do to address this in the future? I write a policy called the Distribution of Hacked Materials Policy that basically says if we find evidence that material is being distributed on Twitter that we know was the product of a hack, we will remove it. Straightforward policy. Is it being leaked and did it come from a hack? That was the policy. And so in the 2020 election, we see that this weird material comes out and something that everybody kind of forgets about the early days of the New York Post story was just how strange all of it was. There was the laptop repair guy who got this computer out of nowhere and then brought the hard drive to Rudy Giuliani for some reason.
And the guy gave a weird interview where he sort of didn’t seem to recall why he had this laptop or what was going on with it, or why he went to Rudy Giuliani. And this material is circulating widely, and cybersecurity experts are speculating, not unreasonably, that this might be the product of a hack and leak. But that morning, as Twitter was trying to figure out what to do, I went back to a very simple policy that I had written. So arguably, I’m the subject matter expert on it, and it’s: Is there a hack, and was it leaked? And what we saw here was that the second part of the policy was true: There was a leak. But for me, I didn’t feel that there was enough evidence of a hack. I don’t think we knew. There was this weird laptop situation, but the evidence was unclear.
My recommendation to my boss was, we shouldn’t take action on this. And as happens when you work at a company, sometimes your boss disagrees with you and makes a different decision. And that was what happened. But this was one of those classic content moderation moments where different people can evaluate the same set of facts. And for me, it didn’t meet the bar of the policy. For other people, it did. Twitter made a decision, and 24 hours later, it reversed that decision. But the rest, as they say, is history.
Sharockman: I want to ask, in a moment like that, which is kind of a crisis, do you feel pressure based on what the other guys are doing? And are you picking up the phone and calling your contemporaries and saying, Hey, how are you thinking about this? What’s happening behind the scenes that we may not see?
Roth: Pressure is an interesting word. One of the things that I loved about working at Twitter was that despite not being the biggest tech company, it had pretty outsized influence on the industry. I thought of us sometimes as the tail that wagged the dog of the social media industry. When we would make a move on something like transparency or a new policy, we would often see much larger companies respond to that move and usually do some variant of what we had done. In a way, we kind of softened the landing for when, a couple of days or weeks later, Meta did what we did. And so that day, all of the companies were grappling with what to do about this story.
But I would note, and this is now the subject of some investigation by Jim Jordan in the House of Representatives, the fact that the companies talk to each other does not mean that companies coordinate content moderation decisions. Every company has its own set of policies and also very different products, and I think that’s a good thing. The law professor Evelyn Douek has written about what she calls content cartels. This idea that the more social media companies collaborate, the more they start to behave like a cartel that results in a pretty homogeneous Internet. I agree that’s a problem and a risk. And so, yes, as a matter of course, the tech companies talk to each other. They compare notes on issues. In some cases, especially when we’re talking about security, we exchange threat information with each other.
But that doesn’t mean that these decisions are made in a coordinated fashion. And I would say if that starts to happen, I actually think that’s bad for the Internet. We shouldn’t have an Internet that looks totally homogenous across platforms. We should have companies that make different decisions so that consumers can decide which company they want to give their attention to.
Sharockman: Social media companies and their efforts to combat misinformation have really helped fact-checking organizations (financially) do some of the important work that they’re doing. The existence of many of these fact-checking organizations might be tenuously tied to what social media platforms want to do when it comes to mis- and disinformation. What do you think?
Roth: That’s terrifying. Casey Newton recently wrote a piece in his newsletter Platformer, where he speculated that trust and safety is a zero interest rate phenomenon. This idea that companies will park their money in trust and safety when the money is cheap, but as soon as interest rates go up and venture capital funding dries up a little bit, they start to pull back. And you see that in some of the layoffs that have affected even the really large companies like Meta and Google. You see that certainly in the evisceration of Twitter’s trust and safety efforts. And I worry a lot about the sustainability of an ecosystem that exists at the whims of profit seeking companies. I worry about that a lot because this work is too important to leave up to something like that. On the other hand, it’s hard to understand where the funding comes from in a different model.
Sharockman: One of the big topics at this conference over the past three days has been online harassment. Fact checkers and journalists and really everyone, and you included, have certainly faced a rise in online threats and harassment. Elon aside, are there things platforms can be doing that they’re not? Is this hopeless? Is this just what life is like on social media today?
Roth: I sure hope not. I mean, look, I think the original sin of social media is a failure to address harassment. We have known about the challenges here, going back at minimum to Gamergate in 2015 when we saw the power of mobs to harass and intimidate and silence people. And I believe one of the most significant failures of all of content moderation and all of social media policy is the failure to address that. But harassment is really hard to address. Imagine you’re writing a policy at a social media company, and your policy is about abusive behavior. People saying mean things or insults or something like that. Imagine that an account posts 10 insults directed at the same person. That clearly seems like it’s over the line. That account gets banned. That’s a lot like what social media policies look like today.
But now instead, imagine that instead of one account posting 10 things, you have 10 accounts, each posting one thing. What’s the evidentiary standard for what you should do there? Do you take action on all 10 of them, even though they’ve maybe each individually only said one mean thing? You could imagine a social media company that takes that position, but that’s a pretty censorious policy. And why companies have struggled with harassment – and I’m not justifying the failure, I’m just explaining it – why companies have struggled so much is that it’s hard to establish with clear evidence when something is an organized harassment campaign and when it’s just somebody saying something mean.
We’re starting to see the tide turn on this a little bit. One of the policies that I’ve been the most excited about from any company is Meta’s coordinated social harm policy that they introduced, I think, 12 or 18 months ago. And it takes some of their approaches to coordinated inauthentic behavior and applies it to the domain of harassment and essentially says that the company will take action when they see evidence that accounts are using Meta’s tools to organize harassment and societal harm. I think that’s the right way to go. Harassment is not just about individual abusive tweets. It’s about networks. It’s about the digital flies that harassed Jamal Khashoggi. That’s the phenomenon we have to deal with. And it’s not about the posts or the tweets. It’s about the coordinated behavior across those accounts. And that’s a hard problem to solve, but it’s actually one where companies have invested really heavily since 2016.
And if companies bring that same technology to addressing harassment more broadly, I think there’s an opportunity to make progress here. Unfortunately, we didn’t have a chance to do that work at Twitter before Elon Musk bought the company. But it’s one of the areas that I hope social media continues to invest in going forward.
Audience Q&A highlights
Q: Knowing what you now know about Elon Musk and his relationship with engineers and his leadership skills, would you buy a Tesla and if so would you ever trust the full self-driving mode?
Roth: Funny story. So I used to own a Tesla, actually, or more accurately, I leased a Tesla. And right around the time that I left my job at Twitter, the lease on my Tesla was ending, and I got another car – spoiler, not a Tesla. I was trying to return my Tesla in accordance with this contract that I had signed with them that said I’ve leased this car for X number of years, and now I need to give it back. And I spent two months trying to get somebody at Tesla to answer an email to take back their car. It’s financially in their interest to take this thing back because it’s their asset and it’s depreciating on my watch.
But nobody at the company could do the basic work of taking their own car back. So there’s a problem there. But while I had my Tesla … my car was built at a time when Tesla used radar sensors as part of the signals package for full self-driving. My car had radar sensors in the bumper that would do things like figure out, is there a car you’re about to run into? And at a certain point, Elon announced by tweet that machine vision was good enough that they no longer needed to use the radar sensors. And so they disabled them. They took a feature that was physically present in my car that promoted safety and that was the industry standard for having your car not run into stuff and they turned it off and said, “No, the cameras are good enough.” And the cameras were not good enough and I never used self driving ever again. So I say this as perhaps a cautionary tale. I now happily drive a different electric car.
Q: My question is about Twitter 2.0. Being a big advertising platform was the key rationale for trust and safety. So I wanted to pick your brain on this, because new Twitter completely demolished that. They created something that was really unsafe for advertisers as well. Do you understand the plan here?
Roth: I’m not convinced the company’s management understands the plan here. I don’t think there is a plan.
Shortly after I left Twitter, I wrote a guest essay in the New York Times speculating that it couldn’t possibly get that bad. And there were three reasons. The first one was advertisers. So it’s like nobody who is trying to run a profitable company … would alienate advertisers. Wrong. Turns out they alienated the advertisers. The second reason was regulation, right? So it was like, there will always be a backstop against this because … the disinfo code of practice is a thing and the company has to comply. Wrong. Twitter withdrew from the disinfo code of practice. Unthinkable steps by the company. And then the third was App stores. This idea that a platform can only get so toxic before you see Apple or Google step in and intervene. And that one’s interesting because you saw a big blow up shortly after my piece came out in the Times where Elon suggested that Apple threatened to kick them out of the App store. Apple walked it back. Elon and Tim Cook went for a walk in Cupertino, and now they’re best friends. And Apple is advertising extensively on Twitter. You can speculate about why that happened, but I mention all this because I believed that the plan was you can only go so far before you run into those limits. And I think we’ve seen the company has just absolutely trampled those limits at every turn, and I don’t see how that works.
Q: A quick question about where trust and safety sits within these platforms. I know you can’t speak for Facebook or TikTok, but are you on the marketing budget line, which can be subject to the whims of whoever wakes up in the morning and thinks that this is important or not important? Or are you more core to the actual product?
Roth: Yeah. So every company structures it a little bit differently. At Twitter, we made significant efforts to have that direct connection between product engineering and trust and safety. Structurally, I reported to Twitter’s chief legal officer, so the P&L line rolled into legal, so nothing quite as discretionary as marketing. At other companies, it rolls into different places. At Meta, notably, it rolls into the government relations team, which I think is an interesting choice. At Google, it rolls into somewhere else. But most responsible companies have at least one team that’s dedicated to doing safety by design and privacy by design work, which typically involves trust and safety staff being directly embedded with product and engineering teams.
If you build a product and launch it into the world, and then that product is abused, trust and safety is left to clean up the mess. If you can create a product that’s more resilient from the outset, you can start to get ahead of some of those challenges.