ABC News
What Kicking Trump Off Twitter Can — And Can’t — Do

It was The Trump Show 24 hours a day, seven days a week. For years, Trump has used social media to set the agenda with every one of his thoughts, policies and lies. Then suddenly, just like that, The Trump Show went off the air.

Last week, Twitter made the unprecedented decision to permanently suspend Trump’s personal account, saying he had broken the site’s rules by inciting violence. Facebook did the same, indefinitely, and Instagram (which Facebook owns) banned Trump for at least the duration of his term. Amazon made similar moves by kicking Parler, an alternative social media platform favored by right-wing groups, off its servers, causing the site to go dark. Reddit, Snapchat, Twitch, and many other platforms made similar moves to limit or ban Trump’s content.

These actions — with just days to go in the president’s term and only after a pro-Trump mob rioted inside the U.S. Capitol — will undoubtedly make an impact, but they’re not a panacea. Research shows that this kind of deplatforming can work, but millions of Americans have already been exposed to Trump’s lies and extremist rhetoric, and they won’t simply stop believing him because he can no longer tweet his thoughts.

Trump is far from the first controversial figure to be booted off social media, which means we have some data showing how deplatforming can work. A 2017 study examined Reddit's 2015 decision to ban its most toxic subreddits and analyzed the behavior of those subreddits' users after the ban. Using machine learning,1 the study found that many of those users stopped using the site entirely, and those who stayed cut the hate speech they posted on the rest of the site by at least 80 percent.

Amy Bruckman, a professor and associate chair in the School of Interactive Computing at Georgia Tech, is currently working on a study that looks at the deplatforming of major controversial figures like far-right commentator Milo Yiannopoulos and conspiracy theorist Alex Jones. Though the study has not yet been peer-reviewed and published, Bruckman said it shows similar findings to the Reddit study.

“We found that after they’re kicked off Twitter people talk about them less on Twitter, and people talk about their ideas less on Twitter,” Bruckman said. “Looking at the supporters, we found that the toxicity of their speech went down.”

Of course, those who really want certain content can always just go somewhere else to get it. But deplatforming can still drain the bad actors' audience and change the behavior of the subscribers left behind. A study posted on the prepublication site arXiv last fall analyzed what happened after Reddit banned r/The_Donald and r/Incels, both of which spurred users to create off-platform communities. The researchers found that, compared to the subreddits, "there was a substantial decrease in the number of newcomers, active users, and posts" on the new sites, but they also noted that they found "an increase in the relative activity for both communities: per user, substantially more daily posts occurred on the fringe websites."

A similar pattern emerged after Facebook banned high-profile members of anti-vaccination groups such as Larry Cook, whose Facebook page had some 195,000 followers. He moved to Parler where, before that site went dark, he had just 14,000 followers.

Experts I spoke to also noted that removing bad actors from a platform can help prevent users who aren’t yet down the rabbit hole from being radicalized.

“We need to protect those normies, the normal users who don’t want to be radicalized into neo-Nazism or whatever it may be. We need to protect them from being harassed and recruited,” said Megan Squire, a computer science professor at Elon University who studies online extremism.

Of course, removing Trump himself from a platform is quite different from removing, say, a subreddit, or even an Alex Jones. Trump is the president, after all. He has spent the past four-plus years seeding many of the ideas that incited people to reject the peaceful transfer of power and storm the Capitol last week. Those ideas don’t just vanish along with Trump’s profile pages.

This is part of the reason why critics wanted tech companies to do something about Trump’s rhetoric before it was too late. When deplatforming is a last-ditch effort, much of the damage has already been done, according to Julia DeCook, a professor at Loyola University Chicago’s School of Communication.

“Most platform moderation policies, especially with deplatforming, are reactive instead of preventative,” DeCook said. “We’re not actually trying to prevent the spread of these kinds of ideologies on these platforms, we’re more just playing whack-a-mole and hoping that the things we do stick.”

So deplatforming doesn’t completely solve the problem of baseless or harmful language online. Banning Trump alone, especially after refusing to act for so many years, isn’t nearly enough to undo the damage and convince millions of Americans of the truth: that the 2020 election wasn’t stolen. Just consider the potentially violent events already planned around D.C., and across the country, ahead of Biden’s inauguration.

Changing the channel on The Trump Show only does so much when its audience has reruns playing in their heads.


  1. In this case, researchers train an AI on a data set of known hate speech or toxic language, then ask it to measure how closely new text resembles that training set.
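The footnote's approach can be illustrated with a toy sketch: learn word frequencies from a small labeled data set, then score new text by how much its words resemble the "toxic" examples versus the benign ones. This is a minimal naive-Bayes-style illustration in plain Python, not the actual classifier used in the study; all of the example data, labels and function names here are invented for illustration.

```python
from collections import Counter
import math

def train(labeled_docs):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"toxic": Counter(), "ok": Counter()}
    totals = {"toxic": 0, "ok": 0}
    for text, label in labeled_docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def toxicity_score(text, counts, totals):
    """Log-odds that the text resembles the toxic training set.

    Positive score: looks more like the toxic examples; negative: more benign.
    """
    vocab = set(counts["toxic"]) | set(counts["ok"])
    score = 0.0
    for word in text.lower().split():
        # Laplace smoothing so unseen words don't zero out the estimate
        p_toxic = (counts["toxic"][word] + 1) / (totals["toxic"] + len(vocab) + 1)
        p_ok = (counts["ok"][word] + 1) / (totals["ok"] + len(vocab) + 1)
        score += math.log(p_toxic / p_ok)
    return score

# Invented toy training data — a real classifier needs thousands of examples.
training = [
    ("you people are vermin get out", "toxic"),
    ("these idiots deserve what they get", "toxic"),
    ("thanks for sharing this recipe", "ok"),
    ("great game last night everyone", "ok"),
]
counts, totals = train(training)
print(toxicity_score("you idiots are vermin", counts, totals) > 0)       # resembles toxic set
print(toxicity_score("thanks for the great recipe", counts, totals) > 0)  # does not
```

Real research systems work on the same principle but with far larger training corpora and richer features than raw word counts.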

Kaleigh Rogers is FiveThirtyEight’s technology and politics reporter.