
This Section 230 Stuff Is Not as Complicated as You Might Think

Imagine you’re in a bar, or a restaurant, or a retail business. Suddenly, someone starts shouting offensive, racist garbage loud enough to disturb the other patrons.
 
Should the management have the right to kick the offender out? Or give them a warning? Or ignore them? Maybe that person is a regular. Should the management be allowed to bar that patron for a few months, or even permanently? Of course it should, and businesses occasionally do.
 
Sports and concert venues routinely ban fans who cross the line and become violent or threatening to those around them. That’s because the owners of these private businesses have the right to ensure that their customers have an enjoyable experience while they are there.
 
Guess what? That’s content moderation. Just as the bar owner has the right to warn, expel, or ban a customer who interferes with the business’s ability to provide a good customer experience, so do social media platforms have the right to warn, block, or even ban users who use the platform in a manner that, in the management’s opinion, degrades the experience of other users.
 
This stuff really isn’t that hard.
 
Let’s go back to our original scenario. Should the bar or restaurant be held legally liable for the offensive statements of the rowdy patron? Again, of course not. The business didn’t engage in offensive speech—one of its users did. It’s the offender who should bear any legitimate liability, not the unfortunate owner of the retail establishment.
 
What if a murder plot was hatched in the restaurant? Is the restaurant criminally liable? Of course not. The restaurant isn’t responsible—those who executed the plot are liable.
 
What the much-discussed Section 230 of the Communications Decency Act does is simply extend the same kind of reasonable protections that already exist in the analog world to websites. Facebook’s website, Twitter’s feed, etc., are really no different than retail establishments where people gather and socialize or do business. Sometimes someone has to be warned, or suspended, or banned for their conduct.
 
Section 230 gives websites the ability to moderate content without being legally liable for such moderation, and it protects websites from liability for users’ conduct on those websites. It’s not some kind of novel or atypical liability shield—it’s a logical extension of the kinds of protections that exist in the analog world to the digital world.
 
Yesterday the Supreme Court heard oral argument in a case in which the plaintiffs are attempting to hold Google legally liable because radical Islamists managed to get their videos uploaded to YouTube. The family of a woman killed in an ISIS terrorist attack argues that Google should be liable for the presence of those videos, and claims that YouTube’s algorithm promoted them based on user interests.
 
This strikes directly at the purpose and function of Section 230. But it also strikes at the concept of responsibility. Is the killer responsible? Yes. Are the killer’s accomplices liable? Yes. Is YouTube? Of course not—not under any concept of responsibility as understood in the history of modern civilization.
 
You could lose a lot of money betting on Supreme Court outcomes based on oral argument, but the good news is that justices of all philosophical stripes expressed serious reservations about the plaintiffs’ arguments. Let’s hope those reservations carry the day, because making websites legally liable for the conduct of their users would be the end of user-generated content and user participation on websites.