Supreme Court won’t hold tech companies liable for user posts
(nytimes.com)

This is a great example of NYT's bias against tech[1]: they somehow interpreted the (IMO very reasonable) holding[2] that "aiding and abetting" terrorism requires specific intent as the Court simply refusing to hold tech companies liable for user posts. I quote the relevant sections from the ruling:
> The phrase “aids and abets, by knowingly providing substantial assistance” points to the elements and factors articulated by Halberstam. Those elements and factors should not be taken as inflexible codes but should be understood in light of the common law and applied as a framework designed to hold defendants liable when they consciously and culpably “participate[d] in” a tortious act in such a way as to help “make it succeed.”
> Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally. And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered. Moreover, the algorithms have been presented as agnostic as to the nature of the content. At bottom, the allegations here rest less on affirmative misconduct and more on passive nonfeasance. To impose aiding-and-abetting liability for passive nonfeasance, plaintiffs must make a strong showing of assistance and scienter. Plaintiffs fail to do so.
[1] https://twitter.com/KelseyTuoc/status/1588231892792328192
[2] https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf
I’ve been working on an app that will ideally have many user posts, and this agenda scares me.
You have these giant platforms that can afford entire departments dedicated to manual moderation, and even then you get rags like The Verge or Vox writing about how awful it is that these companies make people look at disturbing content. It's like, well, what's the answer? If you let people post content to your service, some of them are going to post bad stuff; it's not avoidable.
But as a solo founder who is hoping to bootstrap, what does this mean for me? My first launch phase will be a very small group of creators I've vetted personally, and I've built a report system so I can be alerted once I expand beyond that.
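For what it's worth, a report system at that scale doesn't have to be elaborate. Here's a minimal sketch of the idea: persist user reports and alert yourself once a post crosses a small threshold. Everything here (the schema, the threshold of 3, the `alert_founder` stand-in) is an illustrative assumption, not this poster's actual implementation:

```python
# Minimal sketch of a user-report pipeline (hypothetical names throughout):
# users flag a post, reports are persisted, and the founder gets alerted
# once a post crosses a small threshold.

import sqlite3
from datetime import datetime, timezone

ALERT_THRESHOLD = 3  # assumption: escalate after 3 independent reports

def init_db(conn):
    conn.execute("""
        CREATE TABLE IF NOT EXISTS reports (
            id INTEGER PRIMARY KEY,
            post_id TEXT NOT NULL,
            reporter_id TEXT NOT NULL,
            reason TEXT NOT NULL,
            created_at TEXT NOT NULL,
            UNIQUE (post_id, reporter_id)  -- one report per user per post
        )
    """)

def file_report(conn, post_id, reporter_id, reason):
    """Record a report; return True if the post should be escalated."""
    conn.execute(
        "INSERT OR IGNORE INTO reports (post_id, reporter_id, reason, created_at) "
        "VALUES (?, ?, ?, ?)",
        (post_id, reporter_id, reason, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM reports WHERE post_id = ?", (post_id,)
    ).fetchone()
    return count >= ALERT_THRESHOLD

def alert_founder(post_id):
    # Stand-in for a real alert channel (email, Slack webhook, etc.).
    print(f"ALERT: post {post_id} crossed the report threshold; review it.")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init_db(conn)
    for user in ("u1", "u2", "u3"):
        if file_report(conn, "post-42", user, "spam"):
            alert_founder("post-42")
```

The threshold plus a unique constraint per reporter keeps one angry user from paging you repeatedly, while still surfacing genuinely flagged content fast.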
But I just don't think it's possible to hold these platforms responsible for content users post unless they show extreme negligence. Otherwise you're adding yet another barrier to anybody competing with big tech. And it's not fair to the big platforms either: if you're TikTok or YouTube, with that much content being uploaded, even a dedicated manual-review team will have a very hard time catching everything.
So I'm glad for this ruling. As much as it sucks that the web is so centralized now, pursuing this path of regulation will just make everything worse.
Hurray. We can continue saying whatever we like.