Meta and YouTube found negligent in landmark social media addiction case
nytimes.com

There is relatively little detail about the case in the article. This NPR article [0] has a bit more, but it's still fairly sparse. Though it's interesting how Zuckerberg thought it was a good idea to say: "If people feel like they're not having a good experience, why would they keep using the product?".
Given that this is a case about addiction, that feels like a shockingly bad thing to say in defense of your product. Can you imagine saying the same thing about oxycodone or cigarettes?
[0] https://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-so...
As someone who values a liberal society, I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.
I also hope the reasons are obvious.
Keep in mind that this case is about a minor, not an adult. I don't think it's fair to ask children to resist social media through sheer willpower when there are legions of highly educated adults on the other side trying to increase engagement.
It should be no surprise that children can be manipulated by highly intelligent adults.
>[There are] legions of highly educated adults [at Meta] trying to increase [child] engagement
Why is this not only OK but the best way for Mark to spend every waking moment of his life?
Is it the money? But how often would he think about his bank account versus his products? Maybe it's pure drive?
I just wish for once one of these egomaniacal billionaires would actually put all their efforts and resources into solving climate change or ending world hunger.
Even his medical initiative Chan-Zuckerberg biohub is a self-congratulatory shell game. I worked in the same building as them for years, literally all they did was have parties, conferences, networking events and self-congratulatory schmooze things and never prioritized actual lab research or clinical advancements.
Be careful what you wish for. Serious, persistent world hunger in certain countries primarily exists not due to climate change or lack of food or even lack of money but because of local violence and corruption. For example, the notorious 1980s famine in Ethiopia attracted much attention in developed countries and many people (including billionaires) donated money to help. There was a drought which made farming difficult but the main problem was a violent civil war. Armed groups used food as a weapon against the civilian population by destroying crops and stealing foreign aid.
So, if you want "these egomaniacal billionaires" to end world hunger then you're effectively asking them to form private militias and impose peace by force in the developing world. The new colonialism. Is that what you want?
(observer here)
>to end world hunger then you're effectively asking them to form private militias and impose peace by force in the developing world
Does this happen to be your space? If this comment were posted to a forum of experts, I imagine they would hotly debate whether a range of ideas would work.
I struggle to imagine the private militia concept would be suggested in that context; with that said, I know nothing.
Incentives drive outcomes, do they not? It's too easy to become a billionaire as a charlatan and too hard as somebody able to make a difference. Rather than granting Zuck nearly unlimited power and politely asking him to use it wisely if he doesn't overly mind the inconvenience, why not create a world where monsters can't have that much power in the first place?
I don’t think anyone would disagree with you there but that requires changing the nature of humanity. Which is a much less realistic outcome.
I don't think it requires changing humanity. Just put a 100% wealth tax after 1 billion. And stop letting money run politics.
Yeah, but who is going to lobby for it? Certainly nobody with actual money to pay for the lobbying.
And which politician would want to vote that in? Certainly no one with any rich friends who donate to their campaigns. Which means no politician that supports this is ever going to have the budget to get elected in the first place.
And then you have the problem that you cannot just fix this in one country. Because then all these rich people will find tax loopholes to claim they’re not nationals and thus exempt from this tax. So you have to convince every rich person and every politician in every country to change.
And now that you’ve created a wealth vacuum, you need to ensure that nobody rises up to flip the system again, using their wealth to manipulate everyone into repealing these new laws.
And now we are at the stage of having to change the nature of humanity…
The problem we have is that economics is driven by scarcity and consumption; and humans are largely driven by greed (or at the very least, a desire to make life comfortable). And we can’t have a future where rich people aren’t greedy, without changing the entire way economics works. Which also requires changing human nature too.
The basic concept of an impenetrable global taxation scheme came to mind about a decade ago, but at the time I was hopeful such a thing would be possible. (Ain’t no communist, but realize nice to have public roads etc. to get your employees to work - everyone chips in -> we all make more money.)
Is it human nature to rise up once a breaking point is reached? Since I concede it is not in our nature to finish our shift at our third job and go knock on neighbors’ doors, rock the vote. (agitating to elect the least greedy capable people)
Quick, keep my hope alive!
I just find it ridiculous that we as a society have allowed CEOs to become that wealthy. It's one thing to make your money from lucky investments, and become a billionaire. It's another to get there by simply running a corporation.
I don't mind them getting uberwealthy.
I mind the tolerance of society when some of these billionaires make their money on the back of negative externalities.
When "small" conflicts hyper-scale, people get hurt and all of society gets degraded: unpermissioned surveillance used to gain psychological leverage over us; literally paying for content that gets eyeballs without taking any responsibility for the misinformation and hate they are financing; actively, algorithmically pushing attention-getting material without taking any responsibility for the misinformation or hate they are promoting; getting paid for ads while taking no responsibility for accepting money from scams and promoting them; and all the other seemingly "minor" but pervasive negative externalities.
As everyone points out: incentives. If you don't take perverse incentives away from billionaires, or continue to give them perverse safe harbors, then those billionaires will relentlessly reinvest and innovate, in more harms, at ever greater scales. Things we still think are minor ethical issues, are not when they are hyper scaled.
This isn't some passive, life-is-rough-sometimes situation that people should be expected to weather. This is highly financed, highly managed psychological, social and political harm, for profit. Even if the harm is distributed and seemingly low in any given incident, it adds up to a visibly degraded society.
Somehow social media gets treated with all the lack of responsibility of a neutral web server. But these companies are highly active in how they operate. They should be responsible for their very active choices.
> I just wish for once one of these egomaniacal billionaires would actually put all their efforts and resources into solving climate change or ending world hunger.
I don’t. That presupposes that they have anything to contribute to begin with.
Their wealth beyond some millions (edit: being generous) is built on exploitation. That’s not necessarily a transferrable... skill.
And not just a minor, AIUI it's important that at the start, she was under 16
> Keep in mind that this case is about about a minor, not an adult.
This obviously means that tech is going to have no choice but to do "age verification". And I don't think there's much of a way to do that that wouldn't be uncomfortable for a lot of us.
Oh, corporations pushed age verification, so of course they will not have any choice now. But before that they could have just stopped being addictive regardless of age.
I would prefer Meta make their products less addictive for children, with the side-effect that they're perhaps less stimulating for adults, than for Meta to keep their products the way they are, gatekept behind a system that allows them access to even more of my personal data.
I understand why they would want the opposite. They can f*ck right off.
Or assign responsibility to…parents and legal guardians…who are not children.
Sure, parents do bear some responsibility here too. But we are talking about a platform that is engineered to be addictive to adults too. So it’s not as if the platform isn’t still predatory even if we find a way to parent every child on the internet.
Meta is not blameless here. Responsibility can be shared when Meta (and others) are essentially preying on children. It’s an uphill battle for parents by Meta’s design.
They’re not Meta’s kids, they’re freemium customers.
Doesn't this lawsuit (essentially) prove otherwise?
It would work if parents had legal recourse to seek justice against corporations that stalk, groom, and manipulate their children against their wishes.
We already have a distinction because it’s been known for decades already that some things are addictive purely through reinforcement psychology and some things lock people into a chemical dependence.
For example see the glossary in https://en.wikipedia.org/wiki/Substance_dependence
And for some reason we only use "addiction" to describe things that are recreational in nature, not drugs that have no recreational use but can be quite dangerous to discontinue abruptly.
I’m not a doctor but I’m pretty sure that’s not the case.
Substances like caffeine, sugar, and painkillers are definitely still referred to as “addictive”.
Whereas substances like sertraline (antidepressant) are referred to as a “dependence” because it’s dangerous to discontinue abruptly (as you said) but there isn’t any psychological addiction involved.
To elaborate, I think experts usually use these terms as follows: an addiction is something where you have a continuous and difficult to resist drive to keep doing/using something due to it being inherently rewarding. A dependence is something where if you stop regularly doing/using something you’ll experience some sort of withdrawal.
> I also hope the reasons are obvious.
Based on the fact that many people here disagree about fundamental things, as well as the fact that “liberal” is a highly overloaded term, I think it should be obvious that it’s not obvious what you mean.
I don't think the reasons are obvious. Where do you put gambling on the spectrum?
If something compels behavior vs. behavior remaining a free choice, a liberal society can and should treat it like any other source of compulsion.
Personally, I am leery of any technical definition of “addictive” that operates outside the traditional chemical influences on physiology. So I would not describe gambling in that sense.
One might have a malady that causes gambling to take on the same physiological vibe for you, but that’s not what it means for gambling itself to be addictive.
I am not a neuroscientist, but I thought the actual physiological cause of addiction was similar in both nicotine and gambling: you crave the predictable release of dopamine.
If that is the (heavily simplified) case, is there a distinction for you between a chemically-induced dopamine release from smoking and, say, a button you can press that magically releases dopamine in your brain?
You're missing the negative affect node of the Koob addiction cycle, which exists for gambling but to a lesser degree than for nicotine.
I don’t gamble, but if I did, I am fairly certain it would release little to no dopamine for me, win or lose.
I don’t smoke, but if I did, I’m also fairly certain I would find it hard to stop.
From everything I have read about addiction, it is far from that simple. One of the best examples are pain medicine like morphine. Give opioids to patients and some will form addiction to it and others will not, and the predictors for that are both genetic and environmental. It is not as simple as inject it into person X and now they are a slave to it. One way one can see this is in statistics in that long-term opioid use occurs in about 4% of people following their use for trauma or surgery-related pain.
It's not at all certain that you would find it hard to stop if you suddenly decided to try smoking. There would naturally be a risk, but how high that risk is is a debated subject if you have none of the risk factors for addiction.
You’re being downvoted, but there’s an interesting point you’re trying to make. Dopamine-chasing is truly selective in the behavior and chemical sense.
There is a particular hard drug that I could be easily addicted to if it were cheaper and more accessible. Nothing else like it gives me an irresistible craving for more. Not nicotine, ADHD meds or speed, benzos, and not even opioids have the same effect. So after I discovered this about myself, I went on a little journey to self-test for other possible addictions.
Social media? Nope. Video games and tv? yes. Gambling, hoarding, shopping: No. Sex: yes. Exercise: yes
I can’t rationalize any of it.
And yet, some people find themselves compelled to continue gambling long after they’re drowning in debt.
If you don’t want to call that addiction, fine, but you can’t deny that it happens.
Right: They’re gambling addicts. That’s a distinct fact from, “Gambling is a physiologically addictive behavior for typical humans.”
Right, there's a difference between a chemical that will addict most people simply because of the changes the chemical makes to the brain (even if the person doesn't even really like doing the thing that causes them to consume it), vs. an activity that gives you dopamine hits and can be addictive, depending on the person.
One is physical addiction and the other is psychological.
But I'm also feeling a parallel here to people who think that mental health issues aren't real medical problems and that people can just "get better" whenever they want. And that's concerning. We shouldn't be more lenient on things that are "only" psychologically addictive.
It's predictably addictive under common circumstances (a lack of socioeconomic support and a lack of alternative means to occupy one's time). If those circumstances are becoming more and more widespread in a society (which they are in this one), it behooves experts to consider that "typical" and "this particular cohort" might become harder and harder to distinguish, to the point where what would have been targeted interventions need to become general.
> If something compels behavior vs. behavior remaining a free choice, a liberal society can and should treat it like any other source of compulsion.
Indeed, and if we want those behaviours to remain as things considered to be choices rather than the nearly inescapable negative life-destroying feedback loops (activities with high addiction potential, for lack of a more concise term), they should be treated with special reverence and highly restricted from outside influence. Put another way, if we want liberal societies to be sustainable, I'd argue all forms of overtly addictive behaviour should—in many cases—be banned from public advertisement and restricted from surreptitious advertisement in entertainment, and we should have definitions for those.
For ages we've not had cigarette ads on public broadcasts, and yet people still "choose" to smoke, meanwhile there's been an increasing presence of cigarettes among Oscar winning movies in the last 10 years.
If you are addicted to smoking and trying to avoid being reminded of it, you'd realistically have to stop watching movies and participating in that aspect of culture in order to regain control of that part of your life. Likewise, with gambling, you don't only have to stop going to the casino, you have to stop engaging with sports entertainment wholesale.
You seem to be differentiating between physical and psychological addiction, and saying that only physical addiction meets the technical definition of addiction?
I’m saying society should tread extremely carefully in attempting to regulate citizens’ potentially psychologically addictive behaviors.
Good news, social media has been extensively studied and found to be addictive. So we have little need to tread carefully, we already know it’s addictive.
Fortunately it also has minimal to no value to society, so even if we overreacted and banned it completely it’d be fine
Ah OK. Yes I agree, there can be a blurry line between something a person does compulsively/addictively and something that he just enjoys doing. And it's different for different people.
To add to the confusion, sometimes I do stupid things just once. Even so, those things should be banned, for harming me.
We already have a category called addictive personality disorder where someone is much more prone to being addicted to pretty much anything.
In the US, regardless of what type of addiction you have, it is considered mental health. Open market insurance like ACA does not cover mental health, so there is no addiction treatment available. Sure, you can be addicted to a substance where your body needs a fix, but it is still treated as mental care. This seems to go directly against what your thoughts are on addiction, but that doesn't say much as you're just some rando on the interweb expressing their untrained opinions. So am I, but I'm not the one spouting differing opinions with nothing more to back them up than how you feel.
Where would you put 24x7 political content?
A little further down than social media apps, but mostly the same. After all, it's the main source of outrage bait for those apps. If we're talking about Fox News or CNN there's less specific user targeting and the delivery mechanism is more constrained.
That's more like perversion...
Dark patterns are real. Deceptive advertising is real. So-called prediction markets amount to unregulated gambling on any proposition. Many online businesses are whale hunts and the whales are often addicts.
Specifically when it comes to children, we need to show more restraint in giving them the liberty to partake in potentially addictive substances.
It's one thing if an adult smokes and gambles, it's another thing if a child does. It seems to me that stuff you do in youth tends to stick around for life.
What's obvious to me likely isn't obvious to you or anyone else, therefore nothing is obvious.
I wish we'd delete that word from the English language.
I feel like people use the word “addiction” to refer to both chemical addiction and behavioral addiction, and that people understand that the latter is (usually) far less serious than the former.
I don't think you can put them into buckets like that. All addiction is driven in pursuit of a reward. The magnitude of reward can be estimated with brain scans and such, but to my understanding it isn't universal in all humans.
Can we definitely say gambling addiction is less serious than alcohol addiction when there's individuals who find the former harder to quit than the latter?
> I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.
Not careful enough apparently: Nicotine isn't that addictive on its own, tobacco is.
Be aware, the vast majority of people who have ever smoked cigarettes occasionally never became addicted. They were not labeled as “smokers”. A non-trivial number of people today continue to smoke cigarettes on occasion. I like to have one on my birthday. Then again, I’m able to eat a chip and not consume the entire bag. I’m not convinced of these social science studies, and when digging into individual studies I’m sure the replication crisis comes into play.
Or you could read the studies that show addictive nature varies by person...
> Not careful enough apparently: Nicotine isn't that addictive on its own, tobacco is.
That is a very strong claim to make when the current scientific consensus strongly disagrees.
They're likely referring to this:
https://pmc.ncbi.nlm.nih.gov/articles/PMC4536896/
>However, nicotine can also act non-associatively. Nicotine directly enhances the reinforcing efficacy of other reinforcing stimuli in the environment, an effect that does not require a temporal or predictive relationship between nicotine and either the stimulus or the behavior. Hence, the reinforcing actions of nicotine stem both from the primary reinforcing actions of the drug (and the subsequent associative learning effects) as well as the reinforcement enhancement action of nicotine which is non-associative in nature.
You can find other studies about the addictiveness differences between cigarettes, vapes, chew, patches, pouches, etc. Basically, the methods with the most ceremony and additional stimulus are more addictive.
Tobacco may be the most* addictive delivery method, but nicotine alone is also addictive. To say it's not is misinformation. Consistent use of nicotine still leads to upregulation, which does cause irritability, brain fog, and cravings when you stop.
* I'd even change this to say modern nicotine salts in vapes are likely to lead to dependency faster than tobacco. A 5% nicotine salt pod will contain as much nicotine as a full pack of cigarettes, and so vapers tend to consume far more nicotine in a single sitting than they ever could with a cigarette. That combined with the constant availability means users of nicotine vapes & pouches (aka, no tobacco) are likely to have a more difficult time quitting than cigarette smokers.
Bottom line, it's still dangerous to dismiss nicotine's addictive potential, with or without tobacco as a delivery method.
How does that work when nicotine products that are every bit as addictive as tobacco exist, maybe you're just not aware of them? Sitting here with non tobacco snus (Swedish nicotine pouch) under my top lip, something I have been utterly unable to quit. I believe "nicotine free" tobacco would be completely non addictive.
Tobacco contains MAO-inhibiting compounds, which potentiate nicotine and increase addiction potential. That doesn't mean nicotine on its own isn't insanely addictive; I have no idea what the guy you're responding to is talking about. However, MAOIs were withdrawn as antidepressants for a good reason: they have a terrible withdrawal all on their own.
Gwern has a pretty good post on it:
Intuitively, why would you chew nicotine gum to stop smoking if it was just as addictive as cigarettes?
"I hope we’d be exceedingly careful in what we label “addictive”…"
To be sure. But still an obviously dumb thing for a CEO to say though.
Prohibition? Despite your hopes I'm not sure I got your intent.
What wording would you use then if the definition fits? You can call it a minor addiction or a severe addiction, but it's still one.
> As someone who values a liberal society, I hope we’d be exceedingly careful in what we label “addictive” in the same bucket as oxy or nicotine.
The problem is that this runs directly into the evidence that is mounting from GLP-1 agonists.
A lot more things are tied to the pathways we associate with "addiction" than we thought.
Why is it that these philosophical ideas about supposed personal freedom again and again make an appearance when it’s about the freedom of corporations? It’s always that. Either that or with the Free User pushed infront of them like a shield.
Social media is addictive the same way anorexia is. If you think Anorexia isn't a form of addiction, then sure, you got your 'safety'.
There’s a big distance between libertarian and liberal societies. The libertarian tendencies of corporations are what tend to cause more harm.
Mmhmm those are words. Words that are hand wavy pretexts for conservatism rather than liberalism; as a lover of liberal society you hope it acts conservatively!
This just comes off as poorly obfuscated self selection. You own a bunch of Meta, Alphabet and other media stocks?
A lot of smokers don't feel they are having a good time and want to quit but can't. I'm not sure the same applies to youtube.
I knew someone who had exactly that feeling about YouTube. It was a genuine struggle for them to stop even though the amount of time they spent on it was negatively impacting their life and the content was making them more anxious.
> Can you imagine saying the same thing about oxycodone or cigarettes?
No, but unfortunately I can very easily imagine people saying it, just like the people who made loads of money from pushing those products did. Also just like the people who are profiting from the spread of gambling are saying now.
Why would someone choose to do a thing if it harms them? There are good arguments against laws that restrict personal freedoms, but this isn't one of them.
But what if we're talking about a product that you're giving away to children? I agree that for adults, cigarettes are fine. But in this case, you're actively designing to maximize tween and teen engagement, and the end result is them saying that they want to stop but can't.
Though to be fair, I was mostly pointing out the fact that this was a pretty dumb thing to say for a case like this, especially in a jury trial.
Yes, I agree with you, I think that regulation is needed here and that this was a dumb thing to say. I'm just saying that my reaction to Zuckerberg saying that people must love his product if they use it a lot is exactly what I'd expect him to say. It's also exactly why other parties must step in.
From what I understand the argument, to misquote Marshall McLuhan, is that "the medium, and not the content, is the addiction".
In other words, it is not the posts by the influencers, but techniques such as infinite scrolling, and so on.
This is why Meta and Google could not rely on the User-Generated Content Safe Harbor (Section 230) part of the law.
Yeah, Zuck is really being a bit of a d** there. You can't spend decades hiring the best engineers in the world and give them millions of dollars worth of resources with the sole aim of creating products specifically designed to retain attention and then simply shrug and say "if you don't like it, leave it". That's just not a fair fight.
If people feel that smoking causes lung cancer why do they keep smoking?
> "If people feel like they're not having a good experience, why would they keep using the product?"
A statement that's been brought up even by HN commentators
Facebook is not a free market where you can choose. You're compelled to use it for several different reasons (and before some wiseass comments "you're not forced to. you can delete it" yes I know)
- They captured the early market. There was a small window of time in which to get users
- They ruthlessly bought up the competition
- They've deleted links to competitors
- They outright hijacked people's email addresses. It makes it hard to transfer users to another service or to email them outside the walled garden
- Even while they change privacy settings for users to make things more public, they wall off public pages. Your local neighborhood has a place where they post information? Even if everyone selects "Public" in the audience you can't see it without an account
Edit: Oh, and shadow profiles. And making it nigh-impossible to delete an account permanently
It's especially galling because he (or at least his wife) also funds neuroscience research at Stanford and elsewhere, and should have been well informed of the science behind addiction, dopamine, and the reward pathways in the brain.
"If people didn't like destroying the environment, why would they let lobbyists run their government"
-- Billionaires
Why not make personal responsibility illegal whilst we are at it. It is egregious that an individual can be held accountable for their own behaviours.
How much personal responsibility should we expect children to have? Genuine question. Because there was a time where some people believed that it was ok for kids to drink alcohol or smoke cigarettes.
> How much personal responsibility should we expect children to have?
This is what parents are for.
This is what the whole society is for, why singling out this aspect of behavior?
Education? Safety? Environment? Justice? Is this not also, then, what parents are for?
In that case, maybe we don’t need to card people anymore at the liquor store. If underage kids happen to buy, that’s the parents’ fault.
The fact that you're comparing nicotine to Facebook really throws into sharp relief just how far from reality this whole "social media made me depressed" stuff has strayed.
There's a large body of evidence on the damage that social media can do to people, and on the engineered compulsion to use it.
Yeah. The difference is that I have managed to quit nicotine.
Clearly those two things are not the same.
Do you use Facebook regularly?
I use Facebook marketplace and I'm part of a running group that's organized on Facebook.
The solution to this would be a law forcing these sites to allow third-party suggestion algorithms, so that you can choose who and how content is being suggested to you.
It could be perhaps as simple as allowing third-party websites and apps for watching Youtube on your phone. And it's okay if this would be a premium paid feature, so there's no counter argument that "it costs them money to host videos".
This is not an entirely new idea either. Before Spotify became popular, people would integrate Last.FM into their media players to get music recommendation based on their listening history, and you could listen to music via YouTube directly on the last.fm website.
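As a concrete sketch of what a pluggable third-party suggestion algorithm could look like (all names and data here are invented for illustration, not any real platform's API): the platform exposes the raw, unranked feed, and a user-chosen ranker reorders it locally.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    title: str
    channel: str

def chronological(feed: list[Video]) -> list[Video]:
    # Trivial third-party ranker: keep the platform's order untouched.
    return list(feed)

def subscriptions_first(feed: list[Video], subscribed: set[str]) -> list[Video]:
    # Another ranker: put videos from subscribed channels first,
    # keeping the original order otherwise (sorted() is stable).
    return sorted(feed, key=lambda v: v.channel not in subscribed)

feed = [
    Video("a1", "Clickbait compilation", "ViralStuff"),
    Video("b2", "Marathon training log", "RunClub"),
]
ranked = subscriptions_first(feed, subscribed={"RunClub"})
print([v.video_id for v in ranked])  # → ['b2', 'a1']
```

The point being that ranking is just a function over a list; nothing about it technically requires the platform to run it server-side, opaquely, optimized for engagement.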
The solution to all of Big Tech's monopolies is actually pretty simple: Interoperability must become a law - this includes using custom algorithms or allowing other platforms (like your own app) to access YOUR data on whatever platform 'hosts' it.
Cory Doctorow wrote a great article on it:
"Interoperability Can Save the Open Web" https://spectrum.ieee.org/doctorow-interoperability
> While the dominance of Internet platforms like Twitter, Facebook, Instagram, or Amazon is often taken for granted, Doctorow argues that these walled gardens are fenced in by legal structures, not feats of engineering. Doctorow proposes forcing interoperability—any given platform’s ability to interact with another—as a way to break down those walls and to make the Internet freer and more democratic.
Most notably, he retells how early Facebook used to siphon data from its competitor MySpace and act on users' behalf on it (e.g. reply to MySpace messages via Facebook), and then, when the Zuck(er) was top dog, moved to make these basic interoperability actions illegal to prevent anyone doing to him what he did to others.
We can’t depend on these platforms to offer interoperability or even laws to force them to do so. The DMA forced Apple to allow 3rd party app stores in Europe and they still hampered it so rarely anyone uses it.
We need platforms to offer that interoperability and simply connect to these “marketplaces.” Take Shopify for example, sellers use that platform to list on Amazon, Google Shopping, TikTok shop, etc. We need open source alternatives to those where the sellers own the platform and these marketplaces are forced to be interoperable or left behind by those that are.
For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
It’s a tall task, but achievable and it will happen given enough time.
> For Facebook, Instagram, Twitter, each person having their own website where they post and that post being pushed to these platforms is also another way to force interoperability on them or be left behind.
There's an acronym for this: POSSE (Publish [on your] Own Site, Syndicate Elsewhere). Part of the IndieWeb movement, for those who want to explore this worthwhile idea further.
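For those curious what POSSE looks like mechanically, here is a minimal sketch. The `posse` helper, the platform names, and the per-platform character limits are all illustrative assumptions, not any real syndication API:

```python
# Minimal POSSE sketch: publish on your own site first, then build
# syndicated copies (each linking back to the canonical URL) for
# other platforms. Platform names and limits here are placeholders.

def posse(post_text, canonical_url, syndication_targets):
    """Return the syndicated copies that would be pushed to each platform.

    Each copy is truncated to the platform's character limit and
    carries a backlink to the canonical post on your own site.
    """
    copies = {}
    for platform, char_limit in syndication_targets.items():
        suffix = f" (orig: {canonical_url})"
        room = char_limit - len(suffix)
        body = post_text if len(post_text) <= room else post_text[: room - 1] + "…"
        copies[platform] = body + suffix
    return copies

copies = posse(
    "Interoperability should be mandatory for large platforms.",
    "https://example.com/posts/42",
    {"twitter": 280, "mastodon": 500},
)
for platform, text in copies.items():
    print(platform, "->", text)
```

A real setup would also store each platform's copy URL back on the original post (the IndieWeb convention is a syndication link), so replies can be tracked back to the canonical page.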
Sure, you can do that. But then the syndicated content usually ends up looking like low-effort slop and doesn't get much traction. Each publishing platform has its own features, limitations, and cultural norms. If you want to have any impact, you can't just copy content around: you have to tailor the message to the medium.
How will it happen? Writing open source code is one thing, maybe enough people will volunteer their work. But running an operational marketplace or social media platform is something else entirely. You need a real revenue stream to pay for hardware, connectivity, operations staff, regulatory compliance, etc. That stuff isn't cheap.
I've been building toward an interoperable marketplace[0] and realized I needed to launch open-source alternatives to Shopify[1], Toast[2], Instacart, etc. to take on the proprietary marketplaces.
It really comes down to merit and how much value you can bring to the actual sellers in these marketplaces with the software. If enough sellers switch, marketplaces will follow.
0. https://github.com/openshiporg/marketplace
Breaking up these monopolies would be a good start. We aren't supposed to have those. There used to be something we called "regulations" but they got rid of that part I think. Elections have consequences.
Regardless of current administration policies, which of those companies actually meets the legal definition of a monopoly?
Just because you don't understand why they are does not mean they are not. It just means you haven't done your research.
Do you have a substantial comment to offer or are you going to stick with lazy, low-effort snark?
These platforms aren't monopolies. Being popular isn't being a monopoly.
Yes, yes they are. Do your research.
Be careful what you wish for. Making it easier to access your data in a standard way just means more companies and governments will ask for it.
That just leads to embrace/extend/extinguish
Exactly. The deal with all these platforms is that there is a fuckton of up-front costs. Hard drives. Networks. Peering. Transit. Operators. Payment. Lawyers. SREs. And so on and so forth.
The solution to this used to be that governments provide the platform. You would think this wouldn't be hard to do, since people have now shown that this can work and so it's a guaranteed money maker, or as close as you're going to get.
Yet I can't find a single initiative.
So any such rules will just make all internet platforms disappear ... and nothing will replace them.
Govt already backs DNS, registration, and regulation of backbone transit along with a bunch of other services.
Beyond that, there really isn't much that a small shop couldn't manage unless you are trying to be the next FAANG (and lotsa luck with that).
The foundational problem with interoperability is that it can and will immediately be abused by bad actors as long as there is no price tag attached to every piece of communication.
Among social media, Mastodon (and anything Fediverse) has it the worst, obviously, but Telegram and WhatsApp are rife with spam and scams, and Twitter, back when it still had third-party apps, was rife with credential and token compromises (mostly used to shill cryptocurrencies).
As for the price tag reference - we've seen that with SMS. It used to be the case that sending SMS cost real money, something like 20 cents per message. It was prohibitively expensive to run SMS campaigns. But nowadays? It's effectively free at scale if you go the legit route, and practically free if you manage to compromise someone's account at one of the countless bulk SMS providers. Apple's iMessage similarly makes bad actors pay a lot, because access to it is tied to a legitimate or stolen Apple product serial.
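The economics described above are easy to put numbers on. A back-of-envelope sketch - the modern bulk rate is an assumed figure for illustration, not a quoted price:

```python
# Cost of a 1,000,000-message spam campaign under two pricing regimes.
messages = 1_000_000
legacy_rate = 0.20    # ~20 cents per SMS, as in old consumer pricing
bulk_rate = 0.0075    # assumed modern bulk-SMS rate (illustrative)

legacy_cost = messages * legacy_rate
bulk_cost = messages * bulk_rate

print(f"legacy pricing: ${legacy_cost:,.0f}")   # $200,000
print(f"bulk pricing:   ${bulk_cost:,.0f}")     # $7,500
```

Even under these rough assumptions the campaign cost drops by more than an order of magnitude, which is the whole point about price tags deterring abuse.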
Paywalls can have the opposite of the effect you want. Implemented incautiously, they can fail to disincentivize parties who make profit in excess of the cost, while succeeding at disincentivizing genuine, non-profit-motivated interaction.
Imagine how much less you would use text messages if they still had a per-message cost.
I would reply to your comment, but my 2GB data allocation for my cell phone is already spent this month.
Because some hostile entity might rat fuck a slightly better system, we're destined to use the same current shitty system, because something better might have a downside?
Do you understand that this is all literally made up? The rules can change anytime and society can exert its will to make better world rather than letting a dozen people decide how technology will shape humanity (mostly in a negative capacity if you look at the current state of things).
>Because some hostile entity might rat fuck a slightly better system,
And make it a worse system, is what you happened to leave off.
>Do you understand that this is all literally made up
You mean the existing system that evolved from billions and billions of interactions? Explain what is 'made up' about it.
The thing is if you start 'making up' random ass laws that piss people off, they will run screaming back to the billionaires to pwn them with locked down systems. Apple is a great example here. Shit is locked down and people love it.
Being afraid to do things because they might possibly be worse, without that ever being proven, is just the political machination of enforcing the status quo, where our corporate overlords get to dictate how technology shapes our lives.
I'm sorry, but that's deeply undemocratic; today's generation should have a direct say in how new things affect their lives.
Failure to do this might literally condemn our species to extinction, and this only took less than 200 years to achieve. I'm sorry but they've proven their failure and it's time to make drastic changes.
Good news is many people agree with this across the electorate, so now you get to decide which people you want shaping society. The previous world order of US imperialism is going to end and I rather have the people decide what to do than those that want to continue running head first into extinction.
>The previous world order of US imperialism is going to end
I don't disagree.
Of course Chinese imperialism probably won't be much better.
This is a confusing comment. Interoperability and bad actors are separate concerns, because you get bad actors in systems of all kinds, not just in interoperable systems. Paywalling a system does not necessarily mitigate bad actors, either.
But bad actors already do this, as there is a monetary incentive to implement adversarial interoperability. There is then an incentive not to scale it up too much, lest that implementation get cut off sooner. For example, I certainly don't think all of the spam ads I see on Facebook Marketplace are from individual people manually creating accounts and typing them out.
It seems likely that'd result in even worse suggestions becoming the norm as people adopt the third-party that gives the quick dopamine rush. It's like suggesting tastier heroin to fix drug addiction.
There's a difference between addictiveness and enjoyment, and definitely between addictiveness and satisfaction.
While the thing that gives you quick dopamine might win in the very short term, you can still step back and recognize when it's not satisfying in the long term and you're not even enjoying it that much.
And people aren't stupid. Junk food exists, yet lots of people choose to eat more wholesome food as the majority of their diet.
The problem with instagram or youtube is that you can't separate the good from the bad.
It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food. You would probably want to go to a different store instead.
What I'm suggesting is the possibilities of different stores, with different philosophies and standards, so that people can choose where they go. Corner stores (where almost everything is junk food) exist, yet people still choose to go to real supermarkets.
> It's like if every time you went to store Y to buy milk, you would be exposed to highly manipulative marketing trying to get you to buy junk food.
But that's very much the norm at supermarkets?
Parent poster has some… interesting and popular but entirely false views on neuroscience. Specifically, an extremely outdated view of concepts like the role of dopamine and dopaminergic neuronal populations in human cognition. Rather than the science-based understanding that incentive salience and valence are modulated by such populations, he is attributing pleasure and enjoyment to them because of a meme.
Certainly not. People don’t want the slop they push, the anxiety provoking, salacious, clickbaity spam that it has devolved into. Anybody that used YouTube before the last few years can tell you the difference is pretty major. This is not content people want, it’s content that maximizes clicks and ad sales.
> People don’t want the slop they push…
That's also true for heroin. Plenty of people really want to break the addiction.
The slop exists because people are attracted to it.
Heroin is a different business model than advertising. Respectfully, you are wrong.
Gosh, if you say so...
Heh, it's funny watching people, like the one above you, say "This thing is addictive because it is a real object, but this digital object cannot be addictive at all". The argument is so illogical you begin to doubt you're talking to a real person.
I never made that claim, and in fact believe the opposite. I simply disagree that heroin is a drop in replacement as a mental model. The differences between the heroin trade and YouTube are meaningful. For example, one is a physically addictive illegal drug that is a commodity exported by certain foreign nations while the other is a digital platform that makes money by ad sales and is a monopoly. Both can be addictive, but they are not the same thing.
People don't want to want it. But it's not obvious that merely allowing a choice of recommendation algorithms would allow people to escape the slop. Isn't anyone strong enough to choose a less addictive algorithm necessarily strong enough to not scroll Instagram for hours in the first place?
>Isn't anyone strong enough to choose a less addictive algorithm necessarily strong enough to not scroll Instagram for hours in the first place?
Absolutely not. It's much easier to make a one-time switch than to be continuously resisting temptation. Changing the things in your environment is an important tool to break bad habits. The book "Atomic Habits" talks about this at length.
I mean, the court case is about these platforms being addictive to kids, so if they said "accounts for users under X years have the algo and time caps delegated to their parents' account by default" it'd go a long way toward negating what they're being accused of.
They've already built all the tools they need around this at the moment, it's just they give them to advertisers rather than end-users.
"Let the parents manage it" is, unfortunately, part of the reason we're in this situation in the first place.
Anything that’s a premium paid feature will be irrelevant. Most people don’t subscribe to YouTube premium, even though they know their kids are watching a ton of ads. Adoption has also been incredibly brisk on the ad tiers of the formerly ad-free TV services like Netflix and Hulu.
I realize “less addictive algo” is a different thing to pay for than removing ads - but it’s, if anything, an even harder sell - I think the layperson wouldn’t even acknowledge that they are vulnerable to being psychologically manipulated. They think they spend so much time on these apps because it’s so enjoyable.
From most parents’ point of view, paying a monthly bill for their children to have a less toxic experience on TikTok, or YouTube will be considered an extravagance instead of a responsible safety expense.
Third-party recommendation algorithms would be interesting, but I think they'd only address one layer of the addictive design the verdict is actually about. Autoplay, infinite scroll, notification timing, the variable reward patterns from likes and comments -- those are all independent of which algorithm picks the next video. You could swap in the most wholesome recommendation engine imaginable and a kid is still gonna sit there for hours if the UI is designed around endless content with no natural stopping points.
I dunno, careful what you ban; TV has “infinite scroll” too.
Bluesky does this. In fact, the For You algorithm is a community built algorithm and way more popular than the native Discover algo.
> Before Spotify became popular, people would integrate Last.FM into their media players
I still scrobble to Last.fm from Spotify (and other media players). I rarely use it for discovery anymore, but it's occasionally interesting to look at my historical listening trends.
This seems like a clever (but perhaps overly clever) amendment to Section 230 protections for social media.
However, I've always thought that it's pretty bizarre for Section 230 protections to apply when the social media company has extremely sophisticated algorithms that determine how much reach every user-generated piece of content gets. To me there's really no distinction between the "opinion" or "editorial" section of a traditional media publication and the algorithms which determine the reach of a piece of user-generated content on Twitter, YouTube, etc.
Or just stop suggesting content. The landing page is just a matrix of already followed accounts with the text "Start by following some accounts you like..." as a placeholder if it's a new account.
I’m quite bullish on disintermediating the algorithms. AI makes it very easy to plug in your own. We just haven’t figured out the plumbing yet.
I’d be strongly in favor of interoperability laws to pry open the monopolies.
(One dynamic you do need to be careful about especially at first - interoperability also means IG can pull your friend graph from Snapchat, so it can also make it easier for big companies to smother smaller ones that are getting momentum based on their own social graph growth due to their USP. I don’t think this is insurmountable, just something to be careful of when implementing.)
That’s like saying the solution to cigarettes is that tobacco shops must be forced to sell clove cigarettes as a not-addictive alternative.
If the default algo/behavior is allowed to persist, it's going to be effectively no real change.
Drop the algorithm altogether? I subscribe to channels for a reason.
How do you prevent a Cambridge Analytica exfiltration situation with third party algorithms?
And how does this prevent addictive algorithms which will win through social selection?
The Cambridge Analytica stuff never got fixed, it just got hidden out of sight. The situation is worse than ever now.
The real solution is going back to a chronological feed of people you actively choose to follow.
At the very least, that should certainly be an option that users can select. And when the user selects a feed algo, it should stay fucking set until that same user actively chooses to change it.
Yes please. Algorithms should be plug-in-and-play and not endemic to the app. You should be able to take popular algorithms and plug them into any app
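One way to picture "plug-in" algorithms: the platform exposes the raw candidate posts, and a user-chosen ranking function orders them. The `Post` type and both rankers below are hypothetical sketches, not any platform's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int

# Two interchangeable ranking plug-ins; the platform would run
# whichever one the user selected. Both are illustrative only.

def chronological(posts):
    # Newest first, nothing else.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement(posts):
    # Most-liked first (a crude stand-in for engagement ranking).
    return sorted(posts, key=lambda p: p.likes, reverse=True)

posts = [
    Post("alice", "older but popular", datetime(2026, 3, 1, tzinfo=timezone.utc), 900),
    Post("bob", "newest", datetime(2026, 3, 25, tzinfo=timezone.utc), 3),
]

feed = chronological(posts)   # the user-selected plug-in
print([p.author for p in feed])   # prints ['bob', 'alice']
```

The hard part isn't the ranking interface - it's getting platforms to expose the candidate set at all, which is where the interoperability mandate comes in.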
That's just laundering the bad actions though a third-party.
The winning third party algorithm will be the one that gives people the same rush the first party algorithms currently do, because people will use it for the same reasons; they get to see cute AI animals do crazy things forever.
99.9% of these would just be malicious spyware that people are tricked into agreeing to.
So better than the 99.99% status quo?
That's called a "feed generator" on Bluesky.
Virtually nobody would choose to pay a subscription for the non-addictive app version, and I'd even say this suggestion is a bit insulting to anyone who isn't high-income.
I will never pay a subscription for the current clickbaity slop. I might if the algorithm were better, closer to YouTube of 10 years ago, when it would suggest lectures, artfully done film shorts, and overall more interesting, high quality content.
10 years ago the most popular 100 videos on YouTube were all pop music videos. Justin Bieber had 3 of the top 10.
The youtube algorithm has been personalized for much more than 10 years and has never prioritized any kind of lectures or artful films over anything else it thinks a viewer will watch. You're asking for them to bring back an era that never existed.
If you're not getting those sorts of recommendations, it's because you don't actually watch that kind of content, or you're removing your history.
I’ve watched YouTube daily for nearly 20 years. The majority of the content, as well as the algorithm, has changed substantially over that period. I’m not the only one to notice this, btw; there is even a word for the phenomenon: “enshittification”. I do clear my watch history, have never signed into YouTube, and frequently reset the app, watching online in private sessions with adblock. The frequency with which I have to reset the app to prevent the algorithm feeding me terrible undesired content has gone up over time; I now do it once every few weeks. That’s how much I dislike what it pushes on me. I used to get stuff like “Philosophy Overdose” and Sapolsky’s Stanford lecture series, good operas. I now get stuff like “these 5 things are killing you while you sleep!!!” and “mom is shocked to find out her teenage son is raping and eating babies’ severed limbs.” I’m not being hyperbolic; that’s actually what YouTube recommends.
Seriously? You think they should allow random third parties to inject code into their platforms with all the possible security risks? Regardless the intent that is a terrible idea.
Or algorithms have to be submitted and approved by a government body before being allowed to be implemented and are frequently audited
I guess this is the only way. I don't think we need a novel approach, and I don't consider this one novel, since we already have government agencies verifying approved processes in other areas - so why not content distribution?
The only solution is to outlaw all recommendation algorithms. Accounts should only have access to a chronological feed they choose to follow. The host can promote whatever they want, but it has to be the same promotions for everybody.
I like recommendation algorithms. If someone on my friends list posted about a major life event a few days ago and I haven't seen it yet then I want that prioritized first, before more recent posts. Chronological feeds should be an option for those who want them but they shouldn't be forced on anyone.
I think a better solution would be to repeal section 230 protection for any kind of personalized or algorithmic feed. The algorithm makes you a publisher, and you should be liable for what you publish.
That would make it very hard, nigh impossible, for a platform like YouTube or TikTok to exist as it does today, and would instead favor people self-curating mechanisms like RSS readers etc.
How is RSS self curating? It's just a way to get a feed from somewhere. And under the maximally external-locus-of-control culture this jury is using, those feeds would themselves be deemed evilly addictive.
There is no solution for this kind of verdict beyond appeal, or changes to the law to rule such suits out, because it's not rooted in any logical or legal principle beyond the idea that people should not be responsible for their own actions (or their children's actions). But there's no limiting factor to that belief. You can't fix it with RSS or federation or making people select who they follow or chronological feeds. Those would just get blamed for "addiction" instead.
Each blog you follow in the RSS model you opted in to. And each post comes from a person, or a publication, who can be held accountable for what they publish.
Ordinary media, like newspapers, books, radio, and TV, have worked this way forever — people publish “channels” and you decide what channels to follow. A channel can be held accountable.
The algorithm model is different. People just publish “content” into the platform, and the platform makes a custom channel for each viewer, inserting content from people you’ve never heard of and didn’t ask to follow. And it optimizes that custom channel for whatever addicts you the most. That’s fundamentally a different beast than opt-in media consumption.
And if that blog is a newspaper or other aggregator? What makes the RSS feed of the CNN front page fine, but not the RSS feed of the YouTube front page?
There's really no difference. Media companies all aggressively optimize for engagement, often to the point of A/B testing headlines.
There’s a huge difference. Everyone sees the same front page on CNN, or HN for that matter. Nobody sees the same page twice on YouTube or TikTok. That’s a fundamental distinction between human curated media (even with A/B testing), versus machine curated media.
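The distinction being drawn here is concrete enough to sketch: a shared front page is a function of the content alone, while a personalized feed is a function of both viewer and content. The scoring below is a toy assumption, not any real ranking model:

```python
articles = ["story-a", "story-b", "story-c"]

def shared_front_page(items):
    # Everyone sees the same ordering (the CNN/HN model).
    return sorted(items)

def personalized_feed(viewer_history, items):
    # Each viewer gets their own ordering (the YouTube/TikTok model),
    # ranked here by a toy per-viewer affinity score.
    return sorted(items, key=lambda a: -viewer_history.get(a, 0))

print(shared_front_page(articles))                  # same for every viewer
print(personalized_feed({"story-c": 5}, articles))  # differs per viewer
```

The legal argument above turns on exactly this signature difference: a curator of one shared page can be held accountable for it, while a per-viewer feed produces as many "pages" as there are viewers.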
>and would instead favor people self-curating mechanisms like RSS readers etc.
That isn't what would happen.
What would happen is that only the platforms which can afford legal teams - in other words, the big platforms - would host user posted content under strict arbitration only terms, and every other platform (including Hacker News, which uses an algorithmic feed) would simply not. Removing one of the cornerstones of free speech on the web in favor of regulation will only centralize the web more.
And you wouldn't see mass adoption of "self-curating mechanisms" because most people aren't like Hacker News people and would find the premise of having to manually curate data feeds from every site they visit to be a tedious waste of their time.
I also think that platforms like Youtube and Tiktok shouldn't be illegal. I don't even think that personalized algorithms should be illegal - it's surprising that one has to point this out on a forum of programmers - but algorithms have no inherent moral dimension and the ability to use an algorithm to find and classify relevant content can be useful. The same algorithm that surfaces extremist content surfaces non-extremist content. The algorithm isn't the problem, rather the content and the policies of these platforms are the problem. And I don't think the solution to either is de facto making math illegal and free speech more difficult.
At least even money that an appellate court throws this verdict out entirely. Reminder that the US is the only developed country that uses juries for civil trials - everywhere else, complex issues of business litigation are generally left to a panel of judges. It's not that hard to rile up a bunch of randomly impaneled jurors against Big Bad Corporation. The US is kind of infamous for its very large, very unpredictable civil verdicts. There's an incredibly long history of juries racking up shockingly large verdicts against companies, only for an appellate court to throw the whole case out as unreasonable. Not even close to the final word in the American judicial system.
Edit to include: I mean, this is coming the same day as the Supreme Court throwing out the piracy case against Cox Communications 9-0. Remember that this case originated with a $1 billion jury verdict against them! It was reversed by an appeals court 5 years later and completely invalidated today. Juries should not handle complex civil litigation, I'm sorry.
Thanks for this take. Also explains why this did not result in much stock price movement today
Also at least partially explained by being priced in. The trial was known about and given the conditions described in GP it's not surprising that the verdict went this way.
The shotgun approach (suing FB, TikTok, Snapchat, and Google simultaneously) makes this sound as ridiculous as the punchline "woman sues McDonalds for coffee being too hot" (distinct from that actual case, which was less ridiculous than the headline).
Suing Facebook for systematically behaving badly is one thing, if you can prove it and prove it harmed you.
Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?
> the punchline "woman sues McDonalds for coffee being too hot" (distinct from that actual case, which was less ridiculous than the headline).
Whenever the McDonald's coffee case comes up, I always see caveats about how the actual case was a lot less sensational than the "woman sues McDonald's for coffee being too hot" headline implies.
I strongly disagree. I'm very familiar with the details of the actual case, and the Wikipedia article gives a good overview: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau... . Yes, the plaintiff received horrific third degree burns when she spilled the coffee on herself, but lots of products can cause horrible harm if used incorrectly - people cut fingers off all the time with kitchen knives, for example.
I find the headline "Woman sues McDonald's for their coffee being too hot" a completely accurate description of what happened, with no hyperbole and no "ridiculousness" at all.
You neglected to mention:
- It was company policy to keep coffee excessively hot (180-190 degrees Fahrenheit, vs 140 or so for coffee brewed at home). This was to make customers drink it more slowly and request fewer refills
- Other customers had suffered similar burns, and McDonald's knew about it and did not change the policy
McDonald's, then, was willfully and inevitably causing injury to random customers in order to save themselves a few cents in coffee.
In light of those facts, I think a $2M verdict was too low, and the executives who decided to continue keeping the coffee that hot should have been criminally charged with reckless endangerment.
> It was company policy to keep coffee excessively hot (180-190 degrees Fahrenheit, vs 140 or so for coffee brewed at home).
The "official" recommendation for keeping coffee that I've seen, eg here[1], has always been around 80-85C, which translates to 176-185F.
Good home brewers, like this[2], will hold the coffee at that temperature.
In terms of expectation, how many people think coffee is normally capable of causing 3rd degree burns?
> Suing _everybody_ is one random person getting rich for… being mad at the world she was born into?
Nothing wrong with getting mad at the world when the world is complete and utter garbage to you.
Yeah there are so many reasons this could be reversed on appeal. Whether the judge correctly held questions of section 230, and the First Amendment, is not obvious.
Maybe if most people would agree that the corporation is big and bad and should face penalties, it's more democratic to go with that decision than with whatever decision nine unelected philosopher kings come up with.
Democracy is flawed, which is why our system has checks and balances, both democratic and non-democratic ones. Mob rule is not preferred, thanks.
>There's an incredibly long history of juries racking up shockingly large verdicts against companies, only for an appellate court to throw the whole case out as unreasonable.
You might be blaming the wrong people. Looking at a lot of those "shockingly large verdicts", in that they would have bankrupted the company and forced it to be dissolved and reformed as perhaps a less objectionable version of itself: cool, shoulda done that. Sad we didn't.
Are we conflating matters of merit with matters of judgment, here?
How is any app/website that 1) appeals to kids, 2) sells attention, 3) does A/B testing and/or has a self-learning distribution algorithm NOT guilty of this?
It probably helps when you suppress research that shows you’re harming children and allow human traffickers to fester on your platform with 17 warnings or whatever.
The argument that research was suppressed and this is somehow damning is absurd on its face. The most obvious reason is that they obviously didn't do a very good job of suppressing it, given that we hear this claim every day. The second is that they could have just not done this research at all, and then there would have been nothing to "suppress" (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that their competitors' sticky notes are better than theirs but does not release the results, is that suppression?). The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press. Lastly, it ignores that many platforms have actually responded to research about potential harms of social media by implementing safeguards on teen accounts.
Look at the plaintiff in this case: it's a mentally unstable person who blames her life problems on social media. Never mind the fact that she had been diagnosed with mental illnesses as an early teen, or that an overwhelming majority of people who use social media don't develop eating disorders or other mental illnesses as a result of it (and in fact the incidence of say bulimia peaked 30 years ago in spite of almost universal social media adoption among young people). This is not at all like smoking where 15% of smokers will get lung cancer.
And due to some absurd legal reasoning the plaintiff was allowed to pseudonymously extort $3 million out of tech companies. Worst of all I see people on a technology forum applauding this out of some sort of resentment towards large companies!
Nobody ever accused these companies of being competent at suppressing the research (which includes third parties btw, not just internal).
Companies do this research for all sorts of reasons (including legal compliance, demonstrating due diligence to regulators, to understand users and improve products, etc etc etc). For example, it's not like Zuck commissioned an internal study to show how they're harming children, more like some internal team was seeking to understand why kids love a certain feature which led them to conclusions that make the company look bad.
To your third point, that research is usually leaked by whistleblowers or conducted by third parties, not because of the altruism of these companies.
Finally, the platforms aren't doing enough and with this court case, it seems like they've persisted in finding ways to hook children because of financial incentives.
The sources cited in this article are a good primer for understanding what these companies are doing: https://www.transparencycoalition.ai/news/meta-suppressed-re...
> The argument that research was suppressed and this is somehow damning is absurd on its face.
The argument is not that it is vaguely "somehow damning".
The argument is that the existence of the research and its findings, that it was in the hands of the firms, and that they actively chose to suppress it, is evidence of one specific fact relevant to liability: that, at the time they made relevant business decisions around or after the review and decision to suppress the reports, they had knowledge of the facts contained in the report.
> The most obvious reason being that they obviously didn't do a very good job of suppressing it given that we hear this claim every day.
The success of suppression is not relevant to what the decision to suppress is used to prove.
> The second being that they could have just not done this research at all and then there would have been nothing to "suppress"
The fact that, had they made different decisions previously, they would not have had knowledge of the facts that they actually had when they made later business decisions is also not relevant to what the existence and suppression of the research is used to prove.
> (this terminology is also very odd... if 3M analyzes different sticky notes and concludes that their competitors sticky notes are better than theirs but does not release the results, is that suppression?).
It would obviously be suppression of the report (which isn't a legal term of art but a plain-language descriptive term), but unless they later made fact claims about their product that were contrary to what was in the suppressed report and were being sued for fraud or false advertising, that suppression probably wouldn't be useful as evidence of anything that would produce legal liability.
> The third is that studies with the same results have come out probably every year since 2010 and have been routinely cited in the mainstream press.
Which is additional, though weaker, evidence of the firms' knowledge of the same conclusions (weaker, because it's pretty hard to prove that a firm had particular knowledge of any of those outside studies, but pretty easy to prove knowledge of the studies for which there is documentation of the commissioning, the internal review and discussion, and the decision to suppress.)
But it doesn't in any way counter the weight of the evidence of the suppressed reports, it weighs in the same direction, just in much smaller measure.
The "overwhelming majority" standard for harm seems odd when you use 15% of smokers getting harmed as an example. 15% is not an overwhelming majority.
Is there a widely used phrase which represents the 95-97% range? One did not come to my head.
The jury disagrees with you.
>This is not at all like smoking where 15% of smokers will get lung cancer.
Unfortunately for you and social media sites, the legal standard for defective products has no "percentage" of people harmed required to incur liability. Product liability is showing the product was defectively designed and caused foreseeable harm to a specific plaintiff.
> absurd legal reasoning
It's certainly not surprising you think protecting minors in legal cases (she was a minor when the case was filed) is "absurd legal reasoning".
Addressing the actual legal questions in the case might be more fruitful than hurling shit against a wall.
Yes, I think you can make an argument that the jury verdict was in line with the law... if that's the case then I think the law here is ridiculous. I can read what the law is, we're having a discussion. If the story of the woman who was burned by McDonald's coffee was posted here you would have people arguing for and against whether people should be able to seek recourse in courts for harms of that nature.
I think there is a fourth portion that is probably more important:
Actively ignoring harm caused by your product. TV/radio has sold attention, but there were pretty strict rules on what you can and can't broadcast, and to whom (ignoring cable for the moment). It's the same for services: things that knowingly encourage damaging behaviours are liable for prosecution.
Except cable is the more apt comparison here - broadcast rules exist because airwaves are an extremely finite resource and so we can argue that the government has a vested interest in what kind of speech can happen on them. No such scarcity exists with web services.
I think there's a little more nuance than that, but it seems roughly correct.
Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?
I think addiction is a red herring.
Pokemon is addictive, computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.
Addictive patterns in games and other online activity are a bit less innocent than you are portraying: knowingly causing harm is too low a standard. A lot of the profitability of online games, prediction markets, etc. comes from the whales. The whales are probably addicted. If your business is a whale hunt, you are possibly causing harm, at least to the extent that addiction is dangerous.
They'd find another method. Why are we allowing this in the first place?
I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad), it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with anything else. This is a tale as old as time...
"Free market" and "entrepreneur spirit" fetishism and fear of collective social action against individual drives.
Relative to the span of time it takes for law to catch up with what's going on, YouTube and Facebook have been around for a tiny amount of time.
They have been around long enough to have done unknowable damage to entire generations of humans
As usual unfortunately laws are reactive.
> Why are we allowing this in the first place?
Exactly what I keep coming back to.
For me, it feels like you could cut this problem down substantially by eliminating section 230 protection on any algorithmically elevated content. Everywhere. Full stop.
If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement. You want that content to be seen, for whatever odd reason, and if it's harmful to your users, you should be held responsible for it. It's one thing if some random asshole messages me on Telegram trying to scam me; there's little Telegram can do (though a fucking "do not permit messages from people not in my contacts" setting would be nice) but there is nothing at all that "makes" Facebook shovel AI bullshit at people, apart from it juices engagement, either by genuine engagement or ironic/ragebaiting.
And AI bullshit is just annoying, I've seen "Facebook help" groups that are clearly just trawling to get people's account info, I've seen scam pages and products, all kinds of shit, and either it pisses people off so Facebook passes it around, or they give Facebook money and Facebook shoves it into the feeds of everyone they can.
It's fucking disgusting and there's no reason to permit it.
> algorithmically elevated
I don't see a good way to make a definite legal distinction between the icky stuff and normal, unobjectionable things which are, technically, also forms of elevation-by-algorithm:
  rank_by_age(items)                                // Good
  rank_by_age_and_poster_reputation(items)          // Probably
  rank_by_on_topic_ness(items, forum_subject)
  rank_by_likes(items)
  rank_by_engagement_likelihood(items)              // Bad?
  rank_by_positive_sentiment_toward_clients(items)  // Bad

Really, I see one right here:

  rank_by_age(items)                                // Good
  rank_by_age_and_poster_reputation(items)          // Probably
  rank_by_on_topic_ness(items, forum_subject)
  rank_by_likes(items)
  <-- here -->
  rank_by_engagement_likelihood(items)              // Bad?
  rank_by_positive_sentiment_toward_clients(items)  // Bad

Age is deterministic. When was the thing posted?

Poster reputation is deterministic. How many times has this poster received positive feedback based on their content?
On-topic-ness is deterministic, if a bit fuzzy. That said, I think the likes will reflect this: if you post a thread about cooking potatoes in the gopro subreddit, your post will be downvoted and probably removed via other means, in which case its presence in the feed is already null.
Likes are again, deterministic. How many people upvoted it?
In contrast:
Engagement likelihood is clearly a subjective, theoretical measure. An algorithm is going to parse a database for other posts like this, see how much attention it got, and say "is this likely to drive engagement." That's what I'm talking about.
And positive sentiment towards clients I can't quite read? I'm guessing you're referring to like, community sponsors but I'm not 100% certain. But that almost certainly is a subjective one too, and even if not, it's giving people with money the ability to put their thumb on the scale.
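To make that contrast concrete, here's a minimal Python sketch of the two kinds of ranking. Every field, score, and value here is an illustrative assumption; in particular, `predicted_engagement` stands in for whatever opaque score a real platform's model would produce:

```python
from datetime import datetime, timezone

# Hypothetical posts; all fields and values are made up for illustration.
posts = [
    {"id": 1, "likes": 10, "posted": datetime(2026, 3, 1, tzinfo=timezone.utc),
     "predicted_engagement": 0.9},
    {"id": 2, "likes": 40, "posted": datetime(2026, 3, 2, tzinfo=timezone.utc),
     "predicted_engagement": 0.2},
    {"id": 3, "likes": 25, "posted": datetime(2026, 3, 3, tzinfo=timezone.utc),
     "predicted_engagement": 0.6},
]

def rank_by_age(items):
    # Deterministic: depends only on when the thing was posted.
    return sorted(items, key=lambda p: p["posted"], reverse=True)

def rank_by_likes(items):
    # Deterministic: depends only on explicit user feedback.
    return sorted(items, key=lambda p: p["likes"], reverse=True)

def rank_by_engagement_likelihood(items):
    # Model-driven: depends on a prediction of what will hold attention,
    # which is exactly the part users can't audit or reproduce.
    return sorted(items, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in rank_by_age(posts)])                    # [3, 2, 1]
print([p["id"] for p in rank_by_likes(posts)])                  # [2, 3, 1]
print([p["id"] for p in rank_by_engagement_likelihood(posts)])  # [1, 3, 2]
```

The first two orderings can be recomputed by anyone from public data; the third depends entirely on a score the platform controls, which is where the thumb-on-the-scale worry comes in.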
I don't think "deterministic" is the right term to capture this concept. An if-statement which bans posts containing a political phrase would be 100% deterministic, or one which prioritizes anything from a username on a list.
> On-topic-ness is deterministic, if a bit fuzzy.
If you permit that exception (even for good reasons) then it reveals how the original "algorithmic elevation" is too vague and unenforceable.
All someone needs is a ToS footnote like "this forum is provided for truthful international news and engaging with $COMPANY in a positive way." Poof, loophole. Anything the moderator (or moderator-algorithm) decides is "untrue" or "negative" becomes off-topic and can be pushed down.
> If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement
Yes. People make free speech arguments about this, but the list and order of stuff returned by algorithmic non-directed (+) lists is clearly a form of endorsement. Even more so is advertising, which undergoes a bidding process. Pages which show ads should be liable if those ads are fraudulent, especially if they're so obviously fraudulent that casual readers suspect them immediately.
(+) Returning a list of stuff in a user-specified query, on the other hand, is not endorsement. Chronological or alphabetical order or distance-based or even random is fine.
Note that section 230 is, of course, US specific and other countries manage without it.
Eliminating section 230 protections would heavily disfavor any kind of intellectually stimulating content, because it's hard for a platform to scalably verify that nobody's making defamatory claims. But pointless clickbait, heavily filtered Instagram models, etc. don't really have liability concerns on a video-by-video level. To me it seems like this makes the problem worse.
It’s not eliminating section 230 entirely, it’s eliminating it for algorithmically promoted content. If you have a site that has user content and you present that content in a neutral fashion, section 230 applies. If you pick and choose what content to present to users (manually or by algorithm), you’re no longer a neutral platform, and shouldn’t be getting the benefit of 230.
I understand that. My point is that this would mean algorithmic feeds can only contain vapid, pointless content with no liability concerns. To me, it doesn't improve the world to require that Instagram and Youtube exclusively serve slop, even if that might cause some number of people to abandon them for non-algorithmic platforms with better content.
The current state of affairs is that Youtube and Instagram have brought back fascism and the measles, so if the complaint here is "it's impossible to moderate algorithmic content at scale and so the platforms would become incredibly risk averse," I think I'd take that alternative. I also don't think effectively forcing a breakup of the current online media monopolies is a bad thing either - if you can't actually mitigate the damage of your platform because you're too big, then maybe you shouldn't be that big.
Literally every social media site I'm aware of has had, in varying strengths and at varying times, many still currently, a movement among users asking for a fucking chronological ordered feed. Just, what the fuck my friends are saying, in the reverse order that they said it, displayed in a list.
Not only is this seemingly the most desired feed among end users, it was also the default one. MySpace didn't have a choice in the matter, they had to show a chronological timeline, because they didn't have a machine-learning algorithm nor a way to make one. They could tweak it based on engagement metrics but on the whole, it was just here's what all your friends have posted, in reverse order, scroll away. And then eventually you'd hit the end where it's like "you're up to date" and then you go on with your fucking day.
But of course platforms hate that. They want you there, all day, scrolling through an infinite deluge of bullshit, amongst which they can park ads. And we know they hate this, because not only have platforms refused to bring back chronological feeds, they actively removed them if they existed at one time. Not only is this doable, it's the most efficient way that requires the least compute from their servers, but platforms reliably chose the inverse... because it makes them more money.
Also specifically on this:
> My point is that this would mean algorithmic feeds can only contain vapid, pointless content
The vast majority of these sites is vapid, pointless content RIGHT NOW, even if it attempts to convince you it isn't.
Literally every social media site I'm aware of has a chronological ordered feed of people you've chosen to follow. Facebook does, Instagram does, Youtube does. It's just not the homepage, and most people don't care enough about what feed they get to go navigate to it every time they open the app. Would it be nice to make them let you put it on the homepage? Sure, I'd support that.
For context, facebook is so dystopian when I login once every few years that I’m not sure I’ll ever use it again. And, I hate wading through the YouTube cesspool to find some educational content I like. But, I don’t think it makes sense to ban a/b testing or optimization in general. Some company could use it, for example, to figure out how to teach math to kids in a way that’s as engaging as possible. This would be “more addictive” technically.
That's a good point, I'm not 100% sure it's worth throwing away the potentially beneficial uses. There might not be a solution that's both feasible to implement and avoids banning useful things. In the end I usually come back to it being the parent's responsibility to monitor usage, limit screen time, etc., but it hasn't been working so well in practice.
> more nuance
Not enough to diffuse liability. 15 years ago when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because
> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair
How’s this different than tv that a kid might see that has ads and programming targeting kids?
I watched 80s horror movies when I was in elementary school and had nightmares for years. Should I sue now?
How about parents be held responsible for how they care for their kids or not? Maybe a culture that judged parents more strongly for how they let their kids spend their time would be an improvement.
Being able to find some basis for comparison between two things does not render them equivalent, and this is an extremely frequent fallacy I see with regard to technology discussion on HN.
When it comes down to it, I’m not sure how you differentiate an “addictive” product from a well-made product that I choose to keep using.
When people say that Tetris and Civilization are “addictive” they aren’t implying anything malicious about the development, it’s more of a compliment about the game (and maybe a little lament about staying up too late).
But the addictive nature of social media feels different and I can’t figure out what that distinction is.
I have an instagram account because it's by far the best way I know of to keep up with various small businesses, local or otherwise, that I like.
What I go into the app to do: see if there are any updates from those businesses.
What the app presents me on launch: a bunch of nonsense selected for what will best-distract me. And you know what? Sometimes it does catch my attention for a minute or two!
What the app doesn't let me do: disable the nonsense, or even default to the tab of accounts I'm following. Hell they even intentionally broke ways to achieve this with iOS' scripting, you'd think that'd be niche-enough they wouldn't care, but apparently enough people were doing it that they bothered to break it.
The algo feed is addictive on-purpose. I would turn it off if I could, and there's a damn good reason they don't let you do that. I "choose" to engage with it sometimes, which sometimes gets people coming out to go "oh-ho! So your revealed preference is that you like the feed!" but that's plainly silly, as that's highly contextual and my in-fact actual preference would be to never see that feed again in my life, and in fact I've spent a little time trying to make that happen. It's only my "revealed preference" in a world where I've had to compromise by occasionally losing a couple minutes to this crap because the app won't let me go straight to what I actually want. That's my true preference, the "revealed" one is only ever briefly flirted-with in a context in which I'm prevented from attaining my actual preference.
Consider a person who struggles with eating junk food. They don't keep junk food at home, in fact. That is their preference, to not keep it around, because they don't want to eat it and know they will if it's there. Now concoct some scenario in which, in exchange for something else they want, they have to take delivery of a couple bags of potato chips and a box of cookies every week. And sometimes, they eat some of that before tossing it out or giving it away! "Ah-ha, so their revealed preference is that they want junk food!" Like, no, of course not.
There's a reason these apps have to prevent you from using any part of them except with the presentation they like: because they're being addictive on purpose, and tons of users do not want the addictive parts, at all, but do want other parts.
People will now say "the algorithm" and "dopamine", explaining nothing. You see, social media is truly addictive because it's been honed to be addictive in some way that isn't specified or known or actually true.
OK, let me try to analyze it:
1. Humans are idiots.
2. We have idiot glitches where we obsess over some particular thing. This is our own business and our own fault, and is impossible to tease apart from just liking stuff a lot and benefitting from it.
3. These glitches tend to accumulate in certain areas, and then some companies find themselves in the position of profiting from human glitchy idiocy, even though they didn't want to be behaving like scammers.
4. Then some of them get cynical about it and focus on that market segment, the obsessed idiots. This can include gambling and social media.
Tetris and civilization are also harmfully addictive, but the scope of the behavior they can hijack is lower. "One more turn" at 2am is harmful. Just not as harmful as something that knows about and interacts with every aspect of your social life and your view of the real world around you like social/media apps do today.
A really well built hammer doesn't make you want to spend all your time using a hammer, it's just good when you need a hammer. That's a well-made product that you choose to keep using.
there's hundreds of good books on all types of addiction, including home shopping network style, gambling / lootbox / gacha, adrenaline, sex, and so on. My spouse, at the beginning of this month, went to a 2 day series of lectures about novel treatments for gambling, as part of their CEU for their license. I know most of HN won't know what i am talking about, so:
In general professionals must be licensed and bonded. The state requires a degree and a test for the first license, then, for my spouse's, something like 8000 additional hours of training, and something like 100 hours of continuing education per year. a CEU is 1 hour of continuing education. you have ~5 years of time to transition your license by doing the above training and CEU - as a rolling window. Doctors, nurses, etc all have to do this sort of thing.
Would any of you put up with that kind of stuff to make $80k a year?
Not to disagree with you, but in the case of Civilization, I do find it addicting in both senses. It is one of two games that I just cannot play, because I will be up until 3am playing. (Puzzles and Dragons was the other one, I think I had to uninstall it the day after I downloaded it)
Oh, not Factorio. I guess Factorio might be slightly less addictive than crack because I was eventually able to put it down.
I think this represents a strong misunderstanding of what addiction is, and how it works. I mean this respectfully, and not combatively -- I expect you have never had problems with addiction.
When it comes to behavioral psychology research, there is a strong understanding of concepts such as behavioral reward schedules: interval-based rewards, time-based rewards, variable-interval rewards. People have a very clear understanding of what sort of stimulus is and is not prone to addiction. You can get a mouse in a cage to become hopelessly addicted to pressing a lever for a reward depending on what reward schedule you use, and this does not happen with a mouse who can just get the reward at a regular interval (or perhaps merely a less-addicting interval). The mouse in the cage pressing a button set to a variable-ratio reward is equivalent to an old person using a slot machine in a very literal and direct way. This also translates to social media with infinite scrolling: so many of the stories are duds, but the variable interval means the extremely enticing (or enraging) story just might be the next one.
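The fixed- vs. variable-schedule distinction can be sketched in a few lines of Python. The press counts and ratios here are arbitrary toy numbers, not taken from any actual study:

```python
import random

random.seed(42)  # deterministic toy run

def fixed_ratio(presses, ratio=5):
    # Reward on every Nth lever press: fully predictable.
    return [i % ratio == 0 for i in range(1, presses + 1)]

def variable_ratio(presses, mean_ratio=5):
    # Reward with probability 1/mean_ratio on each press: same average
    # payout, but unpredictable -- the slot-machine / infinite-scroll schedule.
    return [random.random() < 1 / mean_ratio for _ in range(presses)]

print(sum(fixed_ratio(100)))     # exactly 20 rewards, at known times
print(sum(variable_ratio(100)))  # roughly 20 rewards, but no press is "safe" to skip
```

Both schedules pay out the same on average; what differs is only the predictability, and that difference alone is what the literature ties to the compulsive responding described above.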
> Tetris and Civilization are “addictive” they aren’t implying anything malicious about the development, it’s more of a compliment about the game
Because it's a figure of speech, not a clinical diagnosis. Literal and figurative addictions are different beasts.
Intent, premeditation, and scale are major differentiators. When they know they will cause harm, concentrate and fine-tune it for effect, turn it into a firehose, and target it at specific individuals, it's very, very different from what random ads, games, or movies do. These companies literally designed their products with the intent to make them addictive and to target children, knowing the full implications and ignoring the harm they caused.
You're comparing a drug dealer who only sells to kids to a store clerk who also sells ice cream to kids. It doesn't take more than scratching the surface to realize the similarity is very fleeting.
I understand what you’re saying, I personally don’t like or use social media, but I don’t agree that these companies are at fault after reading this article and others. I’d rather be wrong and learn something than think I’m right, so I welcome further criticism.
I agree with you that parents need to ultimately be responsible for keeping their kids off social media. I think there are a few problems here:
- Social media is still somewhat new, and the broader public is only now discovering that it's a clear net negative both personally and for society. Because this is such a new realization, I think a LOT of people have not really figured out how this problem should be dealt with (both personally, via social norms, but also with regard to laws and regulations).
- No matter how awesome of a parent you are, 100% of your kids' friends will have social media and they will introduce it to your kid. That may do less harm than if they have it themselves, but some harm will still be done.
- There are network effects to consider. It's true that it's your personal fault if you use cocaine -- however we also understand that cocaine is so addictive that it really cannot be used safely. Social media is metaphorically the same. It's a personal failing if you're a social media addict, however broadly almost everyone is susceptible to it. In my mind, that is an argument for regulation.
Now that said, I have zero faith that our government can actually build sensible regulation here.
They strategically use patterns that directly trigger the release of dopamine into the brain.
They've created algorithms that use slot machine like experiences that keep kids hooked to the screen.
These algorithms feed users barely moderated content that plays to their worst instincts, with almost surgical precision when trying to elicit engagement.
Then, when research shows them the harm they're causing, they bury it, hire lobbyists, and double down.
Switch out a few words up there and you have the big tobacco playbook.
It's not just kids. My parents have spiraled in this way too. Why interact with each other when reels are more exciting? Why pursue friendships if you can experience it parasocially? This has been incredibly depressing, and it's a reason I make sure to value the people in my life. I have a lot of disgust for Meta and Google seeing what they've done to society broadly. All for money
Right, like social media and addictive drugs for instance.
Both things can be true. Parents can share responsibility. But it is also the case that Facebook actively suppressed research that showed that children using their platforms experience emotional harms. It is also the case that around the time you were in elementary school discussions about children’s programming had been ongoing for years and eventually regulations were put in place[0].
0: https://en.wikipedia.org/wiki/Regulations_on_children's_tele...
I can agree that I think they acted to harm society knowingly. I used to think regulation could help and maybe it can, but if there were some way to shape the culture to value, for example, educational tv programming, I think that would be the most powerful influence on tech/media companies. Regulation could serve to inform parents “this programming/platform is known to rot your kids mind” like a nutrition label and some day hopefully parents will be more likely to disallow it like some do knowing how much sugar is in sodas.
> How’s this different than tv that a kid might see that has ads and programming targeting kids?
Those ads didn't adjust themselves on a per-child basis to their exact interests.
Parents ought to be held responsible for how they care for their kids. This isn't just true of their use of social media and devices, but also when it comes to teaching them to look both ways when crossing the street; making sure they understand the concept of private parts, consent and personal space; making them understand the dangers of alcohol, and many other things.
Does any of that obviate the need for safe urban design, anti-CSAM and anti-molestation laws, or laws prohibiting the local dive from serving a cold one to my 11 year old? Will simple appeals for "parental responsibility" suffice as an argument for undoing those child safety systems we put in place, or will they be met with derisive dismissal? Why should your "solution" be treated any differently? In fact you offer none. Yours is the non-solution solution, the not-my-problem solution, the go-away solution. Not good enough on its own, sorry.
For 30 years (60s to 90s) we told parents "It's 10pm, do you know where your kids are?" with an ad, on TV. We came home to empty houses and let ourselves in with a key around our necks.
Now, we call the police, and arrest parents, if kids are outside, unsupervised. https://www.cnn.com/2024/12/22/us/mother-arrested-missing-so...
When I was a child in the 80s and 90s, we had "jobs" as kids... Mowing lawns, Paper routes and so on. Now if you go offer to mow your neighbors lawn, the cops get called: https://www.fox8live.com/2023/07/26/officer-surprises-young-...
Parents are afraid to let their kids out of their sight, and those of us who have been pragmatic because we understand the data (and not the fear) tend to get looked down on.
Talk to anyone who is Gen X and they will tell you that we basically got thrown outside all day (and had fun). Parents can't say "go outside and play", so kids end up getting handed devices... and they are going to play and explore and do the dumb things that get them in trouble.
> those child safety systems we put in place
Except we have denormalized things that SHOULD be perfectly fine. And as fewer kids get to go outside unattended with friends, it pushes their peers to go "online" to socialize.
Maybe the government needs to run commercials: "It's 10am, why isn't your child outside playing with the neighbor kids unsupervised?"
As sibling comments point out, parents are already overly held responsible for how they care for their kids. To an absurd amount.
I have had CPS called on me by an overbearing school administrator. Have you had that happen to you? Let me tell you, it's not a fun experience.
Enough of this "blame the parents" mentality! Ironic given that the goal for all these platforms is growth at all costs. Where do you think "growth" comes from, after all? If you make being a parent so goddamn difficult that it's more rational to just not do it, guess what, poof goes your sweet, sweet growth.
So tired of this line of thinking. The parents are put into an impossible situation. Stuck between kids who by definition and by design will test the boundaries that they're given, and tech platforms that are propped up with not just trillions of dollars of valuation, but the societal expectation that you engage with them. Want your kids to compete in sports? Well, they need to have WhatsApp and Instagram to keep track of team events!
Give me a break. Equating controlling social media and devices to "look both ways when crossing the street" is disingenuous at best. There are no companies that make billions of dollars in advertising revenue telling your kids to jaywalk. But Facebook gladly weaponizes their algorithm to drive "engagement" - and, surprise, children with still-forming prefrontal cortices are drawn to content that reinforce their natural self-criticisms and doubts. So now my child, who has to be on Instagram to keep track of sports schedules, is also force fed toxic content because that's what a mechanical algorithm thinks is most "engaging" based on my derived psychological and demographic profile.
You want to talk about CSAM? X proudly proclaims that they have every right to produce deep-fake pornography with the faces of underage children. What action shall I, as an individual parent, take if my 15 year old girl's face is suddenly pasted onto sexually explicit video and widely shared thanks to xAI's actions? Shall I be held responsible for how I "let this happen" to my child?
You seem to imply in your reply that I disagree with you, hence necessitating a polemic style. I would have thought the last few sentences of my comment make it clear where I stand on simplistic appeals to "parental responsibility".
> Parents ought to be held held responsible for how they care for their kids.
If YouTube detects that a child is watching 5 hours of video a day, should Google alert child protective services?
Why don't we start with a mechanism for user registration that does not involve a simple pinky-swear "over 13?" checkbox and then continue the conversation about further steps.
How would that hold anybody responsible? What did you have in mind with respect to parental accountability? Does anything other than the legal system actually have power to make changes when it comes to bad parents?
> How’s this different than tv that a kid might see that has ads and programming targeting kids?
It's not; that's illegal as well. You cannot target kids with TV advertising.
The difference is largely in the way that the legal caste perceives themselves to be aligned with media but opposed to tech.
We're a two parent household and my spouse had cancer and never really got all of their energy back, and works full time, so the entirety of home, land, and car maintenance comes to me.
I homeschool our youngest because the school system here sucks, based on the experiences of our older two. I'm always exhausted. I solved this (the "parents must be more involved") by watching my kid play roblox, arguing with them about spending their money on gift cards instead of lego, posters, or whatever that isn't so fleeting; i also don't let them have a cellphone. They turn 10 in June. We don't have TV or CATV, i have downloaded most of the old TV programs that kids liked, and grandma doesn't watch kid's shows so he really doesn't have a perspective on what everyone else's viewing habits are. He watches YT on his Switch about fireworks, cars, and then also some of the idiots with too much money acting goofy, plus what i would call "vines compilations" of just noises and moving pictures, i don't get it, but it seems harmless. For the record, pihole no longer blocks youtube ads, so i was just told there are ads on the Switch, now.
But anything beyond that, I can't watch, nor do I want to watch, their every interaction on a computer. I've got to cook, and the weather isn't always conducive to sending them outside to play. When I was growing up and was bored, there wasn't too much I could do about it. Today, my youngest has virtually anything on the planet just peeking around the corner. America's Funniest Home Videos and a blue square shooting red squares at orange squares? Yeah, OK.
===========
It's getting to the point where I think people who have really strong opinions on topics like this need to disclose any positions they might have that influence their opinion. My disclosure is that I have no positions in any company or entity.
Everyone in the US has been fed a lie that if we just work hard and don't interfere with the billionaire class, then someday, we, too, can be rich like them. It's a bum steer, folks. For each billionaire who "came up from the slums" or whatever, there are 100 who are billionaires because their families did some messed-up stuff, probably globally, sometime in the last 200 years. And offhand, knowing the stories of a bunch of billionaires, even 10 in the US who were honestly self-made and didn't defraud, cheat, or skirt regulations to get that way seems almost an order of magnitude too high.
I bring up the above two paragraphs because, if one has a position in Facebook, of course they're going to rail against Facebook losing Section 230 protection for any part of their operation: Instagram, the FB feed, whatever. The same goes if a person has a position in GOOG, or Apple, or Tesla. What's that Upton Sinclair quote that's been mentioned twice? If someone believes that, given luck and grit, they too could make a "Facebook"-sized corp, but not if the government says "you can't addict children to sell ads", then I consider them a creep.
For the record: my oldest two are in their early 20s now.
A/B testing is one way to make things “addictive” but you can also make addictive products without it.
A really good designer could make a highly engaging app, or an editor could write clickbait headlines, all without testing.
These products maximize revenue through engagement with advertisements. The outcome is built into their business model.
I would argue that no app/website should be selling itself to kids. No corporation should be trying to tether its ARR to children's attention.
When my kids were young, we canceled our Disney Channel / etc cable subscription and showed them more PBS and similar.
It was really annoying to turn on a show for 30 minutes and then spend the next week hearing about that new toy they just had to get. It was exhausting.
Correct, selling attention inevitably leads to harm.
As a parent, the only solution is sticking to ad-free subscription services. PBS is a godsend here, but there are other good options out there too. Tragic that public broadcasting funding was cut when there are clear harms in the free* commercial options.
*Except for your time and mental health of course
Agreed. Libraries have books and DVDs, and you have things like the classical stations. You also have playgrounds and walks in the park, etc. (I'm also a parent of two young children.)
Always doing wholesome stuff with your kids is certainly not easy or trivial, but there is a cascading effect here. If your child does not expect to be able to just watch TV all the time it's easier to keep them interested in other things. Once that expectation is burned in you'll be fighting it for a while. And once that expectation is burned in, a small child will _never_ say "I've had enough youtube, I don't need any more."
So I really don't want to be self-righteous about always doing wholesome stuff with your kids (we definitely do not succeed 100% of the time) -- but rather point out that letting them use addictive media has negative, cascading consequences that actually do make it harder for you as a parent. It's analogous to drinking to relax. You get relief now, and pay for it later. Not actually a good tradeoff much of the time.
The unfortunate reality is that the internet has more up-to-date info than books, DVDs, PBS, and even 'the classical channel.' I play the piano and have found immense amounts of rare but nice music online, and only online. I completely agree that the media is bad; I'm just pointing out that it's necessary to a degree if learning is your aim.
PBS is great if you are looking for a workable harm reduction strategy. Eliminating that type of entertainment is probably an even better goal.
I guess ultimately it depends on if the app/website authors do so "negligently" or not.
> Jurors were charged with determining whether the companies acted negligently in designing their products and failed to warn her of the dangers.
So if you do so while providing warnings and controls for people, that might make it OK in the eyes of the law?
Probably not much other than scale. Facebook is large enough that they can hire behavioral researchers to make this stuff more addictive while looking the other way and raking in the money. I think Roblox is just as bad (maybe worse) regarding addiction for kids. I've played hundreds of hours with my sister's kids, and the way all these low-quality slop games handle grinding, progression, and pay gating is honestly disgusting.
But then again, I manage to get myself addicted to a video game usually once a winter for a few weeks, and don’t play games for the rest of the year. There’s really no solution to this, but I don’t want to live in a world where everyone is hopelessly addicted to shallow digital experiences.
Because most are just nowhere near as good and effective at ruining a kid's mind as Meta. If others were as good as Meta at destroying whole generations of cognitive development, they'd probably also be liable.
"Algorithm" would be the key word, I think.
A/B testing is very, very different from handing over control of your content to a reward function that optimizes for time spent over any other criteria.
We had 10+ years of having products like Facebook, Twitter, YouTube, hell even LinkedIn, with a basic content model of "you build your own graph of people who you pull content from", and their job was to show it to you and put ads in there to fund the whole enterprise. If I decided to follow harmful content? That was a pact between me and the content creator, and YouTube was nothing more than a pipe the content flowed through. They were able to build multi-billion dollar businesses off of this. That's really important: this was enormously profitable. But then the problem happened that people's graphs weren't interesting enough, and sometimes they'd go on the thing and there were no new posts from people they followed, and this was leaving money on the table. So they took care of that problem by handing over control of the feed to the reward function.
More accurately, especially for Meta products: they completely took control away from you. You didn't even have the option to retain the old, chronological social graph feed anymore. And it was ludicrously profitable. So now the laws of capitalism dictate that everyone else has to follow suit. I now have extensions on my browser for Instagram and YouTube to disable content from anything I don't follow - because I still find these apps useful for that one original purpose they had when they blew up and became mainstream. Why are these browser extensions? Why can't I choose to not see this stuff in their apps? That's the major regulation hole that led to this lawsuit, imo.
It's the same thing you see with people blaming smartphones for brainrot. We've had 15 to 20 years of smartphones with more or less the same capabilities as they have today, and for the vast majority of that time my phone didn't make books less interesting or make me struggle to do chores or manage my time. For a full decade or more I saw my phone as a net positive in my life, was proud to work for Twitter, and generally saw technology like the Louis C.K. bit about the miracle of using a smartphone connected to Wi-Fi on an airplane. But in the last five years or so, things have noticeably and increasingly gone to shit. Brainrot is a thing. All my real-life friends who are the opposite of terminally online or technical are talking about it. I don't use TikTok but it seems like that is absolutely annihilating attention spans. The topic of conversation over drinks is how we've collectively self-diagnosed with ADHD and struggle with all kinds of executive function... but also are old enough to remember a time when none of this existed. Complete normies are reading Dopamine Nation and listening to Andrew Huberman trying to free themselves.
I don't know what the exact solution is, but there's at least a simpler time we can point to when we all had smartphones and we were all connected via platforms and we all posted and consumed stupid pictures of each other and it wasn't.... _this_.
This is the clearest articulation of the problem I've seen in this thread. The chronological social graph feed era was fine. The handoff to engagement-optimizing algorithms is where things broke.
I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.
Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.
Great point RE the self-learning algorithms. That's what I intended originally, but didn't communicate clearly.
Regarding brain rot, short-form content is absolutely going to be the root physical cause; people could tolerate smartphones prior to the inception of short-form content. On a cultural level, this level of destruction could be compared to the effects of a coordinated and targeted attack from enemy nation states, if not for the fact that we did this to ourselves in the name of profit. One can only hope that the old guard wakes up to systematically handle this issue that we have no familiarity with; otherwise our system will buckle under the pressure of 10-20 years' worth of nonfunctional humans. I do find a technocratic dystopia far more likely, considering the aforementioned mentally castrated opposition... How's a generation of kids going to win against trillions of dollars of Zuckerberg 'engineering' steering them since birth? Shame on the 'engineers' who engendered this mess, shame on their shepherd 'managers', and shame on the sociopaths at the top.
It sounds like an adult was awarded $6 million because she watched a lot of youtube/instagram as a kid. Literally any social media site would be guilty of this; I hate to say it but we need better corporate protections if cases like this are allowed to enter court.
At least legal experts are critical of the decision: '“I don’t think it should have ever gotten to a jury trial,” said Erwin Chemerinsky, dean of the UC Berkeley School of Law'
I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.
Anecdote, but it does seem like a lot of younger folks I speak with are exhausted by the dark patterns and dopamine extraction that top-k social media platforms create.
If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.
> I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.
This sounds like the original internet.
Before adtech took over.
The original internet wasn't about that at all, it was just in limbo while people were figuring out what it was going to be. It wasn't developed or optimized enough to be _anything_.
It will come. The problem is, so will the addictive stuff. The key is going to be real, meaningful connection. Social media wasn't about community; Web 2.0 was. In 2005 we were connecting with real people we knew, and probably up until 2011-2012 we still were: friends of friends, colleagues, people in our network. Then it got really bad.
Getting back to community is key.
> I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species.
To me this statement reads as both inaccurate and ignorant of human nature. Social media was actually better when it was about individual ego (Myspace/LiveJournal); as obnoxious as that can be, today everything is worse because of petty tribalism. Most conflicts on social media are inter-tribal, whether it’s racial, political, national, or feuding “stan” culture groups. The worst problems come from groups who organize on platforms like Discord or Kiwi Farms to direct harassment campaigns against perceived enemies (or random “lolcow” victims).
Simple observation of the present world and history will tell you that a platform focused on “collective improvement” will only appeal to a small subset of potential users. Of course such a platform would not be a bad thing. Places like this (such as The WELL) used to be common when the internet was dominated by academics, futurists, and tech enthusiasts. But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
> To me this statement reads as both inaccurate and ignorant of human nature
> But average people are not interested in this kind of platform, and will not participate in good faith in such an environment.
I'm not ignorant of human nature and tribalistic tendencies. The undercurrent of my comment is of an optimistic hope (or cope) that we can move past competitive individual validation programming. I'm aware that it's due to our nature, but also aware that it's exploited by dark patterns and extraction at scale through software.
Thanks for replying. I agree that dark patterns and other psychological manipulation is a problem, I just don’t think it’s necessarily ego-centric in origin any more than gambling. These companies have found very efficient methods to extract attention and money from humans by exploiting their brain’s natural reward functions. I’m not sure what the answer is, because it’s obviously a problem (again just like gambling addiction), but I do support people’s rights to engage in things like gambling.
Since we don’t live in a perfect world, I suppose some regulation of the industry would be fair, just as we mitigate the harms of gambling somewhat through regulation. I just worry about regulation being used as a Trojan horse to stifle political organization and/or open communication about corruption, cronyism, and oppression.
It may be that the future is more small platforms where conflict is limited to in-group conflict rather than global platforms where all of humanity’s disagreements are surfaced and turned into fodder for monetization.
Gambling is a great example. When I say "ego" I really mean the reinforcement of the individual pattern through survival-resource games, power play, or external validation. I'm not using it in the classic psychological way, per se.
Regulation could work, but in my opinion the problem isn't devious mastermind product people attempting to entrap humanity -- it's self entrapment in a recursive way.
Regulators could add red tape and boundaries for what is or isn't kosher or legal, but in the end, can prohibition fix systemic integration with an addictive technological superagonist of our own creation?
I guess I just don’t see humanity awakening to transcendent egolessness any time in the near future (if ever). Based on my experience, the average person is fairly constrained by their biological reality. We often like to pretend that this isn’t the case, and pretending may work for a while, but eventually sufficient stress causes the illusion to unravel forcefully.
Regulation isn’t perfect; in the best case all it can do is limit the worst harms. It’s still a bad idea to engage in regulated gambling, as you are very likely to lose money. Almost everyone knows this, yet many people do it, and I can’t see that changing any time soon.
> I'd hope the next iteration of social media tools humanity builds are less about reinforcing the individual ego and more about collective improvement, learning, and supporting the health of our species
Do you have a mechanism for this in mind, incentives-wise? I can't see this making money.
I guess the real question is whether a website where you communicate with friends and close ones needs to be a multi-trillion dollar company in the first place... historically most of them have not been worth very much at all.
The question then becomes: how can you make a website with all your friends (and, by association, all their friends) generate enough profit to run itself?
You mean, how can my friends and I fundraise my $3 VPS? It's going to be rough, but I think we'll find a way ;)
(If we hit the stretch goal, we can upgrade to a raspberry pi!)
This is a bit of a silly response on your part. You're not answering the question of WHY people are on FB and not on the little sites that existed 20 years ago before FB. It's called the network effect. You have friends, your friends have friends, those friends have friends. Rather than there being 30 bajillion separate sites representing these friend connections, people go "hey, why not one site with everyone there".
Said little sites may run for a bit and die, and the massive monolith remains, at least until another monolith replaces them.
Well, indulge my silliness for a moment... what if the servers on the internet could talk to each other?
I suspect in just a few more decades, we shall reinvent the 90s and 2000s p2p networks from first principles.
I mean do you remember what p2p turned into for things like downloads? "Some_popular_thing.mov.exe"
They worked great when most actors were well behaved and got abandoned pretty quickly once that changed.
I'm not sure how that applies here? The argument is that a p2p network will be flooded with bots?
There's a p2p network I use today which doesn't have that problem, probably due to its small size. Meanwhile all the big platforms do — including this one!
Early Facebook was kind of a great mix. It had enough people on it, it was making money, and the advertising was much more reasonable. At the time it really was a place to connect with IRL friends.
It needs enough revenue to fund its operations. And most people won't pay for such a website, so if you want one place where most people you know are, then...
Come on, don't hand wave over the obvious. Think about how much it would actually cost to run a social media website that competes with the big social media on the core product of sharing and communicating with friends. It would be extremely realistic to build something that's both free and sustainable with just regular ads, as was done decades before.
(EDIT: to clarify, I don't mean to build an alternative monopoly, I mean to build alternatives that are big enough to survive as a business, and big enough to be useful; A few million users as opposed to the few billions Facebook and Youtube (allegedly) have)
The reason it's hard to imagine such a thing today is because the tech giants have illegally suppressed competition for so long. If Google or Meta were ordered to break up, and Facebook/Youtube forced to try and survive as standalone businesses, all the weaknesses in their products would manifest as actual market consequences, creating opportunity for competitors to win market share. Anybody with basic coding skills or money to invest would be tripping over themselves to build competing products which actually focus on the things people want or need, because consumers will be able to choose the ones they like.
> Think about how much it would actually cost to run a social media website that competes with the big social media on the core product of sharing and communicating with friends.
It would cost tons, man. You don't understand the scale these apps operate on at all. Meta has their own data center footprint that rivals AWS or any other cloud company, and they had that before AI, and it's not all just to run ads on. On-demand photo and video streaming and storage, for free, for all of humanity, is incredibly expensive.
Social media with only millions of users is basically worthless because it won't capture enough of an average person's circle to be useful to them
> On demand photo and video streaming and storage for free for all of humanity is incredibly expensive.
Maybe you missed my edit? I specifically said not a clone of the monopolies, but a competitor big enough to be a sustainable business. The economics of a monopolist's empire are irrelevant.
> Social media with only millions of users is basically worthless because it won't capture enough of an average person's circle to be useful to them
There's so much wrong with this statement. First of all, I will never meet anywhere near a million people in my lifetime. A regular human being's real social connections won't be anywhere near that big.
But even if it is (or users want to discover/follow random people), it doesn't take a computer science genius to discover how to interoperate between social networking apps. Meta and Google would never do this, but that's because they're anti-competitive monopolists; if you're a startup trying to gain marketshare and win on your product's quality, interop with other networks is a no brainer. We probably don't even need regulation to require interop, as the market will see it as a useful thing to develop on its own.
I feel like discord is kind of like this used correctly, but with the recent drama and such it feels terrible
A $4.99/mo subscription would yield more revenue than Facebook makes in ARPU from all that fancy, creepy, and intrusive ad tech. Paying YouTube to not advertise to you makes it a 10X better experience.
> $4.99/mo subscription would yield more revenue than Facebook makes in ARPU
Even ignoring the adverse selection of who'd subscribe, their ARPU is higher than that in North America: https://www.statista.com/statistics/251328/facebooks-average...
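Rough arithmetic makes the comparison concrete. A quick sketch, using an assumed round-number ARPU figure for illustration (Meta reports North America ARPU per quarter, and the exact value varies by report):

```python
# Back-of-envelope check of the subscription-vs-ARPU claim above.
sub_per_month = 4.99
sub_per_year = sub_per_month * 12            # roughly $60 per user per year

# Assumed round number for illustration only, not an exact reported figure;
# Meta's reported North America ARPU is quarterly.
assumed_na_arpu_per_quarter = 60.0
ads_per_year = assumed_na_arpu_per_quarter * 4

print(f"subscription: ${sub_per_year:.2f}/user/year")
print(f"ads (assumed NA ARPU): ${ads_per_year:.2f}/user/year")
```

Under that assumption, ad revenue per North American user comes out several times higher than a $4.99/mo subscription, which is the point being made here.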
Well, another example comes to mind. Coordinated efforts to preserve the biosphere for all mankind are probably not going to be great for GDP.
We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The "the incentive structure says we should all destroy our brains" thing is just a small aspect of that.
Actually that's probably really good for GDP, just not over the kind of time periods an individual human deals with or cares about.
> We've tied our incentives to a structure which is not in alignment with continued survival. The real question is how can we incentivize ourselves to continue to exist?
The continued survival of individuals or humanity as a whole? The individuals seem to survive OK, and arguably there's nothing that could convince them to prefer the survival of the amorphous group, save for some kind of brainwashing.
Heh, that's a very good point. GDP begins to correlate with the biosphere over sufficiently long timespans.
We shouldn't be optimizing for quarterly returns, but for the next ten thousand years.
Ads were profitable before the outrage optimized flamebait internet era.
It doesn't need to make money directly (and probably shouldn't).
The incentives would be those which have motivated people throughout history: to create something which benefits humanity.
Ah yes, I too love free servers and bandwidth.
Lol, it doesn't have to run for free, and servers are really powerful these days (especially if you don't use a slow language). There are other monetisation strategies besides exploiting users for profit.
It doesn't have to run for free, but if you're competing against anyone else running for free you've already lost the game as they suck the air out of the room with the network effect.
Next, text-only platforms are nice, but niche on the modern internet. People seem to love multimedia, which takes tons of bandwidth/CPU.
Paid-for services don't mean spam-free either. If it's worth it for people to pay for, it's worth it for spammers to pay to get in and spam.
Then you have all the questions on what happens if you grow, how do you deal with working with all the laws around the world, how do you deal with other legal issues.
Having a site/service of any size can quickly become an expensive mess.
I hear word that in some countries, the government makes it so that screen time is limited, and algorithms promote educational content. Fortunately we civilized peoples are free of such a brutal oppression ;)
> If agents/AI/bots inadvertently destroy the current incarnation of social media through noise, I think we'll be better for it.
They are going to be (and AI slop already is) so much worse. Once they get ads to work well / seem natural the dark patterns will pop right back up and the money spigot will keep flowing upwards
Coming from someone who hates social media (and has kids), this might seem like a good thing on the surface, but I worry it will be another case used to allow the government to limit speech on the internet for adults.
This is a civil trial between a regular person and corporations about product liability. It has nothing to do with the government.
True, but people in the government are already pointing to it as reason to pass the "Kids Online Safety Act" and overturn section 230.
Liability and free speech are conjoined at the hip in the United States, courtesy of Section 230.
> Liability
Product liability is a subdivision of tort law that allows for recovery for damages caused by the makers or distributors of a product. This case has nothing to do with Section 230, the plaintiff successfully argued that the product was defectively designed and caused harm to the plaintiff.
Section 230 immunity is not a shield against all liability, it's only a shield against hosting problematic user content.
Apps like instagram and YouTube should be required at least to give an option to disable reels and shorts
There should be a law to require the ability to disable algorithmic customization of content. If these apps are so compelling it shouldn't take a Spark cluster riffing on my private viewing habits to come up with content for me.
I don't recall a lot of complaints about Facebook or Instagram when it was actually your friends' content. But now it's force-feeding everybody their own "guilty pleasure" viewing material 24 hours a day. It's fucking sick.
> disable algorithmic customization of content.
What does that even mean?
I assume they mean a similar experience to browsing /r/popular on Reddit. You're getting a feed that isn't tailored to your browsing history, likes, or preferences. It's less addictive, and the company doesn't need to know anything about you to provide the experience.
One of the benefits of being on Android and being able to sideload apps: look up "ReVanced YouTube" and you'll be able to turn off Shorts.
uBlock Origin for blocking them on desktop. If you're on an iPhone... uninstall YouTube?
My quality of life has increased substantially... although sometimes the app bugs out and Shorts still make it onto my home page. I spend like 10 minutes scrolling through Shorts, get a weird shock ("how the fuck did I end up here?"), restart the app, and boom, Shorts gone again.
Don't forget WhatsApp. Kids are allowed to have WhatsApp for messaging, but they get fed videos there too. There is no way to really disable them. Also, this should be a parental supervision setting, not something that kids can override.
We need a return to consent. I want to be able to say "no". Not "see fewer shorts", see NO shorts. Not "maybe later", actually fucking "no".
This
For YouTube, on the mobile app: Setting -> Time Management -> Daily Limits -> Shorts Feed limit
There is no option for zero.
YouTube Shorts will come back, but you can just click the row each time to show less. Otherwise, if you really don't want to see them, on desktop at least a browser extension works well.
Perhaps we need more social activism (remember that?) to stop people falling into this kind of addiction. I remember anti-drug campaigns; they were everywhere. Phone addiction is not taken nearly as seriously.
Using the war on drugs as an example of a successful social movement is kinda hilarious.
People still generally think drugs are bad, don't they? And only the ones that were included in the war on drugs (not nicotine, alcohol, caffeine)? So it was a success.
> People still generally think drugs are bad, don't they?
No
> There were high levels of agreement that drugs are a problem in Irish society: 88% of respondents agreed that drug-related crime is a major problem in Ireland, and 87% agreed that the availability of illegal drugs poses a great threat to young people nowadays.
https://www.drugsandalcohol.ie/27213/
> Only 22 percent of respondents said they would be willing to work closely on a job with a person with drug addiction compared to 62 percent who said they would be willing to work with someone with mental illness.
https://publichealth.jhu.edu/2014/study-public-feels-more-ne...
In before someone says ‘blame the parents’ and not the multi-billion dollar companies who’ve spent decades targeting children for lifelong addiction, ignoring the negative effects on their mental health.
It need not be either-or.
The guy who made the drugs is guilty. The guy who sold the drugs to kids is guilty. But parents who failed to warn kids about drugs and to oversee them properly are also guilty...
Generally in an article about arresting or sentencing a drug dealer, people don't bring up that the drug users are actually to blame.
Now if we're in a discussion about the cartels, plenty of people do bring up (and there are also those who get annoyed by it) that the drug users are actually the ones funding the cartels via their drug use.
Along these lines, I think another fun comparison might be opioid use and Purdue.
I think that that is actually an oversight. One needs to consider the entire chain. For example, with proper parenting, there would be a lot less youth demand for drugs. It doesn’t make what a drug dealer does any less bad, nor does it make the efforts of the police to arrest the drug dealer any less important. But it’s suboptimal to consider a small piece of a system, without thinking of the whole.
It's also suboptimal to jail parents for not convincing their child that drugs are bad.
> It's also suboptimal to jail parents for not convincing their child that drugs are bad.
I never suggested that
So is the judicial system, which does not make this illegal or enforce laws to prevent people from targeting kids to create early dependence on drugs.
That is a fair point, I did not attempt to make a complete list, of course, but you are right, there are more layers that could be named. All valid. The point I was making is that parents are also responsible.
eg: I grew up in a very nasty place. My neighborhood had a few pregnant 13 year old girls and a lot of drunks and smokers, including kids in their early teens. My parents kept me away from it all, while also both having full-time jobs. They put a lot of work into filtering whom I could be friends with and where I was allowed to be. THAT is the job of a parent.
Sure, sounds like that's great parents you got.
But at the systemic level, we must consider the effect of social dynamics globally, not only how the most virtuous citizens deal with the direct situation. Pauperisation of the masses will mechanically lead to more social problems overall, even if there will always be brilliant heroes to point to as proof of what exceptional behavior makes possible. And societies that structurally help everyone avoid falling into distress or weak situations also help the exceptional go further, since they are freed from many cognitive loads they would otherwise have to deal with.
I agree it's the job of a parent, but two parents (and with only a single job each) is sadly not the norm in many challenging environments.
The thing is, it should be both. Parents often give too few fucks about the long-term welfare of their children, and are often guilty of the same vices. The issue is that these addictions are far more destructive to young, forming minds than to adults. Nobody who has small kids now had FB or Instagram access when they were 5, did they?
Maybe you don't do this. Certainly I don't. But looking around, it's much less rosy, and... let's say in blue-collar families it's too common to drug kids with screens so the parents get some time off. Heck, some are even proud of how modern they are as parents. Any good advice is successfully ignored, and ideas of spending proper time with kids instead are skillfully avoided. People got lazy and generally expect miracles from life without putting in any miracle-worthy effort.
Companies just maximize their profits as far as the law allows (and then some), and expecting nice moral behavior by default is dangerously naive and never true.
Consider that the insane growth in the cost of living - especially childcare - combined with wage stagnation means that the vast majority of families now have two parents with full-time jobs, keeping them away from their families for much longer than before. Consider that childcare is much, much harder to even get into now than in decades past. Consider also that "EdTech" means that nearly every child needs to be on an internet-equipped device at all times.
But sure, "Parents often give too little fucks for long term welfare of their children", that's definitely it. Parents just hate their kids! What a useful perspective you've brought to the discussion.
Look, I am in this category too, and all of that - the high costs, the distance from family - applies to me as well. I live in Switzerland, a country with many wonderful things about its society, but state help for young parents is not one of its strong points, quite the contrary. Both I (a fairly senior banking IT position) and my wife (a doctor) have intense, time-consuming jobs. All our family is very far away and can rarely help.
Still, given all that, I don't make cheap excuses like that. It's pathetic and weak and simply untrue. Things are harder, but that's it - not impossible, as your side of the argument conveniently wants to claim. Quality time well spent with kids is highly proportional to the outcome of raising them. There's no way to hide from that simple fact, and nowhere to hide from the results of parenting; everybody can see them in plain sight.
But if you set up your life so that pathetic things like career are of utmost importance and you have no time or energy for anything else, those are your choices, and that's fine. I just don't get why folks then have kids, only to skip actually raising them and then whine about how unruly they are, raised by toxic groups with no role models. Having and raising kids is not some fucking checkbox to tick and move on; it's a 20+ year full commitment and the biggest achievement in one's life, or the biggest failure. Worth some proper effort, no?
Oh man if they think YouTube and Instagram are addicting they should see what Roblox does lol
As someone who has maybe heard about Roblox once, like three years ago: what does Roblox do that is way more addicting than YouTube and Instagram? And I take it they're ignoring reports showing the harm even more than YouTube and Instagram do, if I understand you correctly?
It's an interactive world - where games can be built by anyone (I personally know/have met some of the devs) and all the games have some randomization/gambling mechanics involved. Lootboxes are just one tiny example. Infinite novelty - there's literally an infinite number of games one can play.
I don't have time right now to provide a full/quality answer with more examples - you can do a bit of searching online to learn more.
Also from personal experience (family and friends): when their kids come over, they have TikTok on their phone and Roblox on their laptop.
Right, but that sounds like a bunch of video games. Question is, is Roblox specifically designed to be addictive, like Facebook/Instagram/TikTok? And if they are, did the companies willfully ignore reports about how dangerous it was?
If the answer is just "No" to both of those questions, then it sounds like a regular video game that can be addictive (like everything else), but it wasn't specifically designed to be addictive, like some social networks are designed.
Aside from daily login rewards, loot boxes, gamified gambling behaviors, and FOMO-designed micropurchases? Roblox is bad, and much of what you run into there is absolutely not appropriate for kids - and I'm quite liberal about what's appropriate, beyond normalizing emotional damage.
https://pure.psu.edu/en/publications/the-system-is-made-to-i...
The funniest one? The 10-K discussing addiction-related legal issues as a risk factor.
Hardwick et al. (2025) “They’re Scamming Me”: How Children Experience and Conceptualize Harm in Game Monetization https://papers.ssrn.com/sol3/Delivery.cfm/5164006.pdf?abstra...
Kou, Hernandez, Gui (2025) “The System is Made to Inherently Push Child Gambling in my Opinion”: Child Safety, Monetization, and Moderation on Roblox https://pure.psu.edu/en/publications/the-system-is-made-to-i...
Song et al. (2025) How Predatory Monetization Designs Manifest in Child-Directed Online Games (SOUPS 2025) https://www.usenix.org/system/files/soups2025-song.pdf
Kou & Gui (2023) Harmful Design in the Metaverse and How to Mitigate It: A Case Study of User-Generated Virtual Worlds on Roblox https://sites.psu.edu/healthandplay/files/2023/05/Harmful-De...
Tunca et al. (2025) Navigating parental concerns in children’s engagement with Roblox https://pmc.ncbi.nlm.nih.gov/articles/PMC12821821/
Roblox Corporation (2024 Form 10-K) https://www.sec.gov/Archives/edgar/data/1315098/000131509825... I find this hilarious.
Not all games are created equal. I loved Zelda: Tears of the Kingdom, and its sounds and rewarding gameplay were, in my opinion, addictive; however, it's not in the same league as Roblox.
The best part is when you get a cohort of a few families to go camping, and the teenage daughter forces dad to drive 45 minutes each way for cell service to avoid breaking her daily login streak.
I don’t think people appreciate how these mechanisms impact society as a whole
There's also Prodigy, which schools push on kids to practice math; it has the same thing, including pay-to-win mechanics.
Read the book “Careless People” if you have a chance - according to the book, social media companies figured out they have real leverage with politicians since they can influence elections. As a result they are actively pushing for far right candidates to reduce their own taxation and regulation.
I don't think this accelerationism/fascism hobby of many tech bros is going to age well.
That book was so lame and the author leaves out how she profited millions and then only complained after she was fired.
It's also funny how they "discovered" they were influencing elections after they influenced the 2008 and 2012 elections.
How did the author not know this when she sought out and joined the company in like 2013!
The parts about playing Settlers of Catan with Zuckerberg were funny. I wonder what his side of the story was and whether people were really letting him win.
I thought she discussed her pay/stock, what a big deal it was to her and how that affected her decisions very openly.
She did write about how she decided to stay because of the money.
Her book doesn't cover the amount, and I couldn't find anything public where she discloses it.
I figured it was a lot based on standard FB salary+stock and the years she was there.
I'm not going to attack you but I do just want to highlight analogous comments of "she could have left"
- She was trying to work to change things
- She was pregnant and otherwise had young children and needed the money
I don’t buy that.
- She was not trying to change things. She was working to get countries ingratiated with FB execs
- She didn’t get pregnant until years into her work. She chose to have a second child while staying employed. She was already “rich” with millions likely earned when her first child was born and could have worked anywhere (but not making what FB paid). Wasn’t she an attorney? Prestigious attorney salaries are definitely enough to support children and a spouse who is a teacher.
Just needs a health warning label, like on alcohol or cigarettes. Then onto the high sugar products, and a quarter of the grocery store
If we want to compare it to alcohol/cigarettes, then kids shouldn't be allowed to use this either.
and the government should tax it accordingly
I don't think that you can practically expect to tax speech.
You can tax reach though.
We have health warnings for food that contains lots of sugar, fat and/or sodium in Canada
This just seems ripe for selective enforcement if not codified in law. I agree the algorithm they use can be addicting, but it's because it's simply good at providing content the user wants to consume.
Besides a general 'don't be too good' I'm really not sure what companies should do about it. It just seems like it'll lead to some judges allowing rulings against companies they don't like.
Television's goal was always viewer retention as well, they were just never able to target as well as you can on the internet.
> it's because it's simply good at providing content the user wants to consume.
Well, a drug addict wants to consume his drug, because the drug is good at keeping withdrawal at bay and the tolerance probably hasn't built up to the level where the addict can't feel its "positive" effects anymore.
The user feels an impulse to consume the content, but whether they want it we can only know by asking them. They can lie, consciously or unconsciously, but there is no better way to measure a desire to consume. When it comes to doomscrolling, I have never met a person who said they want to do it, but there are people who do it nevertheless.
> This just seems ripe for selective enforcement if not codified in law.
I agree. I'm not sure how they define "addiction" and how they measure "addictiveness". It is the most important detail in this story.
I see it as similar to the public health crisis created when protonated nicotine salts made their way into vapes along with flavors allowing 2-10x more nicotine to be delivered and the innovation that made Juul so popular with children.
The subsequent effects - namely being easier to consume and more addictive - eventually resulted in legislation catching up, and restrictions on what Juul could do. It being "too good" of a product parallels what we're seeing in social media seven years later.
Like most (if not all) public health problems, we see individualization of responsibility touted as a solution. If individualization worked, it would have already succeeded. Nothing prevents individualization except its failure of efficacy.
What does work is systems-level thinking and considering it an epidemiological problem rather than a problem of responsibility. Responsibility didn't work with the AIDS crisis, it didn't work on Juul, and it's not going to work on social media.
It is ripe for public health strategies. The biggest impediment is people who mistakenly believe that negative effects represent a personal moral failure.
Companies that sell products to the public have managed this for a hundred years. Some are good at it, some are not, some completely disregarded their obligations. This is not all that new.
thats the point
Let's just be honest: if you make enough money, it's legal in America.
Unless you hurt children - then it's mostly legal, with a slap on the wrist.
Nukes are the same as knives, just different in magnitude. Should one have special rules?
I think in America the second amendment makes it legal to own a nuke.
> I'm really not sure what companies should do about it
disassemble the intentionally addictive properties they built into their platforms to maximise engagement and revenue at the cost of the mental health of their users.
Short form video is a different beast altogether, and much more concerning. The fact that these platforms don't offer a way to avoid short form altogether is a big issue.
YouTube allows you to "show fewer shorts" but what if you don't want them popping up at all?
AI Slop is the best thing to happen to these platforms - because it will lower trust and engagement as people (hopefully) become tired of inauthenticity. Rage bait is potent when the event in the video _actually_ happened, but when you realize it was AI generated, the manipulation feels even more obvious (though it was always there).
These platforms should also allow users to understand how the algorithm has categorized them, and be able to configure it. YouTube, Instagram, et al. would be safer places for viewers if they allowed users to tell them what they want to be exposed to, and what they don't. Big tech is dodgy about this currently, because the more control the user has the lower the engagement (good for the user, bad for profit).
That "show fewer shorts" button doesn't do a damn thing. I click it, refresh the page and whala, shorts.
Previously I made a Chrome extension that removes them from the web, but I haven't updated it in a while. Basically it just inspects the HTML/CSS patterns of the Shorts components and removes them from the page. You could probably code/vibe-code a similar extension in 10 minutes.
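A minimal sketch of what that kind of content script might look like. The tag names and selectors here are assumptions about YouTube's markup (which changes often), not the original extension's actual code:

```javascript
// Custom-element tag names YouTube has used for Shorts shelves.
// ASSUMPTION: these match the current markup; they break when YouTube re-renders its UI.
const SHORTS_TAGS = ['ytd-reel-shelf-renderer', 'ytd-rich-shelf-renderer'];

// Predicate: does this tag name look like a Shorts component?
function looksLikeShorts(tagName) {
  return SHORTS_TAGS.includes(tagName.toLowerCase());
}

// In the browser, remove matching elements, and re-run on every DOM mutation,
// since YouTube keeps injecting new shelves as you scroll.
if (typeof document !== 'undefined') {
  const removeShorts = () => {
    for (const tag of SHORTS_TAGS) {
      document.querySelectorAll(tag).forEach((el) => el.remove());
    }
    // Also drop individual feed items that link straight to a short.
    document.querySelectorAll('a[href^="/shorts/"]').forEach((a) => {
      const item = a.closest('ytd-rich-item-renderer');
      if (item) item.remove();
    });
  };
  new MutationObserver(removeShorts).observe(document.documentElement, {
    childList: true,
    subtree: true,
  });
  removeShorts();
}
```

Packaged as an extension, this would be declared as a content script in `manifest.json` with a match pattern for youtube.com; a uBlock Origin cosmetic filter can achieve much the same thing with less code.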
Just kids? Not adults?
I have a somewhat unusual vantage point on this.
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no," you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish — they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted. And it keeps popping up. I know it's easy to build a Blippi block feature using AI that blocks across channels.
That's the kind of solution we need. I know we have the tools; we just need intent and purpose.
[1] https://www.emorahealth.com/clinical-insights/social-media-v...
> if you've ever tried telling a toddler "no" you know how well that works long-term
Parent here. Acting like it’s impossible and you have no choice but to let them have their way is a cop-out. Telling kids “no” and enforcing boundaries is part of the job.
> my toddler is already on YouTube Kids.
> I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted
I have a better solution that I use: If I can’t stay involved enough to monitor what the kids are choosing to watch, I don’t let them loose watching YouTube. They get to go play outside or with LEGOs or do puzzles or any of the other countless activities that are fun for kids.
Creating advanced filtering that lets you block anything related to Blippi (whoever that is) isn't going to solve the problems of letting your kids loose on YouTube. They're going to find another cartoon you dislike. The solution is to parent: set boundaries, enforce them, and find other activities for them.
You're right that enforcing boundaries is the job. I'm not arguing otherwise. And yes, we do plenty of LEGOs and outside time.
I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.
When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.
So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra
YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.
So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.
Just a tangent, interesting that you brought up Blippi. Any issues that you have with Blippi if you don't mind me asking? :D
Ha — the guy is hyper. But I'll give him this: he introduces my kid to garbage trucks, excavators, fire trucks. I'm not physically taking my toddler to see all of those all the time
My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.
So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra. Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.
The technology to let me block Blippi across every channel — and turn it back on when I want — exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
This is real. No matter how much I configure content controls on YouTube for my daughter, she scrolls past everything and ends up on brainrot videos — and then she can't stop. I've felt for a long time that this is by design.
two verdicts in two days, $375m in new mexico and $6m in LA. meta's insurance company already got cleared of covering these claims. if even ten more states follow, meta is paying out of pocket at a scale that actually shows up on the balance sheet.
Are there any takeaways here for builders of social media applications who are not Facebook or Google? Is this a warning to not make your newsfeed algorithm "too engaging" or is it only really relevant for big companies?
I'm not an authority on this matter. But if you say "I can stop any time", and it is not true, then you have a problem.
A good time to (re-)recommend the movie "The Social Dilemma".
I think all important public policy decisions should be left to random personal injury courtrooms, as long as the PI lawyers collect their customary fee. It's silly to let a regulator or legislative body butt in and so prevent the PI lawyers from collecting their cash. So what if you have no say in the result?
Why doesn't this site let me in? Why am I temporarily restricted?
This stop-bot thing can be annoying at times.
WSJ not tryna be found negligent in any social-media addiction trial.
It's gone too far; half of browsing now gets blocked for normal users.
I found myself trying to fill out a captcha the other day whose letters were so skewed and crazy I really had no idea what they were. It took me four tries!
Mandatory age verification is coming.
my thoughts exactly... this "verdict" came with very suspicious timing.
Otherwise known as mandatory identification.
Good. Long overdue
I think this is going to end up with a huge chunk paid out to state health departments for the foreseeable future.
Kind of like how tobacco companies now pay out billions every year and its a major source of funding for states.
Hopefully this means more health services available. But it will just serve like an ongoing tax.
AIUI, this particular case is a pilot for about 2,000 similar (but not similar enough to be combined into a class action) cases. They are not actions by state or local governments for damages they have suffered, as was the case with tobacco.
I expect state governments to follow up.
The similar case about child predators was brought by NM’s attorney general.
I've heard about "landmark" cases against these companies over and over again for the last decade. There seems to be at least one every couple of years. And yet literally nothing has ever happened or changed.
Since these are civil lawsuits, it just takes more people coming forward to sue. There are plenty of cases where a jury found a defendant liable for damages only for the defendant to continue the bad behavior and subsequent juries awarding ever-increasing and compounding punitive damages. Big Tobacco and Purdue Pharma (went bankrupt) are examples of this pattern. Monsanto was famously hit hard with massive "repeater" damages after they continued selling and marketing Roundup despite prior judgements.
The exact same can happen to Big Tech. The goal is to get them to stop the bad behavior now.
I feel the same way. They're just going to appeal the case until they find a layer of the legal system where they have leverage.
This is the kind of stuff that is causing them to push for mandatory identity-verification laws. If they are being held liable for the desires of their users, they're being forced to micromanage the affairs of their customers, which precludes anonymous usage.
Meta is not pushing for mandatory age-verification laws. They are pushing for age-verification burdens to be moved to the OS / app store layer.
Not only that: in my opinion, the many positive reactions to this decision are a sign of a decline in personal responsibility and of people's desire to be managed by the government and treated like cattle. Blaming everyone but yourself for personal problems and failures has become the default for many people.
Why is it bad to want the system to push people towards healthy behaviour but it's totally okay to want the system to push people towards unhealthy behaviour?
I want "the system" to neither push people towards healthy nor towards unhealthy behavior if by "push" you mean "force by law." I want a system that maximizes personal freedom and individual responsibility. I'm fine with "the system" providing advice and nudging, though.
What a surprise! I guess they didn't pay enough protection money. Still, better than nothing.
When I was a kid, tv commercials were heavily censored and the tv channel could and would be fined immediately if something inappropriate was shown.
How is it that these days social media can circumvent all these safeguards and then somehow blame the parents if a kid is watching something inappropriate on an app designed for kids (like YouTube kids)?
The issue is that politicians are beholden to social media companies because they can literally get them or their opponent elected. After reading Careless People, I was amazed at how leaders of so many countries wanted to meet Zuck because he wields so much power.
I really hope this ruling is the beginning of the end of the free rein they've had.
Don't get me started. So many existing laws just seem to be conveniently ignored because... it's 'digital'?
In a lot of countries there are specific laws banning the deliberate targeting of advertising to children (and in contexts where you would reach children, heavily regulated), but for over a decade Meta would allow you to target within the ranges of 13 to 18 years old.
That's to say nothing of the scams and deepfake celebrity ads they let run. Imagine if a deepfake ad of Warren Buffet promoting an investment opportunity ran on TV, the network would get sued into oblivion. On Meta though, there's no repercussions.
I actually quit Instagram because I found it so addictive. Wild that there's a case. Parents need to just take away phones from children. Simple as that.
Great news, but this will probably be the catalyst for more "age verification" nonsense. These algorithms are bad for everyone, not just kids.
> During his first-ever appearance before a jury in February, Meta's chairman and chief executive, Mark Zuckerberg, relied on his company's longstanding policy of not allowing users under the age of 13 on any of its platforms.
> When presented with internal research and documents showing that Meta knew young children were in fact using its platforms, Zuckerberg said he "always wished" for faster progress to identify users under 13. He insisted the company had reached the "right place over time".
Soon, government IDs will be required to use social media sites because parents can't take phones away from their kids.
this has to be the first of many right? fingers crossed this leads to some meaningful change.
You mean it's the first of many appeals, I assume.
Trial courts will decide pretty much anything. Then the case gets appealed over whether the trial court correctly interpreted things you probably perceive as uncomplicated, like the 1st Amendment.
It's a huge deal because it was the bellwether case for over 1,000 other similar cases.
ah yup:
> It comes on the heels of a Delaware court decision clearing Meta’s insurers of responsibility for damages incurred from “several thousand lawsuits regarding the harm its platforms allegedly cause children” — a ruling that could leave it and other tech titans on the hook for untold future millions.
Yep. The insurance covers accidents and negligence, not deliberate decisions to impose harm to children for financial gain.
Sounds too good to be true. I’ll hold my breath.
I wonder at which point do children become such a liability for platforms that it's easier to just ban all children altogether.
Children don't have disposable income to buy ads/subscriptions. They don't have experience to write about. The only thing they have that adults don't is time which translates into engagement metrics.
In an ideal world, the adults that buy/manage the computers would create age-restricted account for children, and the OS would give this information to the browser, which would just transmit it via HTTP. This is the safest method to verify ages. If an operating system doesn't want to support this, it's ultimately the adult's responsibility to install one that supports it. This would mean there would be no burden on the adults (the majority of the planet) to verify their ages, so there would be no burden on the platforms to restrict ages either.
If platforms could verify ages without inconveniencing their main user base, I wonder if platforms would just start banning all minors, or if there is some reason to allow minors in the platform that justifies all the liability surrounding them.
Children are an extremely valuable ad target.
They have their hands directly on their parents heart strings, and their parents have a credit card.
This isn't anything new, think about the toy ads we had on TV when we were young.
I guess you are right. I assumed that something like Youtube Kids would have no ads at all given the audience, but it seems it does have ads targeted at young children. Bleak world we live in.
Nobody takes “age-restricted account[s] for children” seriously.
Parental controls and age restrictions are almost universally half-baked, buggy fig leaves meant to deflect negative attention from software and content providers.
I’ve been thinking about this a lot while building Murmel (https://murmel.social). One thing we wanted to avoid from day one was the “infinite engagement machine” model, so instead of pushing algorithmic slop, we just surface links that are already being shared by people you follow on Bluesky and Mastodon.
It ends up feeling much closer to “what’s interesting in my corner of the web right now?” and much less like a system trying to keep you trapped inside it.
Small scope, obviously, but I think more social tools should feel like utilities, not casinos.
So... should we all sue Youtube and Meta now? This is a semi-serious, follow this precedent to its logical conclusion, question.
Wow, so does this pave the way for massive class action lawsuits? Not familiar with how precedents like this play out long term.
> which could expose the internet giants to further financial damages and force changes to their products

This is so wildly untrue, it's either downright deception on the part of the NYT or they simply don't know how to do math.

In this case the judgment was for $3M. To keep the numbers simple for the sake of this comparison, let's not get into the 70/30 split of this amount and just imagine that YouTube (Google) ((Alphabet)) had to pay the entire amount. Their revenue (again, keeping it simple, don't @ me) for 2025 was $350B. Humans don't typically conceptualize the difference between millions and billions very easily, so let's knock everything down a few orders of magnitude. At that rate they take in around a billion dollars a day. So imagine this as a person who takes in about a thousand dollars a day and has a yearly salary of $350,000 (quite comfortable to live on, while not being obscene).

Now apply the same math to the amount they have to pay, and what do you get? A grand total of $3. I make considerably less than $350K/yr, and I can still confidently say that if I had to pay a fine of $3, that wouldn't just make me not take the fine and the court that issued it seriously; it would have the opposite of its intended effect. I would now see that it costs me next to nothing to keep doing things the way I always have, whereas changing things like the very nature of my business to avoid the fine could have a serious effect.

What the court has done is demand that when Google and Meta spit on their customers, they also throw a few pennies in their change cup to show how sorry they are, while changing nothing and going full speed ahead.
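The scaling can be checked in a few lines (a sketch using the rounded figures from the comment above):

```javascript
// Scale Alphabet's annual revenue down to a $350K salary and apply the
// same ratio to the $3M judgment. All figures are the comment's rounded numbers.
const revenuePerYear = 350e9; // annual revenue, USD
const judgment = 3e6;         // the verdict in this case, USD
const salary = 350000;        // the "knocked down" personal salary, USD/year

const perDayMillions = revenuePerYear / 365 / 1e6; // daily take, in $M
const scaledFine = judgment * (salary / revenuePerYear);

console.log(Math.round(perDayMillions));       // 959: roughly $1B per day
console.log(Number(scaledFine.toFixed(2)));    // 3: the fine scales to about $3
```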
How about optimize for engagement with people you know irl and not influencers and media?
Notably a different case from the other one in New Mexico:
Jury finds Meta liable in case over child sexual exploitation on its platforms
And one with much deeper implications on how they operate. It's easy for Meta to just hire more moderators or treat reports of exploitation with higher priority; if this verdict stands, I think they have no realistic choice but to abandon usage targets.
Realistically they will hire expensive lawyers, pay out hundreds of millions to billions in settlements, fire lots of people (workforce is predominantly American), etc.
Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no amount of future work can swat those away.
Just give people the option to turn off algos. "I do not consent to suggested content"
I don't feel good about this case. On the one hand, I'm all for sticking it to big corporations. On the other hand, nobody has claimed that Meta and YouTube were doing anything illegal, so this case is different from civil suits brought after a criminal case finds someone guilty. This is a case where the jury decided they don't like how two corporations acted and is just giving money to one person. Why does this plaintiff in particular deserve this money?
I've argued in the past that the right way to create the change we want in corporations is to change the laws, and people have made valid points that Congress has basically given up on doing that. But even so, civil cases with fines don't seem like the way to make lasting change. In the analogous tobacco fights, there are LAWS that regulate tobacco company behavior as a result. The civil case here isn't going to result in any law. So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don't get sued? Because apparently the defense of "no, I didn't target that person and I didn't break any laws" is still going to get you fined.

What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we're cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don't like that as much.

I'm taking the libertarian side here. I know plenty of people who don't watch TV and don't use Facebook, and I know plenty of people who recognized that they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I've seen it and done it. I'm just not sure how much responsibility Meta and YouTube bear in my mind. If they're getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old lost out on earning that much money in their life, or would need to spend $3M on therapy, because of Meta or YouTube? That feels a little steep of a fine for one person.
If Meta and YouTube really were/are making addictive products, wouldn't a lot more people be harmed? Shouldn't this be a class-action suit in which anyone with mental trauma or depression is included?
I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?
The case was brought under product liability law.
If I sell you a gizmo, and I know, or should know, that using the gizmo could seriously harm you, and I don't tell you or do anything about it, I am liable for damages you incur.
I'm not sure the plaintiff's mental harm was caused by Meta and YouTube. You can be just as depressed without social media and online videos. And even if it were, other cases that are roughly similar to this have not found the corporation responsible. The parents of the Sandy Hook victims didn't get any money out of Remington, and its product is much more directly linked to harming people than an app. McDonald's was not held liable in 2002 for making people fat, and I am pretty sure the food at McDonald's is more easily linked to our health outcomes than the link in this case.
Should Apple or Samsung be held liable for making the phone that the plaintiff probably used to use these apps? How much responsibility do they bear?
Further, Facebook/Instagram and YouTube are free products from the perspective of the plaintiff. These corporations didn't sell anything to the plaintiff, so can they even be held liable? They did sell the plaintiff's data to advertisers, for which I think you might be able to hold them responsible if they misused that data, but that isn't what this case was about.
I’m not rooting for depression or suicidal thoughts or anything, but this doesn’t feel like the right direction we need to be moving in as society. We can’t simultaneously argue for free speech and freedom of choice and also claim that we aren’t capable of making our own choices to live our lives responsibly.
I was just pointing out that the basic framework of the case is not some novel new idea, and a jury just decided to stick it to a big company out of the blue. There is a long history of this sort of case, and a jury did hear a lot of evidence from witnesses and instructions on the law from a judge.
Some of your examples are not very compelling.
In the case of Remington, there was another party that (presumably) the jury found more directly responsible. Also, the victims of Sandy Hook were not Remington's customers.
Apple does not make you install Instagram on your phone. And I doubt that you could find really compelling evidence that Apple knew in great detail the harms that were being caused and, rather than seeking to mitigate them, instead made them worse in order to earn more money.
I'm not sure there is a requirement that a product be paid for in order to be subject to product liability law.
I think I agree that these product liability cases are not the best way for a society to deal with these problems. I would prefer to see the democratic process arrive at some reasonable solution, based on the desires of the majority of the population. But there has been almost no movement in that direction, and I have my doubts there will be (in the US).
And I think it's important to see that this is about children 16 and younger, not adults.
> Apple does not make you install instagram on your phone.
Meta did not make her install Instagram on her phone.
> I'm not sure there is a requirement that a product be paid for in order to be subject to product liability law.
You're the one who originally used the word "sell" when pointing out this case was brought as a product liability case, not me. And selling something is the first step in establishing product liability. But even if the court allowed a liability case to go forward where there was no commercial sale, Meta and YouTube could have argued that their product would not be considered defective/harmful by a reasonable person (almost by definition, the number of users of Instagram and YouTube makes that argument), and thus they should not be liable for one person claiming a defect.
Like I said before, this should be a class action. One person doing it is a money grab and the jury just wanted to stick it to “big bad tech companies.” I probably wouldn’t care so much if they had found Instagram liable but excluded YouTube, but the fact that YouTube has to pay some of the damages means the jury was not thinking that hard.
I believe social media is on a collision course with an iceberg called Section 230.
Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.
A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.
I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.
Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.
How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.
I believe that all these platforms will end up being treated like publishers for this reason.
So, with today's ruling about platforms creating addiction, (IMHO) it's no different to surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different to changing their views on something.
I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.
This is an opinion and I believe it's wrong. And you just have to look at the statute to see why [1]:
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (2) Civil liability
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
"in good faith" is key here. Here's another opinion [2]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
So far the Supreme Court has sidestepped this issue despite cases making it to the Appeals Court. Until the Supreme Court addresses it, none of us can say with any certainty what is and isn't protected.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://www.naag.org/attorney-general-journal/the-future-of-...
I don't expect that to work, but who knows. Editors "rank", curate, select, present, etc content to people, and have for a long time, and it's always understood to be speech.
Remember, according to that link, 230 does not give platforms any new rights. It simply makes it easier for them to end cases faster and cheaper, that they would have already won on 1st amendment grounds.
Neither you nor I can say definitively what the law is until it's been tested in court, and really until the Supreme Court weighs in, and that just hasn't happened yet. At least I'm saying "this is my opinion" (and, as an aside, I'm not alone in that opinion, as I've pointed out). Condescendingly posting a "here's why you're wrong" link doesn't make you smart. Or informed. Or correct. Just confidently wrong.
Even in this post you contradict yourself. If S230 doesn't grant more rights, why does it matter? If it makes it easier, then it's giving you something, just like anti-SLAPP statutes give you something (and matter).
Also, this isn't a First Amendment issue. Nobody is questioning whether a platform can publish their own content or somebody else's. The issue is liability for what is expressed. Publishing your own content comes under a strict liability [1] standard. Section 230 establishes that publishing third-party content does not, which again contradicts the point that "230 does not give platforms any new rights".
Wouldn't you agree there's a difference between being able to post defamatory or false statements with or without liability?
Why do you believe that "Section 230 differentiates between publishers and platforms"?
Section 230(c)(1) [1]:
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This is a protection for being a platform for third-party (including user-generated) content.
Some more discussion on this distinction [2]:
> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.
It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".
By the way, my view here isn't a fringe view [3]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
This is exactly my view.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://bipartisanpolicy.org/article/section-230-online-plat...
[3]: https://www.naag.org/attorney-general-journal/the-future-of-...
This isn't good reasoning. According to your analysis, any website, ISP, or hosting provider that uses a firewall or Cloudflare is by definition a publisher, since they algorithmically shape traffic to prohibit suspicious IP addresses from accessing content.
Not at all. Intent matters. Is Cloudflare trying to shape user behavior or push a particular position or content? No.
Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.
I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute proof standard because we're used to doing things like proving something works mathematically. But the law is subjective and holistic. It looks at the totality of evidence and applies a subjective test.
And intent here is fairly easy to establish. We could take an issue like Russia and look at all the posts and submissions and see how many views and interactions those posts got. We then divide them into pro-Russian and pro-Ukraine and establish a clear bias. We also look at any modifications made to the algorithm to achieve those goals.
This is nothing like Cloudflare DDoS protection.
They were also designed to addict adults, just saying.
Right, but adults are assumed to be somewhat more responsible for themselves. This is why we don't let kids (legally) smoke or drink, but we do let adults do so. We expect that adults can, in general, say no, and that children are less able to do so.
But it's not absolute. Some drugs are illegal for adults as well, for example. Why? Because they're too addicting.
So are Instagram and Youtube just nicotine, or are they heroin?
Huge if upheld. This was the bellwether case for thousands of other similar cases.
Everyone now posting on social media about how the sentence "Social Media is Addictive" is going viral.
Let me disable short form video content. Jfc YouTube…
There is no personal responsibility left in America. I have a child. It's my job to teach him and watch what he watches and does. I guess I am the only one who thinks this way. Good luck having the parental government raise your child. Parody: I let my child have cocaine and now they're addicted!!!!! Hilarious.
How old is your child? Younger than 6-8 it's easy to monitor what they're watching and enforce limits. By age 9-10 it isn't just about what they access in the home. Many schools in America are giving kids computer and tablet access, and kids are smart or curious enough to access social media there.
I agree that a big part of this is educating children about these hazards, but that also doesn't mean we should allow these companies to data science the shit out of our attention and will power. Many adults have concerning relationships with social media too -- exposure, pressure, and manipulation are key ingredients that are difficult for anyone to deal with.
Yeah, it's too bad there aren't any tools you can use to block any content at your home that YOU personally deem irresponsible /s. I'm not sure what your argument is here. If it's for regulation, then please do some reading on regulatory capture before you hand over your ID card while logging in to respond to my comment.
> Parody: I let my child have cocaine and now they're addicted!!!!! Hilarious.
Cocaine is illegal because it is addictive.
LSD and hallucinogenic mushrooms aren't addictive and aren't legal. Cigarettes and alcohol are addictive and are legal.
Yet I know many people who've done cocaine who are in other respects law-abiding citizens. Making unjust laws makes us all criminals. The government cannot protect people from themselves; no one can. The best we can do is try to educate, and we can't even seem to do that. Good luck out there, buddy.
IMO, parents share just as much blame here, if not more. Giving your kids independence doesn't mean being oblivious to what they're doing online. Too many parents confuse hands-off parenting with not parenting at all.
Have you met kids? They’re devious, tech knowledgeable, and scheming and can find ways around any rule. Plus, no matter how good of a parent you are, you’re somewhat at the mercy of their friends’ parents as well. I can block TikTok from my daughter’s phone, but can’t block her from watching her friend’s phone while she’s out of the house.
I don't think parents going up against psychologists, data scientists, product managers, and software engineers with the best pay in the world is any kind of fair fight.
now do Candy Crush..
I can't help but feel these are "revenge" verdicts. Public perception of these companies is dirt low, and there are so few levers the average person has to change what they feel is an increase in atomization, loneliness, breakdown of civic discourse, Cambridge Analytica level political targeting, misinformation, etc.
Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.
But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.
It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback
>But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction?
Yes? Is there an algorithm or not?
By this logic, your grocery store can be sued for your weight gain because it uses an algorithm to time advertising notifications to your phone if you install the app
Yes, and they should be if they promote products that are known to cause harm. This is why we have labels (or at least attempts at them) to inform users that what they are buying is bad for their body and will harm their health. There is no such thing right now on TikTok, despite knowing it's likely to harm you. The fact that teenagers think it's normal to take a hundred selfies a day is, imo, a direct sign of psychological distress for a young teen.
We don't promote cigarettes (at least in countries that have decent consumer laws) because they harm users; candy should be in the same category. It should probably exist, but it shouldn't be promoted. When social media actively promotes things that cause psychological harm to CHILDREN while being aware of it (as countless studies prove), then yes, screw them, we must force a change.
We should also move forward: imagine if, instead of a thousand engineers and businessmen versus teenagers, we could leverage their intellect to actually help the world (and still make money from them). It is possible, and we must force innovation if corporations aren't complying.
Do more? They have not done anything. These trials have shown they have long had extremely detailed understanding of what is going on with their product, and instead of trying to mitigate the problems, they have intentionally made the problems worse in order to profit more.
What evidence was presented in this trial to show that?
Doritos now liable for creating a good tasting chip? This is madness.
Yeah, people keep making the comparison to cigarettes but to me this is wildly different.
Cigarettes directly cause physical harm and even death. Social media can sometimes, under certain circumstances, depending on who exactly you're interacting with on social media, indirectly contribute to emotional harm.
Cigarettes are also physically addictive. Your body actually becomes dependent on them and will throw a fit if you try to stop using them. Social media is only "addictive" in the loose sense that all fun, mentally engaging activities are.
I'm not saying social media is fine for kids and we shouldn't do anything to reduce their use of it (TV and video games can be equally unhealthy IMO). I'm not even necessarily against legislation on the subject. But there's a huge difference between fining a company for breaking a law, and fining them for making a perfectly legal product "too fun" because you let your kids spend all their time on it and that turned out to be unhealthy.
This type of civil litigation where the courts effectively create and enforce ex post facto laws based on their opinion about whether perfectly reasonable, 100% legal actions indirectly contribute to bad outcomes is not a great aspect of our legal system IMO.
There are different kinds of addiction. The difference is physical vs. mental.
The best example of this is heroin, which has both a severe physical and mental addiction component, and it's the mental addiction that makes relapse so common.
Mental addictions rewire the brain's chemistry, causing the user to seek and only find joy in the substance. This is a better comparison for social media (albeit not as destructive and instantaneously harmful as narcotics)
Everything you do or even just think about "rewires" your brain to some extent. The difference with addictive drugs is that they do so in a way that bypasses your brain's natural processes. The same cannot be said for "addiction" to games or social media, or other entertainment.
There can still be social ills associated with these forms of natural "addiction" (e.g. gambling), and I'm okay with regulating those ills, but I'm less okay with the courts doing so unilaterally based on their subjective opinions with no concrete law backing them up.
One could argue that the ultra-processed food industry is doing exactly what the tobacco industry did with respect to making their product addictive.
There is a difference between creating a food that tastes good and creating a food that tastes good but instantly makes you want to eat the whole bag.
Normally I don't see people walking down the street staring at their Doritos
addictiveness != enjoyment
Although to some extent they're correlated, sometimes the things that are most enjoyable you wouldn't describe as "addicting" and vice-versa.
Eating a nice full meal is more enjoyable than eating doritos on your couch, but you wouldn't describe it as addicting.
If anything, I find my experience of youtube today to be less enjoyable than in the past
"YouTube argued that it was not a social media company and that its features were not designed to be addictive."
Well, that's laughable.
I strongly doubt that "negligent" is the proper word for "carefully designed to induce as much addictive behavior as possible".
This is ultimately about the inherently pernicious nature of unregulated capitalism. Businesses want money. They get that by manipulating you, the consumer, to consume their services. They are "ethically" bound by (given an excuse by) fiduciary duty to pursue profit callously.
The result, in these corner cases where eating people is profitable? Shelob.
Negligent doesn't begin to describe their behaviour.
As long as we continue to value making money for shareholders above all else, such perversions, and possibly worse, will continue to happen. Capital has found all sorts of ways to make all sorts of questionable things addictive in order to sell.
I feel, and I think it's obvious to most, that the only way a society can truly reform is through a shared consensus over its value system. This verdict could be thrown out by the appellate court (I suspect it will be), so this is not the culmination of values that many hoped for.
It does not seem to me that this is a country where consensus on what, if anything, to put above capital will come about any time soon and with capital it's always been ask for forgiveness rather than permission.
The only time true justice happens is when the harm becomes obvious beyond the shadow of a doubt (e.g. smoking), when even a monkey can tell the game is up.
Perhaps if one day we can look into people's brains with the clarity of glass and the precision of electrons, that will be when we all recognize how bad an idea social media was.
When you put something out there, there's a question of ownership for how people end up using it.
- Some think that "if you use it incorrectly, it's your fault" and probably agree with the statement that Palantir is not evil software and that one must "change the administration".
- Some think that "if you use it incorrectly, it's the creator's fault" and then you have safety labels on everything (see Prop 65).
It's a spectrum of risk between the user and the creator. My opinion is that there's enough scientific evidence to show that social media has a negative impact on kids and teenagers, as their brains are still developing. I think a social media ban for kids is a good thing (similar to a driver's license or the drinking age).
If you deliberately design your platform to be addicting then you can't say people who become addicted are "using it wrong" though.
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no" you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish; they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I will give you a simple experiment. Try blocking Blippi on YouTube Kids. Man, it's crazy: even if you block the main Blippi and Moonbug channels, hundreds of channels have Blippi content cross-posted, and it keeps popping up. I know it would be easy to build an AI-based Blippi block feature that works across channels.
That's the kind of solution we need. I know we have the tools; we just need intent and purpose.
[1] https://www.emorahealth.com/clinical-insights/social-media-v...
> if you've ever tried telling a toddler "no"
Parenting is rough! Good for you, for sticking to your guns.
> The plaintiff, Kaley, started using YouTube at age 6 and Instagram at 11.
Who was at the wheel here? If we called up all of Kaley's teachers from this time frame and asked them "were Kaley's parents checked out?", what do you think the answer would be? For as bad as education has gotten, I sympathize with teachers, because parents have gotten FAR worse.
It's not like we don't know these things about people's behavior on devices... maybe it's something that should be talked about in school, along with how credit works and how to file taxes.
Do we need to tell parents "it's 10am, have your kids touched grass yet?"... "It's 10pm did you take the tablet and phone away so they go the fuck to sleep?" --
"touch grass" as a meme/slang is literally people poking fun at the constantly on line. It's "hazing" and "bullying" to drive social correction.
Is the addictiveness of social media great? No. But the blame shouldn't be placed squarely on the companies either. What happened to personal responsibility? I was addicted to Facebook, I realized it, and I disconnected from it. I had withdrawals for a while (pulling out my phone and trying to open the app I had deleted without really thinking about what I was doing) but I quit. I know I am addicted to YouTube shorts, so I stay away from them. Occasionally I'll go on a bender and a few hours will slip by without me realizing, but while I know YouTube is designing them to be addictive, I blame myself for falling for it.
There are plenty of things in life that can be addicting; drugs, sex, money, power, adrenaline, entertainment, technology... The list goes on. If we remove everything addicting from life, you better believe something else will rise up to take its place.
The solution therefore isn't to remove everything addicting from life, but rather to raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop.
Personal responsibility is important. But at the same time, we don't let people open up a heroin shop and then claim it's your personal responsibility to not buy it and use it. We don't put slot machines in schools but tell kids that they need self-control to not get addicted to gambling.
I don't know what the answer is, but it feels wrong to lean _entirely_ on personal responsibility. We live in a world in which we were simply not evolved to live in. People literally make a good living by engineering and exploiting our weaknesses for profit.
> raise everyone with the forethought to know what might be addictive, the self-awareness to realize when you are addicted to something, and the self-control (and support systems if and when necessary) to stop
If only it were that easy. If you've ever known somebody who struggles with a serious addiction you'll know that even when they know it's destroying their life they still can't stop.
Maybe this applies more towards adults, but I don't think the correct answer for kids is only "just have self-control," something kids are notorious for not having. Certainly there's a lot of parental responsibility here but we can simultaneously hold companies responsible for their part too.
It also is a situation where the ubiquity of these companies make it exceptionally difficult for parents to regulate access.
This. Also, technology is ever changing, and expecting parents to constantly keep up with feature rollouts on these platforms is unrealistic.
Personal responsibility IS important, but we also don't allow cigarette companies to advertise on billboards with cute characters (remember Joe Camel?)
The problem is that internal communications inside these companies raised concerns about the manipulativeness, and even deceptiveness of the algorithms and tactics they were using.
They weren't just consciously creating an attractive platform, they were consciously creating a manipulative platform.
Yes, personal responsibility is important. That doesn't mean we need to allow companies to attempt to addict as many people as they can.
The question we should be asking: are these technologies a net-positive to society?
I’m glad you went through that and came out ok.
It seems though, increasingly, that the ability to avoid addiction is less about pulling one up by one’s own bootstraps, and in many ways determined more by genetics. That is to say, what might have been possible for you is much harder for others.
Look no further than GLP-1. People who have struggled for years - decades - with overeating are almost immediately able to cut back on addictive eating. It’s not that they suddenly discovered willpower. It’s a biochemical effect.
It's no wonder, then, that kids are more susceptible to addiction-forming behaviors. Their minds are pliable and teachable.
Why would we not legislate things that take advantage of that?
If they are liable for making the thing addictive, it does mean it is their fault. In this case, it specifically says the product was designed to be addictive to children, from whom personal responsibility is not really expected.
We can't raise other people. We can prohibit the addicting things, like Facebook's algorithmic news feed.
Everyone should at least be a conscientious junkie.
On one hand: sure.
On the other, it's very different when companies explicitly design their products to be as addictive as possible.
We've been through this with Big Tobacco already. Nicotine and other tobacco substances are addictive on their own, but tobacco companies were prosecuted for deliberately making cigarettes as addictive as possible, besides also marketing to children. The parallels with Big Tech and social media are undeniable.
Don't blame yourself! You had an encounter in the world and were greatly affected. Anyone with the same predisposition and the same exposure as you would have fallen into the same situation, just as they would have pulled themselves out of it the same way.
It is not, like, a moral thing to become addicted to something. And the ability to pull yourself out of it is determined, whether you are conscious of it or not, by your broader circumstances and by the same predispositions that brought you there in the first place. At the end of the day we are all fucked-up animals reeling from the ongoing consequences of prematurational helplessness.
We should feel together in problems like this, not distinguish ourselves by how we might individually overcome them. You are not "better" when you find yourself standing over a beggar addict; you are lucky, never forget that. If for no other reason than that it's not a sustainable world view otherwise: it leads to insecurity, anger, and relapse.
The dark truth of the world is that everyone is doing the best they can. How could they not? Why would they not? What is this thing that separates you from the addict or the murderer? Unless you have some spiritual convictions, I can't imagine what it is.
Just really, I know you had a powerful personal journey, but don't let it establish to you that we are all fundamentally alone, because we are not, and its good to help people who maybe need more help.