Meta Designed Products to Capitalize on Teen Vulnerabilities, States Allege
Do grocery stores “capitalize on vulnerability” when they place name-brand products at eye level?
Do carmakers “capitalize on vulnerability” when they advertise pickup trucks as big tough vehicles for tough, outdoorsy men?
Do providers of health insurance for pets “capitalize on vulnerability” when they say you need to buy their product if you love your pet?
At some point people need to be responsible for their own decisions. And I can’t get that worked up about Meta’s free product.
Yes, yes, and yes. However, the three situations you named are targeting adults. Children and teenagers are worth insulating from the full might of advertisers, propagandists, and always-on virtual social circles.
Arguably, adults are worth protecting as well. A lot of marketing and even design intent for adult products is also extremely misleading and intentionally designed to prey on our collective fallibility. In that sense, it's at the very least unethical, and maybe should be illegal. We have some laws about truth-in-advertising, but they're very weak in a lot of scenarios.
I agree with you. I believe that one of the greatest problems American society faces is a corruption of public life on the terms of advertisers and propagandists, who are often the same people. However, I don't have a great solution for imposing a censorship regime on content generation and consumption, so it's more practical to simply pick an age at which the law treats you as capable of making your own decisions and protecting yourself from mass media.
What are you thinking about specifically? Some examples might help others see your perspective better.
Wow, really? Name anything heavily advertised on corporate media.
Insurance, defense contractors, pharmaceuticals, home security systems, cars, body spray, McDonalds, Coke, politicians; it's all fear, status, greed. It's explicitly taking advantage of our vulnerabilities with every combination of precision and blunt force, to sell pointless, stupid, toxic shit. Put on your They Live sunglasses. The planet is burning.
Let's advertise carrots on TV. Let's advertise public transport, and public healthcare. Let's advertise free fucking college.
Even adults are fucked when info explodes, because there are upper limits to what a three-inch chimp brain can do.
It's been studied under many different names: knowledge gap theory, information asymmetry, bounded rationality, etc. The more info adults have to digest, the more bad decisions and exploitation are guaranteed.
I would argue that it is the responsibility of parents to decide how to do that. But if you want a legislative solution, there already is one (COPPA) which prevents people under the age of 13 from joining social media.
If you think those legal protections aren’t fit for purpose (they were created long before social media even existed), then you should take that up with your legislators. I personally wouldn’t trust them to approach that task without implementing something horribly tyrannical, like a requirement for a full KYC process for creating social media accounts. So I’d advise that you be careful what you ask for in that respect.
> But if you want a legislative solution, there already is one (COPPA) which prevents people under the age of 13 from joining social media.
Nothing (besides parents) can prevent people under the age of 13 from joining social media. The kids just lie about their age. Everyone knows how to lie about their age to get something they want, from the time they're in kindergarten.
And I’d suggest that’s absolutely the way you want it to be. The consequences of pushing a full blown KYC regime on all social media are pretty dystopian.
The value COPPA does provide is as a tool parents can use. If they report their underage children’s social media accounts, they will be removed. But of course they still have to actually do the parenting, which is how it should be. I think you shouldn’t want government to take over those responsibilities for them.
Are children and young adults banned from selecting products based on marketing inside stores?
Canada has forbidden the sale of cigarettes in grocery stores, and they've banned the display of any advertising for them in the stores that are still allowed to sell them.
Sure, because cigarettes have clear and unambiguous harms to health.
What makes advertising, I dunno, a remote controlled plane to a kid so much more unethical than advertising it to an adult?
You know, children's advertising in the 1980s and 1990s was heavily researched by psychologists and economists with the explicit goal of manipulating the emotional state of children to get them to bother their parents for toy purchases. I don't have any books or articles off the top of my head, but I've read several articles and analyses of how the pacing, pitch, and coloring of advertisements intended for children were designed to be as stimulating and upsetting as possible. There was also research put into when during the day and year you could play these commercials to the greatest effect, and the advertising slots at certain times were priced higher in response to this.
It is possible that this doesn't harm children. However, the deliberate injection of emotional manipulation and disunity into families with young children in order to stimulate the purchase of toys doesn't strike me as particularly good for the nation, and I don't know that I would have to think very hard about my decision if given the option to prevent it using the law.
I think we're in agreement that we should regulate advertising for things that are harmful for children.
Here is the issue with these kinds of things: where does it end?
Do tv shows target teens too much with high energy music and dancing?
Does media target them too closely with intentionally addictive music riffs, from Taylor Swift to Billie Eilish?
Will we shut down all these video games that clearly target kids with bright colors and, let's call it what it is, "aestheticized violence"?
We need to be careful about how we go about trying to protect children in this regard.
Yes, I agree that enforcement of moral norms requires setting arbitrary limits on behavior that aren't necessarily objectively defensible. Every law and ban regarding the dissemination of goods and services involves a slightly illogical compromise between a partially elected moral authority and the citizenry.
There isn't really a clear answer to your questions about whether or not "we" would continue to censor video games, popular media, and TV shows if limits were introduced about what minors can see on the internet. It would depend on the compromise reached between the citizenry and the enforcement agency tasked with enforcing the bans. In some regards, I feel that American life is far too censored and supervised, but in others I feel that it is far too liberated and unrestricted. I happen to believe that there should be more content limitations on entertainment and social media, and fewer on political speech. If I were ever to meet with a large group of people who shared my beliefs, we would probably take political action to enforce our will for society.
I suppose that doesn't really prove or disprove your assertion that censorship laws would inevitably lead to more extreme censorship, but I think it's important to recognize that there are already many enforced censorship laws in the US, and altering them isn't really the craziest thing.
Ironically, many cereals are placed at eye level of children.
Are you saying the cereal boxes on display at grocery stores are mostly targeting adults?
At least it’s in the real world where you have to be physically present, and exposure is limited.
If you want to talk about kids' cereal, the real issue is TV ads pretending to be cartoons. A subject that has generated considerable concern, debate, and state action over the years.
At some point, we as a society are going to have to seriously engage with the fact that we are now fully capable of manufacturing addiction, and at the moment we do so, in both adults and children.
"Their own decisions" is not a stable concept. Setting aside esoteric philosophy of mind, you need look no further than your own relationship to your phone—tested out for many of us at the Thanksgiving table last week, as duly noted by Chris Ware's cover of last week's New Yorker magazine—to confirm this.
The mechanisms of surveillance capitalism and a foundation of decades of consumer psychology (etc. ad nauseam) have quite literally left us adrift in a world of stochastic mind control. At that same table, many of us encountered the inexplicable world views of relatives whose propaganda bubbles did not intersect our own.
And we all have such bubbles, not least as a result of the cheerful professionalism of many who browse here.
Your decisions, just like teenagers' decisions, are not "your own" in the sense someone might have meant c. 1923. And before one cries "it has ever been thus," to that I say: no, it absolutely has not. Today's technologies for behavioral steering are as unlike what people contended with in advertising (etc.) a hundred years ago as our logistics and energy industries are unlike theirs.
Until we take this on, head on, as a society, the problem will just get worse.
> At some point people need to be responsible for their own decisions.
You're asking this of a group that largely can't vote or sign contracts, and that America doesn't trust to drink.
I can get worked up about Meta targeting children in ways they don't have the experience or knowledge to recognize, let alone avoid. Children should be protected from bad actors like Meta, and let me be clear, any company taking advantage of kids is a bad actor.
Companies take advantage of kids all the time - there's a huge history of this - https://en.wikipedia.org/wiki/Advertising_to_children
The entire toy unboxing industry is built around advertising to children.
Do grocery stores “capitalize on vulnerability” when they ... place junk food at the checkout line to capitalize on your impulsiveness when you're least able to defend against it?
Yes.
You can be expected to make responsible decisions as an adult. That doesn't mean that there aren't bad actors trying to take advantage of you, and that this behaviour isn't borderline unethical.
I should make responsible decisions as an adult, but it's still fair to call that grocery store's actions unethical, and that maybe they should be regulated if they can't be ethical on their own.
Can you define exactly what is unethical about displaying candy in the checkout line?
Should nothing be there? Only healthy vegetables? Is it unethical to sell things with added sugar in the first place? Or a certain amount of sugar per volume period?
The whole sugar/candy industry is unethical in my mind. It may take some decades for society to catch up, but eventually eating sugar the way we do today will be seen like smoking and the tobacco industry.
Why not extend this to basically everything not essential to life? Traveling to tropical destinations is clearly harmful for the world, living in big homes spaced out from each other is another one that causes unnecessary pollution. Selling alcohol, foods high in saturated fat (butter), etc.
If a course of action doesn't even add more product options to buy, just pushes existing ones in a different way, and it contributes to worse health outcomes across a population, and you know it's doing so, then yes, that's super unethical.
Targets have a Starbucks selling 10x as much dissolved sugar 25 feet away from checkout aisles. Are those unethical? How about in a separate Starbucks building, but on a pad site in front of the store with a drive thru?
Seems like an arbitrary place to draw the unethical/ethical line.
This is not, cannot be, and shouldn’t be math. Yes, it’s all “arbitrary”. Unhealthy impulse-items at the checkout are going to be regarded as quite unethical, by a lot of people, for really obvious reasons. The approach you’re trying to use to “disprove” that isn’t how any of this works.
Many things are bad. Some are worse than others. Ones that are intentionally manipulative, as the impulse-buy aisle is, and greedily pushing high-margin products that are also unhealthy? Yeah, that’s an extremely shitty thing to do, no matter how common. The motivation is 100% greed, not delivering a better experience (as simply making candy and soda available in some normal aisle might). And in the Year of Our Lord 2023, every person choosing to create impulse-buy areas knows exactly what they’re doing and the effects it has.
The Starbucks bottles in the checkout aisle are, similarly, bad. The Starbucks that you have to walk over to, look at the menu with calories printed right next to each item while you choose what to buy, then stand in a second line, check out again, then wait to get the drink, isn’t bad in the same ways. It might be bad in different ways, and to a different degree! But it’s not the same, and you’re not going to be able to construct some proof that requires I condemn those equally or else condemn neither, because that’s nonsense both in the specific terms of what we’re writing about, and also because it’s not a useful way to analyze or discuss these sorts of things in general.
>But it’s not the same, and you’re not going to be able to construct some proof that requires I condemn those equally or else condemn neither, because that’s nonsense both in the specific terms of what we’re writing about, and also because it’s not a useful way to analyze or discuss these sorts of things in general.
I think it is useful when ideas about government regulation start coming in (like the poster who I responded to wrote). I do not want leaders to (completely) capriciously determine what is and is not allowed.
Using people as a means to an end qualifies here and is generally considered unethical.
I don't understand what "using people as a means to an end" means here. Sounds like selling something to people to earn money, but that would describe all business.
Using people as a means is a reference to Kant[1].
> that would describe all business.
What if it does? Should we avoid an honest conclusion because it has profound consequences?
Yes, because unless there is an alternative proposal to feed and house the 8B people on this planet, then it is a waste of time to complain about the current solution being imperfect.
We don't need M&Ms in the checkout aisle to feed and house 8B people on this planet. Don't conflate what I'm saying with a general anti-business sentiment.
There are many kinds of economic transactions that occur. The best kinds are the win-win transactions. I have an excess of X and a dearth of Y, you have the opposite, and we swap them to our mutual benefit and walk away happy in the long term thinking that we both made a good deal, and even a third party analysis by experts would agree it was a good deal. This is basically the kind of transaction that happens when I swap $2 for a bunch of fresh cilantro at the market, to cook a meal with. Or when I pay a skilled mechanic a reasonable fee to do maintenance on my car that I can't do myself.
Then there's the other kinds of transactions: the exploitative ones. One person substantially and noticeably wins and the other person equally loses. There is no upside for the loser in the bigger economic picture of our lives. Lots of basic examples of this are common, and some are borderline fraud. An example would be the mechanic who charges 10x what he really /needed/ to make a healthy margin, because he's the only mechanic in town and I'm immobilized by my failed car, and I'm poor, and I'm putting it on a credit card because I don't have the money but have to get the car working to keep my job.
I'm positing that, in a larger holistic sense, the transactions for the candy in the checkout aisle (all the advertising that goes into it as well!) are like that. They're not so much economically harmful: it may in fact be a "good deal" in a basic math sense to pay $2 for the candy's ingredients and manufacturing process. But at the end of the day, they're turning a profit and you're continuing an unhealthy sugar addiction and eventually dying of diabetes. It's a transaction that's explicitly designed to exploit you and harm you for their collective profit. This is not a win-win, at least not in a larger, holistic sense.
Let me see if I'm understanding you correctly. You answered in the affirmative that if something is too hard then we shouldn't consider the ethics of it?
Yes, if the thing you are finding unethical is as broad and pervasive and undergirding as "business".
> At some point people need to be responsible for their own decisions
This is like saying “everything in moderation” in a discussion about nutrition. No shit. We’re trying to find that delineation.
>At some point people need to be responsible for their own decisions.
That point should come when they are no longer children. Targeting children to produce perfect little ecosystem consumers is... kinda evil.
>And I can’t get that worked up about Meta’s free product.
There is no "Free" product. You are paying with freedom, you are paying with attention, you are paying with privacy. It's not "free", it's extracting value from you.
It doesn't cost money, yes, but neither does working, yet we assume that transfer of value is such that it ought to be paid for. It's not Meta offering a "free" product. It's their users. Their users give Meta their data for "free", which then Meta uses for profit.
> At some point people need to be responsible for their own decisions.
We already do that. When they turn 18, we expect people to be responsible for their own decisions.
One normal human with 24 hours in a day, losing 45-ish hours a week to pull median income and another 8-ish per day to sleep, versus multibillion-dollar companies hiring behavioral psychologists and marketing experts who collectively spend many thousands of hours per day finding ways to trick people, and their efforts demonstrably work.
The advertising industry’s a rabid dog the size of Godzilla and should be put down, whether it’s targeting kids or adults.
Not only I'll add to the choir of people answering "yes" to all those 3, but if any of those act in a way capable of redefining the reality people live in, they should be outlawed. Even if they are targeting adults.
Marketing gets a lot of freedom because of the assumption that they only take over a small part of the information a person has access to. To the extent that this assumption becomes incorrect, those actions become attacks.
Yes to all of the above? Advertising is a nasty industry and should be tightly controlled.
There are a great number of regulations placed on products (and their advertising) to ensure customers are informed and protected.
Examples: Drinking disclaimers ("drink responsibly"). Cigarette disclaimers and off-putting mandated packet visuals. The traffic light system in the UK (which displays a colour-coded breakdown warning of unhealthy food macros on the front of all ready meals). Alcoholic beverages by law having to specify their alcohol percentages. Foods by law having to specify their nutritional content and ingredients.
All of these regulations have been introduced to ensure customers are not blind to unhealthy choices (e.g., the traffic light system warning against high sugar content designed to make cheap addictive food). While not always effective, I believe that on balance these regulations make society a better place to live in. Similarly one could envision mandated social media disclaimers and warnings, and to regulate this way would be entirely within the wider norm, rather than something unusual.
Yes, yes, and yes. Advertising is almost always malicious. The days of it being primarily small businesses getting the word out about themselves are long over. Advertising firms now have actual psychologists that study ways to best exploit the brain of the common man, and that is wrong.
Hear hear. It's bizarre to see people like the parent commenter who are so unthinkingly accustomed to abusive and manipulative advertisements that they mistake its perversity for normalcy.
When I grew up in the '90s you kinda wanted ads. Otherwise you wouldn't know what was being shown at the theaters, etc. People put up "No ads except cinema ads and the phone catalogue" on their mailboxes and so on. I believe many people's perception of the world doesn't change much from their childhood, or at least changes very slowly.
Ads stopped being useful about 20 years ago, and around 10 (?) years ago they started spying on us.
The dilemma parents are grappling with is this: tablets and smartphones, while beneficial for children's learning and socializing, also expose them to constant marketing and propaganda, even within the confines of their bedrooms, as they attempt to connect with peers or complete tasks.
Previously, children's exposure to marketing and propaganda was mostly confined to their entertainment hours, during which they watched television or read magazines. There was at least some hope for moderation. However, "apps" have blurred these boundaries, as the same devices used for education and social interaction are also channels for persistent advertising and messaging, making it harder to limit exposure to just "entertainment" time.
> The dilemma parents are grappling with is this: tablets and smartphones, while beneficial for children's learning and socializing
Recent reports from teachers indicate that many children are intellectually behind their peers. A concerning trend is that these children struggle to hold conversations, a problem attributed to their parents' phone/social network addiction. Rather than engaging and raising their children through conversation and interaction, these parents often resort to pacifying them with tablets or phones.
What do you mean they struggle to hold conversations? I was an awkward kid and you could say I struggled to hold conversations but it wasn't due to an addiction to tech. I'm also socially well adjusted now, as an adult.
There can be multiple reasons for the same outcome in a complex system. The prevalence of social awkwardness at certain developmental stages, due to variance in individuals' immutable developmental schedules, should remain stable over short time frames. Unless you think the researchers/teachers are incompetent enough not to have accounted for baseline levels of social awkwardness, the fact that some people are awkward because of genetics or whatever is an irrelevant point to make.
> Unless you think the researchers/teachers are incompetent
Well the original post said
> Recent reports from teachers indicate
I don't think teachers are incompetent but they aren't researchers.
Yeah. I think this is the real problem. Electronics let parents slack off on parenting but electronics do not replace socialization.
> while beneficial for children's learning and socializing
citation needed? or are we just assuming because, well, there's education and social information and apps available on them?
> citation needed
This isn't Wikipedia. It's a casual internet forum, and you don't need someone to come armed with mountains of proof for casual (and obvious) statements.
BTW: One of my kids learned to read by playing a Cookie Monster word game during the pandemic. We've had enough "edutainment" software for a few decades that you don't need to ask for proof in a casual atmosphere.
It's definitely harder to socialize when all your classmates have smartphones and you don't, but you mostly need access to personal messaging apps. TikTok or a Facebook/Twitter feed full of people you don't know IRL are where the problems come from and they aren't needed. If only there was a way of splitting those out into separate apps.
Basically we need more things like Facebook's push a few years ago to show more personal updates from close friends and less mass-shared political posts from organizations.
I think it is hard to argue that you can't learn anything or socialize on your phone.
I take the parent comment's point as "beneficial for children's learning and socializing" as compared to the status quo ante.
Understood. I probably should have said "potentially beneficial". A phone is a tool. It can be used or abused like any other.
This is not what needs to be argued. What needs to be argued is that the form of socializing and learning that can only can come through phones/tablets is worth the negative aspects.
As someone who grew up just before smartphones became a thing, I kinda also managed using books and shit.
no, the claim was that phones and tablets make it easier to learn and socialize, not that you can't without them.
Yeah, and doping makes it easier to win the school soccer game; still, few would consider letting their kids do it. That is because the trade-offs involved are not worth it at all compared to winning by just training harder.
Don't get me wrong, I am not dogmatically against smartphones/tablets for kids and I understand the pressures parents operate under, but if you want to figure out whether it is good to let your kids do X vs not doing it, you should probably take into account:
- what are the benefits? (claimed: easier learning and socializing, but also: parents don't need to deal with their kids)
- what are the downsides? (there are many studies linking e.g. depression to excessive smartphone use in kids; also: parents don't deal with their kids, smartphones are mostly used for consumption, and it's hard to monitor where the algorithm takes them)
- are there any alternatives that have similar advantages while having less of the bad stuff? (I mentioned books, but of course it might even be feasible to limit smartphone use to certain times of the day, etc.)
And then you weigh those for yourself and decide. This was the point of my post before.
Well, the thing you should show is whether a phone allows you to learn or socialize better than you could without one.
They aren't arguing that at all.
Education is debatable, but I don't think there's any argument to be made that social life isn't degraded without a phone. High schoolers won't be invited to things if they don't have access to a smartphone.
There is science driving the design of products to make them addictive.
For teen girls - the apps are designed to scare them about being socially excluded. For teen boys - the apps are designed to fill their need to master skills.
The issues the government has to deal with regarding app addictions are self-harm attempts by girls (e.g. emergency room visits) and the underperformance of boys in the real world (e.g. low college enrollment).
If you are trying to make an addictive app, this is a good reference to understand the science: https://www.amazon.com/Hooked-How-Build-Habit-Forming-Produc...
BJ Fogg is a good reference too: https://www.bjfogg.com
There's a good article about how to fix it: https://www.laweekly.com/restoring-healthy-communities/
(Disclaimer: it talks about my work)
>For teen girls - the apps are designed to scare them about being socially excluded.
Any female magazine ever.
Agreed, but I do think the effects of the addiction are radically different between a social media app and a magazine.
I doubt there is any real difference, it's just a matter of quantity. The magazine comes out monthly or possibly weekly. The computer comes out all the time.
Yes, and moreover, the important point in the article that some people seem to be forgetting is that Meta itself believed that certain design choices led to addictive products and worked to incorporate those designs despite harmful consequences to children and adults alike. It matters much less whether anyone on the outside believes this or not.
Additionally, saying that children and adults should be wholly responsible for this is like saying the Chinese and not the British should be responsible for their opium addiction (see Opium War) and that the homeless in San Francisco should be responsible for their fentanyl addiction. They can always just say no, right?
I worry that if nothing is done, this will only get worse, addiction will become the norm, of one sort or another, and you can just look at history of the Opium War to see where this leads.
> Additionally, saying that children and adults should be wholly responsible for this is like saying the Chinese and not the British should be responsible for their opium addiction (see Opium War) and that the homeless in San Francisco should be responsible for their fentanyl addiction. They can always just say no, right?
This is why I find it funny that FAANG people call themselves software engineers. In the real world, an engineer is wholly responsible for the projects they bring into the world. Imagine a bridge collapses and someone dies. Then in court the family is told that the person was responsible for researching bridge designs before using it. These social media companies are just run by money-hungry a-holes.
> This is why I find it funny that FAANG people call themselves software engineers. [...] Imagine a bridge collapses and someone dies. Then in court the family is told that the person was responsible for researching bridge designs before using it.
They are software engineers though. Engineers build all of our weapons.
The bridge collapsing isn't accidental-- it was the intended outcome. It's a carefully-engineered trap.
This is what happens when you start using the word "addiction" outside of contexts where it applies. You get these kinds of invalid and dangerous arguments comparing actually addictive substances that hijack incentive salience directly on the physiological level to a screen and speakers that most definitely do not.
Gambling addiction triggers the same brain areas as drug and alcohol cravings
https://www.imperial.ac.uk/news/176745/gambling-addiction-tr...
As someone who has had issue with addiction (a real one by your definition as well as screen based one), it's plainly obvious that the brain mechanisms at play are the same.
So does listening to enjoyable music or viewing an impressive art gallery. I assume you're talking about glutamatergic activity in populations in the shell of the nucleus accumbens. (edit: after reading the paper, https://www.nature.com/articles/tp2016256 , I was correct).
And that's funny because in the incentive salience theory of addiction, which they cite at the start of their paper, the nucleus accumbens populations don't encode for wanting, those populations encode for liking. The actual voxels of the brain this study should have been watching would be the ventral pallidum and ventral tegmental area. Those are necessary and sufficient for wanting(craving). The nucleus accumbens is not.
You'd think the director of the National Problem Gambling Clinic who cites the incentive salience theory in his first paragraph would actually take the time to understand the neurological correlates of the theory he's citing (but then again, "It is difficult to get a man to understand something when his salary depends on his not understanding it."). This lack should make you question the other aspects of this study.
Like how a 19-person MRI study might as well not be a study at all in the neuroscience sense. They're for getting more funding to do a study with actual statistical power to make inferences. And note that in the actual paper they don't call it addiction; it's gambling disorder.
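To put a rough number on the statistical-power point, here's a quick simulation sketch. The group size of 19 and the "medium" effect of d = 0.5 are my own illustrative assumptions, not the study's actual design:

    # Illustrative only: two groups of n=19 and a true effect of d = 0.5,
    # i.e. assumptions for the sketch, not numbers taken from the paper.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, d, alpha, trials = 19, 0.5, 0.05, 20_000

    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)  # control group scores
        cases = rng.normal(d, 1.0, n)      # "affected" group, shifted by d standard deviations
        res = stats.ttest_ind(control, cases)
        hits += res.pvalue < alpha         # count runs where the difference reaches p < 0.05

    print(f"estimated power at n={n}, d={d}: {hits / trials:.2f}")  # comes out around 0.3

Under those assumptions a real medium-sized effect reaches significance only about a third of the time, which is why samples that small are better treated as pilots than as evidence.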
There is a for-profit pseudo-science, much like the anti-gay camps of the 1980s, which is spreading unsupported claims using words like "addiction" in contexts where the medical regulatory bodies and journal literature don't believe the concept applies. These people prey on the irrational behavior of parents scared for their children and try to convince them that things like addiction to a website on a screen is possible. They write popular press books, go on talk shows, etc, to keep the meme (and their funding sources) alive. But the DSM and ICD just don't support it. Neither does the recent literature; at least if you stay out of the pay-to-publish, third-tier "journals" these scammers submit their "science" to. And yes, it even applies to media personalities associated with Stanford.
>> These people prey on the irrational behavior of parents scared for their children and try to convince them that things like addiction to a website on a screen is possible.
Saying that addiction to a website isn't possible is unfounded.
People get addicted to online gambling. That's just "a website on a screen." It's clearly possible and it clearly happens.
What are you stating? I genuinely don't get the point. Are you saying that screens/apps don't cause addiction?
He is saying people need to be more techno-optimistic and stop paying attention to all this fear mongering hack jobs about social media algorithms. These algorithms deliver billions of dollars of value on a daily basis. Money and GDP are more important than depressed teens. It's important to focus on what actually matters: market capitalization and profitability.
Poes law almost got me.
I mean, stating it that plainly is what makes it a joke, but it is indeed an accurate description of the choices we have collectively made as near as I can tell.
I thought I was clear. There is no such thing as "internet addiction" or any subset. There's actually not even "gambling addiction" anymore. It's been properly renamed to "gambling disorder". But I guess we're not talking facts here. Instead we're concentrating on how it feels to us. And various ambitious politicians are realizing they can use that collective lay delusion to further their careers.
I am not pro-corporate as some are accusing me. I've never even had a facebook, twitter, or the like account in my life. I think these are terrible services and platforms. But it is even more dangerous to apply a label like "addiction" to them because then politicians think they can treat them like drugs... and we know how dangerous that response is.
> There's actually not even "gambling addiction" anymore. It's been properly renamed to "gambling disorder".
You're incorrectly making assumptions about that wording. They're all disorders now. E.g. a heroin addiction is officially "opioid use disorder" in the DSM. It's probably part of some initiative to be more inclusive or avoid the accusatory nature of the word addiction.
More than that, you're interpreting in the wrong direction. Gambling disorder and substance use disorders were both moved into the same chapter of the DSM-V ("Substance-related and addictive disorders"), reflecting ongoing evidence that gambling disorder triggers reward pathways in the brain the same way that drugs do.
https://link.springer.com/article/10.1007/s40429-014-0027-6 if you want more info on the history of categorizing gambling and other addictive but non-substance-abuse disorders.
I appreciate the correction and the reference. I am surprised that they decided to put gambling disorder with the substance abuse disorders under "Substance-related and addictive disorders". But the bulk of the paper is about how all the other behavioral disorders besides gambling do not have sufficient evidence to include them with the addiction disorders. This continues to support my point re: interaction with websites.
>reflecting ongoing evidence that gambling disorder triggers reward pathways in the brain the same way that drugs do.
Yes, people find things that are intermittently rewarding to have more incentive salience eventually. But gambling with random operant conditioning is not hijacking the neuronal populations responsible for reward prediction (like the dopaminergic neurons of the ventral tegmental area) and activating them in the absence of reward. It is merely reacting appropriately to actual reward as encoded by activation of the glutamatergic populations of the shell of the nucleus accumbens (at least). That's a huge difference... though apparently not big enough to stem the political and social tides.
Ok, it's just such an absurd position I wanted to make sure. So essentially you're arguing about semantics?
Nope. It's important to use the right word in this case for two reasons. The first is the trivial semantic one you've perceived; addiction has a definition, and things like physiological withdrawal symptoms don't exist for behavioral disorders. They aren't addictions.
The second, more important, is that even if we rename it properly to "internet disorder", there's still not significant evidence for making it a behavioral disorder. This is backed up by the lack of inclusion in the DSM-V updates and the ICD-10 updates or the just-released ICD-11. People have certainly tried to have these things included: their income depended upon it. But the science rejected it.
You could also make the same correlations between autism spectrum disorder and the rise of popular (i.e., non-Usenet/IRC) social networks online. But it obviously wasn't caused by them. It was caused by better identification of the phenotype and more accessible treatment. I think the claimed and unverifiable "increase in bad mental health/etc. in teens" is much the same.
> This is backed up by the lack of inclusion in the DSM-V updates and the ICD-10 updates or the just-released ICD-11. People have certainly tried to have these things included: their income depended upon it. But the science rejected it.
It's not in the Bible either. So clearly this isn't a real problem.
Why are we trusting acceptance by a community of gatekeeping charlatans as the final say on whether or not a problem exists? Meta hires psychologists to engineer these very exploitative patterns they deny the existence of. They can't put that in the DSM-V. People would take notice that they're a rehab clinic in the business of selling heroin.
> even more dangerous to apply a label like "addiction" to them
my dude, teens leaving school are crossing the street without even looking up, because they're scrolling insta. Ok so they are dumb teens. What about the crossing guard lady ? She is paid money by the school board to monitor the crossing lane so the cars don't hit the teens. Now, the crossing guard also doesn't look up, because she is scrolling insta too. How do you think all this ends ?
The same argument was literally made against newspapers and how people were ignoring each other and their environment and causing accidents. It turned out just fine.
My concern would be more that it is entraining a sort of consumerist outlook, where corporate values are instilled into a child, rather than addiction. That has always been the case of course, with education preparing the new generation for the workforce. But the use of technology disintermediates the parent from that process.
Doesn't BJ Fogg work at Stanford?
Please correct me if I'm wrong, but it's my understanding that Meta (and most other big tech companies) have long been in the business of hiring a large number of recent social science Ph.D. graduates from top U.S. universities: people with a lot of statistical knowledge and some domain-specific knowledge that could plausibly be applied to the job. The whole purpose of doing this is to create teams of marketing people doing in-house research to figure out how to best manipulate others by maximizing "engagement" or whatever other metric.
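As a concrete (toy) illustration of what "maximizing engagement" looks like mechanically, here is my own sketch of the textbook bandit loop; the notification texts and rates are invented and have nothing to do with Meta's actual systems:

    # Toy epsilon-greedy loop: keep sending whichever notification variant has the
    # best observed tap-through rate so far, with occasional exploration.
    # All numbers are hypothetical; this is the generic pattern, not Meta's code.
    import random

    variants = ["X liked your photo", "You have 3 unread stories", "Your friends posted today"]
    true_rates = dict(zip(variants, [0.05, 0.11, 0.08]))  # unknown to the optimizer
    sends = {v: 0 for v in variants}
    taps = {v: 0 for v in variants}

    random.seed(0)
    for _ in range(10_000):
        if random.random() < 0.1 or not all(sends.values()):  # explore sometimes
            chosen = random.choice(variants)
        else:                                                  # otherwise exploit the best so far
            chosen = max(variants, key=lambda v: taps[v] / sends[v])
        sends[chosen] += 1
        taps[chosen] += random.random() < true_rates[chosen]   # simulated user tap

    for v in variants:
        print(f"{v!r}: sent {sends[v]}, observed tap rate {taps[v] / max(sends[v], 1):.3f}")

Run long enough, the loop funnels nearly all sends to whichever message people tap most, without anyone ever having to decide that it's a good message to send.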
Isn't this just how all big tech companies operate as a normal business practice? Certainly Youtube is no better when it comes to targeted content and advertisements to children to their detriment.
My main point is that I don't think it makes any difference whether Meta has some internal document proving that they specifically target children with these practices. The problem is so much bigger than a single policy or company, and legislatures need to figure out a better way to address the overarching problems. I don't have much faith that these one-off lawsuits will make that much of an impact given that they almost always lead to some fine or settlement that is an acceptable business loss for the company.
I'm all for Meta being decimated by a thousand cuts in the form of lawsuits from various levels of government, but at best it would just be replaced with something else unless more regulation exists at the top levels (US / EU / etc).
It seems like basically all marketing and advertising is human pen-testing. Thought is serialized into video or audio and then deserialized back into thought, which is evaluated. Sometimes this evaluation causes downstream thoughts and actions (including propagating the vulnerability). The question is whether the resulting action is 'organic' or an RCE - overriding the agency of the actor.
I think a core class that should be taught is how to safely deserialize sensory input so as to avoid causing RCEs. Or basically 'patching' these known vulnerabilities.
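To stretch the analogy into literal code, here's a toy sketch; the pickle-vs-json contrast is my stand-in for "unpatched vs. patched" deserialization, not anything specific to social media:

    # Toy version of the analogy. pickle happily runs behavior embedded in the
    # payload at load time; json + validation treats the input as inert data first.
    import json
    import pickle

    class Payload:
        def __reduce__(self):
            # Harmless stand-in for attacker-chosen behavior that runs on load.
            return (print, ("side effect fired just by deserializing",))

    tainted = pickle.dumps(Payload())
    pickle.loads(tainted)  # the "RCE": merely loading the input triggers the behavior

    raw = '{"claim": "you need this product", "urgency": 11}'
    data = json.loads(raw)  # inert data: nothing executes on parse
    if not (isinstance(data.get("urgency"), int) and 0 <= data["urgency"] <= 10):
        print("rejected: validate before acting on it, i.e. the 'patched' path")

The "class" you're describing would teach the second pattern: parse the incoming message into inert claims, check them against what you already know, and only then decide whether to act.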
The nuance with social media ('digital' media generally I guess), is how hard it is for third parties to verify/audit/understand wtf is going on to be able to prove if anything negative is happening.
With broadcast media like TV, I can see what the programming is, and I can watch the same ads that every other house is getting broadcast to know what's being shown to kids (and research companies do this). Similarly for retail media, I can go to a store and see what a retailer is doing.
For Meta, with AI newsfeeds and targeted ads, it's impossible to know exactly what any one person's experience is. I don't know the veracity of this specific case, but at a minimum I think there should be some legislation that forces these companies to be auditable in some way...
Should come as no surprise, honestly.
Above all else since turning public, Meta is in the business of making money. It's not illegal to target user's vulnerabilities in order to get the user to spend more time or money on their platform. It's unethical as hell, but it's business 101 - the shareholders would revolt if Zuck came out and said "here's this opportunity to make you all a ton of money, but we're placing our personal ethics above doing this, so we're not". He'd get sued for breach of fiduciary duty.
Now, are Meta's product strategies unethical (or questionably ethical), harmful to society, and setting bad precedent? Yeah, I'd agree with that. But the market and shareholders like money.
> It's not illegal to target user's vulnerabilities in order to get the user to spend more time or money on their platform.
Perhaps it should be illegal to target children in such ways? I'm tired of this argument that companies should be able to do whatever they wish in the name of profit; they need to be reined in with strong regulations.
It's also BS. Companies have branding and PR teams because they know it matters for profits. The issue with Meta, Twitter, etc. is that their real customers are advertisers. They can piss off users and just pretend some % of bots are users. And let's be real, few marketing companies are held accountable for their actual level of impact.
I didn't read the comment you're replying to as endorsing Meta's actions, but rather stating that it is the only thing you should expect given the current lack of regulation.
I don't really disagree with anything you said, but I would suggest that this kind of thinking is largely at the root of a lot of society's problems. Big corporations have become almost as powerful as governments in many respects. They are integral to our lives in a myriad of ways we probably don't even notice. Yet we don't just allow them to be completely amoral; we expect them to behave that way, almost demand that they do.
The result is that we have a lot of amoral institutions playing a key role in our society.
> shareholders would revolt if Zuck came out and said
Zuckerberg has super-voting shares that give him control over Facebook [1].
[1] https://www.reuters.com/breakingviews/zuckerberg-motivates-s...
Just as putting actual cocaine in Coca-cola at first also optimized shareholder value. And it was perfectly legal, and not really even considered unethical - heck, it boosted energy, the user gets a beneficial service!
Of course, we know how that worked out. What is galling is that Meta absolutely knows it is creating a bunch of cocaine addicted children.
Or rather, Coca-Cola was made using the coca leaf and the kola nut for flavor and energy reasons. The coca leaf has trace amounts of cocaine in it, but it's not like they were mixing cocaine powder into their drinks. They still use the coca leaf today, but with the cocaine part bred out.
Bringing up wrong historic points about "evil capitalists" doesn't really help your case against Meta.
Hmm, let me scan my words again - did I say they were evil capitalists or used cocaine powder? ...Nope don't see that anywhere.
They used a natural product that contained cocaine. They very much "put this in" their drinks. The analogy is actually excellent because probably the early makers of Coca-cola didn't understand the dangers of what they were doing, either.
Many of those shareholders are themselves Meta users or have kids who use Meta products. Crazy what kind of masochism "the system" encourages.
> "here's this opportunity to make you all a ton of money, but we're placing our personal ethics above doing this, so we're not". He'd get sued for breach of fiduciary duty.
Having a social media company that's a B-corp would make for a nicer world.
This is like saying - hey, let everyone buy their guns without background checks, because the shop has to make money!
If background checks weren't legally required, no store would be doing them. Every flagged check is a lost sale after all.
I for one hope this case reaches the Supreme Court and is struck down as egregious government overreach. This case has no proof of net harm to teenagers from social media.
This case is basically projecting everyone's misplaced hate of social media without doing a proper controlled experiment on its benefits/harms to society.
You can't do controlled experiments on humans and hence the states have no case except overreach. If they really want to cater to their constituents then pass specific laws.
The data is well established that social media use by teens leads to worse mental health outcomes. I'm a parent, and as my kids near the age at which social media becomes a thing, I started digging into it. I had assumed it would be vague and filled with underpowered studies, but it's not. Social media is bad for kids and the data is very clear on it.
This post has a list of the some of the better studies and gives a good synthesis of the results:
https://jonathanhaidt.substack.com/p/sapien-smartphone-repor...
This article was a good rebuttal to Haidt's post: https://reason.com/2023/03/29/the-statistically-flawed-evide...
That's a good analysis, but I don't find it convincing. He's trying really hard to disprove Haidt's post by poking holes in many of the studies. If you look at 386 studies in the social sciences, of course you'll find issues with the analysis or design of many of them.
The larger trends ("most of the effect is driven by teen who use no social media", etc.) aren't supported by the data he presents (look at the table of "social media time" -> Depression for example).
Are the researchers who look into this problem predisposed to finding a connection? Probably. But I do think the open, community based analysis Haidt led was done well and if you look at what they found digging through 386 studies, it's compelling.
> He's trying really hard to disprove Haidt's post by poking holes in many of the studies.
Because this is how evidence based reasoning works. If the evidence that is supposed to support the hypothesis is fundamentally flawed, then our hypothesis doesn't actually have any support.
The fact that we can apparently so easily find flaws in what is supposed to be empirical evidence should make us more cautious about drawing firm conclusions. In fact, low quality literature is something of a plague in many social sciences at the moment (e.g. the replication crisis in social psychology).
This post presents a decent discussion of this sort of issue https://www.cremieux.xyz/p/beware-the-man-of-many-studies
I can prove the same correlation with the rise of
a) Broadband Internet
b) Transgenderism
c) iMessage
So, why Meta?
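To make that concrete, a toy sketch with fabricated numbers: any two series that merely trend upward over the same years correlate strongly, whether or not either one causes the other.

    # Two independent, made-up upward trends over the same years correlate almost
    # perfectly; the trend match by itself is weak evidence of causation.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(2008, 2023)

    teen_depression_index = 5 + 6 * (years - 2008) + rng.normal(0, 3, years.size)
    broadband_subscriptions = 40 + 3 * (years - 2008) + rng.normal(0, 2, years.size)

    r = np.corrcoef(teen_depression_index, broadband_subscriptions)[0, 1]
    print(f"Pearson r between two unrelated trending series: {r:.2f}")  # ~0.99

Raw trend matching alone is weak evidence either way; the interesting question is whether the studies go beyond it.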
No you can't. Read the linked studies. "correlation doesn't equal causation" isn't a magic spell that disproves all research.
For more, here's an open, collaborative review of 386 studies: https://docs.google.com/document/d/1w-HOfseF2wF9YIpXwUUtP65-...
Thank you! It is well established that teenage girls’ attempted suicide rates rose with the advent of social media.
I don't think there are any smoking-gun causal studies, but the correlational evidence is very strong.
Isn't this largely a US phenomenon?
It has been studied mostly on US adolescents. There is some research on Dutch teens too.
I would say, rather, that this effect is better studied in US teens, so we can say more conclusive things about US teens. In addition, there is some indication that the effect could also be present in non-US teens and should be further studied.
Thanks. Most things I have seen have the US (and perhaps Anglo-Saxon countries) standing out (e.g., https://www.psychologytoday.com/us/blog/the-resistance-hypot...), but that could indeed be evolving.
That makes putting it on social media alone much more tenuous, though.
A lot of the research says that social media isn't a cause, but more of a catalyst in the presence of other factors like cyber bullying
This reminds me of similar effects: people attributed online misinformation mostly to right-wing people, but being right-wing was actually only a catalyst when a predisposition for chaos was also present. Or when de-policing and federal investigations were blamed for a rise in crime, when the rise only appeared where a particular district also had a "viral" event.
I don't think social media is a causal factor in itself, but it is definitely a catalyst factor in the presence of other things like wealth inequality, clout chasing, and cyberbullying.
I don't know about you, but my Facebook is littered with everyday pedophilia, mostly generated and edited photo/video shorts with literal children doing erotic dancing in prostitute clothing. I don't know, did I click on something wrong? Some ads lead to obviously adult pay-per-view accounts on third-party sites - you're a click away from porn. Many posts, appearing for some reason clustered - I guess when their advertisers' budgets allow - are Russian propaganda with armies of bots liking posts about "how great Motherland Russia is". And no, blocking, disabling, and going out of my way to tell Facebook I don't like this content and that I'm here only for astronomical communities doesn't help at all. Being a Ukrainian father living abroad, this not only irritates the hell out of me, but makes me deeply sad about teenagers encountering all this crap. This is capitalizing on children's vulnerabilities. This is making money on the propaganda of terrorism. I think Facebook should burn in hell, and I wish good communities, whatever is left, would move out of there and find a new home ASAP.
This has nothing to do with the lawsuit
Teenagers started having extreme and rampant mental health issues as soon as social media became pervasive. I’m sure it’s just a coincidence though.
Do you think the rise in rates of diagnosed autism spectrum disorder over the same time period reflect an increase in actual expression of the phenotype in the population? Or was it from an increase in diagnosis and treatment?
Do you think the rise in this premise of "rampant mental health issues" reflect an increase in actual expression of the phenotypes in the population? Or was this from an increase in diagnosis and treatment of mental health issues?
It was bound to happen with any communication platform teens use on the internet; it was already starting in the early 2000s with text messaging, and definitely with MySpace.
MySpace is social media
Actually, if you look at the data, the debut and rise of Taylor Swift is obviously causing the teenage mental health crisis.
The mental health issues correlate with the rise a lot of things besides social media. Certain medications, for one. Correlations aren't causation, of course.
I for one feel no obligation to your existence; no need to provide you food, shelter, healthcare.
It would be government overreach to force such. Since I have no obligation to your existence itself, who cares what you have to say, or your philosophy? You’re just some pointless meat suit I have no responsibility to.
There you go; you got exactly the world you project you want.
The government has no obligation to provide food, shelter, and healthcare.
It is there basically to provide security, insurance against calamities, and proper enforcement of laws based on the constitution.
I strongly disagree. There are many groups in our society who shouldn't have to rely on charity for survival, like the disabled, the elderly and children. Letting those without the means to provide for themselves die if they're unlucky, even though our economy can support them, is a terrible thing and shouldn't be an acceptable point of view.
That's exactly where the "Insurance" part comes in.
Any act of "god" where pure bad luck is involved, some central entity should absolutely take care of them.
However, the government should also not be in the business of propping up bad choices -- diet, family formation, bad lifestyle. Any society that is given unconditional support will degenerate.
> That's exactly where the "Insurance" part comes in.
How does that work for people who have never been able to work (i.e. children in bad households and people with disabilities from birth/young age)? They can't pay for membership, so why would a private insurance company insure them for free? What if all insurance companies happen to not accept them as customers?
> However, the government should also not be in the business of propping up bad choices -- diet, family formation, bad lifestyle. Any society that is given unconditional support will degenerate.
How do you guarantee that no bad choices are propped up without letting individuals die due to missing coverage?
Lots of children in bad households have turned out to be OK, and plenty of children from well-off families haven't.
Disabilities from birth or a young age are an act of "god". It's perfectly fine to take care of those people. However, they only account for 1-2% of the population.
Death due to missing coverage is overrated. All emergency units take in patients, and most people don't actually pay those exaggerated medical bills.
> Lots of children in bad households have turned out to be OK, and plenty of children from well-off families haven't.
And many of those children in bad households have been able to get where they are due to being helped by the state, which you do not want. So what do they do? Just die? If you don't give a concrete solution, that will be the outcome. Remember that we're talking about basic needs like food and shelter - a child can't do much about not having these available.
> Disabilities from birth or a young age are an act of "god". It's perfectly fine to take care of those people. However, they only account for 1-2% of the population.
Okay, then please explain to me what central entity should help these people, if it's not the state. As I previously said, what if all private insurance doesn't take these people on for free? What do they do except die?
> Death due to missing coverage is overrated. All emergency units take in patients, and most people don't actually pay those exaggerated medical bills.
I am not solely talking about health insurance. You stated that you want to solve food and housing problems with similar insurance. Why would these private insurance companies give coverage to those who can't afford it? Please don't handwave this point away.
That’s true; we keep giving unconditional support to our Constitution based society and it’s a degenerate mess.
Isn't this the world we live in? Taxes for the common good, but the government doesn't force you to work and doesn't force you to directly provide for others.
Well-intentioned but still moronic socialist/communist projects fail for a reason. They fail to understand scale, human psychology, and the basics of economics.
Whole lot of capitalist projects fail. Moronic capitalism and all those failed startups, the resources wasted on them! Most new capitalist businesses fail in their first year! Morons everywhere!
Market of goods and services with prices determined by social behavior among buyers! Sounds like socialism!
I know language skills alone make it feel like yer smurt but when you go compose dumb logic, eh.
Age requirements for cigarettes and alcohol and driving are also insane government overreach. No one has ever done consistent and conclusive research that I have read and approve of that clearly and undoubtedly demonstrates that cigarettes and alcohol are bad! Or that vaccines are safe! And guns! Don't forget about guns! Children should be allowed to protect themselves! (And if any of you try to give me "evidence" then I'll know you're a liberal groomer woke.)
Here is the now unredacted complaint:
https://ia800508.us.archive.org/12/items/gov.uscourts.cand.4...
Employee names are still redacted. Given Zuckerberg's views on privacy, one wonders why they should remain "anonymous".
Now, please, let’s discover and discuss how TikTok is designed … I think it’s even worse.
The problem is the freakin' advertising-based monetization model. It makes kids the product instead of the customer. No wonder they turn into addicts.
This is why I support a TikTok ban -- we can't discover that. Not with the U.S. justice system and press, anyway, not in any way that matters.
The problems of social media go far beyond exploiting teens. It used to be you only saw mob behavior when a large group got together. With social media, you can have a virtual mob going all the time, and ready to materialize IRL for kinetic impact. We now just shrug when things like this happen (Jewish high school teacher forced to go into hiding because of students rampaging).
https://nypost.com/2023/11/25/metro/jewish-teacher-hides-in-...
Disclaimer: I worked at Meta for a time (not in areas relevant to this lawsuit). My experience was that many people working there also have children, cared deeply about this issue, and wanted to find ways to solve it.
The only enforceable claim is the one regarding usage by minors below the age of 13. All other claims are "soft" violations. Legal but unethical.
How do you regulate legal but unethical? You can't. So let's make it illegal. But how?
Maximum notifications per day? Deep introspection of the actual content? Good and bad influencers? Curfews? It's impossible to codify this into law, unless you're China.
Nowhere else on the internet will you find so many people defending such obviously bad practices. With such blatantly obvious biases, too.
Well... There's a lot of people defending an active genocidal land grab right now, but I take your point. And guess what - Meta is aggressively supporting that too.
Not saying this to start a flame thing, just advocating a sense of perspective for the sake of 8,000 murdered children, their mothers, and our shared humanity.
Boy, this is so different from the MMA accusations against X. The WSJ fully reported on their methodology, gave Meta months to change, and partnered with a third party to verify the results. The advertiser responses to their ads showing next to arguably much more objectionable content here are quite different.
I would pay for an ad-free version of Facebook, Insta, and Twitter/X with control over the algorithm.
With Twitter, even if I pay, I still get the same number of ads.
I want to customize what is shown in my feed.
I'd argue that Facebook itself is protected 1A speech (as are the recommendations of the YouTube algorithm). It's not a consumer product, and it's not a defective one. Parents have parental controls, and they should educate themselves on how to effectively use them.
I suppose that the broader concern is over precisely what duties a company has to its customers. They obviously have the duty to be truthful when making offers, but every customer relationship will have an adversarial component where each party benefits at the other's expense (or at the expense of third parties). In cases like a bar serving alcohol to customers, there's usually some responsibility to prevent patrons from getting extremely intoxicated and getting in a car. But that case involves a clear signal that someone is dangerous. Facebook doesn't know if someone's grades are suffering or if they're having mental health issues. It doesn't know if it should tell the user to "touch grass".
Marxists have long argued that the problem with capitalism is not that it's the cause of humanity's social problems, but that it systemically exploits humanity's social problems.
Yet another hack job about the dangers of algorithms. Do these journalists not understand that Meta is worth at least several billion dollars? Do they not get how much value Meta and all its products have delivered to people all over the world? Just terrible reporting all around.