YouTube actively deletes anti CCP comments
Recent discussions:
https://news.ycombinator.com/item?id=23317570
This one is another word.
That difference isn't enough to support a significantly different discussion, so I don't think it counts as SNI (Significant New Information), which is the test we use. See past explanations at https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
The current submission is a nice test case actually. The comments here are no different than the ones in https://news.ycombinator.com/item?id=23223219, which had over a thousand comments and was only a week and a half ago.
Btw, since someone always wonders: no we're not doing this because we're communists. It's a question of curiosity and repetition not going together (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...). HN is for curiosity (https://news.ycombinator.com/newsguidelines.html).
However tiresome it can get, there is value, or at least meaningful information, in repetition itself. The constant desire for 'new' can be harmful as a phenomenon. I suspect many politically unpopular bills/initiatives do pretty well on repeat, once the 'newness' of the controversy has disappeared.
I do appreciate the work y'all do to keep this place nice, but I also hope you keep this in mind. I feel HN is 'influential' enough that being too ruthless about optimizing for 'new and interesting and curiosity-focused' might possibly diminish the values of HN as a spotlight/platform for important issues. I would pick the curiosity side though if I had to choose.
I think that's about right. I don't want to give the impression of being simplistic about this; it's just that if HN is influential, the best way for that influence to function is if we neither focus directly on it nor try to exclude it, but leave it in peripheral vision.
There's a related issue, which is that the better HN gets at its core thing (curiosity), the more the audience grows in quantity and/or quality, and then the more people then want to use that audience for something else. Sometimes that's to promote their company or event, sometimes it's to bring attention to some other matter—maybe more important than what's actually on the front page here. The more those things become the focus, though, the worse HN becomes at the core thing, so there's a sort of paradox where the better it gets, the worse it gets.
What seems to work is to focus on the core but not too rigidly. That's good because rigidity turns into predictability, which is bad for curiosity anyhow.
I don't think anyone is accusing YC of being communist; they're more concerned because of its desire to move into the Chinese market. And there's a clear correlation between businesses moving into China and the censorship of anti-Chinese voices.
Well, where to begin. First of all, YC left China. Second, there is no dearth of "anti Chinese" voices on HN, which is a problem, because nothing as crude and nasty as "anti-Chinese" belongs here. I wish people would stop and think about this for a second or two. Do we really want to hound people of Chinese descent out of this community? Is that the kind of human you/we want to be? Because that's the mob behavior this amounts to. From my perspective, Chinese people have as much of a right to participate here as you or I do, and people who have a problem with that are in the wrong. That's not because I favor China. I just favor decency.
Third, this site has a serious problem with nationalistic flamewar, in which people vent their anger against another country in cartoonish ways. That is wrong and stupid, but worst of all it is tedious. On a site dedicated to curiosity, anything tedious is off topic.
Recent and relevant https://news.ycombinator.com/item?id=23223219
This is even stranger than the NBA tweet that started all of the controversy at the beginning of the year.
Censoring messages about an arm of a government where YouTube is banned has me wondering what is going on behind closed doors.
I love a good conspiracy theory as much as the next guy. But isn't the most likely explanation that individual YouTube moderators have been bought off, rather than this being a policy directed by senior Google management?
How hard would it be for Chinese intelligence to recruit YouTube moderators, and offer them a briefcase filled with unmarked bitcoin in exchange for deleting the comments that they flagged?
> How hard would it be for Chinese intelligence to recruit YouTube moderators
I think it's much simpler than that. There are also just millions upon millions of very nationalistic Chinese citizens (living globally) that would happily abuse the mod button.
There are Google employees... in China... that know exactly how the company works and how to game the system. Maybe they need to be "pressured" by the Chinese gov't - but I wonder if that's even necessary.
Social media is rife with people who will advocate for their country for free and without being asked to.
> abuse the mod button
If someone insults them in the comments and they report it, are they even abusing the system or are they using the reporting mechanism as intended?
I guess they're using it as intended there. But if they're using it because they don't like factual anti-China criticism, that's abuse.
It depends. YouTube's comment section doesn't have a particular editorial purpose, the closest thing it has to a mission is letting people comment on each other's videos for some vague sense of "community."
That's not an objective fact-oriented mission, and a moderation system that allows users to flag and kill comments that make them feel bad is still satisfying the constraints of the design.
I'm sympathetic to this line of thinking, even though I don't necessarily like the direction the rhetoric leads. Google offices in America also employ a great number (and proportion) of Chinese nationals.
Isn't bitcoin, by its very nature, "marked"? Perhaps you should change your conspiracy to use Monero or other privacy focused coins as the medium of bribery. :-)
> I love a good conspiracy theory as much as the next guy. But isn't the most likely explanation that individual YouTube moderators have been bought off,
Isn't that also a 'conspiracy theory'? What does this term mean to you?
This is an aside but your description of "a briefcase filled with unmarked bitcoin" is hilarious.
More topically, this was automated so it's not like a few bad actors who manually delete comments were to blame.
I think this is just a case of fighting low-quality comments rather than deliberate censoring of anti-CCP political speech.
Youtube is not supporting CCP. They regularly delete videos that are even mildly supportive of CCP, for example https://twitter.com/rachw82451432/status/1265308476034519040
Another example is the 'Fighting terrorism' documentary by CGTN, which has been deleted and reuploaded many times now.
Heck, I don't even call the above examples 'pro-CCP'. They just show a different point of view that isn't anti-CCP.
I see people here claiming something along the lines of: all anti-CCP comments are valuable examples of freedom of speech. But let's be honest here. Were it nearly any other topic, people's usual opinion of YouTube comments is that they're a cesspool. While there is indeed valuable anti-CCP commentary out there, some really is not worth reading and just degrades the quality of the website.
I don't think YouTube fights low-quality comments like that. Comment moderation is generally left up to channel owners. YouTube just provides tooling like their spam filter, word blacklists, and "pending approval" settings. However, channels can choose to enable those or not.
> I think this is just a case of fighting low-quality comments rather than deliberate censoring of anti-CCP political speech.
As opposed to what comments? I've never seen a non-low-quality comment on YouTube.
High-effort spam. I've seen URLs advertised by combining 3 accounts worth of ascii-art.
It's almost an art form in itself, like when World of Warcraft trials didn't allow chat and spammers would just sign in to a hundred accounts and die in the shape of the domain name with all the corpses.
I have. Ironically, in non-anti-CCP videos, which tend to attract people who are tired of western narratives of China.
Their comments still aren't Hacker News quality, but they are far above the Youtube average.
For example, check the comments at https://youtu.be/ufxfSJgQuSI
> deleted and reuploaded many times now.
I wonder, how many times did they go through their three strikes? With the number of videos Google has banned, the strikes across all of their channels should now run into the hundreds.
An ordinary content maker may well be banned after one. Evidently a velvet-glove treatment.
I don't know, maybe as a journalistic organization they have a special deal with Youtube.
But if I read between the lines of your comment, you appear to be saying that all of CGTN's videos are propaganda that should be dismissed out of hand. I don't agree with that notion. Whether you agree with their views is another story, but I do think it is valuable for their story to be at least heard.
From Wikipedia: CGTN "is an international English-language news channel based in Beijing and is also referred to as a mouthpiece of the Chinese government." [0]
Funded by the state is correct. Being a mouthpiece is an opinion.
I find most of their reports to be pretty balanced and factual, even if they have an obviously mainland Chinese perspective.
Before concluding that their content is propaganda that should be dismissed out of hand, why not verify for yourself whether it's that bad? Their video content speaks for itself. You don't even have to agree with them.
The content produced by Liu Xin in particular is pretty high quality. She regularly invites guests -- even western ones -- onto her show, who provide non-Chinese perspectives. Sometimes her guests even disagree with her.
If you disagree, I would love to hear why.
Sometimes, when analyzing bias, it's more important to pay attention not to what is said, but to what isn't ever mentioned.
For example, it's easy to be factual and accurate when you can pick subjects that don't touch anything in a list of sensitive spots.
Another thing to keep in mind is the intended audience. If they don't expect more than, say, 0.1% of the Chinese population to watch that content, they might relax the censors. There are half a dozen cases of Brazilian rock records sung in English where the international version wasn't censored, but the Brazilian version was toned down considerably.
I will admit that I haven't watched their content, though, so feel free to give a rebuttal.
I agree that 'things that aren't said' is something one needs to look out for. This goes for both sides of the media. There is a lot of mainland Chinese perspective that the western media doesn't cover. That is why CGTN is valuable. Granted, they have their own omissions. Nothing we can do about it -- all journalists have their own biases. Still, I think watching both is more healthy than just watching one.
"Who at Google decided to censor American comments on American videos hosted in America by an American platform that is already banned in China?"
Probably no individual. There are enough Chinese pro-nationalists using YouTube to generate noticeable signal if they all decide to start flagging posts, whether independently based on their political creed or as an organized brigade. Once the flagging begins, the relative rarity of the characters in question, combined with the flagging signal, would generate a Bayesian prior that comments containing the phrase tend to get flagged, and the system would preemptively start killing those comments.
This is one of the ways to train an automatic moderation system that is capable of discovering novel words the community decides are swears, and brigading is a known pathology that those systems are susceptible to.
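The mechanism described above can be sketched as a toy flag-rate filter. Everything here (class name, threshold, smoothing priors) is an illustrative assumption, not YouTube's actual system; the point is only that a high enough flag rate on a rare phrase, whatever its source, is enough to flip it into auto-deletion:

```python
from collections import defaultdict

class FlagRateFilter:
    """Toy auto-moderation filter: phrases whose comments get flagged at a
    high rate are preemptively deleted. Hypothetical, for illustration only."""

    def __init__(self, threshold=0.8, prior_seen=10, prior_flagged=1):
        # Smoothing priors so a rare phrase isn't judged on a handful of events.
        self.seen = defaultdict(lambda: prior_seen)
        self.flagged = defaultdict(lambda: prior_flagged)
        self.threshold = threshold

    def observe(self, phrase, was_flagged):
        # Record one comment containing the phrase and whether users flagged it.
        self.seen[phrase] += 1
        if was_flagged:
            self.flagged[phrase] += 1

    def should_delete(self, phrase):
        # Estimated P(flagged | phrase) with the smoothing baked in.
        return self.flagged[phrase] / self.seen[phrase] > self.threshold

f = FlagRateFilter()
# A brigade flags 100 of 105 comments containing a rare phrase...
for i in range(105):
    f.observe("五毛", was_flagged=(i < 100))
print(f.should_delete("五毛"))  # → True: the phrase now auto-deletes
```

Note that the filter has no notion of *who* flagged or *why*; a coordinated brigade and organic community distaste are indistinguishable from inside the statistics, which is exactly the pathology being described.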
While I think this is exactly what's happening, it demonstrates that we need more human-in-the-loop interaction verifying the decisions of the machine learning algorithms. I don't think anyone who works in the space would disagree that this method is highly susceptible to trolling. I mean, look what happened to Tay [0]. You have to have some mechanism where humans are checking on how the system is learning.
The big question is: was this a recent effort to flag these phrases or was it a gradual thing? If it is the former, I think it is easy to forgive Google as things move fast. If it is the latter I think it brings questions about fundamental methodologies.
I am being intentionally ambiguous about what is being classified because there are similar complaints about other subjects so I want to generalize.
[0] https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...
> There are enough Chinese pro-nationalists using YouTube
And they aren’t any more nationalistic than Americans. Reporting this slogan is the equivalent of, say, an American reporting “Drumpf is Hitler” or “Hussein Obama”. These are all dumb slogans which are spammed just to get a political reaction. Different people get offended about different things, that’s just something you have to deal with in a big community.
If there are so many Chinese people using YouTube, wouldn't their signal behave no differently from that of other nationalities?
It's not so much a function of how many Chinese people are using it as how many instances of the word being posted result in a comment being flagged.
Intersections such as "The word is rarely used, but when it is used it happens in a political setting where someone is more likely to decide to hit the flag button" would train an ML algorithm that the word is unwelcome in general.
Couldn't you just have bot accounts that search YouTube comments for key phrases and flag those? Simply getting enough flags results in auto-removal, regardless of YouTube's decisions on the acceptability of these words/phrases. This wouldn't be very hard to set up either.
No, no it would not. ;)
One challenge is that Google's actually got some pretty solid signal to find and kill bots. But it's not impossible to botnet their services; just harder than doing it to the average online service that doesn't have an army of engineers who've trained on the adversarial space of people trying to automate ad clicks for real-money revenue.
This sounds like a reasonable conclusion to me. Anyone here work for YouTube that can confirm/deny this?
Congress should revoke immunity under Section 230 of the Communications Decency Act, for any online community the size of YouTube that imposes their own "code of conduct" independent of US law. This is a very controversial idea but anything else will lead to censorship, as we're seeing again and again.
You're arguing for unbounded trolls, spam, and the complete destruction of all online communities.
I don't understand what worldview leads to the conclusion that any moderation of a private platform by a private entity is equivalent to government censorship.
Maybe we need to have a discussion about what kinds of trolling and spam should be legal. And if you have a problem with that stuff, why doesn’t the current state of YouTube comments bother you just as much as this idea?
> I don't understand what worldview leads to the conclusion that any moderation of a private platform by a private entity is equivalent to government censorship.
I didn’t say “government censorship”.
I prefer the current state to having the government literally kick down my door for "trolling" someone online.
Swatting, doxing, and harassment are all forms of trolling. It can have lethal consequences, and these are already variously illegal for good reasons.
You are more comfortable with the government regulating what kind of trolling is legal (an actual, on its face violation of the 1st Amendment), than you are with YouTube removing a gore video for violating their community guidelines?
This is the most bizarre take on libertarian free speech absolutism that I've heard all week.
Hey, I said it was controversial.
> You're arguing for unbounded trolls, spam, and the complete destruction of all online communities.
Nonsense. The Internet in general, and YouTube in particular, did quite well for themselves for most of their existence without any significant moderation.
You would have to be pretty new to the internet to think that it would all collapse without heavy-handed censorship.
I remember very clearly that up until about 2015 there was almost no moderation outside of spam, copyright infringement, or blatantly illegal stuff.
It would probably benefit these online communities to stop moderating and censoring stuff so much.
YouTube clearly is trying to gentrify their platform, and turn it into a sanitized short-form Netflix.
They pay media companies like MSNBC and Fox News to post clips of their shows, and heavily promote them.
They've driven many of the best content creators off of their platform. I used to really enjoy YouTube, but my favorite channels are constantly having to worry about their videos being deleted. They've been removing their old videos for fear of getting too many strikes, and moving to other, less good platforms.
> I don't understand what worldview leads to the conclusion that any moderation of a private platform by a private entity is equivalent to government censorship.
Well, when a platform becomes ubiquitous and powerful enough that only government can restrain it, then the question of people's general welfare comes up.
America has had these discussions before. We used to let oil companies, coal companies, railroads, and phone companies become aggressive and destructive monopolies.
I think that at this point, calling YouTube "private" is not really correct. Not only do they heavily benefit from many government protections, but they are so big that they are not bound to market forces, and are essentially unaccountable to the public.
Corporations have no right to exist. And corporations that do not follow the rules certainly should not enjoy Section 230 protections. I don't think it is too much to ask that we be allowed to sue them if they don't follow the law.
A code of conduct doesn't have the force of law to begin with, in what way is Youtube having one independent of US law?
And if you're suggesting every instance of moderation on Youtube's platform should be a civil or criminal matter, and that a court or some other agent of the state should decide on each matter rather than Google, how would that not still be censorship?
The point isn't that moderation should be a crime or tort. The point is that exercising editorial control for political reasons should exclude them from 230 protections. That is, they'd be considered a publication rather than an open forum.
It would be censorship only with the consent of the governed, as it should be.
Also folks, keep in mind that I am absolutely not saying that the government should regulate every video on YouTube, I am only saying that YouTube should lose its special privileges if it wants to play that game itself. It is a critically-important distinction that I feel is being missed here.
And if the government decides Youtube has to censor anti CCP comments, you'd be fine with that?
No, I would vote those morons out of office. As it stands I don't have nearly enough money to vote the Alphabet CEO out of office.
You probably don't have enough money to vote a government out of office, either.
Unfortunately I don't, but I hope you understand the meaning behind this facetiousness.
So you're ok with porn on YouTube? What about ISIS beheadings and recruitment? Just to be clear, that is what you are proposing.
It is exactly what I’m proposing, yes. These companies should be forced to participate in the making of laws about free speech, they should not effectively be able to decide what free speech is by fiat.
This is both bad policy and politically untenable.
It also adds noise to the discussion which detracts from the very real bad behavior that’s happening here.
I would probably call your position on this issue "noise", too, but that's unfair.
You know that porn and beheadings are already explicitly not protected speech in the US and elsewhere?
You know that comment moderation on a privately owned website is both legal and protected by law, and in no way affects your free speech protections?
> You know that porn and beheadings are already explicitly not protected speech in the US and elsewhere?
OK cool, that resolves the parent's objection. I did not know that.
> You know that comment moderation on a privately owned website is both legal and protected by law, and in no way affects your free speech protections?
Read up on the CDA and the special privileges granted by Section 230. I'm not at all suggesting that online moderation should be made illegal.
Okay, I’ve now read the full code of section 230. It doesn’t seem to be impeding free speech. It specifically lists more or less the set of things that are currently not protected by US freedom of speech law, and makes ISPs not liable for attempting to automate moderation of those things. What - exactly - is the problem with section 230? What do you think it’s doing wrong, and which subsection is problematic? The title of the section says it relates only to “offensive material”, which is already not protected speech. You might, in turn, want to read up on what speech is actually protected. https://en.wikipedia.org/wiki/Freedom_of_speech#Limitations
> I’m not at all suggesting that online moderation should be made illegal.
Maybe I misunderstood. What are you suggesting then? What is the problem, and how is tearing down section 230 going to solve that problem? You said you wanted porn and beheadings to be allowed, and made a claim above that moderation is censorship. That did sound to me like you’re against moderation, so help me understand what you meant.
BTW, what is making you think that section 230 has anything to do with YouTube's comment deletion here? It sounds like they claimed it was a mistake. Whether true or not, it doesn't sound like section 230 is being used as a defense. So is this discussion about section 230 a red herring?
The problem is that YouTube can do this kind of thing on purpose, with no consequences, whenever they want to. We have no legal recourse according to precedent, our only option is to stir up a PR backlash big enough that they "notice the error" (which in fairness is probably what happened here).
Isn’t that true with or without 230? What does rescinding 230 fix? Again, they didn’t claim exemption because of 230, so what does section 230 have to do with anything here?
I agree that any recommendation algorithm should be considered a publisher. But won't revoking the immunity cause these publishers to editorialize (or censor, if you prefer) even more in order to protect themselves from liability?
I don't think so, because immunity would not be revoked across-the-board, but rather it would be conditional on their censorship policies. I think these companies would rather deal with the consequences of unsavory-but-legal content, than hire an army of moderators and still get sued every other day.
> Congress should revoke immunity under Section 230 of the Communications Decency Act, for any online community the size of YouTube that imposes their own "code of conduct" independent of US law.
The EFF is one of the strongest champions of freedom of speech online, and they disagree with you. https://www.eff.org/issues/cda230
Thanks for the link. Unfortunately, the key item on that page, the link labeled "makes editorial judgments" to another page, is a 404. Ha.
It's 404ing because the key cert for the page has expired.
That's awesome. :-)
Yes, and they could be right, but this is a very complicated issue, and the size of the platform matters a lot. In this case Section 230 effectively enables a monopoly on "mainstream" video content, where the monopolist in question has been given a blank check to control speech on their platform. That's a big problem too.
The EFF is a shill for Google and it's no surprise they disagree on this.
Congress should revoke immunity under Section 230 of the Communications Decency Act.
FTFY.
Clearly, we should trade out corporate censorship for government censorship.
Wait.
Sounds like a great way to lay the legal foundation for declaring FAANG a government-sponsored monopoly like AT&T prior to United States v. AT&T.
YouTube is not one of these already? Regardless of whether you agree with net neutrality, it has the effect of tilting the market for net traffic in favor of video streaming services.
Please keep upvoting.
I thought they said they would 'fix' this. Don't tell me eliding entries from the naughty list is some enormous engineering exercise. Only a fool would believe that.
Before I got my pitchfork ready, I decided to test with a comment on YouTube: https://www.youtube.com/watch?v=ufrR98sR7XY&lc=UgyRmEKscwt_U... (scroll down to highlighted comment).
9 hours later it is still there.
Interesting. Looking at Palmer's claim: "Try saying anything negative about the 五毛, or even mentioning them at all. Your comment will last about 30 seconds and get deleted without warning or notice, CCP-censor style." This seems to be evidence against the broader claim that anything gets deleted, rather than just negative comments.
I wouldn't be surprised if Google's algo has some idea of how toxic a person you are based on your previous comments.
That said, I wouldn't be shocked if every comment Palmer Luckey makes is shadow-banned.
Is there any reason to believe that YouTube, and not the users posting the comments, the channel owners, etc., are the ones deleting the comments?
Yeah, mine too still there. https://www.youtube.com/watch?v=CbV_lMS0R6U&lc=UgyIxFgZnPM8T...
This sounds like yet another anti-China thread that comes the same day as the new security law for Hong Kong (I'm sure it's just a coincidence).
I wonder: does China comment on repressive laws approved in other countries? Isn't Hong Kong a part of China? Why wouldn't they approve any laws they see fit? Why are the people that are concerned about this never concerned about repressive laws approved in US-friendly countries like Turkey or Saudi Arabia or the Emirates?
And why all of a sudden is everyone so concerned with what China has been doing for decades (something that, while definitely authoritarian, is not exceedingly nefarious either)?
It's a serious question. We all knew how China works, and even if we thought it's something that goes against some of our values, we never considered it bad enough to be a deal breaker.
So what happened that made us all of a sudden become so fixated about it?
Obviously because it's challenging Western world dominance.
Google has major investments in China. Looking at "Youtube" in a bubble doesn't tell the whole story.
I don't think "conspiracy theory" is correct - it's simply a private business doing what's best for itself financially.
Google isn't some government institution. ...or is it?
If the CCP says "if you delete anti-CCP content on Youtube we will act more favorably towards your other enterprises in China" and Google does just that, that's a conspiracy. I'm not so sure if that's what's going on, but it seems probable enough to consider.
define: conspire
intransitive verb - To plan together secretly to commit an illegal or wrongful act or accomplish a legal purpose through illegal action.
intransitive verb - To join or act together; combine.
intransitive verb - To plan or plot secretly.
-----------------
Governments conspire. If they do it well, you can not prove it, only theorize. "Conspiracy theories"
Another definition:
A conspiracy theory is an explanation for an event or situation that invokes a conspiracy by sinister and powerful groups, often political in motivation, when other explanations are more probable.
You're defining a bad conspiracy theory. Most (but not all) of them are bad. Some of them have been true.
To say all conspiracy theories are bad is to say the government never conspires, or at least nobody ever correctly theorizes about it. Seems unlikely.
No, I'm simply using the noun phrase the way normal people use it:
https://en.wikipedia.org/wiki/Conspiracy_theory
and check out the citations.
This claim is misleading. Youtube doesn't just delete anti-CCP comments, they also delete pro-CCP videos. For example https://twitter.com/rachw82451432/status/1265308476034519040
We so badly need decentralized alternatives to these services. Unfortunately I don't see anything on the immediate horizon that is up to the task.
Someone below pointed out that when other random users post these phrases, their comments are not deleted.
Possible alternative explanations:
1) The CCP is autoflagging comments from known anti-ccp users
2) The CCP is autoflagging comments only on chinese language videos
etc...
Given that this doesn't reproduce (i.e. there is a comment with this phrase up for over 9 hours) I'm skeptical of the explanation above that it's an automated system inside of YouTube's backends.
[1] https://www.youtube.com/watch?v=ufrR98sR7XY&lc=UgyRmEKscwt_U...
Nice hypothesis. Would be a shame if someone were to test it. https://www.youtube.com/watch?v=CbV_lMS0R6U&lc=UgyIxFgZnPM8T...
The linked tweet complains that the phrase 五毛 (50 cents) gets deleted. Does the much-discussed 50-Cent Army even exist on non-Chinese sites?
I have a hard time imagining the Chinese government finding people who speak fluent English willing to post for 50 cents a comment, and I've never run across an English-language forum where such activity was apparent. The terms "五毛" and "wumao" are usually just spammed at anyone perceived as insufficiently anti-Chinese.
So we're calling spam filters censorship now?
YouTube says that an error caused comments to auto-delete:
https://techcrunch.com/2020/05/26/youtube-china-comments-wum...
YouTube claims this is an accident [0] which I find a little hard to believe.
Is it crazy to think this is CCP state actors? [1] Both in the form of teams of people or bots reporting anything that they dislike to trigger Google's automation, or even just getting people hired by these companies to work on the inside for their interests.
[0]: https://www.theverge.com/2020/5/26/21270290/youtube-deleting...
I would guess the filter was told to inform advertisers that those words were present, and then accidentally was set to delete them immediately.
Seems more believable than China infiltrating a foreign company to block two words from appearing in Chinese on a website blocked in China.
Yeah I think that’s more likely too, but I also think it wouldn’t be exclusively for removing two words.
If I was the CCP I would get people hired to both steal IP and also look out for Chinese interests.
My guess is that someone pro-CCP sneaked the keywords that trigger the deletion into a list of Chinese insults (i.e., a list of purely offensive words).
At least one of the words found so far can indeed be read as a denunciation of communists. I can't judge whether it's merely that or outright insulting.
If this is true, then the interesting question is how they sneaked it in there. From outside, by social engineering? From inside, by affiliated devs? Through a consulting company hired to create a list of Chinese insults?
The most ironic thing is that, if my guess is true, it might literally have sneaked in without anyone's intention, just by reusing a list of insults from somewhere else without cross-validating it.
My guess is that it's the CCP abusing the flagging system to trick the existing anti-abuse systems into doing the dirty work for them... An infiltrator would be harder to get in place and much easier to find and deal with (if they did something as crass as manipulate a word list).
Great! Let's fix it! Suppose you just throw the words on a whitelist? Well, now there are a couple magic words you can put in unrelated spam comments to ensure that they don't get deleted.
Honestly, all the people knotting their underwear about this don't seem like they've ever seriously thought about abuse on large platforms. It's an inherently adversarial environment, and you have to game out second- and third-order consequences for pretty much everything. And even then there will be unintended consequences. A 'real' fix that doesn't break lots of other things takes time.
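The second-order problem described above (a whitelist fix that becomes a spam shield) can be sketched with a toy filter. This is purely hypothetical; the word lists, function names, and logic are illustrative assumptions, not YouTube's actual system:

```python
# Toy comment filter illustrating the allowlist-bypass problem.
# All names and word lists are hypothetical examples.
BLOCKLIST = {"badword1", "badword2"}          # phrases that trigger deletion
SPAM_PATTERNS = {"buy cheap pills"}            # ordinary spam signatures
ALLOWLIST_PHRASES = {"magic phrase"}           # the naive "fix": never delete these

def should_delete(comment: str) -> bool:
    text = comment.lower()
    # Naive fix: any allowlisted phrase exempts the whole comment from filtering.
    if any(phrase in text for phrase in ALLOWLIST_PHRASES):
        return False
    if any(word in text for word in BLOCKLIST):
        return True
    return any(pattern in text for pattern in SPAM_PATTERNS)

# The allowlist now doubles as a spam shield:
assert should_delete("buy cheap pills")                    # ordinary spam is caught
assert not should_delete("buy cheap pills magic phrase")   # same spam now survives
```

The point of the sketch: exempting specific phrases from deletion immediately creates a bypass that spammers can piggyback on, which is why a "real" fix is harder than it looks.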
The "communist bandits" thing is a Taiwan vs PRC thing -- Mao was basically a bandit for like 10 years while fighting the ROC government. Or the revolutionary vanguard expropriating wealth for the people's liberation, if you'd prefer.
Seems roughly equivalent to calling someone "you syrian terrorist" to me? Not sure where to calibrate it.
'Communist' is an ideological alignment. 'Syrian' is not.
In this context it's very clearly about the PRC, referencing specific history, not some generic hypothetical communist.
Nobody calls berkeley student marxists 'bandits'.
That context doesn't change my comment. Affiliation with the CCP, past or present, is an ideological affiliation. 'Syrian' is not. Comparable to 'Syrian bandit' would be 'Chinese bandit'. If you want a Syrian analogue to 'communist bandit', you might try 'ba'athist bandit'.
'Communist' and 'ba'athist' are ideological affiliations, as are affiliations with the Chinese Communist Party or the Arab Socialist Ba'ath Party specifically.
'Syrian' and 'Chinese' are not ideological affiliations.
You're aware it's a 1-party state, right? That's an awfully fine hair to split.
The people making that comment are talking about the government and the nation, not an ideology (which is barely even followed in practice, anyways).
Despite what the CCP would like you to believe, being ethnically Chinese is not synonymous with supporting the CCP, any more than being ethnically Syrian makes you a supporter of Syria's Ba'athist Party. This is not a "fine hair to split."
Whether these political organizations adhere to their own professed principles is completely irrelevant; neither do. Whether these political organizations tolerate opposition is completely irrelevant; neither do.
It could be an action from a rogue employee: possibly not a management decision or specific company policy, just a filter added by an employee with access to it. There might be some 'lively' discussion inside Google right now over this specific issue. In the past some Google employees have taken action against individuals based on their personal political beliefs or other personal grudges. A good example was when Jordan Peterson was temporarily locked out of all Google services, including his Gmail; insiders said it sparked a lot of internal argument at the company. Eventually his accounts were reinstated with no official explanation or apology, though some employees did apologize to him privately for their coworkers' behavior.
It's likely at its core a problem of poor employee screening, insufficient training and supervision, and vague, over-reaching policies that employees sometimes interpret as license to censor or ban based on their personal political beliefs.
Some employee there likely is a communist sympathizer, or has other connections to China's authoritarian ruling party.
This article says Peterson's account was deactivated due to an exploit of the spam account flagging system and not because some employee arbitrarily decided to deactivate it: https://medium.com/@zacharyvorhies/open-letter-dear-attorney...
Where did you get your version of the story from?
Or just someone PC enough that any derogatory slang term counts as hate speech to them.
I used the term “lesbian” in a comment on Facebook this week in a respectful and topical manner.
I received a notice that the comment was removed for violating community standards against “hate speech” and was told that’s my one warning for the next 12 months and any subsequent infringements would lead to a 24 hour ban. I used their process to disagree with the finding and they replied in a few minutes that they rejected my appeal.
Facebook is actually a lot better than others on this issue: https://www.vanityfair.com/news/2019/02/men-are-scum-inside-...
Obviously they can make mistakes, but I’d also be curious about your specific usage.
Someone must have reported you.
And now you're on double secret probation as one who has not only engaged in "hate speech", but had the temerity to protest their notice that you have been convicted and found guilty of hate speech.
What was the exact comment?
Why do you think they removed your respectful comment and deemed it hate speech?
This has been all over 4chan for a while now.
Well, this is kind of horrifying.
It’s not just those words. They’re deleting any comment with the English words “idiot communists” too.
This could be a new sport, a la Perl Golf: What's the most innocuous thing you can put in that will get something banned?
For real?
Didn't Palmer Luckey literally pay people to defame Hillary Clinton in the 2016 presidential campaign? I guess he knows all about how the 50-cent army works, especially the US one.
https://www.theguardian.com/technology/2016/sep/23/oculus-pa...
Edit:
https://www.telegraph.co.uk/technology/2017/03/31/oculus-rif...
Palmer Luckey wouldn't be my symbol of anti-authoritarian either, but that's irrelevant to what YouTube is doing here.
There's literally nothing in the article about defaming Hillary Clinton. The rest of the article is merely the newspaper complaining that Luckey doesn't share its politics.
Edit: glad you noticed the discrepancy. The second article you added doesn't show any defamation either.
If you're genuinely asking, Palmer Luckey gave $10k to a group that put up an anti-Hillary billboard in 2016 (with a cartoon caricature) [0]. This group was also publishing stupid pro-trump political memes.
He was subsequently found out and forced by Zuckerberg to publish a letter pretending he supported Gary Johnson (a lie that Zuckerberg thought was more palatable than Luckey's Trump support). He ended up being sort of fired later because nobody wanted him on their team anymore.
The best information I've read about this is in Steven Levy's new book Facebook: The Inside Story [1].
[0]: https://www.thedailybeast.com/palmer-luckey-the-facebook-nea...
[1]: https://www.amazon.com/Facebook-Inside-Story-Steven-Levy/dp/...
Ok. Where's the defamation? That's what I was asking about.
I don't think you're really asking in good faith and this is veering extremely off-topic and pedantic (and I am not a lawyer), but here:
"""
> Although laws vary by state, in the United States a defamation action typically requires that a plaintiff claiming defamation prove that the defendant:
> 1. made a false and defamatory statement concerning the plaintiff;
> 2. shared the statement with a third party (that is, somebody other than the person defamed by the statement);
> 3. if the defamatory matter is of public concern, acted in a manner which amounted at least to negligence on the part of the defendant; and
> 4. caused damages to the plaintiff.
"""
https://en.wikipedia.org/wiki/Defamation#Civil_defamation
Arguably all of these points are met by a "too big to jail" billboard facing the public. Though given the nature of a political candidate and free speech, I think this would be protected. Particularly because "Defenses to defamation that may defeat a lawsuit, including possible dismissal before trial, include the statement being one of opinion rather than fact or being 'fair comment and criticism'."
That said, even if it doesn't meet the technical level to convict I think it's a reasonable thing to refer to as defamation colloquially and I suspect you know that too.
For clarity on the specifics of a 'defamatory statement':
> A defamatory statement is a false statement of fact that exposes a person to hatred, ridicule, or contempt, causes him to be shunned, or injures him in his business or trade.
Hillary was found to have violated state secrets, but bizarrely not punished as she'd done so accidentally. Saying she's too big to jail seems accurate.
Somehow China manages to control directly or indirectly companies, individuals, and sometimes politicians, in the US on a far bigger scale than the Soviet Union did. Oddly enough on a scale even higher than under Obama.
Will they also ban me if I say something against the CPSU?
If YouTube doesn't ban me then HN might; neither of them, it seems, has read the Prague Declaration https://en.wikipedia.org/wiki/Prague_Declaration_on_European...
Twitter censoring good, YouTube censoring bad.
Those are not "anti-CCP comments", it's just spam. They don't convey any information or opinion.
There have been multiple previous threads about this on Hacker News which show that YouTube is indeed deleting comments with certain Chinese keywords that are mostly used in an anti-CCP context.
Furthermore, YouTube comments are full of spam that YouTube doesn't delete, so why is it that comments containing certain Chinese keywords (in Chinese) do get deleted?
> Furthermore, YouTube comments are full of spam that YouTube doesn't delete, so why is it that comments containing certain Chinese keywords (in Chinese) do get deleted?
I think this just shows their spam filtering is not perfect. They delete lots of spam, in both English and Chinese, but some slip through.
Your comment seems to imply that the ONLY comments that are being deleted are the ones with the Chinese keywords. That is certainly not true. It tries to delete all spam, but the spam detection has both false negatives and false positives. That doesn't necessarily mean there is a conspiracy (doesn't mean there isn't, but provides no evidence that there is)
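The false-positive/false-negative distinction above can be made concrete with a toy keyword classifier. Everything here (the keyword set, the labeled comments) is a made-up illustration, not real YouTube data:

```python
# Toy illustration of false positives and false negatives in
# keyword-based spam detection. Keywords and data are hypothetical.
SPAM_KEYWORDS = {"free money", "subscribe to my channel"}

def looks_like_spam(comment: str) -> bool:
    return any(kw in comment.lower() for kw in SPAM_KEYWORDS)

# (comment, is_actually_spam)
labeled = [
    ("Great video, thanks!", False),
    ("FREE MONEY click here", True),
    ("Subscribe to my channel for cat videos", True),
    ("They promised free money in the budget speech", False),  # false positive
    ("ch3ap p1lls h3re", True),                                # false negative
]

false_positives = sum(1 for c, spam in labeled if looks_like_spam(c) and not spam)
false_negatives = sum(1 for c, spam in labeled if not looks_like_spam(c) and spam)
print(false_positives, false_negatives)  # 1 1
```

Both error types are unavoidable with any filter of this shape: an innocent comment can match a keyword (false positive), and obfuscated spam can miss every keyword (false negative), so the presence of surviving spam doesn't by itself prove selective enforcement.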
> I think this just shows their spam filtering is not perfect.
Excuse my bluntness here, but that is a very lazy excuse. A lot of the comments are repeated, utter trash spanning multiple channels and videos from known bad actors. Sure, no spam filtering is perfect, but in this case it's obvious YT has gone out of its way to handle that data differently.
I'm not sure YT doesn't delete other kinds of spam, in particular other instances of hate speech.
"communist bandits" (or its Chinese version) absolutely does convey an opinion, and that phrase regardless of its use or context is being removed.
If you count hate speech as an opinion.
That's one of the many core dangers of disrespecting free speech for the sake of prohibiting "hate speech". It's a very, very short step for a government to persecute criticism of themselves on the basis of "hate speech".
That doesn’t make any sense. An opinion cannot be hate speech, that’s a categorization error and not at all how hate speech works.
It’s the other way around: an expressed hateful opinion may be considered hate speech by the law or someone. And an opinion may be considered hateful. But the other way doesn’t make any sense.
I'm not so sure that's true. Consider:
"All <n-word>s need to go back to Africa." (clearly and overly hateful)
"All black people need to go back to Africa." (still hateful)
"All African-Americans need to go back to Africa." (All words are accepted, but the notion is still rather hateful)
No matter what words one uses, it could be considered hate speech by today's definition. Disclaimer: I do not use, nor advocate using, racial slurs; the examples here are just to prove a point.
An expressed opinion considered hateful can be “hate speech” (depending the legal/accepted definition), yes.
Can you please link to the hate speech definition that's being referred to?
There's no "legal" definition of hate speech according to the US federal government, but the most accepted that I've found is "any form of expression through which speakers intend to vilify, humiliate, or incite hatred against a group or a class of persons".
How about a thought experiment: in "communist bandits", replace "communist" with derogatory name for some other nation or social group.
Another one: "I hate onions". Now, replace onions with a derogatory name for another group. We can't prohibit negative words on account they can be used to express negative opinions about other groups of humans.
In the case of communist bandits, it's a criticism of a political ideology, and in particular here one party. Criticism of political parties and ideologies is completely fine, even when expressing insults to them. Expressing threats of violence or other crimes isn't permitted or moral, but expressing dislike or disrespect is perfectly valid. Can anyone say they've never expressed disrespect to any ideology or political party in their life?
No, "communist bandits" obviously refers to a group of people, not an ideology. Furthermore, while you could argue it's about one particular organisation, it's often used to refer to Chinese citizens in general.
"Republican bandits."
Does that offend you?
All over social media, all day, every day, people accuse others derogatorily of being fascists. Both are political ideologies.
Would you be as happy to see fascism protected from negative commentary online, as you clearly would communism?
Is the word "fascism" used to refer to the current citizens of a particular country, who have nothing to do with the things we hate fascism for? Because the word "communism" is: the comments in question don't attack "communist ideology"; they are derogatory comments aimed at one particular nation.
Italy?
It isn’t funny when your opinions get categorized as hate speech and it’s your ox getting gored.
Thinking all Chinese people are lockstep behind the CCP is racist. It plays into the "Asian hive mind" stereotype.
Thanks for having some sense here. Just typing two words is not a comment; it is akin to spam on practically every platform.
The point is that those two words are (in China) associated with being against the CCP. It's kind of similar to "black lives matter": to some, it's a symbol of hate (with regard to the police); to others, it's a symbol of fighting back.
Even if you assume those comments are written by Chinese trying to fight for their rights - and I doubt it is the case - there's still a fundamental difference: #blacklivesmatter was a tag that accompanied actual content. Here, there's nothing but the "tag", copy/pasted all over the place without adding any value.
Why is this particular "spam" being deleted? Youtube comments are full of similar "spam" that somehow goes undeleted.
They deleted your comment if it contained one of multiple "words" (compound words, short phrases).
It's like censoring someone commenting "It's FBI sponsored" or "Russian propaganda". Sometimes a few words in context can say as much as a long sentence.
Also, in Chinese things are a bit more complicated due to how different glyphs can be combined and/or used in conjunction. So I wouldn't be surprised if there is a situation where commenting with a single Chinese glyph says as much as a long sentence!
> Also, in Chinese things are a bit more complicated due to how different glyphs can be combined and/or used in conjunction. So I wouldn't be surprised if there is a situation where commenting with a single Chinese glyph says as much as a long sentence!
Chinese characters aren't magic. Putting two characters next to each other isn't fundamentally different from putting "Russian" and "propaganda" next to each other. You could write a sentence with just a single-character word, but that's not going to be more expressive than a single word in any other language.
It's amazing to me that this isn't against a US law. An American company is actively helping spread propaganda of a hostile, foreign country.
Just insane. Google's new motto should be "we do what we want".
I’m all ready to bring out the pitchforks, guys, but I’m confused. Didn’t we all agree literally yesterday that Facebook not doing enough to reduce division was a moral catastrophe? Isn’t deleting what hundreds of millions of people see as a foreign propaganda slogan a textbook example of reducing division? Is the rule that we should delete only what you in particular find divisive?
In other news, that should be front page, but doesn't follow the Google hate that Hacker News accepts as front page material:
That's unrelated. Do you think it's alright for YouTube to delete comments discussing the Wumao?
Do you think it is okay for Hacker News to censor and rank content based on their own agendas? Shadow ban and prevent users from posting? This post is literally complaining about something that Hacker News does frequently. I couldn't even respond to this comment for over two hours with a message saying:
> You're posting too fast. Please slow down. Thanks.
This is what they do to people that don't follow their agenda. All the while, you can't even respond to comments on your own comment.
I asked if you support Google's actions here of removing discussion of the Wumao. Could you answer that question first?
It depends on the context.