Magic: The Gathering content creator pleads with YouTube to stop scammer bot deluge (youtube.com)
Maybe I'm a callous bastard, but I genuinely don't get how he feels personally responsible here. It's a YouTube problem, not a M:tG problem or a him problem.
He sounds genuinely distressed, and I feel for him, but I don't get why it causes him pain in this specific way. It's not your fault!
And for what it's worth, it's not really YouTube's "fault" either. Remember, they're victims here too. They didn't invite this, they didn't ask for this, and they're asking for ideas because they don't know themselves what to do.
Computers are sometimes likened to "magic" where we just cast a spell by writing code, and something technically beautiful happens, but where magic can interpret intent and act according to the caster's desire, computers can't, and there's no way to "Wrath of God" the spammers.
I hope this guy can find a way to live with this kind of thing, as I really don't see scammers getting wiped out in any conclusive sense, on YouTube or any platform. It sucks, it's unfair, but it's not realistically possible to deal with conclusively.
> Computers are sometimes likened to "magic" where we just cast a spell by writing code, and something technically beautiful happens, but where magic can interpret intent and act according to the caster's desire, computers can't, and there's no way to "Wrath of God" the spammers.
I somewhat agree with this, but I feel like there's some very low-hanging fruit that should be available.
When posting in comments on a video there should be:
- An avatar similarity check. If you have an avatar that is too similar to the channel of the video you're commenting on, your post automatically goes into a moderation queue (or just remove avatars from comments, except from the channel).
- A name similarity check. If you have a name that is too similar to the channel name, your post automatically goes into a moderation queue.
- A huge indication that a comment comes from the channel author. They have some indications of this now, but it should be very prominent, so it's *obvious* when comments don't come from the channel author.
None of these things are going to be trivial to implement for a company like YouTube, but this has been a problem for years at this point. These things could have been done by now.
Formatting author comments to have red background and white text is, like, an intern project.
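To make the name-similarity gate concrete, here's a minimal sketch in Python (the 0.8 cutoff and function names are invented for illustration; a real system would tune the threshold on labeled impersonation data):

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Cheap string similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def route_comment(commenter_name: str, channel_name: str, is_author: bool) -> str:
    """Decide whether a new comment publishes immediately or gets held."""
    SIMILARITY_THRESHOLD = 0.8  # invented cutoff for illustration
    if is_author:  # in reality you'd compare account IDs, not display names
        return "publish"
    if name_similarity(commenter_name, channel_name) >= SIMILARITY_THRESHOLD:
        return "moderation_queue"  # likely impersonation: hold for review
    return "publish"

# "Tulrin Community Collge" scores ~0.9 against "Tolarian Community College",
# so the impersonator's comment gets held instead of published.
print(route_comment("Tulrin Community Collge", "Tolarian Community College", False))
```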
Abuse is a very difficult problem that Google spends a lot of money and talent on. There’s no silver bullet like avatar/name similarity or other forms of detection because the scammer can trivially iterate until they avoid automated detection.
I think the indicator that a comment comes from the author is pretty noticeable, but it could be better. It's a tough trade-off between UX and fighting abuse.
The reality is that there are criminal groups who spend great resources to scam people online, and sometimes they figure out a clever enough way around the mitigations that they can completely hose a platform. There's not a lot a platform can do against this kind of attack except detection and reaction.
I see it as an international crime issue, where certain countries are indifferent to Americans or even their own citizens being scammed. This would be a very different problem if Google could simply pass the scammer's info over to a competent law enforcement agency. It would be a lot riskier for scammers, and they'd have to put a lot of effort into evading detection. Definitely a pipe dream, though.
> There’s no silver bullet like avatar/name similarity or other forms of detection because the scammer can trivially iterate until they avoid automated detection.
That's a cop-out, though.
It's true that they're not going to be able to stop every single scammer, but that doesn't mean they can't raise the barrier to entry enough that a significant percentage of scammers find that it's no longer worth it.
As for the specific measures the GP suggested—those are absolutely things Google has the resources to do. Image and text similarity are things they deal with all the time, and if they can make it effectively impossible (barring occasional random false negatives) for scammers who attempt to impersonate the author using these methods to get through without a human double-checking, that would be a huge blow to their ability to fool people. It's not like if you're clever enough you can, say, have an avatar that shows one thing to the bot-check systems and another thing to users.
> There’s no silver bullet like avatar/name similarity or other forms of detection because the scammer can trivially iterate until they avoid automated detection.
I agree that there's no silver bullet, but there need to be a hundred small changes that each increase the amount of work or decrease the scammer conversion rate until their ROI is materially harmed.
Name similarity checks have a few benefits:
- Would be quite easy to implement
- Increases the scammers' work, which reduces their ROI
- Reduces the conversion rate (because instead of a spam message coming from "Tolarian Community College" it now comes from "Tulrin Community Collge"; the more the spammer has to iterate on the name, the less believable it becomes)
Bonus points if you add to the moderation queue a "Rejected because this was an impersonation attempt", and now the avatar/name of that commenter goes onto the similarity detection checks for that channel.
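A rough sketch of that feedback loop (all names hypothetical; in production the lookup would be fuzzy rather than exact, since the scammer will tweak the name on the next account):

```python
# Per-channel signatures of rejected impersonation attempts,
# keyed by channel ID; values are (display_name, avatar_hash) pairs.
impersonator_signatures: dict[str, set[tuple[str, str]]] = {}

def on_rejected_as_impersonation(channel_id: str, name: str, avatar_hash: str) -> None:
    """Called when the creator rejects a queued comment as an impersonation."""
    impersonator_signatures.setdefault(channel_id, set()).add((name, avatar_hash))

def is_known_impersonator(channel_id: str, name: str, avatar_hash: str) -> bool:
    """Auto-hold future comments that reuse a previously rejected identity."""
    return (name, avatar_hash) in impersonator_signatures.get(channel_id, set())
```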
I agree with you. However, I'd point out a usability issue:
> A huge indication that a comment comes from the channel author.
This pattern requires that users have seen it before. While many users will have, the users who are actually falling for this scam probably will not have seen this "huge indication" before, so they won't know it exists.
E.g., a user who rarely checks comments, but then checks them one day and sees the scammer, might not realize there would be an indicator if it were the video's author.
I agree, which is why I would prioritize the other fixes as well. Still, these kinds of things (once broadly learned by the community) can have a significant impact.
Yes, it won't be 100%, but it would reduce the conversion rate of the scammers. That's ultimately the solution. No single measure will ever fix this, and if you wait to implement anything until it would solve the entire problem space, you'll never get started.
The solution is a hundred different changes that each reduce the scammers' conversion rate by 1-2%.
There are definitely ways to combat this, but I think they're expensive. An avatar comparison check at the scale of YouTube would be immense. Running each comment through some machine learning algo would be immense.
> An avatar comparison check at the scale of YouTube would be immense.
I don't think it has to be. For each user you should already have the hash data computed (using something like imagehash, this works out to a hash of 8 bytes per image, though you can obviously tune that up depending on storage/performance requirements).
Each time a comment is posted, you would do a distance measure between the commenter's avatar hash and the channel avatar hash. Mixed in with the network latency and DB I/O operations, I think this additional read/write that only needs to occur when a comment is posted could be done with a pretty minimal additional compute overhead.
That said, if Google doesn't have the compute overhead to do it, I gave an alternative. Simply don't display avatars from commenters other than the channel author.
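For reference, the imagehash flow described above looks roughly like this (the 10-bit distance cutoff is a guess, not a tuned value):

```python
from PIL import Image
import imagehash

# In practice the channel's hash would be precomputed and stored
# (8 bytes at the default hash size), not recalculated per comment.
channel_hash = imagehash.phash(Image.open("channel_avatar.png"))
commenter_hash = imagehash.phash(Image.open("commenter_avatar.png"))

# Subtracting two imagehash values gives the Hamming distance between the
# 64-bit perceptual hashes; a small distance means visually similar images.
distance = channel_hash - commenter_hash
if distance <= 10:  # invented cutoff; tune on labeled impersonation data
    print("avatar too close to the channel's: hold comment for moderation")
```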
> And for what it's worth, it's not really YouTube's "fault" either. Remember, they're victims here too. They didn't invite this, they didn't ask for this
“Let’s see if we can host all the world’s video content and all its commentary only using automated tools!”
That was the idea behind the social media boom and it was a foolhardy one. The moderation problems that social media platforms are having now are 100% of their own invitation. They just managed to hold them off for a while.
He isn't the one causing the harm, but people are being harmed who would not be if his business did not exist. It seems like this weighs on him.
If every time you had a party, one of your guests would be hit by lightning and die, you might feel guilty about hosting parties. Would you feel comfortable inviting your friends and family over?
> He sounds genuinely distressed, and I feel for him, but I don't get why it causes him pain in this specific way. It's not your fault!
I'm familiar with this personality, and being distressed about many other things is part of his brand. I think he's just expressing feelings for his audience or something. He's also apologized for lots of other things that aren't his fault: rude Magic fans, sexists, racists, people who spend too much money, people who don't spend enough, etc.
He tends to spend a lot of time apologizing for things that aren’t his fault or that he has no control over. It’s not unique to him, and seems kind of common. I’m not sure what to call these over-empathizers.
> Maybe I'm a callous bastard...
He is upset that his posts are acting like bait, that his likeness is getting used to scam people who wanted nothing more than to connect with someone they admire.
For a lot of people, it would be impossible not to take this personally.
His brand is being damaged by this. For anyone who has never had a brand with value, I could see why that seems dumb, but for some people brand becomes everything.
> Maybe I'm a callous bastard, but I genuinely don't get how he feels personally responsible here.
The bit where it gets complicated is that the scammers are using his name and likeness. The way these scams work is by convincing people that the youtuber in question wants to connect with them and then they steal money from them.
How would you feel if I went around on HN telling people that I'm Zetice and stealing money from them? How would you feel if the way you heard about this was confused people messaging you and angrily demanding that you pay them back? You, Zetice, worked hard to be a person with integrity, and someone is squandering your accumulated goodwill, hurting the very people who look up to you, and you can't do anything about it.
I think I would feel terrible, and I would try to use my platform to shine a light on this problem. Worst case, a few of my followers wise up and develop resistance to this kind of scam. Best case, some manager at YouTube shovels more money into solving the problem.
Imagine talking with multiple people who think you scammed them. Someone with your name (almost) and avatar picture convinced them to send a hundred dollars for a PlayStation, and they only did so because they believed it was you. A month goes by and the PlayStation didn't show up - now they're trying to contact you to figure out what happened.
I think the creator has clearly spent a lot of time and effort building his community, and he is witnessing members of that community being hurt, while his reputation is directly attacked by the scammers to damage his image with the very people in his community.
>> Maybe I'm a callous bastard, but I genuinely don't get how he feels personally responsible here.
Probably because he's not a callous bastard.
Don't worry, I'm not saying you're one, either. I suspect you just don't have his responsibilities. Like, hundreds of people who somehow hold you in great esteem because of the stuff you create. That's a heavy burden, and most people who aren't assholes will take it seriously.
Guy just shot up in my esteem five hundred levels.
(yeah yeah, I know M:tG doesn't have levels... :P)
He doesn't feel personally "responsible". He is emotionally invested in his followers' well-being, which is something that comes through in a lot of his videos. He wants the best for the people who follow him, so he feels like he's failing in his inability to help them on this. Which is why he's also pleading with YouTube (which should absolutely be staying abreast of this arms race).
If you have any kind of venue, let's say a restaurant or a shop, you don't want your customers to be targeted by any kind of illicit activity, even if it is not on your grounds, e.g. in front of your store. Even less do you want anyone using your identity to commit fraud. It wouldn't matter that you bear no direct responsibility for it. So I can very much understand his concerns.
Maybe you are—people are taking advantage of his reputation and brand and harming his viewers.
If you insist on seeing it through a lens of self-interested sociopathy: he could lose his most ardent fans & contributors and suffer damage to his brand and/or reputation via bad interactions with those scammers.
Empathy and concern for people isn't a novel concept.
The only things I can see that could possibly be done are to either remove commenting altogether (and remove it as a method for measuring engagement with the video), or make commenting require full validation of the individual who is commenting (either by restricting it to individuals with YouTube Premium, or attaching some cost to commenting) so that automating it is no longer feasible from a monetary perspective. In theory, they could still allow users to post "anonymously", but those users would have to be posting from an account which has been paid for, or for which some monetary or time cost was incurred, to prevent a spammer from simply spinning up more accounts.
There is no way to automate this because there is too much money on the scammer's side should they break through: if they only catch 1 out of 10,000 people, it immediately pays their bills. This is the big issue with email spam; we can fight and fight, but at the end of the day, there is really no monetary cost to sending out a stupidly absurd number of emails.
To continue a bit more: I have really tried to figure out if there is any way to deal with email spam that doesn't either ban everyone but incredibly trusted servers (and even then it fails horribly), or put some sort of cost on sending email (either time, effort, or money).
I would rather have email be free and open, but I see the issues that arise from bad actors in that environment who have no real cost to abusing it and have major potential for gain if they are successful.
That being said, the easiest answer, in my head, would be to make it so that sending email to people in an unsolicited fashion has some cost. Yet, even that is problematic, because I want some people to send me unsolicited email from time to time....
Charge an inbox entry fee. https://www.bbc.com/worklife/article/20181023-people-pay-20-...
And yet gmail spam filters work fine probably 98% of the time. Just implement that on comments.
This sounds good to me, but I think gmail spam filters have gotten worse over the years.
I'm persistently getting emails coming through telling me I won a yeti cooler. About daily...
I mark them as spam, and this kind of email still doesn't go away.
This problem is ubiquitous -- not just on YT but also on social media sites like Facebook.
I have given up reporting these. The social media companies usually employ some automated method which, upon reviewing the post, determines it's just fine.
I also went all the way to finding some prolific scammers in Canada, handing over their details to the FBI and state police... but two years later, their domains are still active in scam campaigns.
The big thing I think I would do in this situation is to find whoever is handling this problem the best and direct that community discussion over there, disabling comments on the video platform. If the video platform is unable to address the scammer bots issue, then that community traffic can just go somewhere else.
I don't know enough about the community discussion platforms/forums to say who's the best, but I've not seen this level of spam on self-hosted forums or even reddit.
This is definitely not the future I had anticipated as a kid on dialup internet in the 90s.
Meta does this, saying it's "just fine". Especially Instagram. I reported an account posting hardcore porn videos and they were like "lol thanks, we don't have time for this, it's fine". Then I wrote the word "tits" in a comment on some redhead's picture, and the comment was removed because it was "harassment". Say what? I escalated the issue, and a month later they were like "the committee didn't pick your case".
The amount of scammers I get in my private messages on Instagram is also insane. And how clumsily they act.

Method A: Take a random picture that you commented on a while ago. Say they're that person and that you, the fan, have been chosen for special treatment. Yada yada, they want to send you a picture of themselves, but sigh, coincidence has it that it requires an iTunes gift card to work. Fucking really? The profile picture is of a blonde woman, but all of the profile's friends are black, including the women.

Method B: A more popular porn actress has an Instagram account; you comment on that account, any comment at all. They copy pictures from that account and use AI to generate similar pictures of that person. Again you're addressed as "the fan". You get the rest. Those accounts have over 200k followers each. Maybe they're also part of that person's network monetization strategy. I know a guy who worked in SMS sex chat, pretending to be a woman and sexing male chatters up. Something like this may be happening here.
The internet, especially social networks, remain a dangerous place.
Sometimes I want to go along with it just to see where it ends and what they do to make it work, but then again who has the time for that?
Can someone in this space explain how this is a hard problem for Youtube to solve? In my limited understanding, I can see clear blockable patterns with the bot posts shown in this video.
I believe the difficult parts of squashing these bot posts are:
- there isn't a lot of $$$ in squashing them, since it lowers engagement numbers (engagement inflated by bots is still engagement)
- the cost of hammering down on a real user is high in terms of PR, moreso than the cost of letting a bot continue running
- no one's making them do it, and they have no real competitors in this space, so what does it matter? Where are YouTube's customers gonna go?
Yet YouTube will shut down entire channels if there's a whiff of copyright infringement.
It's more that there is no function that discriminates against a given kind of spam without some false-positive rate, and after you implement it, the scammers just switch to another technique while you are now continuously dealing with your method's false positives. The attack surface is nearly the entire human language, and we're not yet good enough at recognizing a scam in a scalable, automated way, so we have to keep bolting on things with false-positive rates that cause support tickets and lower engagement over time. This is an incredibly hard problem.
YouTube has no interest in improving its UX... it's only interested in politics... and spam is on their side, because it increases their side channels...
Seems like it'd be exceedingly well-suited to an ML model that's tuned by a neverending stream of data (positives, false positives, false negatives, etc). The cat-and-mouse game would still be there, but the "lag time" between shifts in spammer strategies and the model's ability to deal with them would presumably grow increasingly small over time, until eventually it would cease to be worth bothering with for many current spammers.
"It's not something that a human can combat, it's - it's a program!".
Remember this in the days to come.
And this bit also:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
https://www.washington.edu/news/2015/09/17/a-q-a-with-pedro-...
Does YouTube offer the ability to pre-moderate vs. post-moderate comments? Yes, he would have to go through and allow many, many comments, but based on my prior experience with blog and email spam, the scammers would pretty soon give up.
What can be done? Those bots are using IP addresses. If they're v6, they still have prefixes. Analyze all the bots and determine their addresses. If they trace back to non-private networks, i.e. not households, ban the network or prefix. Send abuse@ reports for all incidents.
Yes, it requires manual labor, but Alphabet has the money to pay a group of people to combat this. They will in turn probably reduce the percentage they pay out to creators.
If I had the resources Alphabet does, that problem would be solved quickly. Kill the messenger.
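As a sketch of the kind of prefix analysis that implies, using Python's standard ipaddress module (the /24 and /48 granularities are assumptions about typical allocation sizes):

```python
from collections import Counter
from ipaddress import ip_address, ip_network

def prefix_of(addr: str) -> str:
    """Collapse an address to a coarse routing prefix for aggregation."""
    ip = ip_address(addr)
    bits = 24 if ip.version == 4 else 48  # assumed allocation granularities
    return str(ip_network(f"{addr}/{bits}", strict=False))

# Example data; the real input would be source IPs of flagged bot comments.
bot_ips = ["203.0.113.7", "203.0.113.99", "203.0.113.200", "2001:db8::1"]
by_prefix = Counter(prefix_of(a) for a in bot_ips)

# Prefixes with many distinct bot accounts (here 203.0.113.0/24) become
# candidates for blocking and for abuse@ reports to the network owner.
print(by_prefix.most_common())
```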
We need a way to make commenting on social media cost something. As long as it is free, there is no way humans can compete with bots. As nobody wants to spend real money on it, we probably need some social credit system, one that is hard to game, plus rate limiting. We urgently need this; otherwise AI will take over the world by dominating the social networks.
I know in a certain infamous forum, it was something like $10 to make an account. If you got banned, you could just fork out another $10 on a new account. Which certainly limited some behaviors, or at least made them expensive.
Depending on the implementation of a pay-to-comment scenario, it could still be profitable for the scammers to pay. They would definitely be tracking CTR or whatever their equivalent is on scam campaigns.
In this case, the scammers are all using similar profile photos, so that part of the whack-a-mole seems like easy pickings. At least at comment-creation time, they could fuzzy-compare the profile photos of the commenter and the channel. And compare among other commenters.
The scammers would definitely move on to something else, but it seems like the scammers are scoring a lot of really easy wins right now.
I hate scammers and I love ruining their day, but this is one of many areas where I can't give more attention to it than the platform can.
Steam requires you to spend at least $5 on a game before you can comment. I guess that's why so many accounts get banned for nothing at all.
This feels like it could work in a way that would make Youtube money on scammers for a short period of time.
Could grandfather in existing accounts, and have existing accounts give references for new accounts to continue getting "free" accounts. The cost being a loss of the ability to "mint" new accounts if any of your existing referenced accounts start spamming.
So you want to comment with your new account, you either need a "mint" from an existing account, or you need to fork over... let's say $50 to activate comments. (Amount could be whatever).
Isn’t this sort of what Twitter is trying with the $8/month thing? Forcing a cost to increase the barrier to posting as well as tie it to a human to go after when they do bad stuff.
All this really does is make freedom of speech into a paid feature instead of a human right.
This has always been a dilemma for me, as identity proofing costs some money and that filters people out.
I remember when domain names went from free to $70 under network solutions. This was meant to be cheap but really limited people who could register domains.
This happens to a lot of Youtube channels. Zack from JerryRigEverything did a video on the same topic a few days ago: https://www.youtube.com/watch?v=iROF9Dd7FXA
Farming this out to humans via CAPTCHAs ("Flag the scam comment") risks the Cobra Effect.
Viewing it through an economic lens, the other end of the problem to attack is naïve users: if they're identified and don't see the comments, the scammers stop making money.
Another creator-centric model is to disable comments on YouTube and only have comments on another platform that embeds the videos.
The YouTube API has `list`, `setModerationStatus`, and `markAsSpam` endpoints. Could a service automatically moderate comments using its own heuristics?
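In principle, those endpoints are enough for a comment-moderation bot. A sketch with google-api-python-client, assuming the channel owner's OAuth credentials (the heuristic itself is a placeholder, and moderation calls won't work with a plain API key):

```python
from googleapiclient.discovery import build

creds = ...  # channel owner's OAuth2 credentials (auth flow omitted here)
youtube = build("youtube", "v3", credentials=creds)

def looks_like_scam(author: str, text: str) -> bool:
    # Placeholder heuristic; plug in name/avatar similarity, link checks, etc.
    return "telegram" in text.lower()

threads = youtube.commentThreads().list(
    part="snippet", videoId="VIDEO_ID", maxResults=100, textFormat="plainText"
).execute()

for thread in threads.get("items", []):
    top = thread["snippet"]["topLevelComment"]
    snippet = top["snippet"]
    if looks_like_scam(snippet["authorDisplayName"], snippet["textDisplay"]):
        # "rejected" hides the comment; banAuthor also blocks the account
        # from commenting on the channel again.
        youtube.comments().setModerationStatus(
            id=top["id"], moderationStatus="rejected", banAuthor=True
        ).execute()
```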
Yes, there are third party solutions for this - and they work a LOT better than YouTube's official moderation.
Also... if you think this is problematic for people who aren't really old... imagine how it plays out for the elderly.
Your parents & grandparents should NOT be on social media. They're just going to get ripped off.
If you have elderly family who you care about, spend time helping prune services and educate them on how to spot a fake post.
A spam filter is basically a hello world of machine learning. Most people here could solve it in a day. Maybe youtube gets some kickbacks from scammer operations?
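"Hello world" is only a slight exaggeration; a bag-of-words baseline is a few lines of scikit-learn (toy data below; the hard part at YouTube's scale is the adversarial iteration and false-positive budget discussed elsewhere in the thread, not the classifier):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled data for illustration; 1 = scam, 0 = legitimate.
comments = [
    "Great video, learned a lot about deck building!",
    "Congratulations, you have been selected! Message me on telegram to claim",
    "Thanks professor, picked up these sleeves on your recommendation",
    "You won a prize!! contact me on whatsapp to receive your reward",
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(comments, labels)
print(model.predict(["message me on telegram to claim your prize"]))  # expect [1]
```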
Feels like YouTube could create an official mechanism for giveaways and information exchange to combat the problem.
Also, they could disallow external links in comments.
> Also, they could disallow external links in comments.
Bots will now post a link to another YouTube video (internal) while pretending to be the creator. The linked channel will look identical to the creator's channel. The private video will play "You've been selected as a winner! Please check the description for details on how to redeem your gift". Heck, use AI to fake the creator's voice and play that as a voiceover.
Etc.
The fundamentals are hard to fight here. Automating these processes is low effort / high-enough reward for scammers, and fighting it is plugging holes in a leaky boat.
Someone (YouTube) should just run a recurring ad campaign. Instead of traditional ads it's "Congratulations, you're one of today's lucky 10,000 and you're going to learn about bots and boundaries you should establish while on this platform." [0]
The number could be scaled up. They could select out populations that seem 'with it' and bias it toward anyone who interacts with bot comments other than to report them. I'm sure there are other useful signals.
Without getting into the 'how to block bots' side of the problem, this is one way YouTube could help with the user education without individual creators having to make videos like this or recurring community posts. As noted in other comments, a purely technical solution to ban bots probably isn't going to work.
I like the idea of using advertisements to progress technical literacy and reduce the proliferation of scams.
That's why I bought the domains LearnComputersFast.com, EasyComputerGuide.com, and BestComputerAdvice.com. Instead of pointing grandma towards sponsored content, dark patterns, and scammers wanting her credit card number, they would point to FOSS/OSHW/Linux content. Maybe that's not what people are really searching for, but maybe we should also live in a world where FOSS/OSHW/Linux is positioned as the mainstream.
Only problem is I don't know how to make websites.
We could just get rid of comments on videos altogether. The benefit of squelching the mechanism scammers and trolls have available to them seems to greatly outweigh whatever is lost by just getting rid of comments.
In the same vein, let’s just stop using email.
Implement a verified checkmark using YouTube Red, so they cannot endlessly spawn new verified accounts.
Same thing with Elon Musk giveaway scams too. YouTube does not care, or at least does not act fast enough to prevent scammers from making money and repeating the scam.
This is the "Tolarian Community College" (a play on words from the MtG card game).
Without having to watch the video, this Youtuber does a lot of reviews and content on the game. During the course of reviews, he gets a lot of free merchandise. To that end, he then gives it freely to his subscribers.
The problem: there's a deluge of bots that near-instantly post on comments and threads on YouTube and redirect unsuspecting users to scammer Telegram channels, Discords, etc. The common scam is "pay for shipping and you get free stuff". (This YouTuber himself pays for the shipping when sending free product.)
To compound this, it is definitely an automated, bot-driven attack on not only current videos but also all his historical videos. And YouTube/Google/Alphabet doesn't provide anywhere near the tools needed to counter these types of botstorms.
He pleads for anyone working at YouTube etc. to get in touch and/or make tools available to disable these scammers.
Someone from the YouTube creator team commented back on the thread, but it's pretty generic -- they're working on new tools and they're just as frustrated.
It's a well done video explaining the problem and really the frustration of creators who are fighting these bots and haven't given up.
I've seen this sort of Telegram bot attack on dozens of channels. It's not targeting him specifically so much as it's hitting all of YouTube, for any channel with enough subscribers. YouTube is completely failing to keep the bots out of the comments.
Literally every public Instagram post has a number of "first 5 people to DM me gets 5000 bucks!" comments with links.
After an ad, and a minute and 30 seconds of talking head, he still hadn't even started getting to his main point.
Did you even watch the video? There were no ads in it, and the part where he warns his viewers that he will never ask them for money is the whole thing he's talking about, that scammer bots are impersonating him in his comments and trying to trick his viewers.
I had to watch an ad before his video played. I don’t think it was him placing the ad so much as youtube just doing its thing.
I suppose they share that ad revenue with TCC.
I paused the video at 1:24 to come here and I already know exactly what this video is about. Maybe your listening comprehension skills are lacking
First, use a good adblocker, like uBlock Origin. There are no ads in the actual video the Professor posted.
Secondly, his audience is tabletop card gamers. He does set the scene as to why he's making a video. Most of his audience is not of a technical nature.
I also made a synopsis as a comment so you didn't have to watch the video, since some (including myself!) despise video-only content. But I do follow him on YT. And his problem applies to a LOT of areas on YT, Twitter, Facebook, Reddit, and elsewhere.