A Message About Vanguard From Our Security and Privacy Teams
riotgames.com

Explaining your rationale doesn't change the fact that gamers (many unwittingly) are potentially giving the keys to their computer kingdom to Riot. This behavior on a console would be completely acceptable, but unless you're running a dedicated PC for gaming, I wouldn't install this software.
As a thought experiment, I wonder what happens when the FISA court orders Riot to install a modified version on a suspected terrorist's computer. No need for privilege escalation when you can just ask the user to install it at ring-0.
> unless you're running a dedicated PC for gaming
That's the approach I've been taking for a long time now.
If you don't, you will always either a) have your fun ruined by trying to be security-conscious, or b) in the end most likely give in and allow things you really shouldn't allow on a trusted machine, because otherwise you can't achieve your task (getting a game to run).
So I have a game box, try to make sure that nothing important ever touches it (which is a huge PITA when game clients insist on forcing email-based 2FA on you), but in exchange I don't worry too much about its security.
That also fits nicely with games requiring Windows 10 and Windows 10 being so outright privacy- and user-hostile that I can't imagine running it on my primary machine.
My next gaming PC will run a Linux hypervisor and use PCI passthrough to run Windows as a full-performance guest. Then if I need to use a web browser, I can switch to a Linux guest without interrupting the game.
Honestly, you're just risking getting banned then; some games already ban Wine users, and a hypervisor is basically the peak of hiding direct memory access, so I imagine anti-cheat engines look for them.
Also, I did this around 3-4 years ago. It works, but once you have it set up it's basically the same as having two computers on your desk with a KVM switch in software. It also has a tendency to be unstable as all sin, and some IOMMU-isolated hardware may misbehave when assigned to a virtual machine.
Most anti-cheats and some "DRM solutions" do not allow you to run inside a VM, and trying to mask the fact that you do might be enough to get banned. Even with PCI passthrough you can't expect full performance (the CPU is still virtualized).
It's much simpler to just have a second PC/laptop or dual-boot (less secure).
Simpler in a technical sense, but given that high-end gaming PCs run into the thousands of £s, it's not really the right solution to just buy another machine; the better choice is probably to not play their game (in all senses of the phrase).
Maybe a viable option is to hot swap your drives, and use something with firmware you can sign personally and verify on boot.
> CPU is also still virtualized
Hardware virtualization (not the IOMMU, which covers devices) grants the guest near-native access to the CPU, although it does have to be shared between the host and guests.
>or dual-boot (less secure).
There shouldn't be any risks to that if your main OS is encrypted and the keys are sealed by a TPM.
The untrusted system could flash malicious firmware to a component with DMA (e.g. GPU VBIOS) to infect the second system.
I wanna try this, but PCI passthrough seems hard. And with a KVM Windows 7 guest, I get too many certificate-invalid errors when accessing HTTPS, which is annoying.
I've done it all.
It's a hassle, mostly because you need to detach the GPU from the Linux host before passing it through, which means you need a second GPU to power the Linux host (integrated graphics is fine).
Then there's a bunch of config regarding IOMMU groups and other shit to make sure it picks it up fine, and when it finally does you get 90-95% of the performance on average FPS and 60-70% on minimum FPS (spikes are way worse).
This was my exact plan for my (current) computer a few years ago. But after learning about all the real-world complications I became lazy and abandoned the idea. Is this in a realm of "easily achievable out-of-the-box on a standard Linux installation" now?
Assuming you have recent hardware and a compatible UEFI firmware, yes. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...
Are you sure that you've researched this thoroughly and there would be no problems with that PCI thing (compatibility, unknown errors, performance regressions)?
It's hardware-level passthrough with zero performance or compatibility hit. The catch is that the guest needs exclusive access to the device, i.e. you need two GPUs: one for the host and any other guests, and one dedicated entirely to the passthrough VM. There are a few applications, like Chromium, that incorrectly detect the GPU configuration and need manual overrides.
Also, it helps to use a recent AMD card and the in-tree amdgpu driver instead of the out-of-tree nvidia driver.
Overall, you trade software problems for hardware problems (UEFI firmware versions can break the setup), but if you get it working it works great.
> Vanguard does not collect or process any personal information beyond what the current League of Legends anti-cheat solution does.
They are tiptoeing quite carefully there.
I wonder if a lot of that "collect or process" could previously be blocked by users, but the kernel module now prevents opt-out attempts and identifies everyone.
this sounds like language (and, presumably, implementation choices) made to comply with their privacy policy and GDPR.
All software that you install on the main desktop operating systems is given the "keys to their computer kingdom": there is no privilege separation or sandboxing, except for the "user vs root" division, which can be trivially bypassed in countless ways (and anyway, most installers require root privileges).
And yes, obviously you need to have a dedicated gaming PC and certainly not install any games or any software that isn't strictly necessary on the systems/VMs with important data.
To some degree that's true. I keep an eye out for programs that insist on running as root. And if someone breaches my account, they've still got to put the work in to escalate their privilege through one of these programs.
I've also been installing more and more software into ~/bin rather than the more traditional /opt and /usr/local/bin. I think that the trend towards usermode software will take over in the next five years.
Usermode software might be far more dangerous though. Any software you run on your machine can change the files in ~/bin, and you won't know the difference.
The user vs root division does not need to be bypassed for a game. Riot does bypass it with their kernel-mode driver for the anti-cheat mechanism.
As the parent comment noticed, there is no need to bypass anything. Just ask the user for root permissions like any other installer and the user will accept.
That's how Riot installs their thing in the first place and that's how everybody can install their own thing.
Your thought experiment involves a targeted government attack, and in theory they can order any game company to install any virus on some computer during an update. That's hardly an argument against this thing.
What are the realistic security issues with ring-0 access on a personal computer? I bet most interesting stuff on personal computers is easily accessible with normal user privileges that every game client has.
> I bet most interesting stuff on personal computers is easily accessible with normal user privileges that every game client has.
Which is why the current tendency is towards more sandboxing, not less; things like flatpak on Linux, the app stores on Windows and Mac, the heavy sandboxing on phones, and so on. Running an in-kernel component for an application goes against that.
>I wonder what happens when the FISA court orders Riot to [...]
FISA? Try the CCP.
Yes, Riot Games, which is owned by Tencent, which is an arm of the Chinese Communist Party. Hmmmm. Just the people we need to install rootkits on millions of computers.
>we wouldn’t work here if we didn’t deeply care about player trust and privacy
Bold message from a Chinese company. People freak out about Huawei, but Tencent is 1000% worse. And here they are installing a kernel driver on your PC.
This is being downvoted, but this is an important point. The Chinese government has repeatedly shown they'll work with Chinese companies to carry out the government's agenda.
Do you really think that after 100M people install this kernel driver that the Chinese government won't lean on Tencent to gain access, or use it beyond its original purpose?
So let me ask you a question then..
Do you feel the same way about Microsoft and Apple, and every other company that provides a hardware driver for a modern computer, and whether state governments (USA included) put pressure on them to let them advance their agenda by using back doors in their drivers or software?
Why is Riot special in all this? What, in your view, makes them more likely to be so secretly and so deeply corrupted in the manner you suggest?
Note I'm not asking you if you run MacOS or Windows.
Your argument boils down to, "If one country has access, then every country should have access."
I don't agree with that.
It's clear the US has backdoors. That doesn't mean it's wise to invite China to add backdoors as well.
I am not arguing anything, and would never say anything that ridiculous.
I just find it tedious and irrational to see people up in arms about this contrived and unlikely scenario (a video game company is going to spy on you - a random nobody - for a big bad foreign power), while not being up in arms about the much bigger and more likely vectors of compromise they are exposed to constantly (like your operating system or cell phone).
But of course protecting yourself from those possibilities would require real sacrifice and inconvenience, so let's not talk about it.
You've thrown out two new arguments:
1. "Nobody playing this game is important enough to be spied upon."
It might surprise you to learn that some people in the military, congress, the DoD, and even important individuals in significant companies play video games.
2. "Some vulnerabilities exist, therefore any new vulnerabilities should be ignored or not discussed."
All vulnerabilities should be considered, especially new ones that will affect 10s or 100s of millions of people. That's why we're discussing it. Since you find it tedious, you're free not to participate.
I'm not sure if you lack comprehension, or if you are just really paranoid and can only see things in absolutes, or if I'm writing poorly. But yet again you've taken what I've written and somehow twisted it into something ridiculous.
> It might surprise you to learn that some people in the military, congress, the DoD, and even important individuals in significant companies play video games.
Anyone in this scenario who is using the same computer to run any untrusted software (like all games) as they are using for their national security work is already compromising themselves.
> "Some vulnerabilities exist, therefore any new vulnerabilities should be ignored or not discussed."
This would be a more productive conversation if you addressed my points at face value, and made your own without twisting my words into whatever convenient position you want to argue against. That's the part I find tedious.
Everything is a matter of degrees; you seem to only be willing to consider extremes.
Of course if you work in a sensitive position or are a likely target of foreign spying, you should take many more precautions. But that's not most people, in fact that's almost no one, statistically speaking. So if we're going to discuss likely compromise scenarios, the risk-reward on using a high-profile video game company as a vehicle for APT state-level actions starts to fall into "movie plot" territory, in my opinion.
And I never said that new vulnerabilities should be ignored or not discussed. Again, possible ≠ plausible.
In fact, you are basically contradicting yourself at this point because I first brought up way more plausible vulnerability scenarios (your underlying operating system being compromised) and you dismissed that in favour of some narrow and much more implausible scenario (a US-based video game company as a deep-state plant for a foreign government).
Keep moving those goal posts..
Where do you think drivers for your hardware come from? You know, the ones that already silently update through Windows Update?
It's absolutely not clear that the US has backdoors into any Apple product. Apple has fought pretty hard to ensure that their devices remain something that a user can feel safe and secure storing their private data on.
I have no insider information here.
But if we're talking about plausibility, then it's much more likely that your underlying operating system, regardless of vendor - Microsoft and Apple are the major players - has been compromised in some manner, or contains the hooks for on-demand compromise if compelled by a state actor.
China passed a law in 2017 requiring all Chinese citizens and organizations to comply with their intelligence departments in relinquishing any information it needs, as well as to keep it secret.
See https://en.pkulaw.cn/display.aspx?cgid=313975&lib=law
A US agency may put pressure on a US company, but the company would be perfectly within its rights to refuse to comply. The only exceptions are well documented and go through the judiciary which is separate from the executive branch of government.
A Chinese company has by law no choice but to comply.
You've heard of the FISA Court right? And all the details that Snowden released about it? How do you see those secret requests as not effectively the same thing as what you are describing about China's laws?
I don't consider myself a tin-foil-hat wearing type, but even I don't believe that our (western/NATO/5-Eyes etc) governments don't have their own secret powers they can use to compel businesses to comply with information gathering requests without divulging that they did so.
> ...some of you want to know more about the tech behind Vanguard. We can’t get too deep into the technical specifics without potentially compromising Vanguard...
That in itself tells me enough about the efficacy of the system. Security through obscurity is only a hand-wave at security. Trading away all the security architecture put in place over the past decades for something that needs to be hidden to remain secure is a really poor value statement.
I understand why they want this in place, it does raise the level of effort on cheating but there are other ways this can be accomplished without compromising a user's security.
The inherent issue with anti-cheats as compared to anti-virus software is user intention.
A user who installs an anti-virus program wants that program to do its job and find bad actors. The virus, on the other hand, is completely unwanted by both the user and the software: its existence is threatened on all fronts.
However, an anti-cheat lives in an extremely adversarial environment. The cheater (and the cheat) wants the cheat on their computer. As such, the user will be willing to take extra steps to assist the cheat. This makes the anti-cheat software, in this case, the 'unwanted' virus, so it has to exist in the most hostile of environments and somehow detect programs which have higher privileges than itself.
That said, cheating is something that will not go away. Years and years ago, I developed with a friend of mine a completely undetectable cheat for all games on the HL2 platform. It involved a second computer, which man-in-the-middled all network data to the client computer. This second computer would then display a 'radar' of where enemies were. As the anti-cheat would have no possible way of knowing of the existence of this second computer, there was not much they could do.
If you wanted to get more aggressive with the system above, you could have that second computer modify outbound requests as well. So if you shoot your gun and it would have hit the ground, it will now instead shoot an enemy in the head; as such, even something like an aimbot is entirely possible with this setup.
However, there is indeed an anti-cheat which can detect all known cheats, and it's basically what Valve did/does for CS:GO: allow users to report suspected cheaters and then have the community analyze the reports. This catches all blatant cheats, but unfortunately will never get rid of radar/ESP cheaters, only aimbots and the like.
Honestly, it sounds to me like there is a business model in the above. Years ago we had companies like EvenBalance/PunkBuster, Easy Anti-Cheat, etc., which provided software-based anti-cheat systems. As you would expect, most would be bypassed and a daily cat-and-mouse game would ensue. The solution imo is to create a SaaS where you essentially provide a reporting + monitoring tool. Users of your game can report suspected cheaters (which includes the demo file / VOD / replay / whatever) and your trained wet-ware staff would review all reports and take action where necessary. No invasive software necessary. Actually, no software on the end user's computer would be necessary at all; it is all done on another user's PC.
In fact, if someone is interested in doing the above, hit me up. Sounds like an easy win.
> Years and years ago, I developed with a friend of mine a completely undetectable cheat for all games on the HL2 platform.
> It involved a second computer, which man-in-the-middled all network data to the client computer.
Out of interest, was there no transport level encryption to deal with here? Or did you need to do something special to capture keys on the client?
I believe newer Valve multiplayer games (e.g. Dota, CSGO) use Steam Networking instead of the game sending UDP itself. Packets sent with Steam Networking are encrypted[0].
Before CSGO moved to Steam Networking, the game itself encrypted the packets. I can't remember exactly when this was introduced, but it's still in place - see https://github.com/alliedmodders/hl2sdk/blob/acf932ae06b64b7...
[0] https://partner.steamgames.com/doc/features/multiplayer/netw...
In order for your game to render other players you have to know their position, so the game server has to send them to all players.
As an example, for CS:GO in the past, the server always sent all player positions from anywhere, so it was possible to create cheats to draw players anywhere on the map. They changed the way it's done: coordinates are only sent when other players are nearly visible, whether distant or close by. This limited the way wallhacks work; it's not possible to see where players are from far away :)
What needs to be done is to reverse engineer the communication protocol. If encryption is used, some kind of key to decrypt it has to be somewhere in your game client. Then you can convert 3D coordinates to 2D and even draw a radar on your smartphone if you make an app.
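For illustration, the 3D-to-2D conversion for an external radar can be sketched like this. Everything here (function and parameter names, the scale factor, the coordinate convention) is an assumption for the sketch, not anything from a real game's protocol:

```python
import math

def world_to_radar(player_pos, player_yaw, enemy_pos,
                   radar_size=200, world_scale=0.005):
    """Project an enemy's 3D world position onto a 2D top-down radar,
    centred on the local player and rotated to match their view angle.
    All names and constants are illustrative."""
    # Work in the horizontal plane; a top-down radar ignores height.
    dx = enemy_pos[0] - player_pos[0]
    dy = enemy_pos[1] - player_pos[1]
    # Rotate so "up" on the radar is the direction the player is facing.
    yaw = math.radians(player_yaw)
    rx = dx * math.cos(yaw) - dy * math.sin(yaw)
    ry = dx * math.sin(yaw) + dy * math.cos(yaw)
    # Scale world units to radar pixels and centre on the widget.
    half = radar_size / 2
    return (half + rx * world_scale * half,
            half - ry * world_scale * half)
```

With yaw 0, an enemy 100 units straight ahead lands halfway between the radar's centre and its top edge; the drawing itself can then happen on any second device.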
>In order for your game to render other players you have to know their position, so the game server has to send them to all players
I know nothing about game engines, but I have always wondered why is that the case. The server could compute visibility and only send the opponent position if there is a chance the player might see it. Computing visibility server side is not cheap, but it would still be significantly cheaper than fully rendering a scene, right?
Riot's Fog of War for Valorant does exactly what you describe.
https://technology.riotgames.com/news/demolishing-wallhacks-...
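The culling idea can be sketched on a toy 2D grid. The grid representation, the sampling "raycast", and the function names are all illustrative; Riot's actual system works on full 3D geometry with latency compensation:

```python
def line_of_sight(walls, a, b, steps=64):
    """Return True if the straight segment from a to b crosses no wall cell.
    `walls` is a set of (x, y) integer cells that block sight. Sampling the
    segment like this is a simplification of a real engine's raycast."""
    (x0, y0), (x1, y1) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (int(x0 + (x1 - x0) * t), int(y0 + (y1 - y0) * t))
        if cell in walls:
            return False
    return True

def positions_to_send(walls, player, enemies):
    """Server-side cull: only include enemies the player could plausibly
    see, so a wallhacking client never receives hidden positions at all."""
    return [e for e in enemies if line_of_sight(walls, player, e)]
```

A production system also has to predict movement and compensate for latency, revealing enemies slightly before they peek, otherwise players visibly pop in.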
That heavy lifting wasn't done by myself, so I unfortunately don't have an answer for you. This was around a decade ago, however, so I would not be surprised if the traffic was unencrypted.
Source did not encrypt network traffic until DeepBlueSea released NetShark for CS:GO.
Now it uses ICE, a 64-bit block cipher from the DES era. The key is obtained from the Steam servers over the normal Steam encrypted channel.
The future of anti-cheat is machine learning specific to the game. CS:GO already does this, where it used the Overwatch community review program to train it, and it can now automatically ban some cheaters.
I don't think it's a viable model because players are willing to do it for free, as CS:GO's Overwatch shows.
Valve also put a significant amount of work into this system. Asking every game developer to build that system for their game seems like a lot to ask, especially when they can just drop in a few lines of code or a third-party software package and have cheating 'handled'.
'Not invented here' is a blessing and a curse.
Public relations for a startup like this would be hard to manage. I can already see the front page Reddit post with 4,000 upvotes on the game's subreddit asking why they lost $400 worth of items because one of the outsourced employees being paid $3/hour illegitimately banned them. Easy target to blame a company like this.
Cheater effort and quantity scales roughly with game revenue and popularity. So the first tier of games, the most popular and long-term ones, like League of Legends, CS:GO, Overwatch, maybe Valorant, Apex Legends, Fortnite, can afford machine learning. The next tier down can afford to implement community review programs, where players earn in-game rewards and the satisfaction of improving game experience.
To be fair, this happens regardless, every day. Nobody believes anyone suspected of cheating, ever. And I mean ever. Just go look at the Steam forums and the thousands of "I was falsely banned" posts. If what you are saying were true, we would already see this happen on Reddit every day for Steam, and we don't.
Also, that's not to say you can't have a second and third tier of support to escalate your case to if you think you were wrongly banned, which wouldn't go to the grunts.
This is because most of the cheating bans currently are not human reviewed, it's technical evidence. Closest to this issue I can remember is that some pro CS:GO player was banned by the Overwatch program a few years ago and a fuss was made until it got fixed.
Trying to review a replay to determine if a player is using wallhacks? This would take intimate knowledge of every game the SaaS reviews.
Maybe this can work out and I'll be like the one 2007 HN comment about Dropbox, but it takes an average of maybe 5-10 minutes per case to review if you're not being super thorough. It could be an open platform where players can sign up, but at that point I think game developers would just implement it in-house. The harder part of this technically is the replay functionality in the first place, which they'd have to do anyway.
How do you know if someone is lying on a virtual forum? All of the cheaters that got rightfully banned have just as much reason to write the same forum posts as those who were wrongly banned. That combined with the fact that the large majority of people (probably >99%) who don't cheat never get wrongfully VAC banned makes it hard to believe that the system made a mistake.
To my knowledge radar cheaters are now dealt with by not sending information about enemy positions if they are behind walls. It's not a perfect solution ofc but it seems like it works reasonably well.
The current one in Valorant is a simple wallhack:
https://www.youtube.com/watch?v=ATkpqYmWt8k&feature=youtu.be
Depends, there are many methods of doing it. Many games let you hear gunshots/grenades/etc.. that are far away. You can use those sounds to show a radar spot.
I skimmed, but it seems none of this addressed why the service (edit) runs at boot time? Also, expecting a service not to look at your data when they have access is not security.
If Valve can mitigate hacking in CSGO without such an intrusive service, I am sure Riot can. I, myself, did a very, very, very poor job with an autoencoder to detect anomalous matches in Dota and caught a large amount of players abusing the system. As far as I know, CSGO anti cheat does involve an ML component.
My point is that a non-intrusive anti cheat, advanced analytics, and tracking of user feedback goes a long way.
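That kind of analytics can start very simply. As a hedged, one-dimensional stand-in for the autoencoder mentioned above, here is a population z-score sketch; the scores, feature, and threshold are all made up:

```python
from statistics import mean, stdev

def flag_outliers(matches, threshold=2.5):
    """Flag player-matches whose score is an extreme outlier versus the
    population. `matches` maps player id -> per-match scores (e.g. hit
    accuracy). A real system would use many features; this just shows
    the shape of the idea, and the threshold is invented."""
    all_scores = [s for scores in matches.values() for s in scores]
    mu, sigma = mean(all_scores), stdev(all_scores)
    flagged = []
    for player, scores in matches.items():
        for s in scores:
            if sigma > 0 and (s - mu) / sigma > threshold:
                flagged.append((player, s))
    return flagged
```

The appeal is that nothing runs on the player's machine; the server already has every stat it needs.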
Ofc, none of this matters. If the playerbase actually cared, they'd boycott or stay away. And I cannot remember the last time gamers ran a successful boycott campaign.
edit: Also read that uninstalling the game will not always uninstall the ring-0 anti-cheat. I can't verify since I would never install this on my system, but for what it is worth: that is terrible IF true.
Hackers in standard CSGO games are rampant from what I understand.
Serious players pay extra to queue up in a dedicated service for high tickrate servers and anti-cheats which I believe are rootkits as well.. not sure about any of this though.
It's not a solved problem for CSGO, but surprisingly the situation now, as a F2P game, is much better than what it was before F2P. It's really quite rare to run into cheaters, most people are just smurfing.
rampant years ago. It took a long time to get where it is now. There are still cheaters here and there, but that is to be expected, and relatively rare in my experience.
It is absolutely still rampant. I could count at least 5 cheater encounters in the last 30 days (blatant cheaters, btw).
They try their best to isolate cheaters with a "trust factor" system but the reality is, unless you pay an external service with their own anti-cheat software (that's probably as bad as Valorant's) you will get a high amount of cheaters.
Given they have zero transparency on the trust factor system, I could have a lower factor than you (I definitely rage too much), so because of it I see them more often. But there's no way to know if I'm in the cheater bubble, or you're in the no-cheater bubble.
I agree that it could be more transparent. I haven't faced a single suspicious player in quite some time (and similar with my friends that I talked to about this since this came up). Sorry that your experience is worse. Player "toxicity" should not be involved in this since it might be used as a proxy.
To be clear, that's not because there are no cheaters. They've just finally implemented a behavior score and shadowbanned suspected cheaters and toxic players that way, with some help from the player-trained ML mentioned upthread.
If non cheaters do not play with cheaters, mission accomplished.
It actually doesn't look like the service is always running (although the kernel driver is). I haven't played Valorant since I started my PC, so the `vgc` service for vanguard has no status (and is in "manual" mode).
edited to be more careful with my words. thanks. Some users have reported it as always running, but I haven't seen any confirmation.
I think a Riot employee stated that cheaters would try to start cheats before the anticheat starts, so the anticheat running at all times is trying to prevent this.
CS:GO has a lot more hackers than games with more intrusive anticheats like Overwatch in my experience. Only solution is an invasive anticheat, machine learning, and trust factor systems.
I have not seen anything conclusive on CSGO having more trusted players playing with cheaters than overwatch. My anecdotal evidence for both games also goes along with this.
League of Legends is a real pain in the ass to play even when you're doing everything right. Personally I don't even like the game; it's just popular, so I played it to hang out with friends. The way their launcher handles updates is crazy inefficient, so it always takes hours to launch, if it launches at all. It also runs terribly in Wine.
They recently improved their updater: https://technology.riotgames.com/news/supercharging-data-del...
...Hours? The longest update I've ever had for League of Legends (in the ~6 years I've been playing casually) is about 20 minutes. And I'm on Mac—not exactly the high-priority platform for them.
Since they changed their launcher system a few months ago, it's been unusual to have to wait more than ~2 minutes for a new patch.
That's true if you update often.
If you're like me and only played occasionally the updates would build up and take very long.
Whenever I hear/read lots of words about how secure something is and how strong their commitment to security I think “they don’t know what they don’t know”.
We should all admit that we don't know what we don't know. But the default behavior afterwards should be to assume that the software/system is insecure, fixing the defects we can find and surrounding in by rings of moats (defense-in-depth). When you don't know what you don't know and then declare it to be secure, there's an extra layer of indirection and perhaps a bit of hubris.
I was concerned about the index funds for a moment...
This will always be a cat and mouse game. There are some anti-cheat software more intrusive than others. Even Valve Anti-Cheat (VAC) which is considered by many to not be very intrusive, used to intercept DNS queries to detect communication with paid cheats DRM.
Most anti-cheats also scan all processes' memory and even files to detect known cheat signatures. They tend to run with high privileges, and some take in-game screenshots for analysis. Basically they have permissions to do anything and receive silent updates.
I wonder if statistical methods to detect cheaters result in too many false positives.
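The signature scanning mentioned above can be illustrated with a toy byte-pattern matcher. The wildcard convention mirrors the "48 8B ?? 12" style signature strings both cheat authors and anti-cheats trade in; the pattern and bytes below are invented:

```python
def find_signature(memory, pattern):
    """Search a byte buffer for a signature pattern. `pattern` is a list
    of byte values where None is a wildcard byte. Returns the offset of
    the first match, or -1. Purely illustrative."""
    n = len(pattern)
    for i in range(len(memory) - n + 1):
        if all(p is None or memory[i + j] == p for j, p in enumerate(pattern)):
            return i
    return -1
```

An anti-cheat would run something like this over every readable region of every process, which is part of why they need the high privileges described above.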
> Even Valve Anti-Cheat (VAC) which is considered by many to not be very intrusive, used to intercept DNS queries to detect communication with paid cheats DRM.
I was surprised hearing this. It seems like what they actually did was if VAC already found something, it checked the hashes of the contents of the DNS cache against a list as a second check. That's quite a bit different from "intercepting DNS queries".
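A sketch of what that second-stage check might look like, assuming the client ships only hashes of the blocked domains so it never reveals which ones it is looking for. The hash choice (SHA-256) and all names are guesses, not Valve's actual implementation:

```python
import hashlib

def match_dns_cache(cached_hosts, blocked_hashes):
    """Return cached hostnames whose hash appears on a blocklist.
    Comparing hashes rather than plaintext domains keeps the list
    opaque to anyone inspecting the client. Illustrative only."""
    hits = []
    for host in cached_hosts:
        digest = hashlib.sha256(host.lower().encode()).hexdigest()
        if digest in blocked_hashes:
            hits.append(host)
    return hits
```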
Overall VAC always made a reasonable impression on me as far as privacy and security are concerned (no SYSTEM services, no kernel driver, no screenshots, no scanning and uploading random files etc.), although this non-intrusive approach naturally limits the kinds of cheats it is able to discover. I feel like the approach taken by Valve is, on the whole, well balanced.
Source: https://www.pcgameshardware.de/Steam-Software-69900/Specials...
Yes, thanks for clearing up the intercept part; I didn't remember exactly how they did it. They do make the right decisions, in my opinion, balancing security/privacy issues at the cost of less ability to detect cheats. I think they also have a pretty good record of not banning innocent people.
If people want to play games in anti-cheat environments, the only sensible solution I can see involves the reinvention of the cartridge.
In this case, make the cartridge a bootable SSD which entirely avoids touching any other disk in the system (perhaps with the exception of an SD card or USB storage stick for saves.)
The downsides include:
- the game company now has to ship a complete OS and do hardware support. They nearly have to do that anyway, so whatever.
- you'll need to reboot your computer for each game.
The upsides, I think, are obvious.
The other option that is touted a lot is cloud gaming, with services like Stadia.
There are outstanding issues to resolve there, like input lag and visual fidelity, but it certainly removes the ability to cheat at the system level by hooking into game processes and memory.
Aimbots would still be theoretically possible through MITM video-feed analysis (as has been speculated), but that would also work in your cartridge scenario.
Or just ship on consoles with keyboard/mouse support. Current gen consoles have not been jailbroken in their 7 years on the market.
yeah, a Chinese company will gain root on your PC to stop you from tampering with memory, but it's totally fine guys, don't worry
Potentially dumb question: how do cheats even work in a game like LOL? I understand aimbots in a FPS and how they can give a pure mechanical advantage, but the LOL equivalent isn’t obvious to me. Does the client have access to data that’s not supposed to be exposed to the player?
Aimbots work in LoL too, there are champs that are balanced around lots of skill shots (Xerath) who you’d see hitting every single shot all game. There’s also a lot of scripting, both for account leveling or just to automate boring parts of the game. You’d see people afk playing their lane for 20 mins and not responding to anything happening in the game, then suddenly running into the other team for a big fight.
There are still aimbots in LoL because there are aimed skill shots for most of the champions. Scripting abilities is probably the main method of cheating, though.
This anti-cheat software is for their new game Valorant, which is a Counter-Strike-like shooter.
As someone who mainly deals with web services, this all seems really weird to me. I was told from very early on "never trust the client". There was a lot of emphasis on server-side validation; client-side validation was only ever for UX, e.g. highlighting the field in red instead of making the user submit the form first.
Reading through this, it seems the game development world is doing the exact opposite and pushing all the "security" measures to the client. Is that incorrect? If it's correct, does anybody have any idea why?
You’re a bit out of your depth. Of course “trust the server” is preferred but many forms of cheats are purely client side. For example an aimbot that steadies your cursor on someone’s head or dodges automatically when a projectile is inbound. Maybe the client hijacks the UI to hide terrain and walls.
I’m not saying what valorant has done here is right, there are other things you can do. But you’re oversimplifying the problem.
I understand that but it feels like there's a lot of focus on client-side anti-cheat while cheats that should be trivially detected server-side still exist (like flying through the air in a game where that shouldn't be possible).
Plus, there seems to be a lot of focus on client-side anti-cheat when a lot of it could be addressed server-side:
> For example an aimbot that steadies your cursor on someone’s head or dodges automatically when a projectile is inbound.
This sounds like a similar problem to "like" fraud and things like that. Couldn't it be addressed by measuring the number of incidents? If someone is able to headshot or dodge at an abnormal/superhuman level, that can be detected server-side and the user banned (or flagged for human review).
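To make the idea concrete, here is a minimal sketch of that kind of server-side flagging. All the names and thresholds (`min_sample`, `human_ceiling`) are hypothetical illustrations, not values from any real anti-cheat system:

```python
from dataclasses import dataclass

@dataclass
class PlayerStats:
    skillshots_fired: int = 0
    skillshots_hit: int = 0

def should_flag(stats: PlayerStats,
                min_sample: int = 200,
                human_ceiling: float = 0.85) -> bool:
    """Flag for human review when accuracy exceeds a plausible human
    ceiling over a sample large enough to rule out a lucky streak."""
    if stats.skillshots_fired < min_sample:
        return False  # not enough data to judge
    accuracy = stats.skillshots_hit / stats.skillshots_fired
    return accuracy > human_ceiling

# 190 hits out of 200 fired is 95% accuracy, above the ceiling
print(should_flag(PlayerStats(skillshots_fired=200, skillshots_hit=190)))  # True
```

The real difficulty, as replies below note, is choosing thresholds that catch cheaters without false-flagging genuinely skilled players.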
> Maybe the client hijacks the UI to hide terrain and walls.
Someone mentioned a solution for this elsewhere in the thread: don't send positions of important resources to the client if it doesn't need them. Keep the client about as blind as the player.
And again, you should be able to detect this server-side. If somebody has an abnormally high kill-rate for enemies coming around a corner, flag them for review.
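The "keep the client blind" idea above can be sketched as a toy server-side visibility check: only replicate an enemy's position to a client when a straight line between the two players doesn't cross a wall. The grid representation here is entirely made up for illustration, not how any Riot game actually works:

```python
def line_of_sight(a, b, walls, steps=100):
    """Sample points along the segment a->b; blocked if any lands in a wall cell."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (round(ax + (bx - ax) * t), round(ay + (by - ay) * t))
        if cell in walls:
            return False
    return True

def visible_enemies(player, enemies, walls):
    """Positions the server is willing to send to this player's client."""
    return [e for e in enemies if line_of_sight(player, e, walls)]

walls = {(5, y) for y in range(10)}          # a vertical wall at x == 5
print(visible_enemies((0, 0), [(3, 0), (9, 0)], walls))  # wall hides (9, 0)
```

With this kind of interest management, a wallhack on the client has nothing to reveal, since the hidden enemy's position never reaches it in the first place.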
Developers can and in fact do all the things you suggest. False positives are generally to be avoided, and mitigations for reverse engineering are still required (anti-debugger and anti-DLL-injection measures).
All the stuff mentioned, like not sending positions of players who aren't visible, is typically already done, but sometimes the position is needed for reasons you don't understand, like some gameplay ability to suddenly see through walls.
This thread just has a lot of backseat programming.
I think I would find your post a little less irksome if you approached it from a neutral questioning tone as opposed to “what about these obvious things every junior engineer learns” :/
I'm not trying to condescend or be a backseat programmer, I apologize if my tone suggested otherwise. I know that I have no idea what I'm talking about and I know that there are plenty of competent game developers in the industry.
The problem is that I don't know what I don't know, so I can't directly ask it. The best thing I can do is to present the flawed results of my current understanding so that somebody more knowledgeable (such as yourself) can tear them apart and show me what it is that I'm missing.
> False positives are generally to be avoided
This sounds like the biggest difference to me. Generally in my limited experience in handling abuse on web platforms, the value of a single user is so low that a false positive doesn't really matter too much.
I suppose when it comes to games, each user represents a ~$60 investment and potentially a lot of time and emotional investment, so a false positive can't be so easily tolerated and there's an incentive to go to extreme ends (like intense client-side validation) that wouldn't make sense for say Twitter likes.
Also worth highlighting, Riot Games belongs to Tencent.
Considering that clients will always find a way to cheat, isn't it more logical to do all anti-cheat detection on the server side? Gather data from trusted players during the closed beta, and after launch just look for abnormalities in the data coming from clients.
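One hedged sketch of that approach: compare a live player's metric against a baseline distribution gathered from trusted players, and flag statistical outliers. The 4-sigma threshold and the sample numbers are arbitrary illustrations, not real tuning values:

```python
from statistics import mean, stdev

def is_outlier(value, baseline, sigmas=4.0):
    """True when `value` sits more than `sigmas` standard deviations
    from the mean of the trusted-player baseline."""
    mu, sd = mean(baseline), stdev(baseline)
    return abs(value - mu) > sigmas * sd

# hypothetical skillshot accuracies collected from trusted beta players
baseline_accuracy = [0.42, 0.47, 0.51, 0.45, 0.49, 0.44, 0.50, 0.46]
print(is_outlier(0.95, baseline_accuracy))  # superhuman accuracy -> True
print(is_outlier(0.48, baseline_accuracy))  # ordinary accuracy -> False
```

In practice the hard part is the long tail of legitimately exceptional players, which is why flagging for human review tends to be safer than auto-banning.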
Folks should build alternative clients for Riot's games. Riot has demonstrated that they should not be trusted to write clients.
That is a lot more effort than re-building the game, isn't it?
At that point, just make your own game, or easier yet, play another one.
I don't think it's a feasible plan, but there's probably some demand for that specific solution: people have friends who play Riot's games, so they want the ability to play those specific games without the invasive anti-cheat software.
Surely the servers enforce the Vanguard requirement
Doing something like this would probably be illegal if you wanted to connect to Valorant's servers or distribute it, it would be easy for Riot to detect and ban users of, and if it worked well it would become a haven for cheaters ruining Riot's games.