New malware abuses Microsoft IIS feature to establish backdoor

symantec-enterprise-blogs.security.com

72 points by xook 3 years ago · 61 comments


jimbobimbo 3 years ago

FTA: "In order to use this technique, an attacker needs to gain access to the Windows system running the IIS server by some other means. In this particular case, it is unclear how this access was achieved."

See also "It rather involved being on the other side of this airtight hatchway" series by Raymond Chen:

https://devblogs.microsoft.com/oldnewthing/20181219-00/?p=10...

https://devblogs.microsoft.com/oldnewthing/20211207-00/?p=10...

  • gerdesj 3 years ago

    "it is unclear how this access was achieved"

    Not a good line in a write up like this. Windows does write and store an awful lot of logs by default. However, thanks to circular logging with log sizes from the 1990s on critical logs, you can easily lose information.

    I can't remember what the defaults are (connects to 2016 AD DC) ... 20Mb for %SystemRoot%\System32\Winevt\Logs\Security.evtx . On a tiddly setup like mine (20 odd users), that will last ... less than a day.

    I ship the logs elsewhere for proper evaluation etc but 20Mb? Yes, you can fiddle with the default sizes via group policy and you probably should, but 20Mb really is straight out of the 1990s. OK so all the "core" logs seem to be 20Mb each and there are the rest under /Microsoft/Windows with varying sizes. I probably ought to look at what a PC logs these days - probably the same silly sizes.
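    How quickly a 20Mb circular log wraps is simple arithmetic. A back-of-the-envelope sketch; the event rate and average record size below are illustrative guesses, not measured defaults:

```python
# Rough estimate of how long a fixed-size circular event log lasts before
# wrapping. The rate and record size are illustrative, not Windows defaults.

def log_lifetime_hours(max_log_bytes, events_per_second, avg_event_bytes):
    """Hours until a circular log of max_log_bytes overwrites its oldest entry."""
    capacity_events = max_log_bytes / avg_event_bytes
    return capacity_events / events_per_second / 3600

# A 20 MB Security.evtx, ~1 KB per record, 5 audit events/sec on a small DC:
hours = log_lifetime_hours(20 * 1024 * 1024, 5, 1024)
print(f"{hours:.1f} hours of history")  # → about 1.1 hours
```

    At busier event rates the window shrinks proportionally, which is why a small site can lose a day's history so easily.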

    • Dylan16807 3 years ago

      My desktop has a 20MB security log that goes back 16 days, which seems like enough. If anything, stop spamming tens to hundreds of duplicate messages when credentials are read or group membership is enumerated.

      System has 8 months, application has 10 months, and setup has 26 months.

      • gerdesj 3 years ago

        Yes (20Mb), but so do AD DCs which is frankly lazy on MS dev's part. If a DC is such a big deal that it requires rather more cash to buy than a "workstation" edition of Windows, then I'd like to see more attention to detail.

        By contrast a Linux box running systemd/journald by default will leave 10% disc space free when logging. That's enough to keep a filesystem honest!

        20Mb on a DC - even one for a small site like mine will cycle quite often.

        I really recommend that you extend your logs to cover six months or more. It will cost you maybe a gigabyte or 10. Very little these days (my first HD was 20MB, yes: megabytes). However if you need to get some details from the past - very handy.
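        For comparison, the journald behaviour mentioned above is tunable; a sketch of /etc/systemd/journald.conf settings (the values are illustrative, not the shipped defaults):

```
# /etc/systemd/journald.conf -- illustrative values, not defaults
[Journal]
SystemMaxUse=2G         # hard cap on total journal size on disk
SystemKeepFree=5G       # always leave at least this much disk free
MaxRetentionSec=6month  # drop entries older than six months
```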

      • msm_ 3 years ago

        It's absolutely not enough for APT investigation. Average attack lengths are in months, infections sometimes span multiple years. Especially since we're talking about a backdoor (ransomware operators tend to move more quickly)

        • Dylan16807 3 years ago

          It's not enough for that, but that bar is too high. Unless the logs are very small, you should not be keeping years of them. I still say 20MB is enough for a desktop.

          • gerdesj 3 years ago

            Depends on the desktop. Mine does quite a lot of stuff. /var/log is 8.3Gb and the journal is probably a monster.

            Your use case is probably different to mine - I'm a security officer for my firm.

    • msm_ 3 years ago

      Analyses like this are not usually performed by insiders. People that write them are external researchers (in this case, Symantec's) that have limited (or zero) insight into actual logs on the target system. There is some coordination with the attacked organisation, but requesting "please find the attack vector for us" or "just send us all your logs" is out of the question. So, unless the attack is really high profile, sometimes it's easier to just accept that you don't know something.

  • tbrownaw 3 years ago

    But I don't see this being presented as a way to break in. It's presented as a way to be sneaky about listening for commands once you're already in.

    The "airtight hatchway" series is about things being presented as ways to break in, when they actually require you to already be in.

  • bilekas 3 years ago

    This.. As far as I see it, if there is already access to the machine, all bets are off.

    However in fairness they didn't say it was an exploit more that it's just a stealthy 'clever' malware.

  • dhx 3 years ago

    I don't think "airtight hatchway" applies here because who is to say the entry point isn't exploiting w3wp.exe remotely and executing code from the stack. Then let's say a memory page containing FREB code has permission PAGE_EXECUTE_READWRITE set (a plausible possibility for JIT compiling akin to eBPF), providing a convenient (and plausibly deniable) location for an extended amount of malicious code to be stored and executed from. Or w3wp.exe has permission to create a new memory page with PAGE_EXECUTE_READWRITE set and again this is plausibly deniable because FREB may need to do similar for JIT compiling. If it were almost any other process, a memory page set to PAGE_EXECUTE_READWRITE would sound alarm bells (or at least it should).

    The second aspect is how well is w3wp.exe isolated? Can and does it use "AppContainer" (or equivalent) isolation and is it strict? For example, could code executed by w3wp.exe create a new network socket, execute another process, write to a file in any path even though it shouldn't have a need to do so? Perhaps a different process sample123.exe is compromised, which by itself isn't too much of a problem due to its high degree of isolation. However, sample123.exe has permission to write to a pipe shared with w3wp.exe and can use this permission to exploit a bug in w3wp.exe (not exposed remotely) to allow code to execute with different/higher permissions of w3wp.exe, or using a plausibly deniable PAGE_EXECUTE_READWRITE memory page of w3wp.exe to store and execute code from without immediately sounding alarm bells.

    _If_ strong process isolation was in place and working for w3wp.exe and/or sample123.exe, the "airtight hatchway" may not have been breached because whilst malicious code may have been executable from a stack, the malicious code wouldn't have been able to achieve much or anything of concern (can't read files from disk, can't access memory of other processes, can't login to a SQL database and start pulling data of other users, etc, etc).

    I'm not sure what the equivalent of "systemd-analyze security" is for Windows, but it'd be well worthwhile for Windows system owners to demand similar easy-to-use tools for auditing the level of isolation of and required interfaces between applications (spoiler: just like a typical Linux system, the results will not be comforting, but seemingly with Windows you wouldn't know). Windows process isolation features introduced over the years are poorly documented, hard to use due to lack of tooling and often not used except for a few high profile applications such as Chromium and Adobe Acrobat. Chromium possibly has one of the best overviews of how sandboxing/process isolation can be achieved in Windows because they would have gone through a lot of pain in being amongst the first to figure it out[1].

    [1] https://chromium.googlesource.com/chromium/src/+/HEAD/docs/d...

Dwedit 3 years ago

So in other words, this is something that someone has to run on the computer, then it injects itself into IIS. Not a remote vulnerability, just an entry point for monitoring HTTP requests once you have code execution in there.

sublinear 3 years ago

People use IIS?

  • CompuHacker 3 years ago

    Yeah! I run an unpatched IIS FTP server on Windows 10 from 2016. When FileZilla stopped connecting to it for reasons I can't even imagine, I enabled SSL using some slightly modified arcane commands I found on a forum post from 2011 to generate a self-signed certificate, convert it between two different formats, and finally use it to accept connections from only some of the still available FTP clients for Windows, which excluded Windows Explorer, but only some of the time.

    Why, should I not be using IIS?

  • tombert 3 years ago

    I used to work for a .NET shop (pre .NET Core) doing F#, and we used IIS.

    In the company Slack, I said something like "Serious question; is there something that IIS does better than something like Nginx or any other open source server?"

    One of the most senior engineers responded back with "crashing".

    • DaiPlusPlus 3 years ago

      IIS can be configured by non-expert users easily, and without necessarily compromising security, thanks to the well-designed (I’m being serious) administration tools that MS has (thankfully) not butchered-up over the past 15 years.

      It’s an “old-world” web-server (like Apache, etc) which defaults to “filesystem-first” which is great for quickly making a directory available on the web, and its architecture employing recyclable worker-processes (since IIS 6) with limited privileges gives it the performance benefits of in-proc code-execution (vs CGI/FastCGI) without the risk of a vuln compromising the entire web server. Oh, and HTTP.sys is pretty nice and fast too. I’ve never had reliability or crashing issues with IIS: if your worker-process goes down it means your application code has a crashing bug in it, not IIS.

      Yeah, nginx is nice - but is also a relatively recent tool (since 2004; I didn’t start seeing people prefer it for projects until after NodeJS gave them a reason to use it - so around 10 years ago). While nginx supports Windows, there’s a big fat caution saying its performance is sub-par still: https://nginx.org/en/docs/windows.html

      So if you’re on Windows - because you’re a (non-Linux) .NET shop, or want/need to run on on-prem Windows Server boxes (especially SMB scenarios) it just makes sense to use IIS: it’s already there and certainly is not an underperforming, insecure, or otherwise “bad” web-server.

  • ocdtrekkie 3 years ago

    It's usually the right choice if the software runs on a Windows host. Why add a whole bunch of third party problems on top?

    A lot of enterprise software is built in .NET for Windows, and as the expectation of web-based UIs for said software has increased... honestly I'd be surprised if IIS usage wasn't increasing in overall uses (though not in market share, for certain).

    • sebazzz 3 years ago

      OP is obviously in some kind of bubble. IIS isn't used as much as it used to be, but it is still used a lot and is not the COBOL of web servers.

  • robotnikman 3 years ago

    Yep. From what I've seen, usually it's a case of a company building something which uses it a long time ago, and not bothering to switch to an alternative because 'If it works don't fix it'. It may not be the newest and shiniest, but if it's working well then no need to move to something else.

    • gerdesj 3 years ago

      I have customers with DOS (6.2, so quite modern) controlling machines.

      I ended up wedging a Samba daemon between that and PCs running Windows 10. I mount the DOS box's share via mount.smbfs and re-export it via Samba. The machine is on a rather sparse VLAN!
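      Roughly, the wedge looks like this (hostnames, share names and paths are hypothetical; a cifs mount with vers=1.0 is the modern equivalent of mount.smbfs for DOS-era SMB1):

```
# /etc/fstab on the Linux middlebox -- mount the DOS box's share read-only
# (hostname, share name and credentials file are hypothetical; vers=1.0
#  forces the DOS-era SMB1 dialect)
//dosbox/DATA  /srv/dosdata  cifs  vers=1.0,ro,credentials=/etc/dosbox.cred  0  0

# /etc/samba/smb.conf -- re-export the mounted directory to Windows 10 clients
[dosdata]
   path = /srv/dosdata
   read only = yes
   browseable = yes
```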

      I have many other horrors on isolated VLANs across the UK to worry about ...

    • calvinmorrison 3 years ago

      And here's a great point to install a proxy web server like nginx and add rate limiting, filtering, whatever you want; at least the ancient IIS/whatever HTTP server is not publicly facing
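      A minimal sketch of such a fronting proxy (the upstream address, hostname and rate numbers are placeholders):

```
# nginx.conf fragment: rate-limited reverse proxy in front of a legacy IIS box
limit_req_zone $binary_remote_addr zone=legacy:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name legacy-app.example.com;

    location / {
        limit_req zone=legacy burst=20;
        proxy_pass http://10.0.0.5:80;   # the isolated IIS host
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```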

  • joenathanone 3 years ago

    What's wrong with using IIS?

    • gerdesj 3 years ago

      Nothing at all. It is an extremely capable webserver.

      In common with all web servers, advice found via search varies in quality and unfortunately, being Windows based: IIS really suffers. The GUI is pretty intimidating (IIS Manager - both of them) and there are things that can only be done via registry, config files and dark magic.

      IIS gets a lot of undeserved stick in my opinion.

      • throwanem 3 years ago

        You're not really making a strong case here for that "undeserved".

        • gerdesj 3 years ago

          Fair enough. I'm an Apache, Caddy, nginx, HA Proxy "fan" and I generally only worry about IIS when it hoves into view - Exchange for example, or whatever weird and wonderful nonsense a customer comes up with.

          I am almost perversely going to get into IIS but I probably won't. Following logs on Windows is a right old ballache. The bloody things don't seem to get written to disc for quite a while for those many systems that ignore the Windows Events system and dump to .log. I've tried various log viewers. Where the hell is lnav or even less for Windows?

          My snag with Windows is its opacity. I fire up a daemon on a Linux box and then in another tab/window or whatever, I follow logs - I can use less (is more) or something fancier like lnav. That workflow does not translate very well to Windows.

          The taskmanager on Windows is much improved these days - you can now with a GUI work from a network port to a binary (PID) and even associate it with a particular service.

          However, text logs are still second class citizens.

          • throwanem 3 years ago

            Fair disclosure: I administered and developed for IIS fairly heavily in the first decade of my career - not so much by choice, but when you're the star engineer of a three-man contracting firm, you take the jobs that come and learn how to do them on the fly, or else you end up looking for work because your boss went out of business. He didn't, not while I was there, so I guess I must've been good enough at it.

            I'm perfectly willing to confide IIS is a fair bit faster, more reliable, and better to work with these days than it was in those - but that's a hell of a long way from imagining it's anything like good.

            (Also, a pedant's unrelated note, because I've seen a lot of this in particular lately and I'm going to say something about it somewhere: "hove" is an archaic past tense of "heave", as in the related nautical phrase "heave to," as might be found in a commerce raider's injunction to "heave to and prepare to receive boarders". So something properly is said to heave over the horizon; only after it has done so may it rightly be said to have hove.)

    • rbanffy 3 years ago

      It really depends on your use.

      If you already paid for your Windows license, not that much. The way it ties into the OS is a bit concerning to me but, apart from that, it's OK.

      I think the question can be formulated in a different way: what's RIGHT with using IIS? What does IIS offer you that other web servers don't? Easy AD integration is the one thing that crosses my mind, and I can't think of anything else besides "it's already there".

      If you plan on scaling out, however, licensing costs will grow quickly. If you run .NET Core apps, the built-in HTTP server is very fast and runs on Linux as well. Same story with Spring apps - using Netty/Jetty or even Tomcat is easy and makes your app very self-contained.

      I think the big nope for me is that it is from the "Pets" era, before servers were "cattle", which was compounded by containerization and tools like Kubernetes and OpenShift. IIS just doesn't look like it fits into that new model.

    • louthy 3 years ago

      Nothing, it’s just standard tech snobbery

      • alexjplant 3 years ago

        It's snobbery to have an opinion? In terms of both static and application web servers I've personally administered nginx, Apache, IIS, Tomcat, Wildfly and Websphere and mention them here in descending order of preference with regard to capability and DX. As you can see IIS falls squarely in the middle of the pack and is actually a distant third in my opinion. The only compelling reason to use it a decade ago was to host .NET applications in an officially-supported manner (and even then it meant contending with licensing, weird logging behavior, arcane MMC-controlled XML-based configuration, Windows Server itself, etc). In the age of .NET Core there's no reason at all.

        • louthy 3 years ago

          > It's snobbery to have an opinion?

          Of course not and I didn't say any such thing. The original comment "People use IIS?" is clearly a passive-aggressive dig at MS, its tech, and those that use it (as is so often the case in tech circles). It's quite pathetic and childish. If I misunderstood that, then I apologise to the author, but I'd argue it still adds nothing to the discourse even if it was asked honestly.

          It doesn't mean you can't hold an opinion on the relative merits of any one piece of tech. But this "my computer is better than your computer" immature schoolboy nonsense is pervasive in tech circles and is extremely tedious.

          I have no love for IIS, but it's a perfectly capable webserver and is clearly still used. The idea that the .NET world have all moved over to .NET Core is also a wishful one unfortunately, I still maintain my open-source libraries for the legacy framework as I know there's plenty of places that can't just 'flip the switch' to .NET Core. It's not quite as bad as Python's V3 moment, but it's up there.

          • rbanffy 3 years ago

            > "People use IIS?" is clearly a passive-aggressive dig at MS

            Feels a bit snarky, but not too aggressive. Windows is not a popular choice for cloud platforms and those users seem to be overrepresented here. I can imagine someone being genuinely surprised it's used for more than serving documentation that's already on a Windows server.

            That said, as I mentioned earlier, it's hard to find a use case where IIS (or Windows) is a better choice than any of the popular open source http servers and app platform runtimes.

            There is a colossal corpus of .NET Framework code out there and I wouldn't be surprised it achieves the status of COBOL (but with a lot less charm) at some point in the future - where code on it is maintained ad infinitum even though almost nobody would deploy a greenfield app using it.

    • cerved 3 years ago

      it's kind of shit

  • unxdfa 3 years ago

    Yes. It’s pretty unavoidable if you have the usual corporate behemoth ASP.Net ball and chain which has been dragged along begrudgingly for the last 20 years. And no it’s not going to be ported to .Net Core for ages because it has some proprietary component or library plugged into it and the vendor ceased to exist ten years ago and your entire business relies on it being patched by an involuntary black hat with Mono.Cecil to remove the licensing code. Plus everyone who wrote it is either dead or left, so even small changes require a week of reverse engineering.

  • thedougd 3 years ago

    Even worse, some companies choose to ship software that require the use of IIS.

  • pram 3 years ago

    Unironically: the fact that Microsoft has spent millions of man-hours building a complete alternative server ecosystem to UNIX/Linux continually blows my mind. Web servers, containers, virtualization, databases, languages, automation, security, etc. It's like NIH maximalism.

    • jiggawatts 3 years ago

      Conversely, from the perspective of people that started with computers in the 1990s, it's bizarre how Linux keeps failing to copy Windows. At one point something like 95% of PCs were Windows, and the rest were mostly Apple Macs. Similarly in the server space, you would be surprised to hear that the majority of servers were Windows for quite a while. Note that I didn't say web servers, because not all the world is HTTP.

      There is still no equivalent to Microsoft Exchange, Group Policy, Enterprise PKI, and a bunch of other things in the Linux world. SAMBA copies Active Directory, but it's a direct clone, not a unique product.

      Not to mention that SQL Server isn't somehow "copying" UNIX. Its performance and feature set blows most of the open-source databases out of the water, with only Postgres having superior features (but not performance).

      Microsoft essentially invented OLAP with SQL Analysis Services, and they still have the most popular products in that space, such as Power BI.

      Etc...

      • tester756 3 years ago

        >Not to mention that SQL Server isn't somehow "copying" UNIX. Its performance and feature set blows most of the open-source databases out of the water, with only Postgres having superior features (but not performance).

        The tooling around SQL Server is decent

      • pram 3 years ago

        If we're talking about the 1990s, SQL Server/Sybase ran on UNIX. Not to mention it's just a fork of Ingres lol

      • avidphantasm 3 years ago

        Not sure about the numbers, but there were also A LOT of NetWare servers serving Windows (and Mac) clients back in the day.

      • the8472 3 years ago

        > There is still no equivalent to Microsoft Exchange, Group Policy, Enterprise PKI, and a bunch of other things in the Linux world.

        Which also means fewer attack vectors.

        • AnIdiotOnTheNet 3 years ago

          Why yes, by leaving our computers off and not using them we are at significantly less risk than if we actually employed them to do meaningful work.

    • adolfojp 3 years ago

      >It's like NIH maximalism.

      UNIX was not an option for most people which is one of the reasons why NT killed the Unix workstation market and Microsoft software like IIS, MS SQL and Exchange was already established when Linux gained mainstream adoption.

  • tenebrisalietum 3 years ago

    I think Stackexchange does.

    • Someone1234 3 years ago

      In 2009, they certainly did.

      Since then, they migrated to .Net "Core"[0] which can run on any OS/web-server. Per [0] they initially kept IIS, but once you're on .Net Core+ that certainly isn't a requirement and there may be good licensing or performance reasons to migrate (even with headless Windows Server/IIS).

      I did find an article from last year that said they still had a monolithic architecture and were still on-prem (as opposed to cloud/Azure). So, maybe, still on IIS? They certainly did for YEARS.

      [0] https://www.infoq.com/news/2020/04/Stack-Overflow-New-Archit...

  • jacquesm 3 years ago

    I had the weirdest experience recently. A tech stack with IIS on the front, Linux in the middle and MSSQL on the back.

    I still don't get it.

  • enfaun 3 years ago

    Certainly. I'm working as an intern and they use IIS for serving Java webapps with Resin. Honestly, not surprising for enterprise applications that run on Windows Server.

  • boston_clone 3 years ago

    CarbonBlack App Control required it as of 2021 I believe.

  • traceroute66 3 years ago

    IIS = It Is Shit.

danielodievich 3 years ago

Nobody sane runs FREB at full prod load on public sites. It's not installed by default. It is highly useful for troubleshooting but not at production traffic. Seems like if you're inside IIS already by some mystic hack you already own the space.

  • Someone1234 3 years ago

    This malware is enabling FREB then injecting malware into it. The point is to hide the exploitation better than simply injecting a custom module. You don't need to be running FREB previously.

    Plus I don't find the "nobody does [XYZ]" when talking about a supported feature of a popular product reassuring, there's always a somebody or the feature would have been removed since it costs money to support and maintain it.
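    One practical takeaway for defenders: since the malware switches FREB on itself, an unexpected traceFailedRequestsLogging entry is a signal worth watching. A rough sketch of such a check; the element and attribute names follow the usual applicationHost.config layout but should be verified against a real server's config:

```python
# Flag IIS sites with Failed Request Tracing (FREB) enabled, by parsing
# applicationHost.config. Element/attribute names mirror the usual IIS
# schema but should be checked against your own server's config.
import xml.etree.ElementTree as ET

def sites_with_freb_enabled(config_xml: str):
    """Return names of <site> entries whose traceFailedRequestsLogging is on."""
    root = ET.fromstring(config_xml)
    flagged = []
    for site in root.iter("site"):
        freb = site.find("traceFailedRequestsLogging")
        if freb is not None and freb.get("enabled", "false").lower() == "true":
            flagged.append(site.get("name"))
    return flagged

# Tiny inline sample instead of reading the real file:
sample = """
<configuration>
  <system.applicationHost>
    <sites>
      <site name="Default Web Site">
        <traceFailedRequestsLogging enabled="true" directory="C:\\logs\\freb" />
      </site>
      <site name="Intranet" />
    </sites>
  </system.applicationHost>
</configuration>
"""
print(sites_with_freb_enabled(sample))  # → ['Default Web Site']
```

    Whether FREB being on is suspicious depends on the site, of course; the point is to baseline it so a change stands out.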

greatgib 3 years ago

It's 2023, if you are still using IIS you are highly incompetent and you deserved to be rooted...
