Hunting for Nginx alias traversals in the wild

labs.hakaioffsec.com

534 points by celesian 2 years ago · 164 comments

evgpbfhnr 2 years ago

FWIW gixy (nginx configuration checker) catches this: https://github.com/yandex/gixy/blob/master/docs/en/plugins/a...

(and nixos automatically runs gixy on a configuration generated through it, so the system refuses to build <3)
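
The pattern its alias plugin flags is the classic one from the article: a location without a trailing slash combined with an alias that ends in one. A minimal sketch (paths invented for illustration):

    location /i {
        alias /data/w3/images/;
    }

    # GET /i../secret.txt -> /data/w3/images/../secret.txt -> /data/w3/secret.txt

Giving the location a trailing slash (location /i/) closes the hole, since /i../secret.txt then no longer matches the prefix.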

  • BitPirate 2 years ago

    If a webserver requires additional tools for the user to avoid all these pitfalls, maybe just maybe it should re-evaluate its defaults.

    • jgalt212 2 years ago

      Yeah, the config checker should be built-in, and if it does not pass, then one must use --force or similar to start the server.

    • ndocjdn 2 years ago

      But then how will nginx continue to pretend that it is still 1995?

      nginx was once amazing, but it’s decidedly bad now when compared to modern webservers.

      • nubinetwork 2 years ago

        What is a modern webserver? I only use Apache or nginx... anything cobbled together with nodejs or go doesn't count.

        • TheFlyingFish 2 years ago

          Caddy has been my default choice recently: https://caddyserver.com

          Among other things, it features automatic TLS via ACME and dead-simple configuration for my most common use cases: namely, serving a directory of static files and reverse-proxying to an app server.

          It is written in Go, but I certainly wouldn't describe it as "cobbled together."

          I'm also a fan of Traefik, but it's strictly a reverse proxy; there's not even built-in support for serving static files. But it's great if you have e.g. a bunch of containers on a single host and you want to front them all with a single load balancer.

  • tsak 2 years ago

    Thank you. I didn't know about gixy; I ran it on my home server and it found a vulnerability ($uri in a 301 redirect).

  • wredue 2 years ago

    I just gave nix a go and so far it seems great.

    But do you know if there's a nicer options finder? The one I found, where you just search across all several thousand options, kinda sucks. I want to pick my package (say, ssh) and see just the ssh options, but the results get littered with irrelevancy.

  • GlitchMr 2 years ago

    NixOS doesn't run Gixy anymore, see https://github.com/NixOS/nixpkgs/pull/209075.

542458 2 years ago

At risk of asking a dumb question, is there any good reason that you’d want nginx to allow traversing into “..” from a URL path? It just seems like problems waiting to happen.

Edit: Actually, I’m a bit lost as to what’s happening in the original vuln. http://localhost/foo../secretfile.txt gets interpreted as /var/www/foo/../secretfile.txt or whatever… but why wouldn’t a server without the vulnerability interpret http://localhost/foo/../secretfile.txt the same way? Why does “..” in paths only work sometimes?

  • lyu07282 2 years ago

    That has been a known issue in nginx for a very long time and it's a common attack vector at CTFs:

    https://book.hacktricks.xyz/network-services-pentesting/pent...

    • magicalhippo 2 years ago

      There is an LFI vulnerability because:

          /imgs../flag.txt
      
      Transforms to:

          /path/images/../flag.txt
      
      I've only implemented a handful of HTTP servers for fun, but I've always resolved relative paths and constrained them. So I'd turn "/path/images/../flag.txt" into "/path/flag.txt", which would not start with the root "/path/images/" and hence would be denied without further checks.
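
      Something like this (a rough C sketch of the check I mean, not nginx's actual code; docroot is assumed to have no trailing slash):

          #include <limits.h>
          #include <stdlib.h>
          #include <string.h>

          /* Resolve the requested path and require it to stay inside docroot. */
          int path_allowed(const char *docroot, const char *requested) {
              char resolved[PATH_MAX];
              if (realpath(requested, resolved) == NULL)
                  return 0;                           /* unresolvable: deny */
              size_t n = strlen(docroot);
              /* "/path/images/../flag.txt" resolves to "/path/flag.txt", which
                 no longer starts with "/path/images/", so it is denied */
              return strncmp(resolved, docroot, n) == 0 && resolved[n] == '/';
          }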

      Am I wrong, or why doesn't nginx do this?

      • hanikesn 2 years ago

        It does when you use the root directive. For exactly these reasons, alias should be avoided where possible.
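
        Roughly the difference (a sketch; paths invented):

            # root appends the full, already-normalized URI to the root path:
            location /images/ {
                root /data/w3;            # /images/x.png -> /data/w3/images/x.png
            }

            # alias textually replaces the matched prefix, and a single odd
            # segment like "images.." survives URI normalization:
            location /images {
                alias /data/w3/images/;   # /images../x -> /data/w3/images/../x
            }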

  • pravus 2 years ago

    The problem is that a URL isn't actually a path. It's an abstract address to a resource which can be a directory or file (or an executable or stream or ...).

    In this case part of the URL is being interpreted by nginx as a directory (http://localhost/foo) due to how that URL is mapped in the configuration to the local filesystem. Apparently it references a directory, so when nginx constructs the full path to the requested resource, it ends up with "${mapped_path}/../secretfile.txt" which would be valid on the local filesystem even if it doesn't make sense in the URL. Notice how the location of the slashes doesn't matter because URLs don't actually have path elements (even if we pretend they do), they are just strings.

    This is a very common problem that I have noticed with web servers in general since the web took off. Mapping URLs directly to file paths was popular because it started with simple file servers with indexes. That rapidly turned into a mixed environment where URLs became application identifiers instead of paths since apps can be targeted by part of the path and the rest is considered parameters or arguments.

    And no, it generally doesn't make sense to honor '.' or '..' in URLs for filesystem objects, and my apps sanitize the requested path to ensure a correct mapping. It's also good to be aware that browsers do treat URLs as path-like when building relative links, so you have to be careful with how and when you use trailing '/'s, because they can target different resources with different semantics on the server side.

  • SahAssar 2 years ago

    Not in any "normal" use case, no. It'd make sense to make this behavior opt-in, like having an `allow_parent_traversal on;` flag in the location, as sketched below.
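
    Something like this (hypothetical syntax; no such directive exists in nginx):

        location /files/ {
            alias /srv/files/;
            allow_parent_traversal on;  # hypothetical opt-in, off by default
        }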

  • aidenn0 2 years ago

    Just guessing, but nginx probably either checks for "/foo/bar/.." and disallows it, or normalizes it to "/foo/", but "/foo/bar.." is a perfectly valid file name, so it doesn't get caught by the net checking for this.

  • dumpsterdiver 2 years ago

    > Why does “..” in paths only work sometimes?

    That fully depends upon the file permissions. In this case, let's assume that a user has permissions to read files all the way from the web index directory (../index.html) back to the root directory (/). At that point, since they have permission to traverse down to the root directory, they now have permission to view any world-viewable file that can be reached from the root directory, for instance /etc/passwd.

    In other words, imagine a fork with three prongs, and your web server resides on the far right prong. Imagine that the part of the fork where the prongs meet (the "palm" of the fork) is the file system. If your web server residing on the far right prong of that fork allows file permission to files and directories that lead all the way to the palm of the fork, at that point you could continue accessing files on other prongs once you have reached the palm.

    • komali2 2 years ago

      Isn't setting correct permissions for www-data, like, the first note in a bunch of "secure your web server" tutorials? I thought that if read is only set on the directory with actual public files, and not on the parent directory, this kind of traversal shouldn't be possible?

      • dumpsterdiver 2 years ago

        > "Isn't setting correct permissions for www-data like, the first note in a bunch of "secure your web server" tutorials?"

        It is indeed. And yet here we are.

amluto 2 years ago

How is this not seen as a vulnerability in nginx? This behavior is utterly absurd, seems to have no beneficial purpose, and is straightforwardly exploitable.

  • phendrenad2 2 years ago

    It's done for speed. Straightforward text replacement is so much faster than checking to see if a path is properly terminated by a slash. And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.

    • amluto 2 years ago

      I find it hard to believe that searching for “..” would even show up in a benchmark.

      In any case, it seems that nginx does try to search for .. but has a bug in the corner case where the “location” doesn’t end with a slash. I assume there’s some kind of URL normalization pass that happens before the routing pass, and if the route matches part of a path component, nothing catches the ..

      If I’m right, this is just an IMO rather embarrassing bug and should be fixed.

      • D13Fd 2 years ago

        Yeah, this whole thing reads to me like a bug in nginx. There is no obvious reason users would need that functionality.

    • okeuro49 2 years ago

      Your comment makes nginx sound like some fly-by-night server that only achieved its performance by making lots of tiny-yet-dangerous "optimisations" like this one.

      More likely it is an omission, which could be rectified with a warning or failure when running nginx -t (verify configuration).

      The actual performance comes from an architectural choice between event- and process-based servers, as detailed in the C10k problem article. [1]

      [1] http://www.kegel.com/c10k.html

      • phendrenad2 2 years ago

        False, the actual performance comes from architectural differences and optimizations.

    • sofixa 2 years ago

      > And remember that Nginx became popular due to benchmarks that showed that it was more "web scale" than Apache2.

      More like because it was much faster out of the box, and came with many batteries included, while Apache2 required mods to be installed separately.

    • hedora 2 years ago

      They could simply normalize the paths when parsing the configuration file. The overhead wouldn’t show up in benchmarks because it only happens once at startup (and maybe when the conf file changes).

technion 2 years ago

OK, hear me out: a Linux-capability-like option that removes ".." handling from the kernel's file name parser.

Web apps have seen various bypasses involving somehow smuggling two dots somewhere since we were on dial-up modems. It's time to look for a way to close this off once and for all, as the Linux kernel has done with several other classes of user-land bugs.

  • loeg 2 years ago

    https://man7.org/linux/man-pages/man2/openat2.2.html RESOLVE_BENEATH

    (FreeBSD has this in ordinary openat(2) as O_RESOLVE_BENEATH.)
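
    A minimal sketch of using it (openat2 has no glibc wrapper, so it goes through syscall(2); Linux 5.6+):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <linux/openat2.h>   /* struct open_how, RESOLVE_BENEATH */
        #include <string.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        /* Open path relative to dirfd; any resolution that would escape the
           tree under dirfd ("..", absolute symlinks, ...) fails with EXDEV. */
        int open_beneath(int dirfd, const char *path) {
            struct open_how how;
            memset(&how, 0, sizeof(how));
            how.flags   = O_RDONLY;
            how.resolve = RESOLVE_BENEATH;
            return (int)syscall(SYS_openat2, dirfd, path, &how, sizeof(how));
        }

    So a file server could open its docroot once and do open_beneath(root_fd, "../secretfile.txt"): the kernel refuses with EXDEV instead of relying on userspace path sanitizing.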

  • m00x 2 years ago

    That would break so many things that it would be insane to do.

    You could just run nginx as a separate user with very limited rights, or just run it in Docker. This, plus updating regularly, usually fixes 90% of security issues.

    • archi42 2 years ago

      Most (I hope all) distributions already run nginx as a separate user. It's best practice.

      But that won't help if you alias to "/foo/bar/www" and the application has a SQLite database at "/foo/bar/db.db", which the nginx user has to have access to. Same if you run it in a container (or lock down permissions using systemd).

      • franga2000 2 years ago

        There is no reason the web server needs access to the database file; the application that needs it should be running under a different user.

        • archi42 2 years ago

          If that's an option then that's the right way to go. There is a reason some MTAs have been doing something like this for decades now (I'm thinking of qmail).

          To be honest, I'm not sure if it's even possible to run the application/interpreter/cgi (e.g. php) as a child of the nginx process - though with Apache I'm still seeing that occasionally.

    • martinflack 2 years ago

      But the issue is -- would it break the things a web server is doing? It doesn't have to be a universal solution.

  • ilyt 2 years ago

        /some/../path 
    
    should pretty much 100% of the time be disallowed; there is no sensible use case that isn't "someone wrote ugly code".

    ../some/path makes sense sometimes at least

    ... but I'd imagine it wouldn't be as useful as you think, because many apps resolve .. before passing paths to the OS

    • vbezhenar 2 years ago

      I don't agree. Those kinds of paths are often the result of concatenating several configuration options, like APP_DIR=/some/app/bin; LOG_DIR="$APP_DIR/../logs". And APP_DIR comes to you from distro scripts, so you're not going to fork those scripts and maintain your own fork across updates; you just build upon them.

      • im3w1l 2 years ago

        The whole point of having an APP_DIR option is so that you can change it and things will just keep working. By doing $APP_DIR/.. you invalidate that by making assumptions about the parent structure. In particular, something that could easily happen in the future is that you may not have write access to "$APP_DIR/..". You gotta do what you gotta do, but it is smelly.

      • ilyt 2 years ago

        Then you have fucked up your app config.

        If the user gives your app a directory to play with, escaping that dir via ../something is the last thing you should do; it's horrible malpractice that just causes annoyance.

        "Distro scripts" nearly always just use a direct path: /var/lib/something for data and /usr/something for the rest.

  • junon 2 years ago

    That makes no difference. Code often normalizes paths before they ever touch the filesystem API.

  • ikekkdcjkfke 2 years ago

    That's something else; in the kernel we have the permission system, which we rely on.

    If you are serving files to the web from a folder, the web framework should handle not traversing outside the public root folder it was tasked to serve. If you are rolling your own, well, now you have to consider all kinds of stuff, including this.

  • dxuh 2 years ago

    I don't think this would have prevented it. Removing ".." segments from paths is part of URL parsing and required by the HTTP specification. Nginx very likely does this too.

HenriTEL 2 years ago

> The Google VRP Team recognized our work by awarding us a $500 reward for uncovering this vulnerability. They believed the impact on the application wasn't severe enough to warrant a larger reward.

Exposing email and private keys of GCP accounts only gives you $500 reward? WTF. Google being Google I guess.

Decabytes 2 years ago

Glad that the leaked data is still encrypted. Even companies that specialize in this sort of thing are not immune to leaks, so this is honestly the best-case scenario.

gostsamo 2 years ago

The title is significantly editorialized. The post title is:

Hunting for Nginx Alias Traversals in the wild

and the HN submission highlights the Bitwarden vulnerability, while a Google one is discussed as well.

  • dang 2 years ago

    Ok, we've reverted the title. Submitted title was "Leaking Bitwarden's Vault with a Nginx vulnerability".

kibwen 2 years ago

If all you need is a simple way to serve static files that minimizes resource consumption and is reliably secure, what is the state of the art these days? In the past I would probably reach for Nginx, but I wonder if a more focused/less configurable tool would be preferable from a security standpoint.

  • cyrnel 2 years ago

    I use https://static-web-server.net/

    Cross-platform, written in Rust, straightforward configuration, secure defaults, also has a hardened container image and a hardened NixOS module.

    I wouldn't recommend Caddy. Their official docker image runs as root by default [1], and they don't provide a properly sandboxed systemd unit file [2].

    [1]: https://github.com/caddyserver/caddy-docker/issues/104

    [2]: https://github.com/caddyserver/dist/blob/master/init/caddy.s...

    EDITED: phrasing

    • trillic 2 years ago

      I use this...

          [Unit]
          Description=Caddy webserver
          Documentation=https://caddyserver.com/docs/
          After=network-online.target
          Wants=network-online.target systemd-networkd-wait-online.service
          StartLimitIntervalSec=14400
          StartLimitBurst=10
      
          [Service]
          User=caddy
          Group=caddy
      
          # environment: store secrets here such as API tokens
          EnvironmentFile=-/var/lib/caddy/envfile
          # data directory: uses $XDG_DATA_HOME/caddy
          # TLS certificates and other assets are stored here
          Environment=XDG_DATA_HOME=/var/lib
          # config directory: uses $XDG_CONFIG_HOME/caddy
          Environment=XDG_CONFIG_HOME=/etc
      
          ExecStart=/usr/bin/caddy run --config /etc/caddy/Caddyfile
          ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile
      
          # Do not allow the process to be restarted in a tight loop.
          Restart=on-abnormal
      
          # Use graceful shutdown with a reasonable timeout
          KillMode=mixed
          KillSignal=SIGQUIT
          TimeoutStopSec=5s
      
          # Sufficient resource limits
          LimitNOFILE=1048576
          LimitNPROC=512
      
          # Grants binding to port 443...
          AmbientCapabilities=CAP_NET_BIND_SERVICE
          # ...and limits potentially inherited capabilities to this
          CapabilityBoundingSet=CAP_NET_BIND_SERVICE
      
          # Hardening options
          LockPersonality=true
          NoNewPrivileges=true
          PrivateTmp=true
          PrivateDevices=true
      
          ProtectControlGroups=true
          ProtectHome=true
          ProtectKernelTunables=true
          ProtectKernelModules=true
          ProtectSystem=strict
      
          ReadWritePaths=/var/lib/caddy
          ReadWritePaths=/etc/caddy/autosave.json
          ReadOnlyPaths=/etc/caddy
          ReadOnlyPaths=/var/lib/caddy/envfile

          [Install]
          WantedBy=multi-user.target

    • ptx 2 years ago

      What's wrong with the unit file?

    • mholt 2 years ago

      If you want a sandboxed unit file, why not just sandbox it yourself?

      • cyrnel 2 years ago

        Sandboxing it yourself is fraught because any new feature could cause things like a syscall filter to crash the app. It has to be part of the application build/test/release process to prevent that, like it is in SWS.

        Besides, we should be creating and using software that is secure by default: https://www.cisa.gov/sites/default/files/2023-06/principles_...

        • mholt 2 years ago

          Ah yes, I agree Linux should not let processes have a set of permissions that large by default.

  • francislavoie 2 years ago

    Shameless plug: Caddy does a great job here. Automatic HTTPS, written in Go so memory safety bugs are not a concern, has a solid file_server module.

  • pepa65 2 years ago

    I have used Caddy for years: automatic SSL certificates, does file serving, does reverse proxying, very easy and clear to configure. Single binary (Go), so easy to "install", and a single config file.

  • adventured 2 years ago

    Caddy is pretty simple to configure and serve static files from.

  • housemusicfan 2 years ago

    • pepa65 2 years ago

      Last release 2016??

      • housemusicfan 2 years ago

        OP wanted a simple web server for serving static content. Are you aware of open CVEs? No? It's possible for software to be done, you know. Just because something isn't a rolling release of change for the sake of change (like most Google crapware) doesn't mean it isn't fit for purpose.

        • crote 2 years ago

          Considering the vast majority of commits were made after 2016, I don't think it is "done".

          And a C program, written by a single developer, with only 27 issues ever filed? With all due respect, that's guaranteed to have some nasty bugs in there.

  • calvinmorrison 2 years ago

    werc, shttpd, etc.

    Treat any web request like you would a real user on a Linux system whom you'd need to give access to download files via scp. Chroot, strict permissions, etc. Can't escape what you can't escape. A ../ should return the same thing you'd expect in the shell: permission denied.

  • dylan604 2 years ago

    how is a static site served from S3 considered in these parts of the interweb? i've never done this, but see it as an option, yet i never really hear others using it either.

    • sofixa 2 years ago

      In my view, it's perfect (okay, maybe slightly less than perfect; dedicated platforms that take it to the next level, like Netlify, Cloudflare Pages, Firebase Hosting, etc., exist for their added services and tools, as well as their generous free tiers). It's pay as you go, scales from zero to infinite, and has zero attack surface or maintenance.

      I've run a couple of websites (WordPress or Hugo based, including my personal blog) like that and it's great.

    • crote 2 years ago

      You probably want some kind of CDN to keep an HN frontpage link from bankrupting you, but it's a pretty decent solution.

      I personally prefer something like Github Pages, though - it doesn't get much more hands-off than that!

    • chrisweekly 2 years ago

      Good Q. Using S3 as origin behind Cloudfront seems like a pretty standard AWS CDN setup for static assets... but S3 isn't a traditional web server.

  • BOOSTERHIDROGEN 2 years ago

    Could you comment on Traefik as well? In terms of security and reliability. Thanks!

whiskeymikey 2 years ago

This is probably a dumb question but why would Bitwarden allow unauthenticated requests to /attachments at all? Even with the Nginx bug, wouldn’t the request have failed if that URL required authentication?

  • Someone1234 2 years ago

    This is an exploit against the web server's configuration, so it never executes Bitwarden's authentication code, or any Bitwarden code at all. It isn't unusual or incorrect for projects to implement their own authentication rather than use nginx's or a module's.

    It is still Bitwarden's responsibility since they shipped a dangerous configuration via Docker. Which they seemingly acknowledge and have since fixed.

    • autoexec 2 years ago

      > It is still Bitwarden's responsibility since they shipped a dangerous configuration via Docker. Which they seemingly acknowledge and have since fixed.

      The screenshot makes it look like the docker setup option was still in beta and the page had warnings all over it saying there could be possible issues. I can't really judge Bitwarden too harshly here for releasing something in beta that was later found to have a vulnerability in it.

    • whiskeymikey 2 years ago

      Ahh okay. That explanation makes sense. Thanks!

jand 2 years ago

Please excuse the silly question: Would proper directory and file ownerships not prevent this traversal?

If nginx does not run as root, how can it read files other than the ones explicitly assigned to the nginx user?

  • ilyt 2 years ago

    It would absolutely prevent it. Run the app as one user and nginx as another, go-rwx on all app files, set the group of the "static" files to www-data with g+r on them, and now the web server can't access app files.

    It's LITERALLY app hosting 101 and people did it that way 20+ years ago.
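
    As a sketch of that recipe (user names and paths invented):

        # app code and data: owned by the app user, no access for others
        chown -R appuser:appuser /srv/app
        chmod -R o-rwx /srv/app

        # static files only: group-readable by the web server's user
        chgrp -R www-data /srv/app/public
        chmod -R g+rX /srv/app/public   # capital X: execute (search) bit on dirs only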

  • PhilipRoman 2 years ago

    Ah, the wonders of a 022 umask. Personally I would always recommend making files unreadable to other users. If not for all files, then at least for significant directories, like everything under /home, etc.

    It may require more fiddling with group memberships, but it's well worth it.

  • NoMoreNicksLeft 2 years ago

    I don't know about everyone else, but at this point I'm no longer doing a proper installation of nginx for personal stuff. I always just spin up a docker image... and I'm not checking if it runs as root or not, really.

    Probably really screwing things up. Ouch.

  • oefrha 2 years ago

    The typical umask is 022, so most things are readable by nginx workers but not writable; they don’t need to be explicitly assigned (e.g. to www-data). If your application generates sensitive data, of course you should probably use a 077 umask.

    • Jolter 2 years ago

      You could make an argument that bitwarden vaults constitute sensitive information.

  • frays 2 years ago

    You are correct.

    Unfortunately, nginx (and other web servers) generally need to run as root in normal web applications because they are listening on port 80 or 443. Ports below 1024 can be opened only by root.

    A more detailed explanation can be found here: https://unix.stackexchange.com/questions/134301/why-does-ngi...

    • duijf 2 years ago

      > Ports below 1024 can be opened only by root.

      Or processes running with the CAP_NET_BIND_SERVICE capability! [1]

      Capabilities are a Linux kernel feature. Granting CAP_NET_BIND_SERVICE to nginx means you do not need to start it with full root privileges; this capability alone gives it the ability to open ports below 1024.

      Using systemd, you can use this feature like this:

          [Service]
          ExecStart=/usr/bin/nginx -c /etc/my_nginx.conf
          AmbientCapabilities=CAP_NET_BIND_SERVICE
          CapabilityBoundingSet=CAP_NET_BIND_SERVICE
          User=nginx
          Group=nginx
      
      (You probably also want to enable a ton of other sandboxing options, see `systemd-analyze security` for tips)

      [1]: https://man7.org/linux/man-pages/man7/capabilities.7.html

    • guraf 2 years ago

      Nginx is started as root, but it does not run as root; it changes its user after opening log files and sockets (unless you use a lazy Docker container and just run everything as root inside it).

    • oefrha 2 years ago

      Nginx workers shouldn’t run as root and certainly don’t on any distro I know. Typically you have a www-data user/group or equivalent. Dropping privilege is very basic.

Xophmeister 2 years ago

OT, but this isn't the first time I've seen someone confuse the verb "delve" with "dwelve":

> ...we started dwelving into the code base...

The author may not be a native speaker, but this is far from a judgement on their English. I'm just curious about the provenance of this mistake, given the scarcity of words that begin with "dw". At first I thought it was a typo -- especially on a QWERTY keyboard -- but I've seen it often enough to question this.

  • leonheld 2 years ago

    >I'm just curious about the provenance of this mistake

    Because of English pronunciation (pronounciation? :-P). English is extremely irregular, and there are a thousand footguns in the language - both spoken and written - so as non-native speakers we tend to make small mistakes that stick to our brains like glue and are very hard to get rid of (rid off? :-P).

    For me it kinda makes sense to say "dwelve" because it reminds me of "dwarfs" (dwarves? :-P) that live underground!

    • Xophmeister 2 years ago

      Dwindling dwarves dwell dweep :)

      • leonheld 2 years ago

        btw, as a non-native, I also cannot understand why some native speakers confuse the use of "you're, your" or "there, their" or even "through, tough". To me they sound completely different!

  • Majromax 2 years ago

    A Reddit thread on r/grammar (https://www.reddit.com/r/grammar/comments/fxahta/does_the_wo...) involves a poster asking a genuine question about the alleged word 'dwelve'. The answering commenter speculates that the author is conflating 'delve' and 'dwell'.

    Another comment, added years later, admits the same confusion.

  • wccrawford 2 years ago

    IMO, people learn language by seeing/hearing it used. And the internet is rife with misuse of language.

    My particular pet peeve is using "weary" instead of "wary" or "leery". I've started to hear it spoken in YouTube videos now, too, so it's not just a typo.

    • sdoering 2 years ago

      And I learned something. I just threw it into DeepL to see the translation. Thanks for pointing it out; it would probably have tripped me up.

andrewstuart 2 years ago

I dropped nginx because it was really fiddly to configure and misconfiguration has potentially bad consequences.

phendrenad2 2 years ago

This has nothing to do with bitwarden. This is a generic directory traversal attack (enabled by Nginx's configuration language being full of serious gotchas).

  • ComputerGuru 2 years ago

    It does have to do with BitWarden: they wrote and shipped the buggy config.

    • autoexec 2 years ago

      It looks like they did say it was still beta and warned there could be issues though. I'll give them credit for that much.

brigandish 2 years ago

The article didn't mention permissions. Would this still work if the nginx user is denied permissions on things like `/var/log`? I suspect it wouldn't, but isn't the most common cause of security flaws unchecked assumptions?

As an aside, I didn't know Github code search accepted regex.

  • VWWHFSfQ 2 years ago

    No, it wouldn't work if the user nginx runs as didn't have read access to the directories or files.

    • komali2 2 years ago

      Ah, then I just realized: it probably does have access to all nginx log directories, because nginx needs write permissions to them anyway, right? Now I really want to go double-check all my permission setups...

      • crote 2 years ago

        It depends on how nginx is designed. In theory you could separate log writing into a different process, and drop those permissions from the worker process.

        Or just write to stdout and have systemd handle the logging for you, that'd work too.

kentt 2 years ago

If I understand correctly, this is a vulnerability in self-hosted Bitwarden only. Is that correct?

  • zeeZ 2 years ago

    This is for the single image self-hosted setup method, which is still in beta. The current supported self-hosted setup is a script that creates a bunch of individual containers for the different services.

  • emaciatedslug 2 years ago

    Yes, per the article: "Bitwarden also offers a self-hosted option for those who want to maintain their own server, which is the one we are going to examine."

TedDoesntTalk 2 years ago

> Nginx, a versatile web server pivotal to numerous internet infrastructures, has held a dominant market share since its inception in 2004

Horse pucky. In those days, Apache httpd held dominant market share. Nice historical hijacking.

sneak 2 years ago

Note that this leaks the vault with secrets encrypted - a leak of the ciphertext.

> This vulnerability has been disclosed to Bitwarden and has since then been fixed. Bitwarden issued a US$6000 bounty, which is the highest bounty they issued on their HackerOne program.

That's a ridiculously low payout.

  • andersa 2 years ago

    Small companies can't just give out $50k bounties, even if it would be deserved.

  • dghlsakjg 2 years ago

    I don’t know enough about bounty programs to comment on the amount, but my understanding is that leaking encrypted secrets isn’t really dangerous?

    • NoZebra120vClip 2 years ago

      It's generally a question of time.

      If you want to play the long game and collect a lot of encrypted data now, you can simply wait until it is possible to trivially decrypt, and/or start cracking now and let the years work on it.

      Most encryption decisions are framed as a tradeoff of the time and resources it would currently take to brute-force your way through it, and how many years before a simple attack becomes feasible, vs. your $5 wrench attacks in the present day.

      • nyolfen 2 years ago

        BW uses 100K rounds of PBKDF2 for the master password, so I don't think that will be any time soon.

        • SV_BubbleTime 2 years ago

          BW now uses Argon2 over PBKDF2. I can’t remember if that is the default, opt-in, or only for new accounts. But barring an Argon2 vuln, this is even less of a concern.

          Also, I think BW has been using more than 100K for some time now. Last I saw, 600K was the recommendation.

          • emaciatedslug 2 years ago

            The default for new Bitwarden accounts from Feb 2023 on is PBKDF2-HMAC-SHA256 set at 600,001 iterations on the client and 100,000 on the server, with the option to use Argon2id. These settings are above current OWASP recommendations. https://cheatsheetseries.owasp.org/cheatsheets/Password_Stor... https://bitwarden.com/help/kdf-algorithms/
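
            For a feel of what the iteration count means, here is a minimal sketch of deriving a key with PBKDF2-HMAC-SHA256 via OpenSSL (illustrative only, not Bitwarden's actual code; the password and salt are placeholders):

                #include <openssl/evp.h>
                #include <string.h>

                int main(void) {
                    unsigned char key[32];              /* 256-bit derived key */
                    const char *pass = "correct horse"; /* placeholder master password */
                    const unsigned char salt[] = "user@example.com"; /* placeholder salt */
                    /* each login costs the client 600,001 chained HMAC-SHA256 runs,
                       which is what slows down offline brute force */
                    return PKCS5_PBKDF2_HMAC(pass, (int)strlen(pass),
                                             salt, sizeof(salt) - 1, 600001,
                                             EVP_sha256(), sizeof(key), key) != 1;
                }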

            • NoZebra120vClip 2 years ago

              All the replies have given random statistics, but these don't shed much light on the length of time it may take an attacker to brute-force a password, or find a chink in the armor of the vault's encryption algorithm.

              Now, as I said, a significant threat actor with lots of time in their future plans can collect encrypted stuff such as vaults and bide their time. Someday, decryption may become cheap enough to be cost-effective. Someday, a flaw may be uncovered in the cryptography. Someday, a vault owner's secret key(s) may leak and be correlated.

              As I said, it's just a question of time, and the ability to hold on to your cards for long enough that they can be played in the proper manner. It may take 5 years, 10 or 20, but if the payoff is valuable enough, it's worth the wait for the threat actor.

              • SV_BubbleTime 2 years ago

                There are practically zero scenarios where hacking ANY Bitwarden account 20 years from now nets you anything useful.

                If the concern is general encryption and you're worried about a 20-years-from-now scenario, don't send it.

                • NoZebra120vClip 2 years ago

                  > There is practically zero scenarios where hacking ANY bitwarden account 20 years from now nets you anything useful.

                  Bitwarden is a password manager, yes? What about cloud accounts of someone's employer, like an AWS account that runs $1,000,000 of monthly assets? That wouldn't be valuable in 20 years?

                  What about VPN credentials for some big tech intranet? Yeah, hopefully they use MFA and they expire passwords before 20 years, but just in case, right?

                  I can certainly see nation-state actors hanging on to juicy encrypted password manager vaults, just on the off-chance they could hit the jackpot. I can think of plenty of accounts that would still be valuable and enabled 20 years from now.

                  • SV_BubbleTime 2 years ago

                    Twenty years ago we had Windows XP.

                    You think AWS accounts are going to have a simple password requirement in the same timeframe?

                    You don't think that twenty years from now everything will be multifactor, with immutable (likely biometric) hardware keys?

    • rcxdude 2 years ago

      a password vault contains a lot of long-lived secrets protected by a human-provided key, so it's really not something you want out there, even encrypted.

      • Bluecobra 2 years ago

        I would assume most people who self-host are securing it behind a VPN like WireGuard instead of opening it to the whole web (at least I hope so).

        • diarrhea 2 years ago

          I am not, and it's working well so far. My instance is behind Caddy, behind a secret URL path. To talk to the instance, this “pre-shared secret” needs to be known first. So far I haven’t seen any abnormal hits. I’m closing in on 3 years of using it in this setup, via Vaultwarden.

          I’m aware that this is security through obscurity. The instance’s accounts use strong passwords and MFA.

          • BOOSTERHIDROGEN 2 years ago

            Can this work for mobile devices?

            • diarrhea 2 years ago

              Yeah, the full URL can be specified in Bitwarden clients (browser extension, mobile app) and then never touched again. The secret path only leaks if users use Bitwarden's sharing feature. It's not a "pre-shared secret" in that sense, as it can publicly leak by design.

              • BOOSTERHIDROGEN 2 years ago

                Any pointers on how you set this up? Thanks.

                By sharing features, did you mean organizations or Bitwarden Send?

        • donutshop 2 years ago

          I thought so too, but then I did a quick search on Shodan and found these:

          https://www.shodan.io/search?query=bitwarden

          https://www.shodan.io/search?query=vaultwarden

        • berkes 2 years ago

          I'm afraid not. I've seen some really dumb setups of BW when helping self-hosters.

          I do think that while self-hosting is admirable, in the case of your password vault it's not. It's one thing where I'd always advise against self-hosting or DIY, because the downside risk is just too big.

          The chance of f*ing up may be tiny, but if you f*ck up, it's bad. Potentially bankruptcy-or-jail bad.

  • dw33b 2 years ago

    not compared to the $500 Google gave them

    • gostsamo 2 years ago

      Not sure why your comment is last on the page. Google has significantly more resources, and the authors seemed to disagree with the amount awarded for the Google vulnerability.

qwertox 2 years ago

What would I need to grep my nginx logs for to see if my possibly misconfigured servers were exploited? [^/]+\.\. (not adding a question mark after that regex even though I'm asking if that one would be ok)

ilyt 2 years ago

Don't let the web server access the app's code; so many security problems solved...

em1sar 2 years ago

Okay, so I self-host Vaultwarden; what do I need to do to fix the vulnerability? The article mentions another flavor of the self-hosted Docker image, though.

  • jve 2 years ago

    I have an nginx-proxy Docker container in front of Vaultwarden - there aren't any alias directives there. Vaultwarden itself appears to use Rust with an HTTP framework called "Rocket" [1]. Sorry, I'm not familiar with the Rust world.

    But anyway, said vuln doesn't apply to Vaultwarden.

    [1] https://github.com/dani-garcia/vaultwarden/blob/19e671ff25bf...

  • remram 2 years ago

    Vaultwarden does not include or use nginx, and neither does its official Docker image. Unless you are using nginx yourself (you'd know), this does not affect you.
