How I learned about corporate firewalls

valcanbuild.tech

139 points by ValCanBuild 3 years ago · 203 comments

pilif 3 years ago

My favourite issue caused by a corporate firewall was when it altered an AJAX request to replace a specific combination of digits (in a long product ID) with asterisks.

Turns out that a substring of that product ID matched the client company's phone number and their security theatre intercepting proxy was replacing all occurrences of "sensitive" strings sent to the internet with asterisks.
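
Roughly what their proxy must have been doing, as a toy Python sketch (the phone number and product ID below are made up):

    # Hypothetical values: the client's "protected" phone number and an
    # unrelated product ID that happens to contain it as a substring.
    PHONE_NUMBER = "5551234"
    body = '{"action": "add_to_basket", "product_id": "985551234761"}'

    # A naive intercepting proxy blanks out every occurrence of the
    # protected string anywhere in the outgoing request body.
    redacted = body.replace(PHONE_NUMBER, "*" * len(PHONE_NUMBER))

    print(redacted)
    # {"action": "add_to_basket", "product_id": "98*******61"}
    # The server receives a corrupted product ID and has no idea why.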

The irony is, of course, that as the person running the site, I didn't know (and would never have wanted to know) the user's phone number until this incident.

How I loathe security theatre.

  • HPsquared 3 years ago

    Did you let them know? They of course need to patch this vulnerability by blocking anything containing 11 consecutive digits.

    • pstuart 3 years ago

      That's crazy!

      The right thing would be to add a lookup function to first verify the phone number is in use and then call the number to ask for permission to use it; followed by a webhook to send a confirmation back to the database to cache that info because this needs to be efficient!

      /s

  • deepsun 3 years ago

    Now the other site knows the phone number (they know what the request should have contained, and they can see which digits were replaced by asterisks).

    And now they can exfiltrate all the sensitive phone numbers -- just send clients (you) long strings of numbers and see what gets replaced.

  • pandaman 3 years ago

    They have just implemented this protocol http://bash.org/?244321

  • slt2021 3 years ago

    this is because you deployed your app somewhere in the public cloud and are testing it from your workstation over the public Internet? This is a policy violation, and you need to learn how to develop and test properly over secure channels. Reach out to your Director of Engineering and request proper instructions on how to develop and test software.

    a public Internet-facing channel is rightfully scanned and screened for these kinds of patterns to prevent unauthorized data loss

    • pilif 3 years ago

      I think you misunderstood my tale. I'm running a SaaS business and one of our customers' users had this issue when they were interacting with the site, because that end user's proxy server was arbitrarily altering AJAX requests made by their browser.

      This is an end user making a request to an online shop, and the POST request to "add product 123456 to the basket" gets changed to "add the product 12***6 to the basket" by a security* proxy between the end user and the web site.

      This isn't specific to the site we run. This would have happened on any site they were posting to.

      • alexvoda 3 years ago

        Shouldn't HTTPS prevent this unless the client has the certificate of the MITMer installed?

        This being security theatre, it is entirely plausible that the "security" proxy actually decrypted traffic and required the user to have the certificate installed.
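
        One rough way to check from the client side is to look at who issued the certificate you actually received (a hedged Python sketch; the hostname and output are illustrative):

            import socket
            import ssl

            hostname = "example.com"
            ctx = ssl.create_default_context()  # trusts whatever is in the OS store

            with socket.create_connection((hostname, 443)) as sock:
                with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                    # issuer is a tuple of RDN tuples; flatten it into a dict
                    issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
                    print(issuer.get("organizationName"))

            # Normally this prints a public CA's name; behind an intercepting proxy it
            # prints whatever organization the appliance's certificate claims. If the
            # proxy's CA isn't trusted at all, the handshake fails with a verification
            # error instead -- which is exactly what breaks non-browser tools.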

        • alexvoda 3 years ago

          As I was saying (quoting an uncle comment):

          https://news.ycombinator.com/item?id=33095888

          > I work at a government agency and here are my tales.

          > 1) They install a root certificate on all machines and use that to MITM all TLS connections using a firewall appliance. They turn this MITM on one day without notifying any developer. Overnight, all our builds (run on-prem) fail because npm install, pip install etc fail and we spent a long time trying to figure it out. They are still failing to this day and I have to get off the VPN every time I need to run these simple commands. IT absolutely doesn't give a flying * about developers.

    • Fredej 3 years ago

      Or, hear me out ... Get a different job.

    • ornornor 3 years ago

      You must be fun to work with.

      • slt2021 3 years ago

        Had to deal with too many interns who go to the company server and start updating packages and installing random stuff from the internet using wget | sudo bash, like it is their college laptop, just to run some crappy Python snippet they found over at Stack Overflow.

huy-nguyen 3 years ago

I work at a government agency and here are my tales.

1) They install a root certificate on all machines and use that to MITM all TLS connections using a firewall appliance. They turn this MITM on one day without notifying any developer. Overnight, all our builds (run on-prem) fail because npm install, pip install etc fail and we spent a long time trying to figure it out. They are still failing to this day and I have to get off the VPN every time I need to run these simple commands. IT absolutely doesn't give a flying **** about developers.

2) They ban all non-Chrome browsers from being installed. As in, if you install such a browser and try to launch it, the system will say "browser X is banned. Contact IT." They would have banned Safari too had it not been part of the OS. Furthermore, they also disabled private browsing in Chrome (probably the ability to do this is why they allow Chrome). I think they're preventing people from hiding their internet browsing.

  • sillystuff 3 years ago

    The MiM might not be your IT folks, but rather management. I was in a meeting which included folks from Palo Alto (PA) and management where PA was hard selling their ability to MiM all https connections and link all activities of the users to their usernames through various methods from directory integration to log scraping on radius servers. The managers were super excited about the possibilities. Management not only wanted to implement this, but wanted to do so in secret. IT folks were pushing back-- hard.

    Firewall as bossware.

    Firefox gets banned because it uses its own certificate store, so Firefox users would see a browser warning every time they visit any https site, notifying them that their traffic is being MiM'd. Chrome and Chrome reskins like MS Edge use the OS store, which MS Windows-centric organizations can easily add the trusted MiM CA to (centrally, using MS tools such as Group Policy). For the Macs, it probably wouldn't matter since the 3rd-party mgmt tools could probably push out either.

    • justsomehnguy 3 years ago

      > Firefox being banned is because it uses its own certificate store

      FYI: you can instruct FF to use the system trust store: https://support.mozilla.org/en-US/kb/setting-certificate-aut...

    • throwaway743 3 years ago

      Caught an ex-employer using sslstrip, and they definitely used bossware. Management would imply, through thinly veiled threats to workers (myself included) and through gossip, that they were reading work and personal messages, emails, and browsing history.

      They also used push notifications on the desktops to know when people were active or what they were doing, and had keyloggers installed/active. I once caught a manager's personal laptop on the network running MITM software. A friendly coworker in IT confirmed all of this with me in private.

      Tried warning a couple of coworkers, but got brushed off. People don't seem to care or believe it, even though they're being manipulated.

      That place was a nightmare, to say the least.

  • TheRealDunkirk 3 years ago

    Oh, how I have learned the hard way on this.

    Our IT now blocks outbound SSH entirely. You know, the secure way to access VMs in, say, our cloud? Sigh. I'm sure there's a "jump" server somewhere that I'd have to log into, `sudo` to another account, THEN SSH to my target box. Whatever. I just avoid the VPN.

    I used to use `cntlm` to tunnel requests through our firewall for things like Ruby's bundler, as it required NTLM authentication. Now they've also gone the additional mile, and installed a certificate (Cisco Umbrella) in all of our computers, and require its signature to pass the firewall. Unfortunately, it took me a long time to sort this out: why `cntlm` no longer worked, and why none of the usual suggestions on SO fixed it. I finally figured out that RubyInstaller for Windows included a nice facility to deal with this. You just place additional certs in a directory, run a Ruby script, and it will bundle the whole stack into a single .pem, which it will reference for all network-related commands. Thankfully, bundler's error messages were telling me the specific certs I needed, and I could download them from Cisco's web site.

    Just about a month ago, my company started requiring that cert for ALL traffic, not just HTTP(S). Like for, say, Postgres connections on port 5432. I finally realized that I could reference that same SSL bundle in my Postgres client connections, and get through.
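
    The general shape of that fix, for anyone stuck on the same thing (a rough sketch; the host, credentials and paths are placeholders, and I'm using a Python client purely for illustration):

        import psycopg2

        # Tell libpq to trust a combined bundle: the public CAs plus the
        # corporate (Cisco Umbrella) CA that the middlebox re-signs with.
        conn = psycopg2.connect(
            host="db.internal.example.com",
            dbname="app",
            user="app_user",
            password="s3cret",
            sslmode="verify-full",                   # verify cert and hostname
            sslrootcert="/etc/ssl/corp-bundle.pem",  # hypothetical bundle path
        )
        print(conn.server_version)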

    I've spent about 8 years here now, and it's been a cat-and-mouse game the whole time. I'm always wondering what's coming next.

    • xist 3 years ago

      The way you brush it off is insane. Using a jumpbox IS more secure. I understand this may cause problems for your workflow (though there are many ways to work within the confines), but it sounds like you're stubbornly insisting your way is the best (and therefore most secure) way. This reeks of entitlement. Work with people and stop being a prima donna; you're not above security concerns.

      • rtev 3 years ago

        As a security engineer, most developers I work with are like this.

    • slt2021 3 years ago

      your practices are the epitome of the Shadow IT that company management doesn't like and fights

  • dspillett 3 years ago

    > IT absolutely doesn't give a flying ** about developers.

    They are not paid to. Their performance is judged against how close they get to zero compliance issues, not how close they get to zero times developers were unhappy!

    > I think they're preventing people from hiding their internet browsing.

    Without delving into the "do you have the right to privacy even on a company machine" question, who would be daft enough to do something they want to hide from the company on a company machine, or on the company network at all? Though there are valid uses of pron^H^H^H^Hincognito mode, so it seems silly to ban it.

    • wahnfrieden 3 years ago

      You're just arguing for surveillance (by capital or state) with the tired line of "you have nothing to worry about unless you deserve it", which is absurd/reactionary

      • Spooky23 3 years ago

        You’d probably feel differently if some developer at the IRS compromised your identity because he used some compromised library to avoid internal processes.

        Security, public trust, etc requires controls and audit to deliver. You don’t want a startup “fake it till you make it” mentality in government or banking.

      • ornornor 3 years ago

        More like: you can't possibly expect your work machine to have any kind of privacy when you're in the trade and know all the ways companies can (and will) use anything they manage to gather about you, should they need to obliterate you.

        It’s simply good practice not to use your work machine for anything personal at all ever. Because depending where in the world you are, anything stored or viewed on a work machine gives your employer de facto access to it, legally speaking.

        • wahnfrieden 3 years ago

          You can be careful as a worker and still be against workplace surveillance

          • ornornor 3 years ago

            For sure, but let’s face it… we’re getting more surveillance and bossware at work, not less unfortunately.

      • dspillett 3 years ago

        I'm not saying it is right for people to be monitored, but that I would never trust that I wasn't being so I'd not be daft enough to do something I don't want the company to know about using their resources.

        And there are perfectly valid reasons for companies to monitor traffic: data exfiltration, accidental or malicious, is a significant concern for companies that hold and process PII and for the people who have their PII held/processed by those companies. It is not as black & white as “monitoring and surveillance bad” unless you only care about your personal privacy.

        • sillystuff 3 years ago

          Some of these "security" products that MiM TLS traffic allow configurations that objectively reduce your security. You can configure Palo Alto devices to accept a self-signed cert from the Internet, but present your trusted MiM cert to the on-site user. Now the user isn't aware that they are the victim of a second MiM outside the organization.

          The organization also exposes itself to greater liability. E.g., a rogue employee could use the trusted MiM CA cert for their own MiM e.g., capturing banking credentials of co-workers or accessing user/employee PII they would otherwise not have access to.

          Yes, monitoring traffic by MiM https to external sites can alert you to / possibly prevent accidental exfiltration, but it doesn't prevent intentional exfiltration. It is, however, very effective at monitoring employees. The thing it is best at, might be its true purpose in an organization.

          • dspillett 3 years ago

            > but it doesn't prevent intentional exfiltration

            It can prevent accidental exfiltration, or deliberate exfiltration by a relative incompetent, which are the majority of such problems.

            You are right in that they will not stop deliberate actions by a competent disgruntled insider or a competent external attacker who has access (but you have a much wider set of problems in the latter case).

            Maybe I'm old-fashioned (I am definitely a “working in an office, living at home” person which seems to mark me out as a dinosaur in the coming remote-work age!) but I don't think it is my employer's responsibility to provide me with unfettered unfiltered internet access to do personal stuff with. Work stuff on employer provided Internet which they can monitor all they like, personal stuff on my own devices & connections which they can keep the hell out of.

            • Xelynega 3 years ago

              Does this mean that it's OK for an employer to put cameras in employee bathrooms? The argument can be made that it's not the employers responsibility to provide me with unfettered access to a space to do personal stuff with, just like internet access, so why not?

              • xist 3 years ago

                Cameras in bathrooms? Complete hyperbole. If you're going to argue, at least offer logical escalation concerns. Day 1, inspect your SSL; day 2, cameras in bathrooms.

            • sillystuff 3 years ago

              If the employer needs to back-door encryption to discover your personal activity, how is it their business what you are doing? Your activities obviously caused them no public issues. If they did, the encryption back-door would not have been necessary for the discovery.

              In more civilized areas of the world privacy rights are explicit, and even things like employers snooping on employee email accounts on company owned email servers is illegal. When at work, you are selling your time to your employer, but that doesn't imply that the employer owns you while you are at work.

              As to the sibling comment about cameras in workplace bathrooms: yes, employers did this, and now there are laws prohibiting it. Now, employers just account for all your time using bossware, leading to e.g. folks at Amazon having to pee in bottles or wear diapers on the job to not get fired. There is no line that some capitalist employer will not cross unless we place limits with consequences to rein them in-- e.g., we no longer have employers forcing small children to crawl into running machine tools to clear a jam while risking a limb being sucked into the mechanism and turned into hamburger meat-- but we did, it was common-- lives of the poor (especially children) were cheap, but stopping an assembly line was expensive.

          • slt2021 3 years ago

            I am afraid you are talking about stuff you have absolutely no idea about.

            A Palo Alto appliance should be configured with both Forward Trust and Forward Untrust CA certificates, and then the issue you described will not exist. If some people misconfigure it - that's their fault for not following instructions.

            Secondly, a rogue employee doesn't have access to the CA key that is stored in the Palo Alto appliance. Only your firewall admin will have it, and if your main firewall admin went rogue, capturing colleagues' data is the least of your concerns. An insider threat of that calibre is equally applicable to a rogue CEO or CFO stealing all the money from the bank. Or your Active Directory admin getting the CFO's credentials and the corporate bank credentials.

            • sillystuff 3 years ago

              You seem to acknowledge that you currently can configure the device in the manner described while simultaneously being extremely aggressive. A conversation I had with PA support gives me the impression that PA didn't have 'Forward Untrust' when they first started back-dooring TLS i.e., the PA support person did not counter my point of negative security implications of their MiM back-door for invalid certificates encountered externally. This conversation was something of an on-site debate between PA reps and a few of our tech staff. PA pushing for spying on users and tech staff trying to come up with technical reasons why it was a bad idea (management already loved the idea of spying on the users, so no appeal to decency was going to work. Management arranged the debate without telling staff it would happen until the last minute while it was planned ahead with PA for weeks; IT staff at that college were good people who had a history of advocating for user privacy).

              Having PA MiM TLS connections is the organization back-dooring itself as well as the external sites the users connect to. This back-door is available for abuse by IT staff, management and/or an attacker(internal or external).

              There is a rule that seems to eventually always be proven-- if you provide infrastructure that can enable abuse, eventually it will be abused. Even if you and everyone else involved in the decision at your organization have good intentions, your future coworkers / management may not. Presumably the FBI and NSA have more thorough background checks of their employees than the average employer, and both have had employees abuse their access to surveillance data to e.g., stalk ex-girlfriends. And, even if the employee isn't rogue themselves, when the order comes from above, many will obey immoral/illegal orders-- e.g., Ronald Reagan, as president, had the FBI spy on his daughter's boyfriend. The safest option is not to install the back-door in the first place.

              PA's ability to tie Internet activity to specific users' identities was central to their sales pitch-- our tech staff assumed this was targeted at Windows shops, but we used non-MS stuff including our LDAP servers and hoped this could kill the surveillance project-- PA countered that they could, as a last resort, do things like scrape radius logs to associate identities.

              PA appears to be a competently run company that probably knows what messages are most effective at selling their product, and they really pushed user surveillance. Therefore, I suspect that many organizations who purchased PA products based that decision on the user surveillance capabilities (explicitly to enable abuse by management).

              PA's main feature seems analogous to an illegal phone wiretap, and IMO should be illegal (especially without notification to the victims-- both on-site and off-site). It is curious how corporate circumvention of encrypted communications without permission of the external site hasn't been seen as a CFAA violation, while a simple 'view source' in a browser can result in SWAT pointing a gun at your child.

              • slt2021 3 years ago

                Forward Untrust Certificate has been a feature since day one, the earliest document mentioning Forward Untrust I was able to find online is for PANOS 6.0 which is like 8 years ago?

                In a company of thousands of users nobody has the energy to spy on employees - it is simply not worth the effort. Why would a company spy on its own employees? It is not something that brings profit for the company.

                The only purpose of SSL decryption is to decrypt traffic and enforce policies: prevent users from going to shady websites, downloading malware, clicking on phishing links, and stop viruses, trojans and hackers' command & control comms. It is because the majority of HTTP traffic is TLS encrypted that security vendors have no choice other than to decrypt and inspect.

                Nobody is looking over zillions of logs at what pages a random employee is browsing on a given day - ain't nobody got the time, energy, nor infrastructure to do that.

                User identity (also device identity and app identity) is used to classify traffic, and it is then up to company admins to create policy for enforcement.

                Whatever the policy is - it will be enforced, and it is the same policy&terms you agree to by signing employment contract.

                Which says something like - your work laptop and corporate Internet connection can only be used for work related stuff and not personal stuff, etc, etc.

  • thaeli 3 years ago

    Why are they allowing you to run npm, pip, etc from public repositories at all? That's a huge supply chain risk. If builds are worth doing on prem they also need to be pulling solely from internal, vetted repositories.

    • dspillett 3 years ago

      > Why are they allowing you to

      Maybe “you shouldn't be doing that anyway” is a key part of why they don't care to spend effort resolving the problem.

    • SSLy 3 years ago

      Bold of you to assume the developers were given a budget and staff to run on-prem repo mirror.

    • TheRealDunkirk 3 years ago

      The idea that some team is "vetting" that the entire stack of stuff you'd pull from npm for a React front-end app is "safe" is ridiculous. Forget the mirroring; that's trivial. What criteria or process would make you think you had a "vetted" snapshot, beyond what they already do!?

      • slt2021 3 years ago

        This process is automated by static code analysis tools; once it is deployed, absolutely no meatbag effort is required

    • flumpcakes 3 years ago

      This is a big worry of mine.

      I've worked in various places, from big finance and the public sector to privately owned. It was only the big finance institution (the biggest) that actually seemed to care about supply chain attacks. Everything was locked down super well and you could not (!) use a random library without it getting vetted by a central team. In fact, we were even locked down to specific versions of programming languages.

      People see this as annoying and in the way of developers, but it really is the only way to secure your development "supply chain". When people cry about this I always ask: do you really want the entire financial industry grinding to a halt because someone took down left-pad?

  • Spooky23 3 years ago

    For good reason. Stuff like that is a really high risk and won’t meet audit standards.

    I’m in charge of the IT goons somewhere. We aspire to provide a better level of service and maintain local repos of things you’re allowed to use. Stuff like Node isn’t allowed near anything important though.

    I would be careful. An agency doing stuff like that is probably running an EDR that will detect and report on what you’re doing. If it catches what you’re up to, you’ll be jammed up.

    • _fat_santa 3 years ago

      It might very well be for a good reason. But in my experience, it's never the policies but the communication. IT was right to make whatever policy change they needed to; they fucked up by not telling any of the dev teams.

      • Spooky23 3 years ago

        100% agree. Most fubar things are caused by poor communication, lack of empathy and poor understanding.

  • jjoonathan 3 years ago

    The #1 value proposition of the cloud is escaping dogshit expense processes and the #2 value proposition of the cloud is escaping dogshit IT.

  • deepsun 3 years ago

    Well, if we're talking about security, their ban on npm is a good thing; that's a huge supply chain risk.

    If you don't have a budget for the vetted repositories, it means you don't have a budget for the project within the security requirements. You shouldn't be circumventing the security requirements, you should escalate the issue.

    PS: of course I'm not talking about other things like MITM certs, that only reduces security.

  • raxxorraxor 3 years ago

    > They ban all non-Chrome browsers from being installed.

    There are portable apps for other browsers, Firefox for example. FF has its own certificate store, which IT typically doesn't manage.

    about:config -> security.enterprise_roots.enabled and it uses the system store.

    Overall Firefox is a very good browser to configure for different machines. Out of the box, Chrome or Edge are just a bit more forgiving of MITM attacks, so the user doesn't notice, perhaps? Aside from that they are horrible browsers with horrible priorities.

  • ornornor 3 years ago

    Worked at an insane place that did (1).

    “It’s more secure” they said.

    The “solution”? Disable certificate checking of course! What could go wrong?

    Same place that ran a vulnerable instance of Nexus (the artifact repository manager) for all internal npm and Maven packages for a whole year before patching. It was publicly accessible. And it had a banner on the homepage that said "this version is vulnerable (severity 10/10 anonymous RCE), update NOW". Anyone who went to https://nexus.1337company.com would see it.

    That company did software for the government. I’m sure I wasn’t the only one who noticed the vulnerability and that some packages got tainted. But we’ll never know because no audit was ever performed and there were no backups of that server anyway.

    Like I said, absolute joke of a workplace.

  • bombcar 3 years ago

    That first one indicates something is being injected and the checksums are failing, that's ... worrying.

    Or npm and pip use their own certificate stacks and refuse the firewall's cert, which is ... good I guess.

    • marcosdumay 3 years ago

      It could be anything from the certificate being invalid for those tools (because the MITM one was installed only for Chrome) up to the firewall replacing all of the files with HTML pages from some antivirus, with internal links where the user can download them.

      Corporate middleboxes come in all shades of stupid.

    • thayne 3 years ago

      > Or npm and pip use their own certificate stacks and refuse the firewall's cert, which is ... good I guess.

      Combined with the fact that chrome is the only allowed browser, I suspect it is the other way around. Chrome uses its own certificate stack, and I would guess IT only added the MITM certificate to the chrome trusted CA list, not the system one.

    • Karunamon 3 years ago

      I would be willing to bet the first one is caused more by those tools not being aware of the firewall appliance CA rather than failing checksums. Doing your own certificates at scale is a pain in the ass because every tool/container has its own way of handling the trusted list.
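
      A rough illustration of that sprawl - the same corporate bundle has to be wired up separately per ecosystem before builds work again (the env var names are real, the bundle path and commands are placeholders; Python sketch):

          import os
          import subprocess

          CORP_BUNDLE = "/etc/ssl/certs/corp-ca-bundle.pem"  # hypothetical path

          # Every ecosystem has its own knob for trusted CAs, so the same
          # bundle gets wired up several times over.
          env = {
              **os.environ,
              "SSL_CERT_FILE": CORP_BUNDLE,        # OpenSSL / Python ssl
              "REQUESTS_CA_BUNDLE": CORP_BUNDLE,   # Python requests
              "PIP_CERT": CORP_BUNDLE,             # pip
              "NODE_EXTRA_CA_CERTS": CORP_BUNDLE,  # Node.js
              "npm_config_cafile": CORP_BUNDLE,    # npm
              "GIT_SSL_CAINFO": CORP_BUNDLE,       # git over HTTPS
          }

          subprocess.run(["pip", "install", "-r", "requirements.txt"], env=env, check=True)
          subprocess.run(["npm", "install"], env=env, check=True)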

  • slt2021 3 years ago

    this is because "pip install" is insecure, this is supply chain risk. Your IT team should have provided local artifactory proxy through which you can pip install.

    you should use this command "pip install -i http://artifactory.mycompany.local pandas" and get url for artifactory from admins

    • TheRealDunkirk 3 years ago

      If "your IT team" has merely created a snapshot of an external repo, how is this any more "secure"? I've asked a similar question below. I really want to understand the thinking here. No IT department is going to go line-by-line through all the packages in Artifactory or Ruby gems or npm packages or NuGet's repo, checking them all against known vulnerabilities. No one's going to vet the actual code. If there's a public advisory for one of the packages, the parent repo is going to fix it first, the internal repo may already be compromised, and the "IT team" is going to have to duplicate the work that the repo runners are already doing, and do it slower. I'm lost here.

      • slt2021 3 years ago

        there are IT security vendors that provide static code analysis and scanning for known signatures, which can detect and block malicious packages. Just target the SCA at the local Artifactory and this will be a solved problem. The CISO just needs to buy a solution and IT admins just need to deploy that software once, and it will keep scanning. Absolutely no extra work from meatbags is required
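
        A rough sketch of what that kind of automated check does under the hood, using the public OSV vulnerability database as a stand-in for a commercial SCA feed (package name and version are just examples):

            import json
            import urllib.request

            def known_vulns(name: str, version: str, ecosystem: str = "npm") -> list:
                """Ask OSV whether this exact package version has published advisories."""
                query = json.dumps({
                    "package": {"name": name, "ecosystem": ecosystem},
                    "version": version,
                }).encode()
                req = urllib.request.Request(
                    "https://api.osv.dev/v1/query",
                    data=query,
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.load(resp).get("vulns", [])

            # An internal proxy/SCA hook could refuse to serve a package when this is non-empty.
            print(len(known_vulns("lodash", "4.17.15")))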

        • richbell 3 years ago

          > Absolutely no extra work from meatbags is required

          Unfortunately this is rarely true in practice. There is always some degree of friction or error that ought to be managed; ignoring it is how shadow IT proliferates, e.g. a dev is tired of their builds failing due to a false positive and decides to circumvent artifactory altogether.

          You're spot-on otherwise.

        • TheRealDunkirk 3 years ago

          So these bulk scanners exist, and the issue is a solved problem, but none of the "root" repos for the popular language stacks are using them?

          It seems that Microsoft has built an internal tool that runs such a scan on NuGet (https://devblogs.microsoft.com/nuget/how-to-scan-nuget-packa...), at least against your individual app's packages. (That would be a very rare h/t to Microsoft from me.)

          EDIT: Apparently, you can also do this with npm packages (https://docs.npmjs.com/auditing-package-dependencies-for-sec...). I don't see any facility to do this with Ruby gems.

          It looks like the common practice would be to outsource the issue database to GitHub, and let whatever scanner you're using cross-reference that list?

          What happens when it finds a reported problem? Does it automatically delete that mirrored package, and/or block it from being downloaded or used from the on-prem repo?

          This is all new to me, and has helped put this in context, but what actual software are you talking about using for analysis?

          EDIT EDIT: Running `yarn audit` in my main Rails app (just using webpacker to bundle the JS):

              97 vulnerabilities found - Packages audited: 1074
              Severity: 2 Low | 34 Moderate | 52 High | 9 Critical
          
          I just did a `yarn upgrade` about a week ago, so it's not like I'm completely out of date. What would a centrally-managed SCA do about this situation?
          • richbell 3 years ago

            > So these bulk scanners exist, and the issue is a solved problem, but none of the "root" repos for the popular language stacks are using them?

            I could talk at length about this; unfortunately, I'm on my phone with a spotty connection.

            The tl;dr is that companies like Snyk make money by requiring companies to pay to check for vulnerabilities once they've been downloaded. There's not necessarily anything wrong with that, but a philanthropic company could make things significantly safer for everyone if they weren't concerned about making money. Initiatives like the OSSF will hopefully have a positive impact, for this reason.

            • slt2021 3 years ago

              You need money to staff people with security knowledge who constantly keep the product up to date with the latest malicious signatures in all codebases.

              This is why only a commercial company can build a great product. There's a constant cat-and-mouse game between threat actors and defenders/researchers. Every new malware/trojan/cryptominer strain needs to be found, identified, and a signature written; all clients need to get the latest signatures ASAP, and the product has to work flawlessly with as few false positives as possible.

              • richbell 3 years ago

                > You need money to staff people with security knowledge who constantly keep the product up to date with the latest malicious signatures in all codebases.

                Of course; it's just funny to think about how much money gets spent detecting and fixing things downstream instead of fixing it at the source.

  • xani_ 3 years ago

    And that, kids, is how it looks when the security team just sits in their ivory tower and shits on everyone else in the name of the security theatre they're paid to play

    > Overnight, all our builds (run on-prem) fail because npm install, pip install etc fail and we spent a long time trying to figure it out. They are still failing to this day and I have to get off the VPN every time I need to run these simple commands. IT absolutely doesn't give a flying ** about developers.

    Add their cert to the system store? Won't help inside containers though... not without much fuckery

    • slt2021 3 years ago

      You are not supposed to mess with certificates inside containers.

      The security team should have provided you with a golden image with a hardened config, the latest patches installed, and the corporate certs installed in the certificate store.

      If they didn't, they ain't doing correct DevSecOps/SecDevOps or whatever the fancy term is for integrating security within the development team.

      It is a big red flag if any developer can pull whatever image for a container running in production, possibly with unpatched vulnerabilities, a loose config, open ports, running with root privileges, etc.

      Usually stuff has to be vetted and checked prior to being deployed in a production environment

    • richbell 3 years ago

      > Add their cert to system store?

      That's the fun part: every technology and tool has its own bespoke way of handling certificates, and it often isn't as simple as adding a certificate to the system store.

    • bornfreddy 3 years ago

      Why not? Just mount a volume with certs and you're good to go. In every container, of course.

cheschire 3 years ago

"Aha, so an overzealous IT network decided to block the request before it even reached my server."

What classifies this as an "overzealous" act of network configuration? There may be a subjectively legitimate reason the user's network was configured this way.

"I had no idea I was ever going to get anything different."

There's an entire list of HTTP status codes. That was your clue that you would get something different. You made a decision not to handle them all. Not implementing handling for 418 is understandable, but Forbidden and Service Unavailable responses are common enough.

  • richbell 3 years ago

    > What classifies this as an "overzealous" act of network configuration? There may be a subjectively legitimate reason the user's network was configured this way.

    Worked at a large FI.

    Our corporate firewall used to block any website or payload that contained the word "hack". At one point, the security team decided to roll out a change that blocked all verbs except GET and POST without telling anyone. I could go on.

    • thedougd 3 years ago

      And probably replies with a 200 and a blocked page.

      What you tend to see is that the web firewall is administered by someone who has only one duty (manage this firewall) and a very narrow set of skills (certification in this appliance). They probably have a very shallow understanding of the HTTP protocol.

    • ballenf 3 years ago

      And the nearby Burger Shack wondered why their online orders plummeted.

    • erinaceousjones 3 years ago

      Wow, that's whack. I couldn't PUT up working in a place with such a hackneyed firewall limiting my OPTIONS so much, really raises my hackles. I'd HEAD out the door so fast in such a ramshackle establishment, I wouldn't even ask for a reference, I'd just kindly ask that they DELETE my number

    • ntrz 3 years ago

      > Our corporate firewall used to block any website or payload that contained the word "hack".

      How else are you going to stop employees from downloading and playing NetHack at work?

  • jeroenhd 3 years ago

    Allowing this person's gift card shop but not allowing POST requests is clearly overzealous in my book.

    I understand that some companies want to block certain websites. However, if you're in such a restricted network, I wouldn't expect a website like "Thankbox" to work at all.

    An overzealous filter like this prevents normal POST requests (logging in to websites, etc.), lets through random websites (gift card website) and allows all manner of data exfiltration and other nasty stuff. The goal is laudable, the implementation is laughable.

  • jjoonathan 3 years ago

    There's a subjectively legitimate reason to consider blocking POST (but not GET) requests ruder things than "overzealous."

    • adev_ 3 years ago

      > There's a subjectively legitimate reason to consider blocking POST (but not GET)

      No, just no.

      In a world where many websites use GraphQL (POST requests with content) or gRPC, that's a complete garbage decision.

      This kind of brain-dead admin decision is exactly what brings protocol abuse: people will just use GET queries with a ton of parameters and violate semantics just to avoid stupid middlebox problems. The same goes for TLS, which is used everywhere (even in VPNs) just to bypass the crappiness of corporate firewalls and stupid managerial decisions.

      • jwolfe 3 years ago

        The rest of the sentence that you left off in your quote is saying that blocking POST requests is worse than overzealous. You are in agreement with them.

    • concordDance 3 years ago

      What kind of reason? You can have plenty of communication via GET requests.

bornfreddy 3 years ago

While I am sympathetic to the developer, a large part of the fault lies with them. The firewall actually behaved very nicely.

Always check status codes. Don't assume that the backend (even if it is your own server) behaves as you think it should - complain when the response is not what you expected.

This is why I hate those error responses that encode the error message into JSON and return status 200. Gee, thanks - your backend is so special that it is an honor to write custom error handling for it. /s

Glad OP solved it in the end, but I would suggest reacting to all 4xx and 5xx statuses. It's a standard; if you get a 418 you know what "your" backend is saying.
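
Something like this is all it takes (a hedged sketch using the Python requests library; the URL and payload are made up):

    import requests

    resp = requests.post("https://api.example.com/thankbox/messages", json={"text": "Thanks!"})

    # Treat anything outside 2xx as an error instead of assuming JSON came back.
    if not resp.ok:  # resp.ok is False for every 4xx and 5xx status
        raise RuntimeError(f"Backend returned {resp.status_code}: {resp.text[:200]}")

    data = resp.json()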

hyperman1 3 years ago

We once had to fight for Stack Overflow access. Security responded: you devs should only require the manual provided by the vendor (in this case: Oracle Javadocs).

  • easton 3 years ago

    If anyone is stuck somewhere like this, Stack Overflow has dumps regularly updated on archive.org -- https://archive.org/download/stackexchange/

    Dash (or its Windows equivalent, name escapes me) can be used to view and search these dumps (as well as dumps from GitHub, language docs, etc) offline: https://kapeli.com/dash

  • P5fRxh5kUvp2th 3 years ago

    I was recently told by an old-timer at my current company that at one point security tried to remove Visual Studio from developers' machines because it had reported security incidents.

    The problem with security people is that they think security is the most important thing.

    • unethical_ban 3 years ago

      The problem with devs is they think all security admins are reductionist. <3

      A good security admin will work within the bounds of compliance to make the business work. And any good blocks will be apparent to the user. Trust me, security doesn't enjoy pissing people off, we just accept that it happens sometimes.

      • P5fRxh5kUvp2th 3 years ago

        Last week a former co-worker called me laughing.

        He was on the phone with the CISO who was explaining it's impossible to give him access to SPLUNK because of the network segmentation.

        While he was ON THE PHONE, he received an email from the IT group with credentials to access Splunk.

        And to be clear, I left specifically because of their security stance. I was once told they couldn't automate pulling data from production because of the same reason as mentioned above, the network segmentation wouldn't allow it.

        So no, developers aren't just whining because they can't directly access PAN.

        Security people always think their concerns should trump everything else. I would almost be willing to bet 70% of the mind-numbingly stupid decisions made across the industry had some security justification behind it.

        If human beings took the same approach to safety that Security people do to security, they'd insist the wheels on your vehicle should only be able to turn straight and right. That the vehicle should _actively_ prevent you from turning your wheels left because left turns are more dangerous than right turns and they can show that you can _always_ get to your destination with just right turns.

        • unethical_ban 3 years ago

          I'm sorry you're so traumatized by a handful of experiences, and seemingly at only one or two places, that you can't comprehend a workplace or institution with a reasonable security team. They exist. Maybe one day you'll find one.

          One of my former employers has developers, network admins and security professionals working together to maintain a deployment pipeline using Github, terraform and AWS to let developers do as much as possible without having to request anything from security, ever. All the guardrails and checks are built in. Labs get to deploy just about anything, test and prod are identical, and prod has implicit restrictions on requiring encryption for data, prohibiting excessively powerful roles, and so on. But they've worked directly with development to get them everything they need ahead of time, in order to make IT and the business as effective as possible.

          Security is necessary, and good security does what it can to stay out of the way.

        • slt2021 3 years ago

          Security people usually hate ad-hoc and one-off requests for random stuff from random people. If you are part of the required business process - then there is a 100% established and approved way of doing things. For example, for Splunk - the CISO simply needs to be added to an AD group that is designated to have Splunk access, something like a SOC-analysts group.

          For pulling data from prod - this is often discussed. Data in production should not be pulled into lower environments (dev and test) because of segmentation, but you can absolutely operate with prod data within the prod environment, like by using an approved production data lake or data warehouse or something.

          Believe me, for every security decision that you think is stupid there are many incidents that happened, and every rule and ban exists because of these incidents/breaches/data corruption, etc.

          It is like workplace safety instructions: they were written because of workplace injuries. Same for traffic laws.

          • P5fRxh5kUvp2th 3 years ago

            Yeah, let's equate someone getting a limb ripped off with allowing developers to have local admin rights.

            That's everything that's wrong with the security mindset.

            • slt2021 3 years ago

              Developers don't need local admin rights to develop software; plenty of devs in regulated industries work with user rights.

              And there are plenty of statistics on developers falling victim to phishing attacks and credential stealing that leads to a major breach. The most recent Uber hack or Okta hack - they were all tied to a developer clicking on stupid stuff, opening executables from the Internet and getting his a$$ owned.

              You just gotta accept the fact that developers are not security specialists; most of them can't even create software without introducing plenty of bugs and vulnerabilities. They mostly google stuff and copy-paste from Stack Overflow, install shady, barely working packages, and copy-paste directly into production whatever code snippet they found on the first page of Google results. That's why they need extra control from security specialists

              • P5fRxh5kUvp2th 3 years ago

                > Developers don't need local admin rights to develop software

                And now I'm going to quote myself from earlier to make it clear you're displaying exactly the silliness I was speaking to, with added emphasis.

                "If human beings took the same approach to safety that Security people do to security, they'd insist the wheels on your vehicle should only be able to turn straight and right. That the vehicle should _actively_ prevent you from turning your wheels left because left turns are more dangerous than right turns and THEY CAN SHOW THAT YOU CAN ALWAYS GET TO YOUR DESTINATION WITH JUST RIGHT TURNS."

                ---

                You see, you can still get to your destination with no left turns, it's just really damned inconvenient and has costs in terms of happiness and time.

                It's a classic case of security people making decisions they themselves don't have to pay the cost of.

                And don't get me wrong, you'll often hear security people _CLAIM_ they do, in fact, adhere to all of the security practices they insist on. And they may even do so.

                But ...

                THESE SECURITY PEOPLE ARE NOT DEVELOPERS.

                There's no critical thinking in these decisions. A phone agent working in a very specific application all day doesn't need access to the PC the way a developer does.

                ---

                > And statistics of developers falling victim of phish attack, credentials stealing that leads to major breach - there are plenty. The most recent Uber hack or Okta hack - were all tied to developer clicking on stupid stuff, opening executables from Internet and getting his a$$ owned.

                The Uber hackers got through using Slack; Okta was a technician RDPing in.

                Neither were developers, and unless you're prepared to claim Slack wasn't sanctioned by the company, it's all just a long-worded admission that removing local admin rights didn't actually help.

                Then there's the question of, if someone steals a developers credentials, what do they have access to?

                THAT is where the rubber hits the road. I've literally seen the following:

                - Disallow developers from running powershell, but they can log directly into DB's with PII and PHI data ("they had a legitimate business need").

                - Force developers making 6-figure salaries to "request access" for admin or the installation of software, said requests being granted by support teams of people making a little over minimum wage.

                There's a reason why so many people call it security theatre.

                > You just gotta accept the fact that developers are not security specialists, most of them cant even create a software without introducing plenty of bugs and vulnerabilities. They mostly google stuff and copypaste from stackoverflow, install shady barely working packages and copypaste directly into production whatever code snippet they found on the first page of Google results. Thats why they need extra control from security specialists

                The reason your company is full of such developers is because you took away local admin rights and the ones with options left. You don't even have any left who could mentor the ones that need mentoring, they left too.

                Put yourselves in the shoes of that developer who can access PHI at will, but cannot update their Visual Studio in the name of security because it requires local admin rights to do so.

                • xist 3 years ago

                  > Put yourselves in the shoes of that developer who can access PHI at will, but cannot update their Visual Studio in the name of security because it requires local admin rights to do so.

                  One possible thought - they trust that you will do the proper thing, but they cannot and do not trust every single vendor out there. Have you heard of SolarWinds?

                  And honestly, updating Visual Studio is something that can be arranged, but it would probably take 1 hour of IT time to solve and I'm sure they have other things they need to do.

                  Developers are not so special that they NEED admin access. They may WANT it because it's more convenient, but convenient is not secure. Maybe you're the most 1337 developer out there, or maybe you're a corporate spy.

                  Perhaps instead of lashing out and getting angry, approach this like a developer - what are they trying to achieve? Do they have technical debt like you? Is this a good enough solution for most use cases?

                  Inconveniencing you is not the main goal, so perhaps understand what their main goal is.

                • slt2021 3 years ago

                  I've been on both sides, was a developer and then a security engineer, now back to dev work. I know there are quite a few very talented engineers, but there are also quite a lot of mediocre developers, including interns, new grads, or startup folks who are used to cowboy-style edits directly in production and no tests. You always want to plan your security controls for the weakest link, for the dumbest person, and prepare for the worst case. This is how you have some assurance that your security will work regardless of who is sitting in front of the keyboard: teenage intern or grey-beard guru.

                  Using your car analogy - car designers created the steering wheel, blinkers, and mirrors to increase safety, but you insist that since you are a power user you want to be able to turn left/right by drifting with the parking brake. This is obviously a safety risk on public roads, and it's understandable how a corporate fleet employer like Greyhound might not allow drifting when driving a company bus with passengers.

                  You are free to drift on your personal equipment though, during non-work hours and without wearing company clothes.

                  Developers really don't need admin rights. Visual Studio and any other software is updated automatically these days using tools like SCCM. This is not an issue at all. If you need full control over the OS - install the free VirtualBox, or get a lab VM and do whatever you want inside that isolated VM, but not on the host machine. Because your machine is tied to AD, email, and a bunch of other corporate stuff, IT cannot risk giving you admin rights with which you could disable all the necessary security protections.

                  Just because you are a power user doesn't mean your colleague in the next cubicle is as smart and doesn't click on phishing LinkedIn emails.

                  PII is not an issue at all, because of endpoint security agents, network traffic inspection, data loss prevention, network segmentation, and a bunch of other security controls.

                  Just because you make over 6 figures doesn't make you any better than the minimum wage IT support folks; they follow scripts and established procedures very well, and most of them do their job well.

                  I agree that a lot of places have security theatre, because security engineering is an even rarer skill than software engineering; it is much harder to find a skilled SecEng than an SDE.

                  But things like SQL injection, shell command injection, URL traversal, and a zillion other attacks are made possible by software developers, and it then becomes SecEng's problem to protect the company against whatever crap they coded and pushed to prod.

                  • Fredej 3 years ago

                    > Because your machine is tied to AD, email, bunch of other corporate stuff

                    Most of the time, I genuinely wish it wasn't. There's so much I have access to, that I would never, ever need. And because I have that, I can't access things I actually need.

                    Just give me an iPad for all the corporate stuff and let me work on an open PC.

                  • P5fRxh5kUvp2th 3 years ago

                    None of the things you're describing are protected by removing local admin rights. That's the point.

                    First you compare the risk of losing limbs to having admin rights; now drifting on public streets is like wanting to install Python.

                    You can't find anything reasonable because there isn't any.

                    • rtev 3 years ago

                      I’ve personally abused unauthorized developer python installs for privilege escalation > 3 times while red teaming.

                      Consider the possibility that you may be wrong.

                    • slt2021 3 years ago

                      You haven't provided a single valid reason why a developer needs admin privileges.

                      • P5fRxh5kUvp2th 3 years ago

                        Should I quote myself a 3rd time with the analogy pointing out that just because you can get somewhere using only right turns doesn't mean that's how you should do it?

    • happimess 3 years ago

      The most secure option is to bury everyone's computer in concrete.

    • denton-scratch 3 years ago

      Well, for them, it is and should be the most important thing. The problem bites when management shares that view.

  • reisse 3 years ago

    We once had an idea about integrating libtorrent for distributing binaries. Turned out the libtorrent website was banned by the corporate firewall due to "piracy".

    Decided the idea was not worth a fight with the infosec guys.

  • BizarroLand 3 years ago

    I would have taken that as my queue to start finding another job. Not that I can't puzzle everything out from scratch every single time I need to do anything, but why should I reinvent the wheel when off-the-shelf is both faster and higher quality?

    • thedougd 3 years ago

      These types of policies and mismanagement drive out the best talent and leave the organization filled with coasters who love any excuse to not do their job.

    • nightpool 3 years ago

      (cue, as in "a signal (such as a word, phrase, or bit of stage business) to a performer to begin a specific speech or action", e.g. "That last line is your cue to exit the stage". See https://www.merriam-webster.com/dictionary/cue)

  • arethuza 3 years ago

    About 2003/2004 I was working onsite at a customer that blocked access to Google - that was fun.

    • jiggawatts 3 years ago

      As recently as 2015 I was working at a customer site where the web proxies were so misconfigured that Google was effectively blocked. The main page would load about a third of the time after maybe ten or twenty seconds. This was a huge org with 15K users, including dozens of developers and hundreds of general IT staff.

      Turns out that a one-checkbox-tick fix was all it took to make that go away. The woman in charge of the web proxies panicked, thinking that this change had "broken something", reverted the change, and then refused to change it back.

      Fun times.

  • incomingpain 3 years ago

    Speaking as a security guy: we are taught the CIA triad early, and it's easy to forget.

    The A stands for availability and if you don't make things available, you're failing at your own job.

    • microjim 3 years ago

      Well, that depends. You certainly want to ensure the availability of the information under your remit isn't compromised by a threat actor, but reducing your attack surface by, say, shutting down external internet access is certainly a valid mitigation in some circumstances.

    • zasdffaa 3 years ago

      CI = ??

      • zweifuss 3 years ago

        I didn't know either, so I looked it up: The three initials stand for the three most important IT protection goals, often referred to as the "pillars of data security":

        Confidentiality,

        Integrity,

        Availability.

        There are other IT protection goals, including authenticity, privacy, reliability, and (non)repudiation.

        • microjim 3 years ago

          >authenticity, privacy, reliability, and (non)repudiation.

          These fall under integrity, confidentiality, availability, and integrity respectively! The CIA triad is pretty comprehensive!

        • doubled112 3 years ago

          I think CIA covers everything in the list, right?

          Authenticity and non-repudiation falls under integrity.

          Privacy falls under confidentiality.

          Reliability in context is another word for availability.

      • easton 3 years ago

        Confidentiality, Integrity

  • deathanatos 3 years ago

    A previous employer of mine blocked the XTerm escape sequence reference. Like, okay, may your terminal output be plain and boring, I guess.

    Also, the link to "request an exception" led to a 404, and the IT team responsible for the blocking didn't respond to email.

  • bluedino 3 years ago

    There are a number of sites categorized as 'file sharing or download' that we can't get to, here. Ugh. Bad idea when your userbase runs on free software.

    Oddly enough, I can get on imgur

    https://imgur.com/CUncIEP

  • zasdffaa 3 years ago

    Little experience with javadocs, so how do they compare?

    • n4r9 3 years ago

      Also no experience with javadocs or Java, but in my experience Stackoverflow is a huge productivity boost for a junior dev.

      For example, I wonder if the javadocs show you how to convert an InputStream to a string, as per this question: https://stackoverflow.com/questions/309424/how-do-i-read-con...

      • karatinversion 3 years ago

        I’ve used javadocs plenty, and really like them, but they are organised by package and class, so figuring out how to do something when you don’t know what package to use is very painful. Say you want to know how to delete a file at a given path. I’ve been around the block a few times, so I’ll know that it’ll probably be an operation on java.nio.file.Path, so I can find the Java doc for that, hit “Uses”, and search for “remove” (nothing) and “delete” (ah-hah, there it is).

        If you don’t have a starting point like that from prior experience or stackoverflow, you’re stuck clicking around the package lists, hoping to land on something useful

    • duckydude20 3 years ago

      I'm a junior developer and I now rely heavily on them, especially for the multi-threading stuff. But in most cases I know what I'm looking for: the particular interface, or at least I have some idea. This is true for most of the JDK; other libraries' javadocs I use more indirectly (Ctrl+Q). I do sometimes go back and search for uses/examples of some implementation I found via the docs, e.g. selectors and channels... but right now I'm enjoying reading the docs first.

    • tikkabhuna 3 years ago

      Javadoc, I find, is excellent, but it's firmly in the "Reference"[0] quadrant of documentation. It's very readable and useful when you know what you're looking for (finding subclasses of Collection, for example). However, Stack Overflow is excellent when you don't know where to start.

      [0] https://documentation.divio.com/

    • richbell 3 years ago

      JavaDocs are like the owner's manual included in a car: useful for many things, but if you need to figure out what route to take to get from point A to point B it probably won't help you.

  • raxxorraxor 3 years ago

    RE: RE: stackoverflow access

    CC: Management

    With such a strategy for learning everyone would end up in IT security.

    Regards,

    hyperman1

coldcode 3 years ago

I worked at a financial company in the mid-2000s where the network head did not believe in internal firewalls, so all internal users were on the same network as all the web app servers and database servers. If someone was downloading a movie, customer web access slowed. Since everyone used Windows, everyone was required to run virus scanners on their computers, and that included the app and database server machines, since they were not isolated from the rest of the network. If a vendor came to demo something, they were out of luck: there was no way to isolate their laptop from everyone else, so they could not be given internet access.

Good thing I never put any of my money in the company accounts...

  • icedchai 3 years ago

    This sounds like most of the early ISPs I worked at. No firewalls, and switches weren't popular yet, so we had hubs. The "backbone" of the ISP network was the same as the main office network. Any employee could just tcpdump all the traffic. Actually, we had a couple of customer-owned servers that were colocated, that could also dump all the traffic. Eventually someone set up a firewall (Linux box with dual ethernets) to segment the colo traffic.

  • deepsun 3 years ago

    You won't believe it, but the "one network" has come back these days. It's called "zero trust": basically treating your internal network as public.

    • PLG88 3 years ago

      Zero Trust principles may imply having a flat underlay, but access to applications and services should be explicitly microsegmented, least-privilege, and authenticated/authorised on strong identity before any connectivity can be established - i.e., the overlay is closed by default and does not trust the underlay. Ideally you put ZT inside the application so you do not need any inbound ports, public DNS, etc.

    • slt2021 3 years ago

      that's right, except that all traffic is TLS encrypted, all authN/Z is at least two-factor, and all services are least-privilege/whitelisted, so even intercepting traffic, session keys, or user/pass credentials won't give you anything important

gwillz 3 years ago

I have a government client that has locked down all outgoing access for a web server except through a SOCKS proxy.

It makes simple things really hard - like a link checker, package dependencies, remote servers, or integrations with Google.

We can't even run test scenarios on the machine because we're also locked _out_ of the server. Instead, we rely on their IT department to run test scripts that we send them via email.

We spent two weeks debugging an elastic server connection that was working perfectly fine in their "QA". It's a horrible existence.

  • dijit 3 years ago

    I used to work at Ubisoft and they had this same policy. They used an authenticated HTTP proxy, so you either exposed your entire SSO credentials to your environment (HTTP_PROXY=http://user:password@proxy:3128) or none of your console applications got access to the internet.
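
    For the record, "exposing your credentials to your environment" typically amounts to something like the sketch below (the host, port, and NO_PROXY list are hypothetical):

        # plain-text SSO credentials end up in every process's environment
        export HTTP_PROXY=http://user:password@proxy.corp.example:3128
        export HTTPS_PROXY=$HTTP_PROXY
        # bypass the proxy for local/internal hosts
        export NO_PROXY=localhost,127.0.0.1,.corp.example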

    Even then, if you were using certificate pinning, it wouldn't work, as the HTTP proxy would serve an "are you sure you want to continue" HTML page, which is of course not expected.

    SSH is out of the question.

    It's amazing what "simple" things break: kubectl, gcloud, go get.

    So frustrating. Countless development hours lost to bypasses.

  • bheadmaster 3 years ago

    > I have a government client that has locked down all outgoing access for a web server except though a socks proxy.

    If you're running Linux, there's a utility called "tsocks" which wraps any other command and redirects all network connections through a SOCKS proxy defined in /etc/tsocks.conf, e.g.:

        tsocks pip install somepackage
    
    One downside is that since it relies on some linker magic, it doesn't work for static binaries. But for most common usage, it served me just fine.
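
    A minimal /etc/tsocks.conf for a setup like the one above might look like this (the server address and port are hypothetical):

        # send matching traffic through this SOCKS v5 server
        server = 10.0.0.1
        server_type = 5
        server_port = 1080
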
  • roflyear 3 years ago

    Hopefully you can bill them for all of this, but yeah, totally ridiculous and costing the taxpayer a ton of money.

    Reminds me of a friend who started a government job and went six months before they were fully onboarded and able to work.

gabesullice 3 years ago

> It helpfully spits out this HTML response in return but, of course, my frontend code was expecting a JSON response. I had no idea I was ever going to get anything different.

I wish more front-end devs recognized that they're building HTTP clients whenever they make HTTP requests. There's a whole specification written about how to do that well so one doesn't have to learn things like this the hard way. Specs may look old and esoteric, but following them bakes hard-earned wisdom into your apps for free.

  • PcChip 3 years ago

    Isn't there some kind of library to handle this without having to code it each time?

    • raxxorraxor 3 years ago

      There is an HTTP status code in every response, and a 403 like in this case should already inform you about the problem. That's pretty high-level already.

      I prefer to let the users actually see such errors, although that seems to be an anti-pattern today.

      Usually any message receiver should first check the status code and only proceed if it is 2xx and handle errors in any other case.

      But such edge-case errors (a 403 usually isn't an edge case) getting swallowed still happens on the most prominent, thoroughly tested sites. I've had similar issues on Amazon and Microsoft pages, for example: I saw the errors in the console, but they weren't displayed to the user.
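
      As a minimal sketch of that pattern (the endpoint and the displayError helper below are hypothetical), check the status code before ever parsing the body as JSON:

          const displayError = (msg: string) => window.alert(msg); // placeholder UI feedback

          async function addToBasket(productId: string): Promise<void> {
            const res = await fetch("/api/basket", {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify({ productId }),
            });
            if (!res.ok) {
              // e.g. a 403 injected by a corporate proxy, or a 503 from the origin
              console.error("Unexpected response", res.status, await res.text());
              displayError(`Request failed (${res.status})`);
              return;
            }
            const basket = await res.json(); // only parse JSON on a 2xx response
            // ...update the UI with `basket` here
          }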

    • gabesullice 3 years ago

      Yes and no. A library might check the status codes, but then what? Will it throw an exception? You'll have to catch it.

      Will it fail silently? If so, is your code prepared for null to be returned instead of a parsed JSON object? You'll have to check for that.

      Using a library isn't going to save you from handling all the error states and unexpected response bodies. It'll just change the documentation you're reading and the name of the abstractions you're dealing with (e.g. status codes -> exceptions)

  • ValCanBuildOP 3 years ago

    I've found often in my career that, sadly, learning things the hard way is usually the best way to remember the lessons.

    • gabesullice 3 years ago

      Hey, me too! Thanks for writing up your experience and publishing it btw.

      I think you did a great job cementing the "why"—usually this topic is very hypothetical. I also liked how you tied it to real end users. After all, that's who the internet is for!! [1]

      My intention wasn't to criticize your post. I hoped my comment would help one or two readers recognize the underlying problem space a little sooner, which might help them learn a more broadly applicable lesson when the time comes.

      [1] https://www.rfc-editor.org/rfc/rfc8890.html

tanepiper 3 years ago

If I want to push to GitHub when I am in the office, I have to VPN out of the office connection because Port 22 is blocked.

And they wonder why I prefer to work from home?

  • gsu2 3 years ago

    For anybody running into a similar problem: most git hosting services set up a subdomain that will allow SSH traffic over port 443, e.g. ssh.github.com, altssh.bitbucket.org, altssh.gitlab.com, etc.
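
    In the GitHub case this can be made transparent with a small ~/.ssh/config entry (this mirrors GitHub's documented setup):

        Host github.com
          Hostname ssh.github.com
          Port 443
          User git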

  • ValCanBuildOP 3 years ago

    Oh god this is horrible!

    Yeah, I can't believe how stupidly locked down some of these networks are.

    I once had an employer say they needed a "whitelist" of websites we wanted to visit instead of a "blacklist" of ones we shouldn't. That was an interesting day...

    • davewasthere 3 years ago

      I had exactly this.

      We run a SaaS, and someone wrote an email saying that our server was down and asking when we'd expect it to be up. Not having had any notification, I double-checked from a couple of geographic locations that our application was indeed up and responding.

      After a bit of investigation, it turned out that they have to whitelist every unique address with their corporate IT, and they had only whitelisted our primary client-app URL (which talks to a couple of different API endpoints), hence the strange error message.

      It's been a long time since I've worked somewhere with whitelisting.

    • shireboy 3 years ago

      I’m dealing with this now. The company got hacked, so now they are going over the top, locking everything down to the point where it's unusable. I told them the other day that the most secure thing they could do is just turn it all off.

  • Kim_Bruning 3 years ago

    My solution was to run an sshd on port 443.
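
    Assuming a stock OpenSSH setup, the change is just an extra Port line in sshd_config (sshd then listens on both):

        # /etc/ssh/sshd_config
        Port 22
        # also listen where outbound HTTPS is usually allowed
        Port 443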

    I currently no longer need to do so right this minute, but people do sometimes ask me why I still have it.

    ---

    Not sure if this still works on modern corporate networks. These days I tether to a mobile phone with unlimited internet, which is all-around easier to work with.

    • aaronax 3 years ago

      As an example, best practice for Palo Alto firewall setup is to create a rule that allows the "application" known as "SSL" and then use "application-default" as the setting for which ports to allow it on. This would inspect the traffic to determine that it is SSL (actually TLS in most cases I guess) and then allow it if on port 443.

      If you don't have other relevant allow rules, your sshd traffic would just be dropped, regardless of port.

      If the firewall administrator does things poorly, they will create an allow rule for port 443 and your sshd traffic on port 443 would be allowed (no inspection of traffic to determine if it is SSL or SSH).

      BTW this is inspection, not decryption. Two very different things.

      The business of developing algorithms to effectively detect various applications must be very interesting. You can see all the different "applications" here: https://applipedia.paloaltonetworks.com/

    • iso1631 3 years ago

      My WireGuard UDP endpoints are available on a high port, 443, and 53. I've often had one of them blocked; it's very rare to have them all blocked.

    • hackernudes 3 years ago

      That worked for me for a while, but then the proxy started checking that all traffic was HTTP. Eventually I used SSH over WebSocket.

  • _fat_santa 3 years ago

    I worked for a place where they did weird stuff like this. It ended up that to install dependencies for a Node app, you had to:

    1. Disconnect from VPN and run `npm install` until it failed

    2. Connect to VPN "Profile 1" and run the command again until it fails

    3. Connect to VPN "Profile 2" and run the command again until it fails.

    4. Disconnect from VPN and run the command another time to finish installing all dependencies.

    5. Reconnect to VPN to actually run the app.

  • xani_ 3 years ago

    You can just push using port 443

        -> ᛯ ssh -T -p 443 git@ssh.github.com
        Warning: Permanently added '[ssh.github.com]:443' (ED25519) to the list of known hosts.
        Hi XANi! You've successfully authenticated, but GitHub does not provide shell access.

ultrahax 3 years ago

I work on something that requires a reasonably cooperative NAT and unmolested real-time UDP traffic. I've seen varied failure-modes from corporate firewalls over the years - from simple NAT table overflow causing rapid source port switching, to the firewall appliance downloading an update and deciding UDP packets of a certain size ( and ONLY of a certain size.. ) were bittorrent and hence were to be blackholed. That was an interesting one to track down. I've also seen it block diagnostic GETs to varied bits of cloud infra, due to someone at some point in the distant past hosting porn on that particular IP. Not to mention just good old strict NATs..

jve 3 years ago

I wonder what _showHtmlPage_ does. Did he just write something that allows a 3rd party (corporate firewalls) to inject HTML under his domain within a TLS-protected connection?

I cannot judge without knowing how he displays errors. But a question to the HN public: is opening unknown HTML under my domain in another window safe? Or is there any way to strip down "permissions" to cookies, requests, resources, etc. for that dedicated page?

  • shireboy 3 years ago

    Came here to point this out. For non-trivial implementations of showHtmlPage, this is a vulnerability. A malicious user could set up a 403 response with a fake “please re-enter your card to verify” form that sends the data to the attacker, or possibly even a script to scrape the card number. There's probably a low risk of this actually happening in this scenario, but I’m pretty sure this fix is a bad idea. Better to show a generic error and log.

    • jve 3 years ago

      I think loading the HTML into a DOM node and getting .innerText would be a pretty innocent way of telling the user about some unknown error condition. Or logging that text so the developer can better understand unexpected failures.
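
      A rough sketch of that idea (nothing here is from the article; the names are illustrative):

          // Parse the unexpected HTML in an inert document; the markup is never
          // attached to the live page, so scripts inside it don't run.
          function textOfUnexpectedHtml(htmlBody: string): string {
            const doc = new DOMParser().parseFromString(htmlBody, "text/html");
            return doc.body.textContent ?? "";
          }
          // Show or log textOfUnexpectedHtml(body) instead of rendering the HTML itself.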

  • repiret 3 years ago

    I get where you’re coming from, but keep in mind that the filtering proxy returning 403 instead of relaying the POST is already able to inject arbitrary HTML into all of his TLS protected pages. If the proxy wants to scrape card information, it already can; if it’s malicious, the user is toast no matter what the website operator does.

  • marcosdumay 3 years ago

    I dunno. The frontend just made a TLS connection to his server and downloaded an HTML page. I don't think displaying that page adds any extra vulnerability.

    If he put it in a sandboxed iframe, it will have the same kinds of access as the main page, because it comes from the same domain. Everything is already as messed up as it can be, and there isn't anything the frontend can do to improve it.

  • pigbearpig 3 years ago

    That seems pretty unsafe without running it through some sanitizer. Trying not to judge too hard, but I would be concerned about the implementation of showHtmlPage by the same author that didn't handle non-JSON responses.

    • ValCanBuildOP 3 years ago

      Hey, OP here - I'm open to advice about how best to handle this! I'm currently just opening a new window and writing the HTML to it.

      What's the safest way to handle this? Open it in an iframe?

  • ValCanBuildOP 3 years ago

    OP here - I'm open to advice about how best to handle this! I'm currently just opening a new window and writing the HTML to it.

    What's the safest way to handle this? Open it in an iframe?

    • jve 3 years ago

      Just look under this thread; I wrote up one possible solution using .innerText from a constructed DOM. (Or maybe open the window on another domain.) However, @repiret may be right - the corporate proxy is already invasive enough, which means the users are already at its mercy.

      But still, I'd go with the safer practice. Even in the slightly unlikely case someone manages to hack a 3rd party (Stripe) and send your users arbitrary HTML for some period of time... :)

thayne 3 years ago

I was expecting another section about all the other ways a corporate firewall can cause issues. Not all firewalls will give you a 403. Sometimes it will be a 200 with the error in the body. Sometimes you'll just never get a response at all. Sometimes you will get an SSL certificate error, because the error response is signed using the certificate for the firewall vendor's domain instead of yours. And so on.

  • ValCanBuildOP 3 years ago

    Oh god - I hope I don't have to write a follow-up to this. There are probably a bunch of other hidden firewall behaviours I don't handle, but 403 and 503 are the most common I've found.

flumpcakes 3 years ago

Firewalls from security vendors with L7 decryption (using MITM root certificates from a company-wide PKI) are pretty standard in any business that needs to care about "cyber security".

I always hear people cry and moan about this, but having worked on that side of the fence, I would like you to know that I know of instances where people have been downloading illegal material (involving children) and running Tor. That's not to mention the 75% of staff who willingly give up details during phishing campaigns.

Saying that, I find 60%+ of cyber businesses to be a waste of time at best, and at worst just frauds. Core firewalls with L7 capabilities from vendors such as Palo Alto and CheckPoint are legitimate security devices, especially suited to enterprise networks.

I do think it's pretty pointless running those in the cloud though, unless you have admin VMs on vnets for your production resources. But that way lies madness anyway.

  • jsmith45 3 years ago

    The problem with those is that they are often poorly configured.

    Take, for example, the scenario in question here. Is it really legitimate to allow GET requests to a domain but block all POST requests? That sounds questionable at best. How many sites are there where it is safe to view pages, download files, etc., but POSTing to them is dangerous? There may be a few, but it is not particularly common. Far more common are sites where any request could be harmful (malware, sites spoofing other sites, etc.).

    I get fully blocking a domain. That can be reasonably sensible, especially for domains in a known blocklist of porn, malware, etc.

    I can get inspecting content and blocking if there is clear evidence of maliciousness (but this must be done carefully, since false positives can cause a lot of headache!), but for other content-matching scenarios, you may well be better off generating an alert to be reviewed manually, rather than blocking things.

    There have been cases where these systems incorrectly block business-critical functionality, causing a company to completely shut down, losing huge sums of money while figuring out what is breaking things, before getting it sorted.

  • sillystuff 3 years ago

    The correct solution to phishing is to stop users from receiving phishing email in the first place.

    Yes, blocking phishing mails can be impossible with some hosted providers' spam filtering. But here the solution should be to push back on, e.g., Microsoft to fix their dumpster-fire spam filtering, or to switch the organization to a different product that works.

    I don't think IT should be pretending to be the police. It isn't their job. And any infrastructure that can be used to catch "criminals" can be used to abuse employees.

    Also, there is absolutely nothing wrong with using Tor. I've used it often, at work, to test things as if from off-site.

    I believe the role of IT is to respectfully help users get their work done safely. This involves a balance of security measures that do not invade the users' privacy, pushing back against management when appropriate to protect users from managerial overreach, and sometimes just allowing something that could be dangerous because the alternative is worse; e.g., MITM provides limited protection from exfiltration but also enables horrible abuse by management, and should be pushed back against.

szszrk 3 years ago

I'm currently in my very first job where running a local silent NTLM proxy is not a vital survival skill. For similar reasons I've somehow always ended up setting up an open-source Sonatype Nexus acting as a PyPI proxy or similar, so that the operations team can actually do meaningful work without triggering the security teams daily.

  • criddell 3 years ago

    Can you expand on this a bit? I googled "ntlm proxy" and "sonatype nexus" and still have no idea what it is you've done but I'd like to understand.

    • szszrk 3 years ago

      The Nexus part was laid out nicely by others. An NTLM proxy is a proxy that authenticates inside a corporate network with your own credentials and forwards all requests, while exposing a simple old-school proxy locally. You hit the simple local proxy, and the request gets forwarded by a small tool that does NTLM auth inside your company's network, pretending you are doing that traffic yourself.

      This is hopefully a trend that is disappearing with the wave of modern transparent proxy solutions, but in general companies tend to set up proxies that get automatically authorized by your workstation. That may have issues with less well-known browsers, and your console tools will not be able to use it at all.

      So when you build something locally and want to download a .deb or a PyPI package to have modern Python tools, you are out of luck - you have to download it manually using a browser, or not at all.

      This is where such a proxy comes into play.
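
      One commonly used tool for this is cntlm (my own pick, not something named above); a minimal sketch of its config, with hypothetical hostnames and credentials:

          # cntlm.conf (sketch; all values are hypothetical)
          Username    jdoe
          Domain      CORP
          # better: store NTLM hashes generated with `cntlm -H` instead of a plain password
          Password    hunter2
          Proxy       proxy.corp.example:8080
          Listen      3128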

    • richbell 3 years ago

      Nexus Repository (commonly referred to as just "Nexus", which is confusing because Sonatype has several products called "Nexus $name") is a local artifact repository. Running it locally allows you to cache artifacts from external repositories like pypi.org or repo.maven.org, which is beneficial because it cuts down on the amount of outbound traffic required to install dependencies.
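
      For the Python case, pointing pip at such a proxy is a one-line config change; a sketch with a hypothetical repository URL:

          # ~/.config/pip/pip.conf (pip.ini on Windows)
          [global]
          index-url = https://nexus.corp.example/repository/pypi-proxy/simple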

      • folmar 3 years ago

        The biggest win is not the bandwidth; it's that you have exactly what was used before in case you need it (audit/postmortem), or the origin goes away.

        • richbell 3 years ago

          In the context of the GP's comment, bandwidth + proxies seem to have been their motivation. Not to mention that external package registries HATE how many large organizations hammer them with the exact same traffic every time a build runs, due to the lack of a local cache.

          But you are correct as well; that is an uncommon yet hugely beneficial reason to have an internal artifact repository.

      • criddell 3 years ago

        Okay, that makes sense and I can certainly see why that often makes sense.

        It seems that running the proxy leaves the original problem uncorrected. I'd be inclined to exercise a bit of malicious compliance to increase pressure for changes to the security configuration.

Neil44 3 years ago

Yep, not to mention pushing their SSL root CA to all the clients so they can scan everything without SSL errors.

  • raxxorraxor 3 years ago

    Sensible for the users that really do download and execute attachments from the most obvious spam mails. The only protection you have is to put these high-threat users in a separate subnet and use some antivirus to scan everything they download. At least that offers some protection; scanning downloads isn't possible with TLS left intact.

    Although I still think that breaking TLS open is a very bad idea in general, and it is appalling that this became common practice. Especially because there are exceptions where it fails, and you train users to just disregard TLS errors.

    Even worse, the IT security industry shamelessly uses the data to spy on employees. For that alone it deserves its bad reputation. Still, there is no real solution to shield data from the most careless users.

    • denton-scratch 3 years ago

      > put these high threat users in a separate subnet

      Ideally a subnet belonging to one of your competitors? I thought that nowadays only very ignorant people follow links or open attachments in spam emails. Certainly all the spam I've seen for a few years has been as plain as the nose on your face: only an ignorant person would mistake it for ham.

      • dhosek 3 years ago

        I did almost get caught in a scam—email appeared to come from CEO in my medium-sized company (so it wouldn’t have been out of place to hear from him). First email simply said, do you have a moment to chat, second was, fortunately, an obvious scam request—“can you buy some gift cards for a client?” but everything was disguised enough that I might have gotten caught with a better-conceived spear phishing attack.

      • raxxorraxor 3 years ago

        Companies get pretty sophisticated spam. You only need one compromised supplier: then the attackers have your names and usual mail format, and can sneak in edited links that lead to compromised sites. But yes, some users also fall for the pretty obvious crap.

Aulig 3 years ago

You really need something like Sentry to report all unhandled exceptions. I learned that lesson when I realized everyone on Safari couldn't use my website (submit forms) because a feature I used wasn't supported on Safari. An easy way to drop your conversion rate by 30%.
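
The browser-side setup is tiny; a minimal sketch using Sentry's documented init call (the DSN is a placeholder):

    import * as Sentry from "@sentry/browser";

    // Captures unhandled exceptions and unhandled promise rejections by default.
    Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });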

mxuribe 3 years ago

Ah, yes, the old "the most secure device is one that is not working", or in this case that is blocked from working. :-)

Fredej 3 years ago

> Sorry, you don't have permission to visit this site.

> Website blocked

> Not allowed to browse Shareware Download category

> You tried to visit: https://www.valcanbuild.tech/handling-corporate-firewalls/

The irony.

collinvandyck76 3 years ago

I've seen so many engineer hours burned because of a reluctance to gate behavior on http response codes.

hathym 3 years ago

tldr: do error handling.

  • justin_oaks 3 years ago

    Yes, it's more about error handling than it is about corporate firewalls.

    There are lots of reasons why the request would fail and returning a 403 or 503 from a corporate firewall is just one of them. What happens if the user's wifi is flaky and the HTTP request is canceled? What happens if the connection is slow and the request times out? What if, heaven forbid, the destination server is down or unreachable temporarily?

    As a web developer, never let a user's action lead to nothing happening. Always give feedback. Whenever sending background HTTP requests, always provide a visible error message to the user when you encounter unexpected results or HTTP/network errors.
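
    A sketch of that advice, covering timeouts and network failures as well as bad status codes (the endpoint and the alert-based feedback are placeholders):

        async function submitForm(payload: unknown): Promise<void> {
          const showError = (msg: string) => window.alert(msg); // placeholder feedback
          const controller = new AbortController();
          const timeout = setTimeout(() => controller.abort(), 10_000); // 10s budget
          try {
            const res = await fetch("/api/submit", {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify(payload),
              signal: controller.signal,
            });
            if (!res.ok) {
              showError(`Request failed (${res.status}). Please try again.`);
              return;
            }
            // ...handle the expected response body here
          } catch {
            // Covers aborts/timeouts, flaky wifi, DNS failures, and blocked requests.
            showError("Network error. Please try again.");
          } finally {
            clearTimeout(timeout);
          }
        }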
