Danish cloud host says customers ‘lost all data’ after ransomware attack

techcrunch.com

76 points by jhoelzel 2 years ago · 80 comments

dusted 2 years ago

Yesterday, All those backups seemed a waste of pay. Now my database has gone away. Oh I believe in yesterday.

Suddenly, There's not half the files there used to be, And there's a milestone hanging over me The system crashed so suddenly.

I pushed something wrong What it was I could not say.

Now all my data's gone and I long for yesterday-ay-ay-ay.

Yesterday, The need for back-ups seemed so far away. I knew my data was all here to stay, Now I believe in yesterday.

--

From usenet

My comment on the situation: Online mirrors are fine, but calling them backups is a stretch of the imagination, since you must assume that an event can compromise all data within a domain (be it the Internet, or a physical location).

A true backup must be physically and logically separate.

  • soco 2 years ago

    Which is why we have the 3-2-1 rule, not only for business but also for personal data: https://www.veeam.com/blog/321-backup-rule.html Otherwise I agree, they are not "backups", just maybe a glorified copy.

    • permo-w 2 years ago

      there are stronger backups and there are weaker backups, but as long as the intention is for an informational failsafe, they're all still backups. arbitrarily deciding what forms are "true" or "not true" or a "glorified copy" seems a bit silly to me. the world is just a bit more complex than that

      what is a backup if not just a form of copy anyway?

  • Ekaros 2 years ago

    Also temporally separated. That is, you must have a backup that is beyond the attacker's time horizon. That is the only way to get back at least something.

    • westernpopular 2 years ago

      What does time horizon mean?

      • chrisandchris 2 years ago

        An attacker may intrude into your environment and slowly destroy data without you realizing it. If this process takes e.g. 10 days, you need backups going back 11 days to be safe.

        This scenario happens often (as far as I know) with ransomware attacks (on personal devices): encrypt the least-used documents first. Probably no one will realize for weeks that data "is gone".
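
        A rough sketch of what a retention policy reaching beyond the attacker's time horizon can look like, assuming restic as the backup tool (the repository path is a placeholder): keep enough history that even a weeks-long slow compromise still leaves clean snapshots behind.

        ```
        # Keep 14 dailies, 8 weeklies and 12 monthlies, then prune. Even if
        # the last two weeks of snapshots were taken after the intrusion
        # began, older clean ones survive. (Repository path is a placeholder.)
        restic -r /srv/backups/repo forget \
            --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune
        ```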

    • dusted 2 years ago

      Absolutely!

  • tommiegannert 2 years ago

    Online mirrors are fine if they have boundaries that make them reliably append-only.

    Opening up scp/rsync and saying "our client only writes new files" is bad. Using a dedicated stream-writing interface over TLS is probably fine.

    As for the other attack vector: segregating the admin credentials so that the stream-writing interface cannot be bypassed, yeah, fun. 2FA only gets you so far.
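
    One concrete shape such a dedicated, append-only stream-writing interface over TLS can take is restic's rest-server; a sketch, with hostnames, paths and the repository name invented for illustration:

    ```
    # On the backup host: serve restic repositories in append-only mode.
    # Clients can add new snapshots but cannot delete or rewrite old ones.
    # (Paths and certificate locations are placeholders.)
    rest-server --path /srv/restic-repos --append-only \
        --tls --tls-cert /etc/ssl/backup.crt --tls-key /etc/ssl/backup.key

    # On each client: back up over HTTPS. Even a fully compromised client
    # cannot purge the history already on the server.
    restic -r rest:https://backup.example.net:8000/web01 backup /var/www
    ```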

  • nurettin 2 years ago

    > A true backup must be physically and logically separate.

    That doesn't stop it from being targeted by hackers. No amount of hindsight will save your backups unless they are in an offline cold storage somewhere protected by men-at-arms.

tyingq 2 years ago

" there was no evidence that customer data was copied out or exfiltrated from its systems"

...after a thorough analysis of the now encrypted logs?

  • crimsontech 2 years ago

    This statement gets me every time I see it. It could just mean they weren't logging anything, so they have no evidence.

  • ndsipa_pomu 2 years ago

    More likely looking at network traffic

    • tyingq 2 years ago

      Over what time period, though? Did they assume the hackers had only been there since obviously bad things started happening?

  • TheHappyOddish 2 years ago

    I miss slashdot comment ratings. This one was for sure 4 (Funny/Insightful)!

bjoli 2 years ago

I set up append-only storage for a friend recently. His son downloaded some kind of game-related cheat thing online, and it encrypted his hard drive, his backup USB hard drive, his cloud storage and his NAS.

The little restic backup saved him. It pushed one copy of nonsense, but kept several revisions of the old data.
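
For the curious, "kept several revisions" roughly translates to something like this with restic (the repository URL and snapshot ID are invented for illustration):

```
# List all snapshots; in an append-only repository the pre-infection
# ones are still there, alongside the one full of encrypted garbage.
# (Repository URL and snapshot ID below are placeholders.)
restic -r rest:https://backup.example.net:8000/laptop snapshots

# Restore a known-good snapshot taken before the ransomware ran.
restic -r rest:https://backup.example.net:8000/laptop restore 4bba301e \
    --target /mnt/recovered
```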

On a similar note: does anyone have any experience with mdisc? They seem like the perfect solution for long-term storage for me at the moment.

  • soco 2 years ago

    I use mdiscs privately as part of the 3-2-1 strategy, also keeping two writer units around and testing them (mostly the writers) about once a year. No issues so far, but the oldest are only about 5 years old...

  • figassis 2 years ago

    Restic on object storage outside your infra. It's what I do and I think it should be safe from ransomware.

akasakahakada 2 years ago

Still cloud is better after all. 32GB is enough for all digital device and shooting 8k videos for 1000 centuries. No one should make backups. Storage expansion especially SD card must not be allowed on phone, tablet and laptop. Local storage is not secure and adding sd card to phone will introduce water leak. Meanwhile SIM card slot do not introduce water leak because obviously there is magic. Also SD slot is waste of internal space. You might ask why tablet is big as hell but still no SD card slot. Because those extra space is for storing mana to dispel water while adding the cloud subscription debuff to the users. Magic protect our phones from water, bullet, brick, bad OTA update, damage of USB port, lack of OTG functionality, USB2.0 transfer rate, terrible MTP interface, eavesdropping from wireless, etc.

* Just a rant and parody

  • benterix 2 years ago

    I don't know why you are being downvoted, maybe people don't like the form, but the fact that device manufacturers are removing useful features such as the ability to expand the storage of your device is infuriating.

    • tuhriel 2 years ago

      But, but if you can just add a 128 GB microSD for 20-40 bucks, how can the poor vendor upsell you an upgrade for 200 extra?

    • candiodari 2 years ago

      Because this makes life difficult for developers, for no good reason. Developers simply shouldn't have the access necessary to make these attacks succeed.

    • _joel 2 years ago

      HN doesn't do sarcasm.

      • fuzztester 2 years ago

        They do, quite a bit.

        Just check the number of comments with /s at the end.

        People sometimes even say "you forgot to put a /s" or the like, in reply to an obviously sarcastic comment.

        • _joel 2 years ago

          That's a Reddit thing, not HN. Normally I see stuff downvoted if it has /s.

  • cudder 2 years ago

    > Meanwhile SIM card slot do not introduce water leak because obviously there is magic.

    Don't worry, they're doing their best to get rid of those too.

andrewstuart 2 years ago

Unfortunate turn of events.

I find it really hard to have empathy for serious businesses who don’t have backups and are dependent on a single cloud.

Like for example if you are all in on AWS and do all your backups of your AWS systems to AWS then lose your account. Meh… your fault.

If you run a business then you have an absolute obligation to be able to instantly bring your business back up outside your primary hosting provider.

And if you’ve built all your infrastructure in a way that cannot be replicated outside that hosting provider then frankly that’s negligent.

All those AWS Lambda functions that talk to DynamoDB? Guess what…. none of that can be brought up elsewhere when you lose your AWS account.

If you are a CTO then this is your primary responsibility and priority above everything else. If you are a CTO who has failed to ensure your business can survive losing your cloud then you are a failed CTO.

jhoelzelOP 2 years ago

Time to check the 3-2-1 backups ;)

  • macmac 2 years ago

    According to the company the attacker managed to encrypt both the primary and secondary backup systems.

    • boristsr 2 years ago

      Yes, but as a customer your 3-2-1 strategy should include a backup off that cloud. Not the first time, and won't be the last time a cloud provider has a catastrophic data loss incident. Relying solely on your cloud provider for backups is a risk.

      • yMMe2WYE_D 2 years ago

        You know that after the fire in the OVH datacenter, they asked their customers to start their disaster recovery plans - and people asked where such an option was in the OVH admin menu? Not excusing them, but many customers are completely clueless about backups and data security in general.

    • jhoelzelOP 2 years ago

      the 1 in the 3-2-1 should be somewhere on premises, or at least not directly reachable from the internet.

      Think: an SSH cron job that copies backups from the cloud to cold storage.
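
      A rough sketch of that pull model (user names, hosts and paths are hypothetical): the on-premises box initiates the copy over SSH, so nothing in the cloud holds credentials that could reach or modify the cold copy.

      ```
      #!/usr/bin/env bash
      # /usr/local/sbin/pull-backups.sh, run nightly from cron on an
      # on-premises box with no inbound access: it pulls, the cloud never
      # pushes. (Hosts and paths below are placeholders.)
      set -euo pipefail

      DEST="/mnt/coldstore/$(date +%F)"
      mkdir -p "$DEST"

      rsync -a -e "ssh -i /home/backup/.ssh/id_pull" \
          backup@cloud.example.net:/srv/backups/ "$DEST/"

      # In /etc/cron.d/pull-backups:
      # 0 3 * * * backup /usr/local/sbin/pull-backups.sh
      ```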

      • theshrike79 2 years ago

        And what if the backup you're copying to cold storage is also encrypted?

        How did the saying go? You don't have backups until you've successfully restored from them or something like that. =)

        Basically any 3-2-1 system is Schrödinger's backup until you've actually used it.

        • bombolo 2 years ago

          So you only have one backup that you overwrite daily?

          • theshrike79 2 years ago

            You can have X daily backups in rotation and after X days of infiltration they're all garbage because they were overwritten by the malware-encrypted code.

            A backup isn't real until you've restored from it. That's why you should restore from backups regularly. Firstly so that you know the process and see it actually works and secondly you can confirm you're actually backing up what you think you are backing up.

            We've all set up backup scripts and forgotten to include new directories or files in the configuration as time went on... =)
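
            A minimal sketch of such a regular restore drill, assuming restic (the repository URL and the spot-checked path are made up):

            ```
            #!/usr/bin/env bash
            # Restore the latest snapshot to a scratch directory and
            # spot-check it against production. Run this on a schedule,
            # not once. (Repository URL and paths are placeholders.)
            set -euo pipefail

            REPO="rest:https://backup.example.net:8000/web01"
            SCRATCH=$(mktemp -d)

            restic -r "$REPO" restore latest --target "$SCRATCH"

            # If the restored tree differs from what we think we're backing
            # up, the drill fails loudly now instead of during an incident.
            diff -r "$SCRATCH/var/www" /var/www && echo "restore drill OK"
            ```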

          • csydas 2 years ago

            No, I think you're misunderstanding.

            The parent comment is intending to remind people that many things can happen to a backup after it's done. Backups cannot be "set and forget", as just making the backup isn't enough since so many things can happen after you've taken that backup.

            - Bitrot/bitflips silently corrupt your backups and your filesystem doesn't catch it

            - The storage your backups are on goes bad suddenly before you can recover

            - Your storage provider closes up shop suddenly or the services go down completely, etc

            - malicious actors intentionally infiltrate and now your data is held hostage

            - Some sysadmin accidentally nukes the storage device holding the backups, or makes some other mistake (to summon the classic, I'm betting there are a few people with stories where an admin trying to clean up some leftover .temp files accidentally hit SHIFT while typing

            ```rm -rf /somedirectory/.temp```

            and instead wrote

            ```rm -rf /somedirectory/>temp```

            which the shell reads as "delete the whole directory, and redirect output to a file called temp")

            - (for image level backups) The OS was actually in a bad state/was infected, so even if you do restore the machine, the machine is in an unusable state

            - A fault in the backup system results in garbage data being written to the backup "successfully" (if you're a VMware administrator and you got hit by a CBT corruption bug, you know what I'm talking about. If you aren't, just search for "VMware CBT" and imagine that this system screws up and starts returning garbage data instead of the correct and actual changed blocks that the backup application was expecting)

            Basically, unless you're regularly testing your backups, there isn't really any assurance that the data that was successfully written at the time of backup is still the same. Most modern backup programs have in-flight CRC checks to ensure that at the time of the backup, the data read from source is the same going into the backup, but this only confirms that the data integrity is stable at the time of the backup.

            Many backup suites have "backup health checks" which can verify the backup file integrity, but again, a successful test only means "at the time you ran the test, it was okay". Such tests _still_ don't tell you whether or not the data in the backup file is actually usable/not compromised; they only tell you that the backup application confirms the data in the backup right now is the same as when the backup was first created.

            So the parent post is correct; until you have tested your backups properly, you can't really be sure if your backups are worth anything.

            Combine this with the fact that many companies handle backups very badly (no redundant copies, storing the backups directly with production data, relying only on snapshots, etc), and you end up with situations like in the article where a single ransomware attack takes down entire businesses.
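
            For what it's worth, the kind of periodic health check described above looks roughly like this in restic (repository URL is a placeholder); note that it only proves the repository still matches its own checksums, not that the source data was ever any good:

            ```
            # Re-read every data blob and verify it against its stored
            # checksum. Catches bitrot in the backup storage itself, nothing
            # more. (Repository URL is a placeholder.)
            restic -r rest:https://backup.example.net:8000/web01 check --read-data
            ```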

      • edwinjm 2 years ago

        If the data you’re reading is encrypted, you’re still screwed.

        • jhoelzelOP 2 years ago

          A 3-2-1 backup strategy involves keeping three copies of your data, stored on two different types of media, with one copy kept offsite for disaster recovery.

          you are still supposed to have multiple backups =)

        • raverbashing 2 years ago

          Incremental backups and alerts on large deltas seem like a good idea

          (I mean, on large deltas anywhere)
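
          A crude sketch of such a delta alert (the threshold, paths and alert address are invented): compare today's incremental size against the recent average and complain if it explodes, since mass encryption tends to show up as an unusually large changeset.

          ```
          #!/usr/bin/env bash
          # Alert if today's incremental backup is more than 5x the recent
          # average. (Paths and the alert address are placeholders.)
          set -euo pipefail

          TODAY=$(du -sb "/srv/backups/incremental/$(date +%F)" | cut -f1)
          AVG=$(du -sb /srv/backups/incremental/* | cut -f1 \
                | awk '{ s += $1; n++ } END { print int(s / n) }')

          if [ "$TODAY" -gt $(( AVG * 5 )) ]; then
              echo "Backup delta ${TODAY} bytes vs. average ${AVG} bytes" \
                  | mail -s "Suspicious backup growth" ops@example.net
          fi
          ```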

    • beardyw 2 years ago

      Backups need an air gap.

      • Enderboi 2 years ago

        I often have people complain after comparing my work's instance pricing to other cloud providers...

        Then I try to explain that rotating a few dozen TB of data offsite to cold offline storage every week isn't cheap. Because unlike some vendors, we take pride in data integrity and ensuring that our DR plan is actually.... you know, recoverable :P

      • benterix 2 years ago

        If they are not incremental but append-only, an air gap is not strictly needed; it can be used as an additional safeguard, performed less frequently because of the manual overhead. The crux of the matter is to assume the main system has been compromised and to prevent existing data from being overwritten.

        • csydas 2 years ago

          I would not agree with this. Append-only file systems and storages aren't a bad idea and definitely help with accidental overwrites, but these systems have been punked quite frequently in many ways, and I've worked with backup companies that home-rolled their own append-only backup implementations.

          It didn't stop attackers from using extremely common ways to punk the systems even under the best circumstances for the systems. A forgotten password gets leaked, using the backup applications/storage system's own encryption schemes against the victims, just deleting entire volumes or compromising the OS on the systems, the list goes on.

          I wouldn't consider append-only an anti-ransomware technique, it just stops one of many common ways of compromising data. This is good, but I wouldn't rely on it to protect against even a run of the mill ransomware scheme.

        • candiodari 2 years ago

          ... until the next update to these viruses.

          To utterly destroy an organisation you don't erase or encrypt their data. You change it. Slowly. Little by little. A birthday here, a name there, a number ... using the normal ways to change this data. In this way you can go undiscovered for years, employees get blamed for making stupid errors for a LONG time, and there is absolutely no way to fix things, no matter what the backup strategy is.

          • beardyw 2 years ago

            But for ransomware there needs to be a hope of restoring the data. In this case the value would need to be more oblique.

            • tatersolid 2 years ago

              The ransomware gang buys put options on the victim’s stock. Sabotage-backed options scams have been around for a long time.

      • edwinjm 2 years ago

        Doesn’t matter.

teddyh 2 years ago

They weren’t just a cloud hosting company, they were also a domain name registrar and provided DNS hosting.

The registry for the .DK TLD has published a page on what to do for those affected: <https://punktum.dk/en/faq/if-you-are-a-customer-at-cloudnord...>

tryauuum 2 years ago

From the Google-translated page of the provider:

  What happened?

  It is our best estimate that when servers had to be moved from one data center to another and despite the fact that the machines being moved were protected by both firewall and antivirus, some of the machines were infected before the move, with an infection that had not been actively used in the previous data center, and we had no knowledge that there was an infection.

  During the work of moving servers from one data center to the other, servers that were previously on separate networks were unfortunately wired to access our internal network that is used to manage all of our servers.

  Via the internal network, the attackers gained access to central administration systems and the backup systems.

  • rudasn 2 years ago

    Can someone ELI5 how just having network access is enough to do such damage?

    Were admin interfaces IP whitelisted only (no other auth)?

smarx007 2 years ago

Shoddy journalism strikes again. How do we know that all data was lost and the notice on the homepage was not uploaded by hackers?

"CloudNordic could not be reached for comment." It's a journalist's job to reach either the company or affected customers to verify the facts.

  • utybo 2 years ago

    Because CloudNordic says so on their temporary webpage. https://www.cloudnordic.com/ (in Danish, I used Google Translate to check and the result seems fine)

    I don't think we need fearmongering about "shoddy journalism" for something so easy to check.

    Edit: in Danish not Norwegian, my apologies

    • rypskar 2 years ago

      >>in Norwegian

      The text is in Danish, since it is a Danish company. Norwegian and Danish are close, but some words have totally different meanings.

    • smarx007 2 years ago

      How can you trust the webpage of a system that was recently attacked?

      • tommiegannert 2 years ago

        If not attacked and the page says not attacked: trustworthy.

        If not attacked and the page says attacked: not trustworthy.

        If attacked and the page says not attacked: not trustworthy.

        If attacked and the page says attacked: trustworthy.

        As long as the page says "attacked", it seems likely they were attacked? Why would they state it themselves if it wasn't true, losing trust for no reason?

        • smarx007 2 years ago

          It’s not a bad starting logic.

          However, there is a thing called “defacing”. In the process, the attackers share false information implying that more damage was done than in reality.

          My general rule is to stop trusting a compromised digital system until I hear from a person (journalist, in this case) confirming that the control over the system has been restored.

          If journalists do not verify the facts themselves or via trusted (human) sources, it’s not journalism but syndication.

          Realistically, the news was published yesterday and the notice is dated a week ago. I doubt that a company of IT experts would have failed to take a fake notice down. But I stand by my assessment of TechCrunch journalistic standards.

  • smarx007 2 years ago

    This is how it should be done: https://computersweden.idg.se/2.2683/1.779824/danska-molnfor...

    The CEO of CloudNordic confirmed the attack took place and that backups were lost in an interview to Radio4.Dk, as reported by Computer Sweden.

PrimeMcFly 2 years ago

The solution to ransomware? Backups. It's not more complicated than that. It's honestly puzzling that ransomware is the issue it is, crippling entire organizations. It just means they have inept IT teams.

Sucks this Danish cloud host provider didn't back stuff up properly.

  • deadbunny 2 years ago

    > It just means they have inept IT teams.

    More often than not, in my experience, the IT team wants proper backups but management baulks at the price and never authorizes it. Until something bad happens, of course.

    • soco 2 years ago

      And when the bad thing happens, it's of course the IT team that gets painted as "those guys who kept bitching about their jobs earlier, too".

  • qeternity 2 years ago

    At a certain scale, full backups aren't feasible, and people should be implementing their own backups on top of any cloud services.

    Backups of the dataplane should of course exist.

  • tryauuum 2 years ago

    shaming is easy

    maybe they were backing up their stuff properly, but the backups were wiped as well. even if you have some fancy append-only storage, someone still has access to it and that access can be misused.

    • _joel 2 years ago

      > but backups were wiped as well

      Then they're not offline backups, are they? I know what you mean but backing up to a network drive with R/W is not a backup, it's a copy.

      • tryauuum 2 years ago

        They could have been wiped through other means, e.g. through IPMI. Although I don't think that was the case.

        More realistically, it probably boils down to money. I wonder what would be the cost of backing up everything to a competitor's cloud daily, e.g. one PB of data per day. I have no idea how much it even costs to have a 200 gigabit link to another data center.

    • PrimeMcFly 2 years ago

      > maybe they were backing up their stuff properly, but backups were wiped as well.

      You realize this is contradictory?

      • tryauuum 2 years ago

        I believe this is a case of "no true Scotsman". Whatever backup you propose, someone will point out that you could have done it better. You could have disconnected the servers holding backups from the network when they are not in use. You could have hired a dedicated person whose responsibility would be to deny/delay any request from management to delete the backups. And so on.

janoelze 2 years ago

While the absence of 3-2-1 seems like a big oversight, I enjoyed the straightforward communication.

louwrentius 2 years ago

They did not segment their backup servers from the rest of the infrastructure and people who are this incompetent should not run IT infrastructure.

There is no excuse for this.

Our industry is mostly run by clowns and unserious people.

teddyh 2 years ago

Also Azero.cloud (same company?): <https://azero.cloud/>

TheFragenTaken 2 years ago

There's reason to suggest this is part of a larger attack. Multiple hosting providers were hit with attacks last Friday.

JacobiX 2 years ago

I'm wondering what the future holds for this cloud hosting company. How can a company survive such a dramatic loss?

  • znpy 2 years ago

    > How can a company survive such a dramatic loss?

    It really depends on what SLAs they advertised.

    A (national) local hosting company suffered a datacenter fire and lost pretty much all customer data except for billing (which was rebuilt as they were using third party payment processors).

    After the fire the company just let people know there were no backups unless bought explicitly, as the terms of service clearly stated.

    The company is still operating (I know because I'm a customer) and not much has happened.

    Yeah, some customers were screwed, but it was the kind of ignorant customer that gullibly paid 10€/year for hosting (with PHP execution), database, domain, traffic and then also expected the same service level as something way more expensive. There's no way around this: you get what you pay for.
