Dell announces security breach (zdnet.com)

If software development were a true profession, then I firmly believe that many developers would be struck off for extreme negligence or incompetence.
I’ve found and reported serious security vulnerabilities to many companies that I’ve worked with, and I’ve become very disillusioned with some of the responses. Companies that operate in fields which materially affect people's lives (such as healthcare, finance and telecoms) will deploy software so badly designed that there is often no need to break any technical control to get access to private and sensitive data.
Yet, when I report a breach, the same people who deployed software with broken (or sometimes no) authorisation models, access control, etc, are suddenly competent enough to investigate their own failure. Invariably, they have perfect logging and reporting that could not possibly have been evaded, and which proves that no breach occurred and no data was exfiltrated before the vulnerability was reported.
If another professional, say an engineer, lawyer, or doctor, had demonstrated the incompetence or negligence in their field that I’ve seen some software developers display (sometimes wilfully - “It’s a feature”), they would never be allowed to work again. Software is now so important that I believe that some of the developers and technical leaders that I have dealt with in resolving security vulnerabilities should never again be allowed to work with software that interacts with personal or sensitive data (or, more generally, with software that could affect human life, safety, or privacy).
The stack is too large, complicated, and abstracted to put the blame on a single engineer.
Vulnerability in struts? Go after the open source engineers.
CPU vulnerability? Go after the engineers at AMD and Intel.
Bad firmware? Go after the network engineer who set up the box.
In a time when even the people at the top of companies are basically untouchable (Lehman Brothers, for example), you want to start going after the engineers?
I keep harping on it, but civil or nuclear engineers have a world of practice we could draw on in software. We just don't.
> Buildings are too complicated!
> Fabrication problem in struts? Go after the strut manufacturers.
> Badly documented connection in column with resulting bracing failure and buckling? Go after the column connection manufacturers.
> Bad soil conditions led to improper concrete pile hardening? Go after the geotechnical engineers or concrete placers.
And so on. We have building codes with pre-set ways of doing things for a reason. You can go outside of them if you want to, but you take on way more cost. Not just bonding, but design, testing, etc. We also have, gasp, government inspectors. Say it ain't so! Every single domicile or place of work has had them give the thing a look over, yet we can't even get them for a company as important as Equifax.
The Economist is right about one thing: Data is the new oil. We're the new oilmen. And if you want to understand how they slept at night sweeping global warming under the rug look no further than our own corporations that are resisting regulation at every turn.
Always-on microphones in almost every home. Televisions that spy on us. Cameras everywhere with facial recognition. Companies that track our phones while we walk around. Hospitals that lose bulk patient records, or keep Windows unpatched because "airgaps", and then WannaCry hits. Children with anxiety and suicide rates that have skyrocketed. Babies parented by YouTube, which for years lacked any oversight of content. A completely unregulated cyberarms market, with American companies selling iPhone vulns to corrupt, illiberal states that torture journalists.
Hackable cars. Hackable powerplants. Hackable electrical grids. Hackable telephone towers. Hackable satellites. Hackable tanks. Hackable aircraft carriers.
This cannot stand.
I agree we should hold companies accountable for every one of your hackables. Broader and faster-moving regulation is probably needed in the US around basic software and networking security.
I absolutely disagree with the OP about holding individual software engineers responsible and even banning them from ever working in software engineering again. Engineers take orders from management and executives. Even with the loudest protest possible, they are often shut down by higher-ups. Sometimes the noisy engineers are replaced by more docile yes-men, or shunned.
I was a structural engineer (EIT) once. I pushed back against a manager who wanted to do something that I knew for certain would degrade the structural capacity the design engineer had planned for. He could have fired me, but it would have made the news if he had, because the public has trust in the individual engineers who design our buildings and civil works.
We need the same for software. It doesn't mean mistakes never happen. Mistakes happen even with the best of intentions by the smartest people. We don't blindly strip engineers of their livelihood. Only when an engineer has shown gross incompetence or carelessness or repeated poor judgement does that happen.
> I absolutely disagree with the OP about holding individual software engineers responsible and even banning them from ever working in software engineering again. Engineers take orders from management and executives. Even with the loudest protest possible, they are often shut down by higher-ups. Sometimes the noisy engineers are replaced by more docile yes-men, or shunned.
Both companies and individual software developers should be held responsible.
Professional ethics dictate the behaviour of professionals in almost all fields. Software developers love to use the term engineer, but all other professional engineers have strict codes of professional ethics. These usually require evidence of competence, which can be revoked, and require that professional engineers refuse orders or instructions that they know or reasonably suspect are unlawful, could cause harm, or which they’re not competent to carry out. If their superiors insist, they must still refuse, to the point of termination or resignation.
When a professional engineer makes an honest mistake, they are not prohibited from working (unless it stems from extreme incompetence). Where they are negligent, however, they usually are, pending remedial training and reassessment. They can additionally be held criminally responsible where their negligence causes harm.
I believe the same should be true of software developers. It would create a sustainable incentive structure, where good developers (who are already rare and in high demand) could refuse unlawful or unethical instructions on the grounds that they would be personally responsible. It would also allow technical leadership to make a stronger business case for developing secure, lawful, ethical software.
I also think computing is a human right, and anybody should be allowed to write software. Professional standards and ethics should only apply to the development of software that could affect human life, safety, or privacy.
> Data is the new oil. We're the new oilmen.
You mean overworked, working in dangerous conditions, constantly pushed beyond our limits because the companies and society as well "depend on us"?
> And if you want to understand how they slept at night sweeping global warming under the rug look no further than our own corporations that are resisting regulation at every turn.
Probably wouldn't have commented on this if you hadn't used the past tense. Today all this seems obvious. But since you did, let me tell you that most of us hadn't heard a thing about global warming until "An Inconvenient Truth", or about that time.
Pretending oil engineers kept this under the rug is a bit disingenuous.
As for our role in shaping the surveillance state, that is a bit worse: we now know. Luckily we are already seeing resistance, and I urge everyone to join in: do talk about it at work, do talk to politicians, and try to influence decisions where you work.
> Probably wouldn't have commented on this if you hadn't used the past tense. Today all this seems obvious. But since you did, let me tell you that most of us hadn't heard a thing about global warming until "An Inconvenient Truth", or about that time.
The Intergovernmental Panel on Climate Change was established in 1988, and would have been established a decade earlier had it not been for aerosol pollution having a countervailing effect on temperature.
> Pretending oil engineers kept this under the rug is a bit disingenuous.
I see HN as less of a place for the equivalent of oil engineers and more of a place for current and future leaders.
> As for our role in shaping the surveillance state, that is a bit worse: we now know. Luckily we are already seeing resistance, and I urge everyone to join in: do talk about it at work, do talk to politicians, and try to influence decisions where you work.
I'm doing as much as I can[0] but realistically very few of us are really trying.
[0] I've met with my MP, I spoke at an election hearing, I've sent countless emails to Public Safety Canada and other departments, and I did a cybersecurity review of a department at well below my normal bill rate. I even joined the Liberal Party. I'm getting somewhere (e.g., the 2018 budget dramatically increased funding for cybersecurity), but the problem is growing faster than the response. Just like global warming.
Wait, that would mean getting government permits to develop software, OSHA and other agencies' inspections, and union workers. Also software development would become orders of magnitude slower.
Maybe that wouldn't be so bad after all.
This isn't an engineering failure, but a failure of management. Non-technical management has no clue how expensive it is to properly maintain a system and design it for security, when all they can see is the output: a widget. In almost every case, it is non-technical management who decide when work stops, not the engineer tasked with building it.
With that being said, the only way change will come is either through government intervention (but they barely understand the internet, so good luck) or through organized labor movements that then codify it into law. However, there is a large anti-union bloc within technology, so that has its own challenges.
Realistically, nothing will happen within our lifetime unless there is a crisis that changes the norms or a particularly likable person makes it their life's mission.
I agree emphatically, and it’s why I’m a member of the BCS (British Computer Society). It’s absurd that we have people building essential public infrastructure with close to no repercussions when their failure screws people over.
Controversial opinion, but I'd have little problem with Google (for instance) having anonymised access (and obviously there are _a LOT_ of caveats here) to medical data (again, for instance) in exchange for digitising the NHS (again, for instance). At least you'd have some assurance it worked and was secure.
The number of lowball quotes for government work that I've seen or heard of that go over budget or barely work is a little worrying. I've also seen teams of developers that are by no means first-rate; it's little surprise we end up with these failures.
I agree. I simply think that if people want to use the "Engineer" moniker then they should be required to abide by the profession's code of ethics. I really want our profession to have a set of standards that people can trust. I have my P.Eng in software engineering (Canadian); people tell me that it is "useless", but I want to be ahead of the professional curve. I think we will see demand for traditional engineering rigour in software. I already know my clients take safety extremely seriously (industrial automation), and being able to say I belong to our provincial body of engineers does mean something (I think).
I think this rests on the architect / technical lead forcing their minions to use TDD or something similar. You can't expect a noob out of college to be responsible; that's just asking for trouble.
We know how to build secure software. It's just very time consuming and expensive. For the most part nobody wants to pay for this so we have a constant stream of hacks instead.
Dell's been an open book for years.
One piece of spam I got on a brand-new email account arrived ~1 day after ordering a brand new XPS. It was a fake tracking-code email about my Dell order, with correct details like laptop, account name, and price. I contacted Dell and only managed to find out my order wasn't even in the post yet. They weren't interested in anything.
And I also never got any more than that one specific piece of spam.
Out of curiosity, how did you confirm (or did you confirm) it was a fake?
The "tracking number" was a zipfile; I would have had to run a program to get it.
I messaged Dell to confirm who they ship with, and they said it wasn't in the post yet. When another tracking number came a week or so later, it really was from Dell: it had more branding, actually contained just a number, and the number worked on the shipping company's website.
I have read stories about Dell support staff outside the US running these scams using their privileged info.
It's insane that companies are allowed to say "yes there was a security hole, but no we don't have logs, therefore nothing was stolen, so stop asking."
Their refusal to give the number of exposed accounts makes it seem like it's pretty bad.
Dell redirected the vulnerability press release link to a Christmas Deals page. Heh.
Oh. Wow. That's just awful.
From https://www.dell.com/customerupdate
> What is a “hashed password”? Hashing is a cryptographic security mechanism, similar to encryption, that scrambles customers’ passwords into an unreadable format. Dell ‘hashes’ all Dell.com customer account passwords prior to storing them in our database using a hashing algorithm that has been tested and validated by an expert third-party firm. This security measure limits the risk of customers’ passwords being revealed if a hashed version of their password were to ever be taken.
Bleh. Maybe it's too much to hope for a company like that to give any specifics, but that's pretty empty by itself. I mean, great, they didn't use plain text(!), but "MD5 with no salt" would fit that blurb just fine too. I really hope Dell was properly using an adaptive hash, but usually when companies do a good job there they want to tout it, because it does in some small way show they care despite the breach. Even if it should be the norm, saying "we used bcrypt with 65k+ rounds" or whatever is legitimately reasonable to put in there.
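To make that distinction concrete, here's a minimal Python sketch (using the third-party `bcrypt` package from PyPI; the password and cost factor are made up for illustration, and none of this reflects what Dell actually deployed):

```python
import hashlib

import bcrypt  # third-party: pip install bcrypt

password = b"correct horse battery staple"

# Unsalted MD5: fast and deterministic, so every user with the same
# password gets the same digest, and one precomputed table or GPU
# brute-force run cracks all of them at once.
weak = hashlib.md5(password).hexdigest()
print(weak)  # identical output on every run

# bcrypt: a random per-password salt plus a tunable work factor.
# The rounds parameter is a log2 cost, so rounds=16 would be the
# "65k+ rounds" mentioned above; 12 is a common default.
strong = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(strong)  # different output every run, thanks to the salt

# Verification re-derives the hash from the salt embedded in the string.
assert bcrypt.checkpw(password, strong)
```

The salt defeats precomputed tables and the work factor makes every guess expensive, which is exactly why "hashed", with no further detail, tells you almost nothing.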
It seems like they could add a parenthetical that's more specific, to help those of us who actually understand the question gauge the risk for others who ask.
As it stands, if my mother asked whether this means her password is protected, my answer realistically is "no". Her passwords are not great (it is, after all, not a great sign that I'm saying "her passwords" meaning I know what they are), but they're not in the Pwned Passwords list, for example; still, a reasonable brute force of MD5 would get most of them. Whereas if they said they had even a crummy salted and pessimised hash, say PHK-MD5-crypt, I'd feel comfortable saying "yes", nobody is going to break her password. Which isn't to say nobody could in theory, just that the salt means they'd need to target her specifically and the pessimisation means it'd cost money, and so why her?
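If anyone's curious how that Pwned Passwords check works, here's a minimal Python sketch against the public k-anonymity range API (api.pwnedpasswords.com). The User-Agent string is made up, and only the first five hex characters of the password's SHA-1 ever leave your machine:

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in Pwned Passwords."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: send only the 5-char SHA-1 prefix; the server returns
    # every known suffix sharing that prefix, and we match locally.
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},  # hypothetical UA
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("hunter2"))  # large count: this password is well known
```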
I guess the reason not to is that it invites Monday-morning quarterbacks. "Oh, why did they use PBKDF2 with this many rounds? Why not bcrypt? Why not Argon2?" and so on.
They've provided some pretty reasonable information.
Not just 'your account details are safe.'
To me, it looks like just 'Your account details are safe.'
> Additionally, Dell cybersecurity measures are in place to limit the impact of any potential exposure. These include the hashing of our customers’ passwords and a mandatory Dell.com password reset.
Hashed, how? Still using MD5? Is there even a salt?
Verified, by whom? Tim's brother-in-law's new startup, with no security experts on staff? Verified as in the hashing algorithm was tested for collisions? That Dell were using it in the correct manner? Or just, 'Hey, I know that library, it works if you use it right.'
> Dell also retained a digital forensics firm to conduct an independent investigation
Who? Is this just someone who will tick boxes? Or is it a group who know what they're doing? Or were they just hired by marketing based on a pretty website?
> We are disclosing this incident now based on findings communicated to us by our independent digital forensics firm about the attempted extraction.
Wait... This investigation has already been done? Okay... They would have told you a hell of a lot more than you're telling us... So we can't look forward to more information?
> Though it is possible some of this information was removed from Dell’s network, our investigations found no conclusive evidence that any was extracted.
> Credit card and other sensitive customer information was not targeted.
One cannot be said conclusively, whilst the other can... Why? Tell us that CC data is kept separately, and tell us it is safe too. Just saying it's hashed doesn't mean bupkus, so feel free to say it publicly; you reveal nothing about your security features.
> The potentially extracted customer information is limited to names, email addresses and hashed passwords. There is no conclusive evidence any customer information was extracted. Additionally, Dell cybersecurity measures are in place to limit the effects of a potential exposure.
What additional cybersecurity measures? If the data is gone, it's in the wind: names, emails, and possibly-breakable passwords. Are you talking about how you closed the hole? Then say how you accidentally exposed your victims.
---
Finally, before anyone says that this is an excessive amount of information for Dell to give out... It's what other tech companies relay in their post-mortems. [0]
All this is, is Dell admitting they had a problem. Not saying what that problem was, and not saying what they're doing to prevent it in future. And assuring their victims that they're taking care of them, despite their victims possibly sitting on lost information (a password, possibly in the wild) for nearly a month.