I found a Vulnerability. They found a Lawyer (dixken.de)
Three thoughts from someone with no expertise.
1) If you make legal disclosure too hard, the only way you will find out is via criminals.
2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.
3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is secure. I'm not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all users. That would change the incentives so that the most basic vulnerabilities are gone, and software engineers become more economical than lawyers.
In other industries there are professional engineers - people who have legal accountability. I wonder if the CS world will move that way, especially with AI, since those engineers are the ones who sign things off.
For people unfamiliar: most engineers aren't professional engineers. There are more legal standards for your average engineer than for a software engineer, and they are legally obligated to push back against management when they think there's danger or an ethics violation, but that's a high bar and very few ever get in legal trouble - only the most egregious cases. Professional engineers, though, are the ones who check all the plans and the inspections. They're more like a supervisor, someone who can look at the whole picture. They get paid a lot more for their work, but they're also essential to making sure things are safe. They also end up having a lot of power/authority, though at the cost of liability. Think of how in the military a senior doctor can overrule all others (I'm sure you've seen this in a movie). Your average military doctor or nurse can't do that, but the senior ones can, though it's rare and very circumstantial.
You'd be surprised how many SE's would love for this to happen. The biggest reason, as you said, is being able to push back.
Having worked in low-level embedded systems that could be considered "system critical", it's a horrible feeling knowing what's in that code and having no actual recourse other than quitting (which I have done on a few occasions because I did not want to be tied to that disaster waiting to happen).
I actually started a legal framework and got some basic bills together (mostly wording) and presented this to many of my colleagues. All agreed it was needed and loved it, and a few lawyers said the bill/framework was sound... it even had some carve-outs for "mom-n-pops" and some other "obvious" things (like allowing for a transition into it).
Why didn't I push it through? 2 reasons:
1.) I'd likely be blackballed (if not outright killed) because "the powers that be" (e.g. large corps in software) would absolutely -hate- this ... having actual accountability AND having to pay higher wages.
2.) Doing what I wanted would require federal intervention, and the climate has not been ripe for new regulations, let alone governing bodies, in well over a decade.
Hell, I even tried to get my PE in Software, but right as I was going to start the process, the PE for Software was removed from my state (and isn't likely to ever come back).
I 100% agree we should have a PE for Software, but it's not likely to happen any time soon, because Software without accountability and regulation makes WAY too much money ... :(
The problem with software is that it's all so, so decentralized.
If you're building a bridge in South Dakota, there's somebody in South Dakota building that bridge. That person has to follow South Dakota laws, and those laws can require whatever South Dakota regulators want, including sign-offs by professional engineers.
If you're a South Dakota resident signing up for a web portal, the company may have no knowledge of your jurisdiction specifically (and it would be a huge loss for the world if we moved to a "geo-block every single country by default until you clear it with your lawyers" regime). That portal may very well be hosted in Finland by a German hosting company, with the owners located in Sweden, running Open Source software primarily developed in Britain. It's possible that no single person affiliated with that portal's owner ever stepped foot in your jurisdiction.
I work in manufacturing, though this comment is a generalization, and depends on what industry you’re in. What happens in practice is that products are certified by a third party regulatory agency, probably Intertek. They’re the ones who hire the professional engineers. The pushback comes from the design engineers being aware of the regulations, and saying: “This won’t get past Intertek.”
The downside is, bring money. Also, don’t expect to have an agile development process, because Intertek is a de facto phase gate. The upside is that maintaining your own regulatory lab is probably more expensive, and it’s hard to keep up with the myriad of international standards.
As for mom-n-pops, why do you want competition from them? Regulatory capture always favors consolidation of an industry. What happens in practice for consumers is that stuff comes from countries where the regulatory process can be bypassed by just putting the approval markings on everything.
Okay, that was sarcastic, but it’s possible that the vitality of software owes a lot to the fact that it’s relatively unregulated.
On the other hand, I wouldn’t mind some regulatory oversight, such as companies having to prove that they don’t store my personal data.
Note that I’m naming Intertek, not to point a finger at them, but because I don’t know if they have any competitors.
If you actually have that framework, then give it to someone with less to lose & allow them to share it with the world.
> You'd be surprised how many SE's would love for this to happen

I'm one of them, and for exactly the reason you say. I worked as a physical engineer previously, and I think the existence of PEs changes the nature of the game. I felt much more empowered to "talk back" to my boss and question them. It was natural to do that and even encouraged. If something is wrong, everyone wants to know. It is worth the disruption, and even dealing with naive young engineers, rather than harming someone. It is also worth doing because it makes those engineers learn faster and it makes the products improve faster (insights can come from anywhere).
Part of the reason I don't associate my name with my account is so that I can talk more freely. I absolutely love software (and yes, even AI, despite what some might think given my comments), but I really dislike how much deception there is in our industry. I do think it is on us as employees to steer the ship. If we don't think about what we're building and its consequences, then our ship is beholden to the tides, not us. It is up to us to make the world a better place, to make sure our ship is headed towards utopia rather than dystopia (even if both are more idea than reality). I'd argue that if it were up to the tides, we'd end up crashing into the rocks. It's much easier to avoid that by managing the ship routinely than in a panic once we're already headed for them.

I think software has the capacity to make the world a far better place, and that we can do good and make money at the same time. But I also think the system will naturally disempower us. When we fight against the tides, things are naturally harder and it may even look like we're moving slower. But I think we often confuse speed and velocity, frankly, because direction is difficult to understand or predict. Still, it is best that we try our best and not just abdicate those decisions.

The world is complex, so when things work they are in an unstable equilibrium, which means small perturbations knock us off; like one stuck ship shutting down a global economy. It takes a million people and a billion tiny actions to make things go right and stay right (easier to stay than fix). But many of the problems we hate and are frustrated by are the more stable states: how wealth pools up, gathered by only a few; how power does the same; and so on.

Obviously my feelings extend beyond software engineering, but my belief is that if we want the world to be a better place, it takes all of us. The more who are willing to do something, the easier it gets. I'd also argue that most people don't need to do anything that difficult. The benefit and detriment of a complex machine is that small actions have larger consequences. Just because you're a small cog doesn't mean you have no power. You don't need to be a big cog to change the world, although you're unlikely to get recognition.
I also come from a more "traditional engineering" background, with PEs and a heavier sense of responsibility/ethics(?). I definitely think that's where software is going, although in my somewhat biased opinion the bar for traditional engineering, in terms of students and expected skill and intuition, was much higher than for CS/CE, which means the get-rich-quick nature of it might go away.
I think you’re taking the professional responsibility that engineers are given too far. They are not given that responsibility to make political decisions, as you seem to be implying. Engineers are professionals in the hard sciences, not in social sciences. They only have power over ethical and safety issues directly pertaining to technical matters. I think ethics in this sense includes only very widely accepted ethical opinions, not anything that people from different political parties would disagree on. Engineering, in other words, is not political. Making the world better, as you put it, is something that requires political decisions. I hope people don’t make this confusion because the last thing most of us would like to see is Engineering becoming a political endeavor, including software engineering.
You're the one that brought up politics. You're right that they're hard to decouple from ethics as that's essentially how the parties form.
But where I strongly disagree with you is the idea that we should set aside our own personal ethics and adopt what we believe to be society's. You're asking the impossible; such a thing doesn't exist. Whichever country you're in, you'll find a diverse set of opinions; the only universal ethics are the most basic ones. But even if it did exist, I'd still disagree, because you're asking engineers not to be human. You'd be discriminating against people based on religion, on culture, on their humanity. I'm extremely opposed to turning humans into mindless automata. Everyone has the right to their own beliefs, and that is our advantage as a species.
Engineers are citizens too.
In many countries you are only allowed to call yourself a Software Engineer if you actually have a professional title.
It is countries like the US, where anyone can call themselves whatever they feel like, that have devalued our profession.
I have been on the liability side ever since. People don't keep broken cars unless they cannot afford anything else; software is nothing special, other than the lack of accountability.
>> In many countries you are only allowed to call yourself a Software Engineer if you actually have a professional title.
Which countries are those? Are you also only allowed to call yourself a Musician if you have a Conservatory Degree?
Portugal, Germany, Canada, Switzerland are the ones I am aware of.
Software Engineering degrees are certified by the Engineering Order, universities cannot call a degree that just because they feel like it, and any kind of legally binding document, when notarised, requires professional validation.
First of all, hardly anyone cares (titles show up in default email signatures etc. pp. even if people don't want that - but you said legally binding, and I think that just usually never happens).
And second, at least in Germany it's also somewhat of a bullshit situation: 80% of the people who do a "normal" Computer Science degree don't get that title (Diplom-Informatiker/M.Sc.), but the 20% who happen to study a certain degree at a certain uni (one that is mostly related, but not the default Computer Science/Software Engineering one) are/were getting their "Diplom-Ingenieur".
Thanks to Hamburg you can call yourself an Ingenieur with a bachelor of science (German source: https://www.bit01.de/blog/informatiker-ingenieur-titel/ ... although it's 5 years old now. Should still be valid.)
They regulate the title not the profession.
I mentioned legal signatures for a reason.
No Software Engineer in title or in real skills will do such a thing.
Why the glib dismissal, when you most certainly live in a country where the use of titles like 'doctor', 'dentist', 'officer' or 'lawyer' is regulated?
This isn't really that exceptional and as someone from a place where not just anyone can call themselves engineer I'm always baffled when people think that it is.
Your comment completely misses the point of my question. Those countries are regulating the title not the profession.
Here is the difference: Doctors have liability for their medical practice, while the real Engineers, meaning those doing Bridges and Buildings that can kill thousands of people if they fall, have a professional obligation and responsibility for the outcomes of their designs and implementation.
I can guarantee you, no Software Engineer from Portugal to Germany will be willing to guarantee the behavior and fitness for purpose of any System or Software product they develop :-) As you can very well see, if you bother to read the full details of the Software License disclaimers of any software from any large company, from Microsoft to Oracle, IBM and others.
As such, those are Software Engineers in title only, which is convenient when being hired for posts within Government and similar...
That is the thing: software can kill, or destroy lives, in the presence of bugs.
Again, sign any legal documents as engineer, and a court visit might turn into reality.
If Oracle, IBM or Microsoft, after 50 years and employing thousands of Software Engineers, include the standard disclaimers on their Software, I don't think those in title only should make much fuss about the Software Engineer badge...
Exactly this - I had a role in a multinational, US-founded company, however - I was based in Canada - our title had the name "engineer" contained within it. We were NOT by any means certified professional engineers according to any regulatory body - we were great at our jobs, but that was the reality.
We were NOT allowed to refer to our job title when deployed to the province of Quebec, which has strong regulations around the use of the term "engineer". It was fine - we still went, did our jobs, satisfied our customers and fixed their issues.
And the people of Quebec are much safer for it. /s
This divide between Canada and the US has existed since the birth of software engineering as a thing. Where is the evidence the protected name has done anything useful for either Canadian software engineers or its citizens?
>It is countries like the US where anyone can call themselves whatever they feel like that have devalued our profession.
How have they devalued the profession when that profession's labor is worth the most in the US?
If I start calling "bananas" "apples" then I devalue the meaning of the word "apple". You can't differentiate which I'm referring to.
If I start calling "bananas" "apples" the price at the store doesn't change.
I think you don't understand what the word "value" means. You understand one meaning, but it has more than one.
Professional labour value isn't synonymous with late stage capitalism without ethics or morals.
Now if you mean how much one is willing to sell themselves to late stage capitalism, producing low quality products and enshittification, maybe that is the bang for buck right there.
How do you explain the low quality of software coming out of all of the other countries you have mentioned with protected titles?
The software is happening regardless of title and you haven’t given any examples of the value of where kissing the ring to get the certification has been critical to Canada/Germany/Switzerland producing better software.
Are all programmers called engineers in these countries?
You've made such a wild assumption that I'm convinced you're more interested in fighting than discussing.
There are engineers, and there are brick layers.
You mean Android's great quality, or Chrome CVEs by the way?
Just because you have an engineering degree doesn't mean your code is of better quality and security than someone without an engineering degree.
Signed, someone with a CS engineering degree.
I don’t think the current cost structure of software development would support a professional engineer signing their name on releases or the required skill level of the others to enable such …
We’d actually have to respect software development as an important task and not a cost to be minimized and outsourced.
> In other industries there are professional engineers.
I think this is mostly a US thing.
I wish I would have a rubber stamp like professional engineers do.
We check the output of engineers, that's what infra audits and certs are for. We basically tell industry: if you want to waste your money on poor engineers whose output doesn't certify, go ahead.
You could do that with civil engineering: anyone gets to design bridges; once the bridge is done we inspect it - sorry, X isn't redundant, your engineering is bad, tear it down.
You couldn't do that with civil engineering, because checking if a bridge was built correctly is actually really hard, and it's why it's such a process for engineers to sign off on phases of construction.
You could look at the blueprints and calcs that were used to build it and inspect it, which they do. There’s no fundamental difference. Firms will self enforce engineering rigor because it’s a waste of money not to. Making it more stringent when lives are at stake makes sense, thats the only reason you could use to separate them. Also that can even get blurry in eg avionics software.
A lot of responses below talking about what a 'certified' or 'chartered' engineer should be able to do.
I thought it would be noteworthy to talk about another industry, accountancy. This is how it works in the UK, but it is similar in other countries. They are called 'Chartered Accountants' here, because their institute has a Royal Charter saying they are the good guys.
To become a Chartered Accountant has no prerequisites. You 'just' have to complete the qualification of the institute you want to join. There are stages to the exams that prior qualifications may gain you exemptions from. You also have to log practical experience proving you are working as an accountant with adequate supervision. It takes about 2-3 years to get the qualification for someone well supported by their employer and with sufficient free time. Interestingly many Accountants are not graduates, and instead took technician level qualifications first, often the Association of Accounting Technicians (AAT). The accounting graduates I have interviewed wasted 3 years of their lives...
There are several institutes that specialise in different areas. Some specialise in audit. One specialises in Management Accounting (being an accountant at a company really). The Management accountants one specifically prohibits you from doing audit without taking another conversion course. All the institutes have CPD requirements (and check) and all prohibit you from working in areas that you are not competent, but provide routes to competency.
There are standards to follow: Generally Accepted Accounting Practice (GAAP), UK Financial Reporting Standards (FRS) and the international equivalent (IFRS). These cover how Financial Statements are prepared. There are separate standard-setting bodies for these. There is also a set of standards that cover how an audit must be done. Then there is tax law. You are expected to know them for any area you are working in. All of these are legally binding on various types of corporation. See how that switches things around? Accountants are now there to help the company navigate the legal codes. The directors sign the accounts and are liable for misstatements; that encourages them to have a director who is an accountant... an audit committee, etc.
How does that translate to software?
There are lots of standards, NIST, GDPR, PCI, some of which are legally or contractually binding. But how do I as a business owner know that a software engineer is competent to follow them. Maybe I am a diving company that wants a website. How do I know this person or company is competent to build it? It requires software engineers with specific qualifications that say they can do it, and software engineers willing to say, 'I'm sorry I am not able to work in this field, unless I first study it'.
I’m big on increasing accountability and responsibility for software engineering, but I’ve learned about SEI CMMI, and worked in an ISO 9001 shop.
In some cases, these types of structures make sense, but in most others, they are way overkill.
It’s a conundrum. One of the reasons for the crazy growth of software, is the extreme flexibility and velocity of development, so slamming the brakes on that, would have enormous financial consequences in the industry (so … good luck with that …).
But that flexibility and velocity is also a big reason for the jurassic-scale disasters that are a regular feature of our profession. It's entirely possible for people who are completely unqualified to develop software that is full of holes. If they can put enough lipstick on it, it can become quite popular, with undesirable consequences.
I don’t think that the answer is some structured standard and testing regime, but I would love to see improvement.
Just not sure what that looks like.
> but in most others, they are way overkill.
As an accountant I am able to enforce an accounts regime appropriate to my entity, with concepts like 'materiality' to help. I'm not sure about ISO 9001; I'm more familiar with PCI DSS, and I found it to be very prescriptive and 'all or nothing' compared with accounting standards. For instance, in a small company it is perfectly reasonable to state verbally to your auditor that your control over something is that you are close enough to the transactions to see misstatements by other people sat in the same room. Or even that you have too few people to exercise segregation-of-duties controls. In a larger company it is not ok. I don't see that same flexibility in other kinds of standards.
> PCIDSS
Just got a PTSD flashback...
Regarding your 2), in other industries and engineering professions, the architect (or civil engineer, or electrical engineer) who signed off carries insurance, and often is licensed by the state.
I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet, but I often wonder if we should require some sort of certification and insurance for large business sites that handle personal info or money. There'd be a Certified Professional Software Engineer who has to sign off on it, and thus maybe has the clout to push back on being forced to implement whatever dumb idea an MBA has to drive engagement or short-term sales.
Maybe. It's not like it's worked very well lately for Boeing or Volkswagen.
FWIW there is no barrier like that for your physical engineers. Even though, as you note, professional engineers exist, most engineers aren't professional engineers, and that's why the barrier doesn't exist. We can probably follow a similar framing. I mean, it is already more common for licensing to be attached to even random software, and that's not true for the engineers' equivalents.

> I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet

Oh, there have been many cases where software engineers who are not professional engineers with the engineering mafia designation get sidelined by authorities for lacking standing. We absolutely should get rid of the engineering mafias and unions.
https://ij.org/press-release/oregon-engineer-makes-history-w...
It's kinda wild that you don't need to be a professional engineer to store PII. The GDPR and other frameworks for PII usually do have a minimum size (in # of users) before they apply, which would help hobbyists. The same could apply for the licensure requirement.
But also maybe hobbyists don't have any business storing PII at scale just like they have no business building public bridges or commercial aircraft.
I'm wary of centralizing the powers of the web like that.
The web is already mostly centralized, and the corporations which should be scrutinized in the way they handle security, PII and overall software issues are without oversight.
It is also a matter of respect towards professionals. If a civil engineer says that something is illegal/dangerous/unfeasible, their word is taken into account and not dismissed - unlike in, broadly speaking, IT.
I just don't feel we want the overhead on software. I'm in an industry with PEs and I have beef with the way it works for physical things.
PII isn't nearly as big a deal as a life, tbh. I'd rather not gatekeep PII handling behind degrees. I want more accountability, but PEs for software seem ill-suited for the problem. Principally, software is ever evolving and distributed; a building or bridge is mostly done.
A PR is not evaluated in a vacuum
The question is who defines security.
I, as a self-proclaimed dictator of my empire, require, in the name of national security, all chat applications developed or deployed in my empire to send copies of all chat messages to the National Archive for backup in a form encrypted to the well-known National Archive public key. I appoint Professional Software Engineers to inspect and certify apps to actually do that. Distribution of non-certified applications to the public or other forms of their deployment is prohibited and is punishable by jail time, as well as issuing a false certification.
Sounds familiar?
The difference from civil engineering is that governments do not (yet?) require a remotely triggerable bomb to be planted under every bridge, which would, arguably, help in a war, while they are very close to this in software. They do something similar routinely with manufacturing equipment - mandatory self-disabling upon detecting (via GPS) operation in countries under sanctions.
It is my understanding that bridges in Switzerland have bombs, or at least holes for bombs.
Worth noting that "PII" is not a concept under the GDPR, and that its definition of Personal Data is much broader than identifiable information.
GDPR doesn't have any minimum size before applying. There's a household exemption for personal use, but if you have one external user, you're regulated.
I generally agree with you, but:
> If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper
To match this metaphor to TFA, the architect has to break in to someone else's apartment to prove there's a flaw. IANAL but I'm not positive that "I'm an architect and I noticed a crack in my apartment, so I immediately broke in to the apartments of three neighbours to see if they also had cracks" would be much of a defence against a trespass/B&E charge.
Nah, this is more like “I put a probe camera in the crack and I ended up seeing my neighbor’s living room for a second”.
Another missing link here is the relationship between the stock price and the corporation's security vulnerability history. Somehow, I don't know how, but somehow stock prices should reflect the corporation's social responsibility posture, part of which is obviously information security.
> companies should be categorically required to have a cyber audit
I work with a firm that has an annual pen test as part of its SOC2/GDPR/HIPAA audit, and it's basically an exercise in checking boxes. The pen test firm runs a standard TLS test suite, and a standard web vulnerability test suite, and then they click buttons for a while...
The pen test has never found any meaningful vulnerabilities, and several times drive-by white hats have found issues immediately after the pen test concluded.
Agree with the points. Cybersec audits are mandatory for insurance companies in most countries. This list needs to be expanded.
There are jurisdictions (and cultures) where truth is not an absolute defence against defamation. In other words, it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet. The nail that sticks out gets hammered down.
Given that this is Malta in particular, the author probably wants to avoid going there for a bit. It's a country full of organized crime and corruption where people like him would end up with convenient accidents.
> it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet.

At least in the US there is a path of escalation. Usually, if you have first contacted those who have authority over you, then you're fine. There are exceptions in both directions: where you aren't fine, or where you can skip that step. Government work is different. For example, Snowden probably doesn't get whistleblower protection because he didn't first leak to Congress. It's arguable, though, but also IANAL.
That's not how any of this works. You are basically arguing for the right to hide criminal actions. Filing with the CSIRT is the only legal action for the white hat to take. This is explicitly by design. Complaining about it is like complaining the police arrested you for a crime you committed.
Vulnerability Researcher here… Unless your target has a security bounty process or reward, leave them alone. You don't pentest a company without a contract that specifies what you can and can't test. Although I would personally appreciate and thank a well-meaning security researcher's efforts, most companies don't. I have reported 0days to companies that HAVE bounties and they still tried to put me in hot water over disclosure. Not worth the risk these days.
We had a situation in Sweden where a person found that if you removed part of the URL (/.../something -> /.../) for an online medical help line service, you got back an open directory listing which included files with medical data of other patients. This finding was sent to a journalist, who contacted the company and made a news article of it. The company accused the tipster and the journalist of unlawful hacking, and the police opened a case.
But was it? Is it pen testing to remove part of a URL? People debated this question a bit in articles, but then the case was dropped. The line between pen testing and normal usage of the internet is not a clear one, but it seems we all agree that there is a line somewhere and that common sense should guide us in some sense.
This wasn’t a pen test? It was a drive by “oh fuck the platform I’m using is completely insecure”.
This dive instructor was using this insurance company for his clients, and thus had a responsibility to prevent any known risk (data privacy loss in this case).
So he had two options: take his clients and his business to another insurer (and still inform all his current and previous clients about their outstanding risk), or try to help the insurer resolve the risk.
Good guideline advice but it seems you didn't read the article. Their personal data was at risk here. Leaving them alone would very likely result in a breach of this person's data. Both he and you have an ethical responsibility to at minimum notify the business of this problem and follow up with it.
I also guess you haven't read the article either:
> And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It's the response that tells you everything about an organization's security culture.
See, the moral of the story is that the entity cares more about its face than its responsibility to fix the bug; that's the biggest issue.
He also pointed out that bugs do happen and are reasonable, and he agreed to expose them in an ethical manner - but that goodwill, however well or ill intentioned, may not be met with the same tolerance, especially when it comes to "national" level stuff, where the bureaucrats know nothing about tech but do know it has political consequences - a "deface" if it were exposed.
Also, I happened to work with them before, and I know exactly why they have a lot of legal documents and proceedings: bureaucracy, the bad kind, the corrupt kind, such that every wrong move you make can bring huge, if not capital, punishment. So in order to protect their interests they'd rather do nothing, as that is unfortunately the best option. The risk associated with fixing that bug is so high that they'd rather not take it, and let it rot.
There are a lot of systems in Hong Kong that are exactly like that, and the code just stays rotten until the next batch of money comes in and opens up a new theatre of corruption. Rinse and repeat.
That’s not how it works. You are not ethically responsible to hack every company you interact with.
No, that's exactly how it works when you're Certified.
https://www.giac.org/policies/ethics/
"I will protect confidential and proprietary information with which I come into contact."
GIAC has zero authority, any group of people can get together and make their own policies and print a nice little certificate when somebody applies.
I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.
I guess I should feel lucky they didn't try to have me criminally prosecuted.
That could be a hack or something the company sold to a third party.
During a property search for rentals in the UK, I created a throwaway alias email (pointing to my regular account), as I did not really trust them with my data. This was not for those requiring me to provide credit-check papers and the names of my children (!! yes, you read that right, names of children!) in their web form at the very first contact, just to start a conversation about whether a viewing was even possible and then perhaps schedule one. No. Those were avoided completely (despite the desperate property market for renters, I am not that desperate: eventually we left the UK in large part because of property troubles). Two of those were reported to the relevant authority (one case got confirmed after several months but is still pending after more than a year; the other sank, apparently. My trust in UK institutions is not elevated). There were more than two requiring a full set of data on the prospective viewing candidate.
The throwaway email was for the ""reliable"" ones - the trusted names, or those without over-reaching data collection (one big name, Cheffin, one of the reported ones, had the over-reaching habit).
Having a throwaway alias proved beneficial. From zero spam, suddenly spam started to arrive at a frequency of about 4/week, and kept coming until the alias got disabled. I cannot tell which was the culprit; I only have a shortlist based on timing. But that never-elsewhere-used email somehow got to fraudster elements from the few UK property agent organizations I contacted, in a very short time (a few weeks).
Every single time I order from KFC, I get an e-mail by a hot girl in my area. You think I can sue them for free chicken wing buckets?
Those aren't spam. It's that hot girls really want your wings.
Same with me. I started to get spam from the email I used for a Portuguese airline. They didn't even respond.
I've had multiple "big companies" leak my randomly generated email addresses. I create a unique one for each such account, like say my airline frequent flyer account for delta, and I've had several of those leak.
blah1381812301.318719@somedomain.com would never be guessed.
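A minimal sketch of how such an address can be generated (assuming a catch-all on a domain you own; "somedomain.com" is just a placeholder):

```python
# Per-service address: the service name makes any leak attributable,
# the random token makes the address infeasible to guess.
import secrets

def service_address(service: str, domain: str = "somedomain.com") -> str:
    """Return a unique, unguessable address to hand out to one service."""
    return f"{service}.{secrets.token_hex(8)}@{domain}"

print(service_address("delta"))  # e.g. delta.9f3a1c0b7e42d5a8@somedomain.com
```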
Same, then later learned about TAP being breached. No disclosure from the company itself though...
always cc the local GDPR office when reporting such things
They won't do anything. Had this exact scenario with two Shopify-based sites where my address somehow ended up with the second shop. Reported it, shop 1 investigated themselves and found themselves to be innocent, case closed.
Shopify shares these I think, no?
That would be illegal. I doubt Shopify are to blame here, it's more likely one of the gazillion plugins that every shop uses was the vector. Either way, it's highly likely the shop owner is the data controller, from a legal perspective.
(Scenario: E-Mail address A with shop A, address B with shop B, then received a newsletter I did not subscribe to [already illegal] from shop B to address A. Only common data point: PayPal account.)
They'll just be incorporated in Ireland who are more than happy to be a haven for such criminals.
Where can I read more about this?
How do you generate the email addresses? Do you run your own e-mail server or do you use a third-party service?
A few ways I've heard about: DuckDuckGo.com has a system that generates a random email address on their domain whenever you need one; you request a new alias and they create a permanent mapping from that new address to your real address. Then mail sent to, say, Foo-Bar-Hotdog@duck.com goes to you, since duck remembers the mapping to your address. You can reply back and duck handles the anonymous mapping.
Or you can have a catch-all email address on your own domain, where anything sent to any alias on your domain gets forwarded to your own address. Then hamburger@myDomain.com and mcdonalds@myDomain.com go to your real private address, and you don't have to set up each alias. Anytime you join a new service, say reddit, you tell them your address is "reddit@myDomain.com".
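Conceptually, the catch-all routing is just this (a hedged sketch, with myDomain.com as the stand-in domain from the examples above):

```python
# Every local part on the domain lands in one real inbox, and the alias
# itself names the service you gave it to - so if spam arrives, the
# alias tells you who leaked.
REAL_INBOX = "me.private@myDomain.com"  # placeholder inbox

def route(address: str) -> str:
    local, _, domain = address.partition("@")
    if domain.lower() != "mydomain.com":
        raise ValueError("not our domain")
    print(f"delivering mail for alias '{local}' -> {REAL_INBOX}")
    return REAL_INBOX

route("reddit@myDomain.com")
route("mcdonalds@myDomain.com")
```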
All of these have a level of pain associated with them. And they aren't that private. The government could no doubt get a court order to pierce the obscured email addresses.
There's proton email and many others. All of these are too painful for most people.
I have wondered if people who want to be really secret set up a chain of these anon mail forwarding systems.
Own the domain and put a catch-all on that domain. No need to generate anything.
Proton lets me bring my own subdomain for those random emails and does a pretty good job of tracking which email was given to whom. It also supports hiding your email even if you want to initiate the email contact, not just reply (the plus scheme in mail addresses doesn't allow this). Otherwise you can also use their domain, to stay fully anonymous.
So far I've been happy. I hope I'll stay happy.
I've been happy with Proton too. I use my own domain and Proton's catch all for this. I always register using addresses like service.name@matheusmoreira.com.
Fastmail will let you create any number of "aliases" as they call them, with not too much friction.
If you’re on Gmail, there’s “plus addressing” - this allows you to append any term after your email - and then sort accordingly.
So if your Gmail is foo.bar@gmail.com you can use foo.bar+servicename@gmail.com and the mail will still end up in your mailbox. Then you can create a rule that sorts incoming mails accordingly.
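A tiny sketch of the sorting side, reusing the example address above:

```python
# Extract the "+tag" from a Gmail-style address so a filter can file
# the mail by service.
def plus_tag(address: str):
    local = address.split("@", 1)[0]
    return local.split("+", 1)[1] if "+" in local else None

assert plus_tag("foo.bar+servicename@gmail.com") == "servicename"
assert plus_tag("foo.bar@gmail.com") is None
```

One known limitation: some signup forms reject "+" characters, and anyone who leaks your address can trivially strip the tag.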
I use addy.io
Since the author is apparently afraid to name the organisation in question, it seems the legal threats have worked perfectly.
Or maybe in the diving community, "Maltese insurance company for divers" is about as subtle as "Bird-themed social network with blue checkmarks".
I'm a diver, DAN is the only company I can name that specialises in diving insurance.
Huh, apparently they're registered in Malta, what a coincidence...
I read the entire article thinking it said driving instructor. It doesn't really change anything, but it makes so much more sense that he's a part-time diving instructor.
checks out with both Perplexity[0] and top Google results
[0]: https://www.perplexity.ai/search/maltese-scuba-diving-insura...
Interesting that perplexity takes a random Redditor comment as fact...
Yeah, so many software engineers don't verify AI search results. Hey people, LLM-generated search results aren't reliable and may well contain hallucinations. You have to verify anything they say.
Even better, one that specifically says "I don't know if that's it for sure"
There's pretty much only one global insurer affiliated with dive schools, so this is spot on
Well, it is. A quick search revealed the name of a certain big player, although there are some other local companies whose policies can be extended to "extreme sports".
https://www.reddit.com/r/scuba/comments/1r9fn7u/apparently_a...
Bluesky?
That's a butterfly.
There is precisely one large, internationally well known company that offers dive insurance and is based in Malta.
They left more than enough clues to figure out that this is DAN (Divers Alert Network) Europe.
Ironically, this will garner far more attention and focus on them than if they had disclosed this quietly without threats.
If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.
There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
Maybe.
Or maybe they took what they know to sell to the black hats.
This is legal, correct?
If you can reasonably know they're criminal? No. If you sell an exploit instead of knowledge of a vulnerability? No. If they pay you with something they stole? No.
But otherwise? Usually, yes.
Hey TFA, other people have gone to prison for finding monotonic user/account IDs and _testing_ their hunch to see if it's true. See, doing that puts you at great risk of violating the CFAA. Basically, the moment you knew they were allocating account IDs monotonically and with a default password was the moment you had a vulnerability that you could report without fear of prosecution, but the moment you tested that vulnerability is the moment you may have broken the law.
Writing about it is essentially confessing. You need a lawyer, and a good one. And you need to read about these things.
The blog is under a German domain, the company is from Malta. Why would they care about a US law again?
Because Americans can never comprehend literally anywhere else on earth existing. Genuinely, if any other place on earth tried this crap… the Americans would lose their minds.
Why don’t you just get a rotisserie chicken from Costco and put some money into your 401k? Be careful, the IRS knows exactly how much taxes you owe.
IANAL, but the law in Germany is basically the same in this case: accessing data that's meant to be protected and not intended for you is illegal. It depends somewhat on the interpretation of what "specifically protected" ("besonders gesichert") means. https://www.gesetze-im-internet.de/stgb/__202a.html
Exactly. My apologies for not noticing this was over in Europe, but you'll find laws similar to CFAA all over the place. And in Europe it might be worse simply because you might have 27 different such laws _and_ the European arrest warrant, and you might not know which of those 27 laws applies. (I guess you could say the same about the U.S., with 50 instead of 27, but at least for this sort of thing in the U.S. it's mainly federal law that matters the most.)
Can a non-specific password constitute a specific protection? I guess not.
It can. The fact there is a password, even if you can trivially find said password, is considered a protection. The German law is completely absurd here.
What is CFAA? I couldn't find anything about it in EU or Malta. Is it something in India or China? Or Japan? Hmm, maybe I'm missing another country.. Australia?
Computer Fraud and Abuse Act
Parent is making the point that people from the US often forget that other countries exist and adhere to different rules & regulations and it seems like you're unintentionally emphasizing it for them.
For anyone seeking more details on this act, it is embodied as "18 U.S. Code §1030 - Fraud and related activity in connection with computers"[0], and applies specifically to the United States of America, a nation not involved in any way with this incident.
> Basically, the moment you knew they were allocating account IDs monotonically and with a default password was the moment you had a vulnerability that you could report without fear of prosecution
That logic is garbage and assumes there is some arbitrary point at which a user should magically know the difference between a few IDs happening to be near each other versus a system wide problem. The law would use the interpretations of "knowingly", "intent" and in this case "reasonable".
That feels fundamentally broken. How can you expect an organisation to respond appropriately if you don’t provide them any kind of proof?
He had enough proof: his own students, who presumably agreed. And in case the company still pretended there was no problem, you could still crawl their entire user base...
I forgot that US law applies everywhere.
Would a better course of action here have been for him to generate a “test test” account under his?
Then they could kick him out of the org for "creating a bogus account" - "our company isn't bad, you're the bad actor". The bad company he was trying to get to fix their thing didn't behave properly, end of story.
This happens over and over again because for so many companies the natural thing is to hide any problem and threaten to sue anyone who discloses. Software problems have broken that typical behavior, to some extent.
I salute the author of this post who dared to do the right thing. I hope the company comes to their senses and doesn't try to punish the diving instructor. Over and over companies have tried this same "attack the problem reporter" strategy when software problems are revealed.
I find it interesting how American-accented people publish on social media how to access non-linked FBI files related to the Epstein leak, by updating a URL.
I think the right way would be to sell this shit on the darknet and then anonymously reveal the bug to the public.
AFAIK, what this dude did - running a script which tries every password and actually accessing personal data of other people – is illegal in Germany. The reasoning is, just because a door of a car which is not yours is open you have no right to sit inside and start the motor. Even if you just want to honk the horn to inform the guy that he has left the door open.
https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...
For clarification, here's the actual quote from the article describing the process:
> I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after.
No notion of a script, "every password" out of a set of a single default password may be open to interpretation, no mention of data downloads (the wording suggests otherwise), no mention of actual number of accesses (the text suggest a low number, as in "minimum access necessary to confirm the scope").
Still, some data was accessed, but we don't know to what extent or what it actually was, based on the information provided in the article. There's a point to be made about the extent of any confirmation of what seems to be a sound theory at a given moment. But in order to determine whether this is about a stalled number generator or a systematic, predictable scheme, there's probably no way around a minimal test. We may still have a discussion about whether a security alert should include dimensions like this (scope of vulnerability), or should be confined to a superficial observation only.
> running a script which tries every password
This isn't directly applicable to your point, but I need to correct this: they weren't guessing tons of passwords, they were trying one password on a large number of accounts.
Correct you are.
Maybe the law should be changed then. The companies that have this level of disregard for security in 2026 are not going to change without either a good samaritan or a data breach.
He didn't have to crack the site. He could have reported up to that point.
We need a change in law but more to do with fining security breaches or requiring certification to run a site above X number of users.
Showing up without a PoC complicates things.
You can lead a horse to water, as they say.
Suicidal horses who won’t drink pose little risk to other innocent horses!
He downloaded data of multiple users
Yes, that’s the PoC.
Seemingly it could have been scoped tighter.
But complaining about the methodology of your (successful, free, overdue) penetration test is wild.
I understand why the author thought that way, but showing up with private data that the company is obligated to protect complicates things quite a lot more.
I've dealt with security issues a number of times over my career, and I'm genuinely unsure what my legal obligations would be in response to an email like this. He says the company has committed "multiple GDPR violations"; is there something I need to say in response to preserve any defenses the company may have or minimize the fines? What must I do to ensure that he does eventually delete the customer data? If I work with him before the data is deleted, or engage in joint debugging that gives him the opportunity to exfiltrate additional data, is there a risk that I could be liable for failing to protect the data from him?
There's really no option when getting an email like this other than immediately escalating to your lawyers and having them handle all further communication.
where did they mention a script to try passwords? all accounts apparently have the same default password
> is illegal in Germany
Germany is not exactly well-known for having reasonable IT security laws
It's not necessarily just Germany. Lots of countries have laws that basically say "you cannot log in to systems that you (should) know you're not allowed to". Technical details such as "how difficult is the password to guess" and "how badly designed is the system at play" may be used in court to argue for or against the severity of the crime, but hacking people in general is pretty damn illegal.
He also didn't need to run the script on more than one or maybe two accounts to verify the problem. He dumped more of the database than he needed to, and that's something the law doesn't particularly like.
People don't like it when they find a well-intentioned lock specialist standing in their living room explaining they need better locks. Plenty of laws apply the same logic to digital "locksmiths".
In reality, it's pretty improbable in most places for the police to bother with reports like these. There have been cases in Hungary where prestigious public projects and national operations were full of security holes with the researchers sued as a result, but that's closer to politics than it is to normal police operations.
The main problem I have with the real-world analogies we use for hacking is that we assume, like with a home owner, that these companies ultimately care about security and are trying in good faith to make secure systems.
They're not. They're malicious actors themselves. They will expose the absolute maximum amount of data they can with the absolute maximum amount of parties they can to make money. They will also collect the absolute maximum amount of data. Your screen is 1920 by 1080? Cool, record that, we can sell that.
All the common sense practices we were taught in school about data security, they do the opposite. And, to top it off, they don't actually want to fix ANYTHING because doing so threatens their image, their ego, and potentially their bottom line.
And people wonder how the US can just turn off the electric grid of another country on demand...with laws like these, I expect there are local 6 year olds who can do the same.
I agree. You have to know when to stop.
No expert, but I assume anything you do that is good-faith usage of the site is OK. Take screenshots and report the potential problem. But writing a python script to pull down data once you know? That is like getting in that car.
A real-life example of what's fine: you walk past a bank at midnight when it is unstaffed and the doors are open, so you have access to the lobby (and it isn't just the night ATM area). You call the police on the non-emergency number and let them know.
This is exactly what I thought. The person did something illegal by accessing random accounts and no explanation makes this better. Could have asked his diving students for their consent, could have asked past students for their consent to access their accounts - but random accounts you cannot access.
Since this is a Maltese company I would assume different rules apply, but no clue how this is dealt with in Malta.
How the company reacted is bad, no question, but I can't gloss over how the person did the initial „recon“.
Hopefully no criminals turn up to do the illegal thing.
You don't need to retrieve other people's data to demonstrate the vulnerability.
It's readily evident that people have an account with a default password on the site for some amount of time, some of them indefinitely. You know what data is in the account (as the person who creates the accounts) and you know the IDs are incremental. You can do the login request and never use the retrieved access/session token (or use a HEAD request to avoid getting body data but still see the 200 OK for the login) if you want to beat the dead horse of "there exist users who don't configure a strong password when not required to". OP evidenced that they went beyond that and saw at least the date of birth of one user there, by saying "I found underage students on your site" in the email to the organization.
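A rough sketch of that "look only at the status code" idea (the endpoint, field names, and default password here are hypothetical stand-ins, not the actual system's):

```python
# Attempt the login, read only the status code, and never touch the body
# or any session token. stream=True avoids downloading the body; a HEAD
# request variant would avoid transferring one at all.
import requests

def login_succeeds(account_id: int) -> bool:
    resp = requests.post(
        "https://portal.example.com/login",           # hypothetical endpoint
        data={"user": str(account_id), "password": "changeme"},  # assumed default
        allow_redirects=False,
        stream=True,
    )
    resp.close()  # discard the body/token unread
    # Many login forms answer 200 on success or 302 to a dashboard.
    return resp.status_code in (200, 302)
```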
If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice
To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and this code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and just not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules are to be taken leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.
> You don't need to retrieve other people's data to demonstrate the vulnerability.
If you’re reporting to a nontechnical team…which sometimes you are…sometimes you do?
If the nontechnical team is refusing to forward it to whoever maintains the system, they apparently see no problem and you could disclose it to a journalist or the public. Or you could try it via the national CERT route, have them talk to this organization and tell them it's real. In some cases you could send a proof of concept exploit that you say you haven't run, but they can, to verify the bug. You can choose to retrieve only your own record, or that of someone who gave consent. You can ask the organization "since you think the vulnerability is not real, do you mind if I retrieve 1 record for the sole purpose of sending you this data and prove it is real?"
In jurisdictions like the one I'm most familiar with, it's official national policy not to prosecute when you did the minimum necessary. In a case where you're otherwise stuck, it's entirely reasonable to retrieve 1 record for the sake of a screenshot and preventing a bigger data leak. You could also consider doctoring a screenshot based on your own data. By the time they figured out the screenshot was fake, it landed on a technical person's desk who saw that the vulnerability is real
Lots of steps to go until it's necessary to dump the database as OP did, but I'll agree it can sometimes (never happened to me) be necessary to access at least one other person's data, and that more frequently it will happen by accident
If you flip it, we have a dude here admitting to breaching a large number of accounts and gaining access to PII -- including PII about minors.
Are we and the Maltese government just going to trust this guy and assume he has actually deleted everything, with no investigation?
If his goal was to keep the data he wouldn't have reported it?
That doesn't necessarily track. He could have stolen the data, then reported it to clear his own name. He did access more data than he needed to prove that there is a likely breach.
How will you ensure the other people who were exploiting the hole have deleted their copies?
What a weird way to think about this.
Is it? If 10 people may have committed a crime, should we exonerate 1 of them because he reported it and promises he didn't do anything?
That depends on provable intent, and your societal goals for ensuring the next exploit is reported, not ignored or shared online.
Absolutely not. That's not your concern nor your problem.
They're perfectly capable of hiring incident response experts, and companies commonly have cyber insurance that'll pay for it.
"Demonstrating" is dumb and means you turn an ordinary disclosure into personal liability for you.
Blabbing about it on the internet is just the idiot cherry on the stupid cake.
It's illegal in the US, too. This is an incredibly stupid thing to do. You never, ever test on other people's accounts. Once you know about the vulnerability, you stop and report it.
Knowing the front door is unlocked does not mean you can go inside.
Don't comment on topics you know nothing about. Nothing this guy did is illegal in the US. Everything this guy did followed standard procedures for reporting security issues. The company apparently didn't understand anything about running a secure software operation and did everything wrong. And therein lies the problem. Without civil penalties for this type of bad behavior, it will continue. In the US, a lawyer doing this would risk disbarment, as this type of behavior dances on the edge of violating whistleblower laws.
I know exactly what I'm talking about, I'm a security engineer lol. Who has worked with plenty of lawyers.
Yes, this is absolutely illegal. The CFAA is pretty fuzzy when it comes to vuln reporting but accessing other people's accounts without their permission is a line you don't cross. Having a badly secured site is usually not a crime, but hacking one is.
Several jobs ago, some dumbass tested a bunch of API keys that people had accidentally committed on github and then "reported" the vulnerability to us.
The in-house atty I was working with was furious and the guy narrowly avoided legal trouble. If he'd just emailed us about it, we'd've given him something.
Also, whistleblower laws are for employees, not randos doing dumb shit online.
> "is illegal in Germany"
> "Whatever Europe is doing, do the opposite"
on brand
Last year I found a vulnerability in a large annual event's ticket system, allowing me to download tickets from other users.
I had bought a ticket, which arrived as a link by email. The URL was something like example.com/tickets/[string]
The string was just the order number in base 64. The order number was, of course, sequential.
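Decoding it takes one line, which is what makes this kind of "protection" no protection at all. A sketch with a made-up token (not the real system's):

    import base64

    def next_ticket_url(url: str) -> str:
        prefix, _, token = url.rpartition("/")
        order = int(base64.b64decode(token).decode())  # "MTIzNDU2" -> 123456
        bumped = base64.b64encode(str(order + 1).encode()).decode()
        return f"{prefix}/{bumped}"

    # https://example.com/tickets/MTIzNDU2 -> .../tickets/MTIzNDU3
    print(next_ticket_url("https://example.com/tickets/MTIzNDU2"))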
I emailed the organizer and the company that built the order system. They immediately fixed it... Just kidding. It's still wide open and I didn't hear anything from them.
I'm waiting for this year's edition. Maybe they'll have fixed it.
And you are not worried enough about other users to report the company, or at least name them here?
> The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the chilling effect.
Governments and companies talk a big game about how important cybersecurity is. I'd like to see some legislation to prevent companies and governments [1] behaving with unwarranted hostility to security researchers who are helping them.
I'm not a lawyer, but I believe the EU's Cyber Resilience Act combined with the NIS2 Directive do task governments with setting up bodies to collaborate with security researchers and help deal with reports.
The law seems written to target vendors and products rather than services though, reading through this: https://www.acigjournal.com/Vulnerability-Coordination-under...
I truly don’t understand why you decided to take the stance of setting them deadlines and disclosing the vulnerability if they miss them. I understand you had good intentions, but I also can see how this can look like unnecessary escalation and even like blackmail to someone outside the industry, like an insurance manager or a lawyer.
I agree that disclosing a vulnerability in a major web browser or in a protocol makes sense because it’s in the interests of the humanity to fix it asap. But a random insurance firm? Dude, you’re talking to them as if they were Google.
If you really care about them and wish them well (which I believe you do!) you should've just left out the deadlines and disclosure part, and I don't think cc'ing the national agency was that necessary given the scale of the problem. Maybe you should've just given them a call and had a friendly chat over the phone. You would've helped them and stayed friends.
Adding a deadline to a disclosure of a vulnerability of this nature is standard practice. Every day it's not patched is a day data could be compromised. Any halfway competent lawyer should be fully aware of this.
Disclosure without a deadline WILL be ignored.
It does not matter if it's Google or your local boyscouts club, any organization requiring users to provide information that can be abused in the wrong hands takes on a responsibility to handle such data responsibly.
NIS 2 article 12 specifically says the CSIRT must help the reporter and provider negotiate a disclosure timeline. He set a timeline because there's supposed to be a timeline.
I think it is obvious that the author just wants to come across as the great hero bounty hunter he is, and in fact he did reach the HN front page, so good for them.
If he wanted to solve it, he would simply sue them back for breaching his and his clients' personal data, and not make any publicity blog post.
There's always a deadline, otherwise there is no incentive to remediate.
This is what the blog writer wrote in the email informing them about the vulnerability:
> I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.
> Please note that I am fully available to assist your IT team with technical details, verification steps and recommendations from a security perspective.
He is offering a window of 30 days and that he will consider public disclosure only after that window. He didn't say that this was the full and final window. He didn't say that he will absolutely and definitely disclose. He is being more than co-operative by willing to offer his time and knowledge in this matter, even if he doesn't need to.
If they are not Google, then instead of push-and-shove legal threats, they could have been forthcoming and said something like, "We are not an IT company with expertise in this matter. We will definitely need more than 30 days to resolve it. Please let us know if you are agreeable to a longer time window of <n days> before you consider disclosure."
To top it all, they ask to keep this matter away from the authorities despite:
> The Maltese National Coordinated Vulnerability Disclosure Policy (NCVDP) explicitly requires that confirmed vulnerabilities be reported to both the responsible organization and CSIRTMalta.
So he followed the law and that is bad, how?
> I don’t think cc’ing the national agency was that necessary given the scale of the problem.
Children's addresses were publicly accessible via the vulnerability - does a matter have to be large-scale before its urgency is taken seriously?
> Maybe should’ve just given them a call and have had a friendly chat over the phone. You would’ve helped them and stayed friends.
The same could be said about the company. Why are only people expected to be nice and friendly while it is fine for companies to issue legal threats?
I truly don’t understand how you can be so naive xox
Nope, this just doesn't work either.
That's an assumption - maybe backed by experience, but still. The professional way would be to slowly escalate. Tell them nice and friendly. Wait a bit. Increase pressure bit by bit.
You also don't directly shout at anyone making a mistake - at least not the first time.
This is standard practice. Typical HN behaviour to drive by with quite evidently zero relevant background and self-righteously preach for three paragraphs about something that you don’t understand. This industry sucks.
First day on the Internet huh. A word of advice, never go to Reddit or read the Youtube comment section.
It's standard practice and it freaks managers the fuck out, esp if they're not familiar with hacker culture. Maybe the standard practice needs some work? I'm not sure, I understand the perspective of security researchers who want to force action on a fix. But I also completely understand how a deadline is perceived as a threat.
Don't forget that there's lots of gray hat / black hat hackers out there as well, who will begin with an email similar to this, add a bitcoin address for the "bug bounty" in the next, and will end with escalating the price of the "bounty" for the "service" of deleting the data they harvested. It's hard even for tech-savvy managers to figure out which of these you're dealing with. Now put yourself into the shoes of the average insurance company middle manager.
For completeness, I don't think this company's behavior is excusable. I'm just saying that maybe also the security community should iterate a bit more on the nuances of the "standard practice" vulnerability reporting process, with the explicit goal of not freaking people out so bad.
If this freaks them out maybe they shouldn’t roll their own SaaS?
They almost certainly did not. They likely just hired a cheap contractor to get their service up, and went with it when "it worked".
The contractor (who was certainly incompetent) probably looked at a bunch of nightmarishly complex identity API's and said "F** it!", combine that with being grossly underpaid and you get stuff like this.
It's a bad situation, of course, and involving threatening lawyers makes it even more ugly. But I can understand how a very small business (knowing nothing about IT other that what their incompetent contractor told them) might get really offended and scared shitless by some rando giving them a 30-day deadline, reporting them to authorities, and demanding that they contact all affected customers.
Sure they might get rightfully scared because their neglect caused potential issues for their customers and having that public might decrease revenue.
But that is ok I think. They should get scared enough to not risk such neglect again
How is an insurance company a SaaS?
Most likely, the insurance company handles the actual insurance policies, claims, payouts, etc themselves, but uses a contractor to build their website, user portals, etc.
Survival (post diving accident) as a Service
Maybe the standard practice sucks. No matter how you turn it around, it does sound like blackmail. Just because you disclose a vulnerability to an org doesn't mean you have any right or legitimacy to impose a deadline on them; you're not their boss. This is some vigilante shit and it has no justification whatsoever. Report to the org, report to the authorities as needed, and move on.
Without a deadline of some form, when do you escalate to public knowledge so customers can know they might get defrauded in some capacity?
> Without a deadline of some form, when do you escalate to public knowledge so customers can know they might get defrauded in some capacity?
You set a deadline after an initial conversation and urging them to fix it, if they don't respond. I think the idea would be to escalate slowly. Like the original poster said, large tech companies know how to do this and have streamlined the process. But, to someone not familiar with the process, it looks like threats and deadlines imposed by a random person.
I am not defending the company just presenting their possible point of view. It’s worth seeing things with their eyes so to speak to try to understand their motivations.
But that is the intention, isn't it? The company showed neglect. The researcher has a moral right ( and I would say duty) to make that public. It's nice of them to give the company some time to get their shit together. After the vulnerability has been fixed there is no issue for customers in publishing about the neglect. The bad press for the company is deserved.
The idea was to change the initial approach, not mention deadlines, and just see if they'll fix it. Point to the law indicating they should notify the authorities. Then, if they don't respond, give them a timeline and tell them you're notifying the authorities. Like the original post said, this is not Google, not a tech company; this looks like extortion of some sort to them. So it's not that surprising what their response was.
It all depends on the goal. Is the goal for them to fix it most of all? To get them embarrassed? To make a blogpost and get internet points?
Blackmail to gain what? Speedy update to the site? The OP is going to disclose the vulnerability. The only matter up for debate is the timing.
> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
Getting them to agree to your terms pretty much nullifies their domination strategy, and in fact becomes legally binding on them.
It's clear that the intentions of the insurance company are selfish and they want to gain leverage over the reporter. Even if the reporter managed to add a clause about data deletion, the company could still make the reporter's life hell with the remaining clauses that were signed. This is not worth the risk.
He didn't add a clause, he replaced their entire declaration with a single clause of his choice. At least that is how I read it.
Is this Divers Alert Network (DAN) Europe, and its insurance subsidiary, IDA Insurance Limited?
Another commenter basically deduced this
Not a security researcher, but I once found an open Redis port without auth on a large portal. Redis was used to cache all views, so one could technically modify any post and add malicious links, etc. I found the portal admin's email, emailed them directly, and got a response within an hour: "Thanks, I closed the port." I didn't need a bounty or anything, so sometimes it may be easier and safer to just skip all those management layers and communicate with an actual fellow engineer directly
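For anyone wondering what "open Redis port without auth" looks like in practice: if an unauthenticated PING succeeds, anyone who can reach the port can read or rewrite the cache. A sketch assuming the redis-py package and a made-up host:

    import redis

    r = redis.Redis(host="portal.example.com", port=6379, socket_timeout=3)
    try:
        r.ping()  # succeeding with no credentials is the whole vulnerability
        print("Unauthenticated access:", r.dbsize(), "keys in the cache")
    except redis.AuthenticationError:
        print("Server requires AUTH, as it should")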
Companies in Malta have to report these things to the police. Some University of Malta student found a vulnerability in some software, and they got instantly referred to the police rather than being thanked when they reported the issue.
Companies are doing their best to not reward people who diligently inform them about vulnerabilities.
Incrementing user IDs and a default password for everyone — so the real vulnerability was assuming the company had any security to disclose to in the first place.
At this point 'responsible disclosure' just means 'giving a company a head start on hiring a lawyer before you go public.'
I disclosed a vulnerability much like this one. .gov website. Incrementing IDs. No password to crack, just a url parameter with a Boolean value. Pretty much
example.com/clients/fullz?id=123&butDoIReallyHaveToAuth=false
Changed param key but yeah. Just that. You did need to have an authenticated session, but any valid session token would do.
They hit me with the same kind of response. I got a lawyer. Worked out in the end, but I was out three hundred bucks for the consultation
That was the last vulnerability I will ever disclose
Proxies are cheaper than lawyers.
When you are acting in good faith and the person/organization on the other end isn't, you aren't having a productive discussion or negotiation, just wasting your own time.
The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.
Agree - yet, security researchers and our wider community also need to recognize that vulnerabilities are foreign to most non-technical users.
Cold approach vulnerability reports to non-technical organizations quite frankly scare them. It might be like someone you've never met telling you the door on your back bedroom balcony can be opened with a dummy key, and they know because they tried it.
Such organizations don't know what to do. They're scared, thinking maybe someone also took financial information, etc. Internal strife and lots of discussions usually occur, with lots of wild speculation (as the norm), before any communication back occurs.
It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" seems to be taken as malice.
> Such organizations don't know what to do.
Maybe they should simply use some common sense? If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.
If they wanted to extort you, they would probably do so early on. And maybe encrypt some data as a "proof of concept" ...
But some organizations seem to think that their lawyers will remedy every failure and that's enough.
> If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.
after* doing it. Though I agree with your general point
Note the parts in the email to the organization where OP (1) mentions they found underage students among the unsecured accounts and (2) attaches a script that dumps the database, ready to go¹. It takes very little to see in access logs that they accessed records that they weren't authorized to, which makes it hard to distinguish their actions from malicious ones
I do agree that if the org had done a cursory web search, they'd have found that everything OP did (besides dumping more than one record from the database) is standard practice and that responsible disclosure is an established practice that criminals obviously wouldn't use. That OP subsequently agrees to sign a removal agreement, besides the lack of any extortion, is a further sign of good faith which the org should have taken them up on
¹ though very inefficiently, but the data protection officer that they were in touch with (note: not a lawyer) wouldn't know that and the IT person that advises them might not feel the need to mention it
cynical. worst part? best one can do in this situation. can't imagine how I could continue any further interaction with such organization.
10000% this
> the portal used incrementing numeric user IDs
> every account was provisioned with a static default password
Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone gets the job while making worse mistakes, and there are plenty of such systems on production handling real people's data.
Literally found the same issue in a password system, on top of passwords being clear text in the database... cleared all passwords, expanded the db field to hold a longer hash (pw field was like 12 chars), set up a "recover password" feature, and emailed all users before end of day.
My own suggestion to anyone reading this... version your password hashing mechanics so you can upgrade hashing methods as needed in the future. I usually use "v{version}.{salt}.{hash}", where {salt} and {hash} are base64 encodings of the salt and the resulting hash. I could use multiple db fields for the same, but would rather not... I could also use JSON or some other wrapper, but feel the dot-separated base64 is good enough.
I have had instances where hashing was indeed upgraded later, and a password was (re)hashed at login with the new encoding if the version changed... after a given time-frame, I'd notify users and wipe old passwords to require the recovery process.
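A minimal sketch of that scheme (standard library only; the version table, iteration counts, and function names are illustrative, not the original code):

    import base64, hashlib, hmac, os

    VERSIONS = {1: 100_000, 2: 600_000}  # PBKDF2-SHA256 iterations per version
    CURRENT = 2

    def hash_password(password: str, version: int = CURRENT) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, VERSIONS[version])
        return "v%d.%s.%s" % (version, base64.b64encode(salt).decode(),
                              base64.b64encode(digest).decode())

    def verify_password(password: str, stored: str):
        """Returns (ok, upgraded_hash_or_None) so callers can rehash at login."""
        tag, salt_b64, hash_b64 = stored.split(".")
        version = int(tag.lstrip("v"))
        salt = base64.b64decode(salt_b64)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, VERSIONS[version])
        if not hmac.compare_digest(digest, base64.b64decode(hash_b64)):
            return False, None
        return True, (hash_password(password) if version < CURRENT else None)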
FWIW, I really wish there were better guides for moderately good implementations of login/auth systems out there. Too many applications for things like SSO, etc just become a morass of complexity that isn't always necessary. I did write a nice system for a former employer that is somewhat widely deployed... I tried to get permission to open-source it, but couldn't get buy-in over "security concerns" (the irony). Maybe someday I'll make another one.
If you find yourself needing to version your password hashes, you are likely doing them incorrectly and not using a proper computationally-hard hashing algorithm.
For example, with unsuitable algorithms like sha256, you get this, which doesn't have a version field:

    import hashlib
    print(f"MD5: {hashlib.md5(b'password').hexdigest()}")
    print(f"SHA-256: {hashlib.sha256(b'password').hexdigest()}")

    MD5: 5f4dcc3b5aa765d61d8327deb882cf99
    SHA-256: 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8

But if you use a proper password hash, then your hashing library will automatically take care of versioning your hash, and you can just treat it as an opaque blob:

    import argon2; print(f"Argon2: {argon2.PasswordHasher().hash('password')}")
    import bcrypt; print(f"bcrypt: {bcrypt.hashpw(b'password', bcrypt.gensalt()).decode()}")
    from passlib.hash import scrypt; print(f"scrypt: {scrypt.hash('password')}")

    Argon2: $argon2id$v=19$m=65536,t=3,p=4$LZ/H9PWV2UV3YTgF3Ixrig$aXEtfkmdCMXX46a0ZiE0XjKABfJSgCHA4HmtlJzautU
    bcrypt: $2b$12$xqsibRw1wikgk9qhce0CGO9G7k7j2nfpxCmmasmUoGX4Rt0B5umuG
    scrypt: $scrypt$ln=16,r=8,p=1$/V8rpRTCmDOGcA5hjPFeCw$6N1e9QmxuwqbPJb4NjpGib5FxxILGoXmUX90lCXKXD4

This isn't a new thing, and as far as I'm aware, it's derived from the old apache htpasswd format (although no one else uses the leading colon):

    htpasswd -bnBC 10 "" password
    :$2y$10$Bh67PQAd4rqAkbFraTKZ/egfHdN392tyQ3I1U6VnjZhLoQLD3YzRe

It's not a leading colon: it is a colon separator between the username and password, and the command used has the username as an empty string.
It wasn't done wrong... A contract for a state deployment required a specific hashing algorithm...
Several web frameworks, including Rails, Laravel, and Symfony, will automatically upgrade password hashes if the algorithm or work factor has changed since the password was last hashed.
Years ago I worked for a company that bought another company. Our QA folks were asked to give their site a once-over. What they found is still the butt of jokes in my circle of friends/former coworkers.
* account ids are numeric, and incrementing
* included in the URL after login, e.g. ?account=123456
* no authentication on requests after login
So anybody moderately curious can just increment to account_id=123457 to access another account. And then try 123458. And then enumerate the space to see if there is anything interesting... :face-palm: :cold-sweat:
I did some work ~15 years ago for a consulting company. The company pushed their own custom open-source CMS into most projects - built on top of MongoDB and written by the CEO. He's a lovely guy, and a good coder. But he's totally self-taught at programming and he has blind spots a mile wide. And he hates having his blind spots pointed out. He came back from a React conference once thinking the React team invented functional programming.
A friend at the company started poking around in the CMS. Turns out the login system worked by giving the user a cookie with the mongodb document id for the user they’re logged in as. Not signed or anything. Just the document id in plain text. Document IDs are (or at least were) mostly sequential, so you could just enumerate document IDs in your cookie to log in as anyone.
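For contrast, a bare-bones sketch of what signing that cookie would have looked like (standard library only; the secret and names are illustrative, not the CMS's actual design):

    import hashlib, hmac

    SECRET = b"server-side secret, never sent to clients"  # hypothetical

    def make_cookie(user_id: str) -> str:
        sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
        return f"{user_id}.{sig}"

    def parse_cookie(cookie: str):
        user_id, _, sig = cookie.rpartition(".")
        expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
        return user_id if hmac.compare_digest(sig, expected) else None

With the unsigned scheme, logging in as anyone is just incrementing the id; with a signed one, any tampering breaks the signature.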
The ceo told us it wasn’t actually a security vulnerability. Then insisted we didn’t need to assign a CVE or tell any of our customers and users. He didn’t want to fix the code. Then when pushed he wanted to slip a fix into the next version under the cover of night and not tell anyone. Preferably hidden in a big commit with lots of other stuff.
It’s become a joke between us too. He gives self-taught programmers a bad rep. These days, whenever I hear a product was architected by someone who’s self-taught, I always check how the login system works. It’s often enlightening.
A person who is like that is rarely called a "lovely person": what does that lovely interaction look like when you point such an egregious flaw out to them?
And tbh, this has nothing to do with being self-taught: by the time I enrolled in CS program, I was arguably self-taught and could spot issues like this myself. But I pride myself in learning from my mistakes and learning fast.
So it's more likely a character thing: if you are willing to admit when you are wrong, you'll learn much faster!
Being self-taught isn't the problem. I've self-taught myself 10x more than I learned in school (and yes I was CS in school).
You might as well make them sequential if they're numeric; making them non-sequential just puts more load on your server when the brute force happens.
I suspect that the direction of these situations often depends on how your initial email is routed internally in these organizations. If they go to a lawyer first, you will get someone who tries to fix things with the application of the law. If it goes to an engineer first, you will get someone who tries to fix it with an application of engineering. If it were me, I would have avoided involving third party regulators in the initial contact at least.
Yes, this routing is common. A German energy company recommended by a climate organization had a somewhat similar vulnerability and no security contact, so I called them up and... mhm, yes, okay, is that l-e-g-a-l-@-company-dot-de? You don't want me to just send it to the IT department that can fix it? Okay I see, they will put it through, yes, thank you, bye for now!
Was a bit of a "oh god what am I getting into again" moment (also considering I don't speak legal-level German), but I knew they had nothing to stand on if they did file a complaint or court case so I followed through and they just thanked me for the report in the end and fixed it reasonably promptly. No stickers or maybe a discount as a customer, but oh well, no lawsuit either :)
In the early internet days, you could email root@company.com about a website bug, and somebody might reply.
> If it were me, I would have avoided involving third party regulators in the initial contact at least.
I'm surprised to see this take only mentioned once in this thread. I think people here are not aware of the sheer amount of fraud in the "bug bounty" space. As soon as you have a public product you get at least 1 of these attempts per week of someone trying to shake you down for a disclosure that they'll disclose after you pay them something. Typically you just report them as spam and move on.
But if I got one that had some credible evidence of them reporting me to a government agency already, I'd immediately get a lawyer to send a cease and desist.
It seems like OP was trying to be a by the book law abiding citizen, but the sheer amount of fraud in this space makes it really hard to tell the difference from a cold email.
You typically disclose the vulnerability for one of these reasons: you want money, you want fame, you want to make a better world. There are others such as blackmail but let's settle for the typical ones.
If you do it for money or fame, you step cautiously so as not to annoy the company. You ask, you beg, etc. Not something to be proud of, but this is life.
If you do this to make the world a better place, you get annoying. You explain the risks, possibly how to fix it and then send a few reminders with the threat of making it public. Depending on where you are this may be a danger for you or not (though you would usually go anonymous in that case).
OP did the right thing. Without setting deadlines, a company will ignore it. Or not - but in that case they will not be offended by the deadline and would discuss with the reporter (by agreeing on mitigation if a complete fix cannot be done easily).
There used to be a time when companies cared because it was an uncommon event. Today you get 3 "We are so sorry" emails a week, so one more or one less makes it less stressful to have public disclosures or data leaks. There is simply no accountability.
Full disclosure is responsible disclosure.
Companies can't hide when there is a website or bot spewing out the information with their logo next to it.
Here’s a similar case, but she handled it differently: https://teletype.in/@cyllchuesnconii/TSwR1AAfffT
If this was in Costa Rica, the appropriate way was to contact PRODHAB about the leak of personal information and Costa Rica CSIRT ( csirt@micitt.go.cr ).
Here, all databases with personal information must be registered there and the data must be kept secure.
> If this was in Costa Rica, the appropriate way was to contact PRODHAB about the leak of personal information and Costa Rica CSIRT ( csirt@micitt.go.cr ).
They did. It's in the article. Search for 'CSIRT'. It's one of the key points of the story.
I’ve worked in I.T. for nearly 3 decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.
I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tells me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
> I came across a pretty serious security concern at my company this week. The ramifications are alarming. […] Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y, then we'll prioritise those issues”, which I was young and scared enough to let happen, and pulling on heartstrings (“if we don't get this sale some people will have to go, we risk that to [redacted] and her new kids, can we?”), I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time, and when they tried to butter me up with the promise of some training courses, I had taken & passed some of those exams and had the rest booked in (the look of “good <deity>, he got an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).
The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).
I wish I could recall the name of a pen test company I worked with when I wrote my auth system... They were pretty great and found several serious issues.
At least compared to our internal digital security group, who couldn't fathom that "your test is wrong for how this app is configured; that path leads to a different app and default behavior" meant it wasn't actually a failure... their canned test was for a PHP exploit, but the app wasn't PHP; it was an SPA and always delivered the same default page unless you were on an /auth/* route.
After that my response became: show me an actual exploit with an actual data leak and I'll update my code instead of your test.
An older company I worked for went out of their way to find a pen tester that would basically rubberstamp everything and give them a pass. I actually uncovered major issues with the software during that process, to the point where it was unusable. Major components were severely out of date and open to attack. Other parts didn't even work as advertised. I didn't stick around much longer.
> By even flagging the issue and the potential fallout, I’ve put my career at risk.
Simple as. Not your company? Not your problem. Notify, move on.
I read that post as him talking about their company, in the sense of the company they were working for. If that was the case, then an exploit of an unfixed security issue could very much affect them either just as part of the company if the fallout is enough to massively harm business, or specifically if they had not properly documented their concerns so “we didn't know” could be the excuse from above and they could be blamed for not adequately communicating the problem.
For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.
¹ oh, the things I could do if it wasn't for conscience and empathy :)
Their website says they're a freelance cloud architect.
The article doesn't say exactly, but if they used their company e-mail account to send the e-mail it's difficult to argue it wasn't related to their business.
They also put "I am offering" language in their e-mail which I'm sure triggered the lawyers into interpreting this a different way. Not a choice of words I would recommend using in a case like this.
This is a good point. I think we get a couple of emails a week from exactly this kind of bottom-feeder 'consulting firm' 'offering' to tell us all about some massive security issue they found, as long as we sign up for a 'consulting engagement'[1]. On the other hand, we generally ignore them, not threaten to sue them.
[1] We get about as many 'pay us a bounty or we'll tell the world about this horrid vulnerability we found'. I have suggested to legal we treat those like extortion attempts to make them go away and stop wasting our time but legal doesn't want to spend time on it.
> These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
I had a bit of a feral journey into tech, poor upbringing => self taught college dropout waiting tables => founded iPad point of sale startup in 2011 => sold it => Google in 2016 to 2023
It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.
Put more concretely, a couple of vignettes:
- Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."
- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
I've seen into some moderately high levels of "prestigious" business and government circles, and I've yet to find any level at which everyone suddenly becomes as competent and sharp as I'd have expected them to be as a child and young adult (before I saw what I've seen and learned that the norm is morons and liars running everything and operating terrifically dysfunctional organizations... everywhere, apparently, regardless of how high up the hierarchy you go). And actually, not only is there no step at which they suddenly become so; people don't even seem to gradually trend brighter or generally better, on average, as you move "upward"... at all! Or perhaps only weakly so.
Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.
Ty for sharing this, I don’t talk about it often, and never in professional circles. There’s a lot of emotions and uncertainty attached to it. It’s very comforting to see someone else describe it as it is to me without being just straightforwardly misanthropic.
I would get fired at Google within seconds then. I’m more than happy to shine a light on bullshit like that.
> A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
Maybe not when it is as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process - it didn't change what the client finally got, but it seemed to work especially if they thought they'd forced us to spend time on performance issues (those talking to us at the client side could report this back up their chain as a win).
There is a term for this but I can't remember what it's called.
Effectively you put in bugs on purpose for an inspector to find, so they don't dig too deep and hit the difficult-to-solve problems.
'canary', 'review canary' or something.
There's a related (apocryphal?) story from Interplay about adding a duck to animations so that the producer would ask for it to be removed, to make him happy, while leaving the rest alone.
Yeah, that one too.
> vulnerability in the member portal of a major diving insurer
What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.
What makes you think they don't retain them in-house?
What makes you think you don't need speed dial in-house? ;)
Depends on the usage... in-house counsel may open up various liabilities of their own, depending on how things present.
All the disclosure and legal issues aside, it’s sobering to think of how many of these types of trivial bugs exist on random websites that collect sensitive user information. It seems hopeless to try to safeguard one’s own information.
Which is why collecting and storing sensitive user information needs to be more heavily restricted and treated as the unsafe activity that it is.
the NDA demand with a same-day deadline is such a classic move. makes it clear they were more worried about reputation than fixing anything.
Reply: "sorry, before reaching out to you I already notified a major media organization with a 90 day release notice"
In case someone takes this as actual advice, I think this comment is best accompanied with a warning that this gets them to call a lawyer for sure ^^'
(OP mentions a lawyer in the title, but the post only speaks of a data protection officer, which is a very different role and doesn't even represent the organization's interests but, instead, the users', at least under GDPR where I'm from)
Typical shakedown tactic. I used to have a boss who would issue these ridiculous emails with lines like "you agree to respond within 24 hours else you forfeit (blah blah blah)"
This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?
FWIW, some types of applications may be better served with encryption over hashing for password access. Email is one of them: given the varying ways to authenticate, it gets pretty funky to support. This is why in things like O365 you have a separate password issued for use with legacy email apps.
If anything it’s just a violation of industry expectations. You as a consumer just don’t need to use the product.
>whether this is something that should be reported to Apple, if Apple will do anything
Lmao Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" then never contact you again. Ask me how I know. To their credit, I suspected they ran it through useless rudimentary automated checks which passed and they were back in business like a day later.
If your expectation is they will do something about shitty coding practices half the App Store would be banned.
> Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off"
Ask while you are in an EU country, request appeal and initiate Out-of-court dispute resolution.
Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)
I used to say "submit it to Plain Text Offenders: https://plaintextoffenders.com/", but the site appears defunct since… 2012‽ How time flies…
This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.
This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.
I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.
Could you post on HN about that? Would be worth reading.
And are you only talking about cybersecurity disclosure, liability, patent applications... And the scenario when you're both working for the same party, or opposing parties?
I'm talking about any situation where a principled person who is technically correct gets a threatening letter from a lawyer instead of a thank you.
If you read enough lawyer messages (they show up on HN all the time) you will see they follow a pattern of looking tough, and increasingly threatening posture. But often, the laws they cite aren't applicable, and wouldn't hold up in court or public opinion.
> they follow a pattern of looking tough, and increasingly threatening posture. But often, the laws they cite aren't applicable, and wouldn't hold up in court
And it takes years to prove that and be judged as not guilty, or if guilty (as OP would likely be for dumping the database), that the punishment should be nil due to the demonstrated good faith even if it technically violated a law
Wouldn't you say the threats are to be taken seriously in cases like OP's?
No.
I'm curious to hear your take on the situation in the article.
Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?
It would take a bit of time to re-read the entire chain and come up with highly specific ways. The way I read the exchange, the lawyer basically wants the programmer to shut up and not disclose the vulnerability, and is using threatening legal language. While the programmer sees themself as a responsible person doing the company a favor in a principled way.
Some things I can see. I think the way the programmer worded this sounds adversarial; I wouldn't have written it that way, but ultimately, there is nothing wrong with it: "I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure."
When the lawyer sent the NDA with extra steps: the programmer could have chosen to hire a lawyer at this point to get advice. Or they could ignore this entirely (with the risk that the lawyer may sue him?), or proceed to negotiate terms, which the programmer did (offering a different document to sign).
IIUC, at that point, the lawyer went away and it's likely they will never contact this guy again, unless he discloses their name publicly and trashes their security, at which point the lawyer might sue for defamation, etc.
Anyway, my take is that as soon as the programmer got a lawyer email reply (instead of the "CTO thanking him for responsible disclosure"), he should have talked to his own lawyer for advice. When I have situations similar to this, I use the lawyer as a sounding board. I ask questions like "What is the lawyer trying to get me to do here?", "Why are they threatening me instead of thanking me?", and "What would happen if I respond in this way?"
Depending on what I learned from my lawyer, I can take a number of actions. For example, completely ignoring the company lawyer might be a good course of action. The company doesn't want to bring somebody to court and then have everybody read in a newspaper that the company had shitty security. Or writing a carefully worded threatening letter: "if you sue me, I'll countersue, and in discovery, you will look bad and lose". Or - and this is one of my favorite tricks - rewriting the document to what I wanted, signing that, and sending it back to them. Again, for all of those, I'd talk to a lawyer and listen to their perspective carefully.
> which the programmer did (offering a different document to sign). IIUC, at that point, the lawyer went away
The article says that the organization refused the counter-offer and doubled down instead
> he should have talked to his own lawyer for advice
Costing how much? Next I'll need a lawyer for telling the supermarket that someone in the bushes can watch their alarm system code being entered
It's not bad legal advice and I won't discourage anyone from talking to a lawyer, but it makes things way more costly than they need be. There's a thousand cases like this already online to be found if you want to know how to handle this type of response
Sounds very usa-esque (or perhaps unusually wealthy) to retain a lawyer as "sounding board"
> This sounds like a cultural mismatch with their lawyers.
Note that the post never mentions lawyers, only the title. It sounds to me like chatgpt came up with two dozen titles and OP thought this was the most dramatic one. In the post, they mention it was a data protection officer who replied. This person has the user's interests as their goal and works for the organization only insofar as that they handle GDPR-related matters, including complaints. If I'm reading it right, they're supposed to be somewhat impartial per recital 97 of the GDPR: "data protection officers [...] should be in a position to perform their duties and tasks in an independent manner"
Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data without permission. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly and were written at a time when most access was either direct dial-up or internal. The meaning of abuse can be twisted to mean rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
Instead of understanding all of this, and when it does or does not apply, it's probably better to disclose vulnerabilities anonymously over Tor. It's not worth the hassle of being forced to hire a lawyer, just to be a white hat.
Part of the motivation of reporting is clout and reputation. That sounds harsh or critical but for some folks their reputation directly impacts their livelihood. Sure the data controller doesn't care, but if you want to get hired or invited to conferences then the clout matters.
You could use public-key encryption in your reports to reveal your identity to parties of your choosing.
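A quick sketch of that idea using PyNaCl's sealed boxes (the library choice, key handling, and identity string are all illustrative; in practice you'd encrypt to a published CERT/vendor key):

    from nacl.public import PrivateKey, SealedBox

    recipient_key = PrivateKey.generate()  # stand-in for the recipient's real keypair

    # Reporter side: only the holder of the private key can open a sealed box
    sealed = SealedBox(recipient_key.public_key).encrypt(
        b"Reporter: Jane Doe <jane@example.org>"  # hypothetical identity
    )

    # Recipient side: decrypt and learn who reported
    print(SealedBox(recipient_key).decrypt(sealed).decode())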
Sounds like they were bluffing and trying to coerce the researcher into signing an NDA. I wouldn't have signed, and they wouldn't have reach in the US, and presumably Germany, where the researcher is based. Also, I'm glad the affected vendor isn't DAN.
All things aside, the location at which you discovered the vulnerability is so interesting... I mean, imagine being on a two-week vacation and then all this happens, on a beautiful day.
One way to improve cybersecurity is to let cyber criminals loose like predators hunting prey. Companies need to feel fear that any vulnerability in their systems is going to be weaponized against them. Only then will they appreciate an email telling them about a security issue which has not been exploited yet.
> One way to improve cybersecurity is to let cyber criminals loose like predators hunting prey.
Who, exactly, is holding them back now?
Like re-introducing wolves into Yellowstone.
I found a vulnerability recently in a major online platform through HackerOne which could allow an attacker to cheaply DoS the service. I wrote up a detailed report (by hand) showing exactly how to reproduce and even explained exactly how a specially crafted request to a critical service took 10 seconds to get a response (just with a very simple, easy to reproduce example)... I then explained exactly how this vector could be scaled up to a DDoS...
They acknowledged it as a legitimate issue and marked my issue as 'useful info' but refused to pay me anything; they said that they would only pay if I physically demonstrate that it leads to a disruption of service; basically baiting me into doing something illegal! It was obvious from my description that this attack could easily be scaled up. I wasn't prepared to literally bring down the service to make my point. They didn't even offer the lowest tier of $200.
So bad. AI slop code is taking over the industry, vulnerabilities are popping up all over the place, so much so that companies are refusing to pay out bounties to humans. It's like neglect is being rewarded and diligence is being punished.
Then you read about how small the bug bounties are, even for established security researchers. It doesn't seem like a great industry. HackerOne seems like a honeypot to waste hackers' time. They reward a tiny number of hackers with big payouts to create PR to waste as many hackers' time as possible. Probably setting them up and collecting dirt on them behind the scenes. That's what it feels like at least.
This is sort of my issue with bug bounty programs: it can easily start to feel like extortion when a 'good samaritan' demands money. But they promised it to you by having a bug bounty program, then denied it. You feel rightfully cheated when the bug is legitimate, and doubly so when they acknowledge it. But demanding the money feels weird as well.
I try to go into these things with zero expectations. Having a mediating party involved from the start is a bit like OP immediately CC'ing the CERT: extra legal steps in the disclosure process. Mediating parties are usually a pain to work with, and if it's deemed "out of scope" then they typically refuse to even notify the vulnerable party (or acknowledge to you that it hasn't been disclosed). I don't want a pay day, I just want them to fix their damn bug, but there's no way to report it besides through this middle person. Literally every time I've had to use a reporting procedure (like HackerOne) has resulted in tone-deaf responses from the company or complete gatekeeping. All of those bugs exist to this day. Every time I can email a human directly, it gets fixed, and in some occasions they send a thank-you like some swag and chocolates, a t-shirt, something
Based on what I hear in the community, my HackerOne experiences have been outliers, but it might still be more effective (if you're not looking to collect bounty money) to talk to organizations directly where possible and avoid the ones that use HackerOne or another mediation party
Wow, this is more like the US. Didn't know Malta is so lawyered up.
I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.
The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.
Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.
As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.
> we need strong legal protections for white-hat and even grey-hat security researchers or hackers.
I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then your company has to pay out. Ideally we could scale the payout based on the importance of the infrastructure in question. Vulnerabilities with little lasting consequence would pay little. Serious vulnerabilities with the potential for society-wide physical harm could pay out a few percent of the yearly revenue of the given company. For example, hacking the high score in a game would pay only a little, while a vulnerability which can collapse the electric grid or remotely command a car would pay a king’s ransom. Enough to incentivise a cottage industry to find problems. Hopefully resulting in a situation where the companies in question find it more profitable to find and fix the problems themselves.
I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.
There should exist a vulnerability disclosure intermediary. It could function as a barrier to protect the scientist/researcher/enthusiast and do everything by the book for the different countries.
MSRC (Microsoft Security Response Center) — https://msrc.microsoft.com/
They’ll close a report as “no action” if the issue isn’t related to Microsoft products. That said, in my experience they’ve been a reasonable intermediary for a few incidents I’ve reported involving government websites, especially where Microsoft software was part of the stack in some way.
For example, I’ve reported issues in multiple countries where national ID numbers are sequential. Private companies like insurers, pension funds, and banks use those IDs to look up records, but some of them didn’t verify that the JSON Web Token (JWT) used for the session actually belonged to the person whose national ID was being queried. In practice, that meant an attacker could enumerate IDs and access other citizens’ financial and personal data.
Reporting something like that directly to a government agency can be intimidating, so I reported it to Microsoft instead, since these organizations often use Azure AD B2C for customer authentication. The vulnerability itself wasn’t in Microsoft’s products, but MSRC’s reactive engineers still took ownership of triage and helped route it to the right contacts in those agencies through their existing partnerships.
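If it helps to see the concrete shape of that bug, here is a minimal sketch of the check these backends were missing. The endpoint, claim names, and lookup helper are hypothetical, not any real vendor's code; the point is that the token was validated but never bound to the record being requested:

    from flask import Flask, request, abort, jsonify
    import jwt  # PyJWT

    app = Flask(__name__)
    SECRET = "replace-with-real-key-management"

    def lookup_record(nid):
        # Stub standing in for the real datastore.
        return {"national_id": nid, "note": "demo record"}

    @app.route("/records/<national_id>")
    def get_record(national_id):
        token = request.headers.get("Authorization", "").removeprefix("Bearer ")
        try:
            claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            abort(401)  # no valid session at all
        # The step that was skipped: bind the token's subject to the
        # requested resource. Without it, any logged-in user can walk
        # sequential IDs and read everyone else's records.
        if claims.get("national_id") != national_id:
            abort(403)
        return jsonify(lookup_record(national_id))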
National CERTs usually take up this role. I presume OP could have anonymously disclosed to the Maltese CERT, whom they already CC'd, though you'd have to check with them specifically to see if they offer that. Hackerspaces also often do this, especially if you're a member, but probably also if not, provided they have faith that your actions were legal (best case, you can demonstrate exactly what you did, for example by showing the script you ran, as OP could).
Who compensates them for the risk?
What risk? It sounds to me like the worst they could get is a subpoena to produce the identity of the reporter
Besides, it's usually governmental organizations that do this sort of thing
The risk of lawsuits like the ones threatened to be filed against this researcher.
They can also sue the pope, but I don't think the pope finds that a risk worth considering either when he didn't do any hacking, legal or otherwise. How would an organization get sued for hacking when it didn't do any hacking and is merely passing on a message?
They would call it abetting. It's not as if the site doesn't know what it's disclosing.
That's why you just sell it on the black market and let it be the intermediary.
The free market at work!
Contacting the authorities led the company to hire lawyers, for communication with the data protection authority.
The lever lawyers have to “make it go away” is “the law says so.” They’re not going to beg for mercy, they’re not going to invite you to coffee, no “bug bounty.” From their perspective, if they arm-wrestle the researcher into an NDA, they have retroactively patched the only known breach.
Perhaps it’s not prosocial or best practice, but you can clearly see how this went down from the company’s perspective, with a subject organization that has a tenuous grasp of cyber security concepts.
I think we should stop making excuses for shitty practices. I can understand why they might do it; I can also see there are much better ways to deal with this situation.
> I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.
Well, you started friendly but then made illegal threats. So they responded friendly but then sent you lawyers.
IANAL but this is exactly it. They weren't being cartoonishly evil—they were in catastrophic liability mode and the blogger's specific choices forced them to show their hand.
The second you CC CSIRT Malta, you've triggered the 72-hour GDPR Article 33 clock with the Data Protection Commissioner. That's why they complained about "additional complexities"—they couldn't treat this as a theoretical bug anymore. They had 72 hours to either report a confirmed breach (€20M exposure) or silence the witness. They chose door number two.
And that 30-day deadline? To a non-technical GC, "fix this in 30 days or I go public" reads like extortion, not standard disclosure. As that Mustafabei comment noted, that's actionable language in many EU jurisdictions. They genuinely thought they were being shaken down, hence the immediate lawyer deployment.
The self-own is what gets me. Their strategy was rational—silence the guy, claim no "confirmed" breach occurred, avoid Article 34 notifications—but the execution turned a fixable IDOR bug into written evidence of witness intimidation. They managed to validate every suspicion that DAN (let's be real, it's DAN Europe) cares more about covering asses than protecting diver data.
The irony is if he'd skipped the CSIRT CC and just sent a casual "hey, noticed your student IDs look sequential, maybe check your auth?" they'd have fixed it quietly, never notified users, and learned absolutely nothing. Instead we got this mess. Better for the community, worse for his stress levels.
Easy enough then.
For bad companies, sell the exploits on the gray market. They can pay market price too.
Share the portal name! We want to know the ~f...~ “heroes”!
I find these tales of lawyerly threats completely validate the hacker's actions. They reported the bug to spur the company to resolve it. The company's reaction all but confirms that reporting it to them directly would not have been productive. Their management lacks good stewardship. They are not thinking about their responsibility to their customers and employees.
I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.
So you never report to the actual organization but to the security authority, like you did. And they would be better equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.
So you, the researcher, report your finding and can't be sued or bullied by the organization that is at fault in the first place.
If the government wasn't so famous for also locking up people who report security issues, I might agree, but boy, they are actually worse.
Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.
The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.
That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.
This is not how CVEs work at all. You can be pretty vague when registering one. In fact, entries are usually annoyingly vague, and some companies are known for copy-pasting random text into the fields, which completely leads you astray when trying to patch-diff.
Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes but in the end you just tell them to set the CVE to public at some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE assigned a CVE.
Yeah, something like that, nothing too heavyweight, just so individuals don't have to deal with evil corps on their own.
Does it have to be a government? Why not a third party non-profit? The white hat gets shielded, and the non-profit has credible lawyers which makes suing them harder than individuals.
The idea is to make it easier to fix the vulnerability than to sue to shut people up.
For credit assignment, the person could direct people to the non profit’s website which would confirm discovery by CVE without exposing too many details that would allow the company to come after the individual.
This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.
This would only work if governments and companies cared about fixing issues.
Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.
Why disclose anything? These companies are heartless.
What’s the bet this was Divers Alert Network (DAN) that did this? There aren’t a huge number of insurance companies who insure diving students in Malta.
The part where they blame users for not changing the default password is infuriating but unfortunately very common. I've seen this exact same attitude from companies that issue credentials like "Welcome1!" and then act shocked when accounts get popped.
What really gets me is the legal threat angle. Incremental user IDs + shared default password isn't even a sophisticated attack to discover. A curious user would stumble onto this by accident. Responding to that with criminal liability threats under Maltese computer misuse law is exactly the kind of thing that discourages researchers from reporting anything at all, which means the next person who finds it might not be so well-intentioned.
The fact that minors' data was exposed makes the GDPR Article 34 notification question especially pointed. Would love to know if the Maltese DPA ever followed up on this.
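For contrast, the boring fix for what's described above is a few lines. A purely illustrative Python sketch (no real system's code; persistence is stubbed out) of provisioning accounts without sequential IDs or a shared default password:

    import secrets

    def provision_student() -> tuple[str, str]:
        student_id = secrets.token_hex(8)          # opaque, non-sequential ID
        temp_password = secrets.token_urlsafe(12)  # unique per account
        # A real system would persist both, plus a must_change_password
        # flag enforced on first login.
        return student_id, temp_password

    sid, pw = provision_student()
    print(f"student {sid} gets one-time password {pw}")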
Maintaining cybersecurity insurance is a big deal in the US; I don't know about Europe. So vulnerability disclosure is problematic for data controllers because it threatens their insurance and premiums. Today much of enterprise security is attestation-based, and vulnerability disclosure potentially exposes companies to insurance fraud. If they stated that they maintained certain levels of security, and a disclosure demonstrably proves they do not, that is grounds for dropping a policy or even a lawsuit to reclaim paid funds.
So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.
It's not generally good financial advice to pay the overhead of an insurance company for costs you can easily cover yourself (likewise, things like phone insurance and appliance warranty extensions won't make your device last longer, and the insurer knows better than you what premium covers the average repair cost plus a profit margin). If you have a decent understanding of where the line is between vulnerability disclosure and criminal activity, fronting any court fees and a little bit of lawyer time (iff you can afford these out of pocket) until you're acquitted should be the better route, assuming anyone ever even takes you to court.
Heh, what insurance company you use should be public information, and bug finders should report to them.
A sobering reminder that full disclosure is responsible disclosure.
The only chains you should have on you are proxy chains.
Malta has been mentioned? As a person living here, I can say that the workflow of the government here is bad. Same as in every other place, I guess.
By the way, I have a story from when I accidentally hacked an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.
I believe in every single system like that it's fairly possible to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the advent of AI: since it has zero understanding of what it is actually doing, it will make mistakes that cause more data leaks.
Even if you don't consider yourself an evil person, would you still stay the same knowing about a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook way".
Being more careful is an option, or owning up to it and saying "hey, I just did this and noticed this thing unexpectedly happened; apparently you have an XSS here" (or whatever it was). In most cases, the organization you're reporting to is happy about this up-front information, and in the exceptional situation where someone decides to take it to court, there's a clear paper trail (backed up by access and email logs) of what actions were taken and why, making it obvious you did nothing wrong.
> No ..., no ..., no .... Just ...
Am I the only one who can't stand this AI slop pattern?
Between that and 'Read that again', my heart kinda sank as I went. When, if ever, will this awful trend end?
It's one thing for your blog post to be written in that faux style, but that letter to the organization too... oof. I wouldn't enjoy receiving that from someone who attached a script that dumps all users from my database, when the email, as well as my access logs, confirms they ran it.
Name. And. Shame.
> No exploits, no buffer overflows, no zero-days. Just a login form, a number, and a default password that was set for each student on creation.
ai;dr
This is AI slop.
Use your own words!
I would rather read the original prompt!
So strange that I have to scroll this far to find mention of AI writing. It's clearly AI, but apparently now even tech people get fooled, not just boomers on Facebook. They don't name the company, and the whole story is just way too perfect and cookie-cutter... If you're a human reading this, consider that the comments here may also be AI. Dead Internet and all..
It's also in the email to the organization, which makes it sound as condescending, "let me dumb it down to key points for you", to the receiver as, well, LLMs are. A bit off-putting, and the story itself is also common to the point of being trite. Heck, nothing even ended up happening in this case. No lawyer is mentioned outside of the title, no police complaint was filed, no civil case started, just three emails saying he should agree not to talk about this. Scary as those demands can be (I have been on the receiving end of such things as well, and every time I wish I had used Tor instead of a CIOT-traceable IP address as soon as my "huh, that's odd system behavior" senses went off). Responsible disclosure just gives you grey hairs in the 10% of cases that respond like this, even if so far 0% have actually filed a police complaint or court case.
Presuming nobody had found this exploit previously, it actually is a zero-day.
A performative display of performative anti-AI purism.
OP discovered the state of Malta's InfoSec culture the hard way.
TLDR: infosec is screwed in Malta. The only people who benefit are malicious actors.
Some missing historical context: there was no real legislation here other than computer misuse up until the recent case known as the FreeHour case. A group of students discovered some pretty nasty vulnerabilities in an app aimed at matching student schedules. One of these vulns exposed RW API keys for hundreds of students' Google Calendars, hanging out to dry on the open internet.
The students involved, together with one of their lecturers, sent a standard vuln disclosure notice via email to the company. Instead of what you'd expect, the students were arrested, strip-searched, and charged with computer misuse.
This really threw the entire local infosec scene off, with some very vocal voices saying how draconian the situation was. Finally they all received presidential pardons [1], although last I heard they don't have their hardware back yet. FreeHour and their tech supplier (never publicly named, but if you ask around you can find out who they are) never saw any consequences.
I've done two public disclosures [2] [3] which worked out well, but only because I knew how to go about it. In such a tiny country it's about who you know and how you know them, so in both cases I established contact via trusted intermediaries, both times ensuring I found someone who would know what I was talking about whilst also not immediately reaching for the police.
I'm sitting on another issue I discovered because after a long conversation with CSIRT about it we figured the only way I can actually anonymously report it is by snail mailing it to them. I can't pull together the energy to complete it because I don't have the time right now in my life for another legal melodramatic situation.
Despite this, MITA (the government IT department) annually runs a cybersec award ceremony [4]. I had once planned to nominate the students for the award, but the nomination criteria forbid nominations of individuals who have "adverse media publications" about them.
This is very much a deep socio-political problem in the country: we don't handle candour or bluntness of any kind in the public sphere. Being a very blunt person, it got me in all kinds of trouble growing up.
[1] https://timesofmalta.com/article/pardon-issued-students-lect...
[2] https://www.simonam.dev/accidental-pentest/
I am a lawyer, and my field does cross the area in which these events transpired.
First, yes, everyone should acknowledge that this matter was handled poorly by the company's in-house and external lawyers. This should not have happened, and the company should face consequences. I advise my data-controller corporate clients to reach out to the reporter/whistleblower immediately and have the IT team collaborate, or at the very least talk to the person to effectively replicate the exploit so it can be thoroughly fixed. There should even be procedures for how this is handled. I understand from the article that this is not how it was done.
However, I feel obligated to note some different aspects, none of which are intended to condone how this company handled the situation. I want to reiterate: they should have handled it better.
Things to note:
1. They might have already reached out to the data privacy board. The data privacy boards, especially in Europe, are very involved in the reporting procedures and, in my experience, their experts are very reluctant about public disclosures if the breach/data leak was caused by an exploit. They (sometimes rightfully) do not trust the private sector's biased assurance that the vulnerability has been "fixed", and sometimes effectively prevent public disclosure of the event, allowing only the affected data subjects to be informed. The potential danger of re-exploitation and the protection of the public far outweigh the right of the public (that is, persons who are not affected by this breach) to be informed of such an event. Affected persons should be notified; you might simply not be aware that this happened. It is their legal obligation to notify the affected data subjects, but it is not their legal obligation to notify the reporter that the notifications to the data subjects were made.
2. You did the right thing reaching out to the company and, upon the radio silence, contacting the competent authority. But sadly, your duties as a citizen end there. You played your part and did all you could have done, if not more. Contacting the company again was not really required. If you found yourself losing sleep, you could have re-contacted the authorities with a data subject request or a right-to-be-informed request. They are legally obligated (under GDPR) to respond to you.
3. Sadly, your e-mail, especially the line below, is actually a threat that is actionable in many EU jurisdictions:
I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.
You cannot disclose this to the public, even with good intentions. Doing so might enable the exploit to actually be used by ill-intentioned persons and would cause more damage. The company is responsible for this vulnerability, and they should face consequences for their actions or the lack thereof, but going public about an exploit is absolutely ill-advised, even if it is intended to coerce the company into action. Nevertheless, I want to reiterate that this is not intended to condone the company's behavior in any way. You did the right thing warning them and the authorities, but further action might have caused more damage. It is always best to approach these situations with the guidance of a data privacy legal consultant.
> 3. Sadly, your e-mail, especially the line below, is actually a threat that is actionable in many EU jurisdictions:
I suppose the choice of words is the problem here? How should one announce an embargo period?
> You cannot disclose this to public. Even with good intentions.
Bullshit, NIS 2 article 12 specifically says CSIRTs must coordinate the negotiation of a disclosure timeline between reporter and provider. I'd say offering a 30 day embargo while CC'ing the relevant CSIRT is the start of such negotiation from the reporter.
My biggest doubt about this story, LLM writing aside, is the lack of mention of a CSIRT follow up.
Fuck you! Name the company! They shall burn!
Wish they named them. Usually I don't recommend it. But the combination of:
A) In the EU; GDPR will trump whatever BS they want to try
B) No confirmation that affected users were notified
C) Aggro threats
D) Nonsensical threats, sourced to a Data Privacy Officer with seemingly zero scruples and little experience
Due to B), there's a strong responsibility rationale.
Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.
DAN Europe has a flow like the one discussed in the article, and both the foundation and the regulated insurance branch are registered in Malta.
EU GDPR has very little enforcement. So while the regulation in theory prevents that, in practice you can just ignore it. If you're lucky a token fine comes up years down the line.
Shooting the messenger is a common tactic of psychopaths.
The same-day deadline on the NDA is the tell. If they had a real legal position, they wouldn't need a signature before close of business. That's a pressure tactic designed to work on someone who doesn't know any better. The fact that he pushed back and nothing happened confirms it was a bluff.
Unless the company has a bug-bounty program, never ever tell them about vulnerabilities. You'll get ignored at best and have legal issues at worst. Instead, sell them on the black market, or better yet, give them away for free if you don't care about money. That's how companies will eventually learn to at least have an official vulnerability disclosure policy.
Not clear to me why the author thinks he's the good guy in this scenario. His letter to the company might as well read "I am a busybody who downloaded private information about a person who is not me from your web site, ENTIRELY WITHOUT AUTHORIZATION from that person. Here, let me show it to you."
Why does he think he's entitled to do this? I get that his intentions are more or less good but don't see that as much excuse. What did he expect them to say? "Oh thank you wise and wonderful full-time Linux Platform Engineer"?
I appreciate that the web site in question seems to have absolutely pathetic security practices. Good reason not to do business with them. Not a good reason to do something that, in many jurisdictions at least, sounds like it constitutes a crime.
Why does someone with a .de website insure their diving using some company based in Malta?
Based on this interaction, you have to wonder what it's like to file a claim with them.
Absolutely horrible according to DIVE TALK
https://www.youtube.com/watch?v=O7NsjpiPK7o
The insurance company would not cover a decompression chamber for someone who had severe decompression sickness, a life-threatening condition that requires immediate treatment.
The idea that you possibly have neurological DCS and must argue on the phone with an insurance rep about whether you need to be life-flighted to the nearest chamber is just... mind-blowing.
Divers Alert Network, which is probably the most well known dive membership (and insurance) org out there is registered in Malta in Europe.
It is probably among the standard forms required to participate in a diving class/excursion for travelers from other countries, and Malta was probably chosen as the official HQ for legal or liability-shelter reasons.
Of course he got a response from a lawyer. He shouldn't have hacked the whole site; that's highly illegal, and usually it's the police who come knocking, not just a lawyer. Such a morally bankrupt weirdo.
Rage bait.
The rage bait is the cookie-cutter, made-up story with zero concrete info on the company (disclosure?!) and the AI-generated writing.