Ask HN: How much do you care about security while building an MVP?
I'm in a situation where I have to decide between application security and shipping fast. The idea we're working on hasn't been validated yet, but we believe it has the potential to go viral. What would be the ideal thing to do here: build the product perfectly, or ship fast and fix later?

There's no reason for security to slow you down when building version one of a product. Application-level stuff like enforcing that User A can't modify User B's data takes no time to implement, and should flow out into the IDE as fast as the actual code for the feature if you keep it at the front of your mind. It just wouldn't feel right to write the IF block that checks whether a record exists without also checking that its userID matches the currently logged-in user. Similarly, database constraints all go in at design time; the schema isn't ready until bad data won't fit. No extra time needed there either.

Beyond that, you're into stack and infrastructure security. Pick your platform well and you get most of it for free. Good luck trying to author a SQL injection bug in a compiled language with parameterized queries, for instance. Really, it's all about having built things in the past, knowing what sorts of issues need worrying about, and getting into the habit of never half-assing things. If you do that, you have to go out of your way to mess things up. It'll feel so wrong to cut corners that doing so will probably actually slow you down.

Upboats and agreement: picking good infrastructure that you know you can handily win the security war with is key. It almost doesn't matter how secure your MVP is, as long as your architecture is securable. You can't bolt on a secure architecture after the fact, but you can tighten the one you build in.
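To make the "takes no time" point concrete, here's a rough sketch of all three habits in one place: a schema that rejects bad data, an ownership check in the same WHERE clause as the lookup, and parameterized queries that rule out SQL injection. The table and function names are illustrative, not from any particular app:

```python
import sqlite3

# Constraints go in at design time: the schema isn't ready until bad data won't fit.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE notes (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,                       -- every row must have an owner
        body    TEXT    NOT NULL CHECK (length(body) > 0)
    )
""")
conn.execute("INSERT INTO notes (user_id, body) VALUES (?, ?)", (1, "alice's note"))
conn.commit()

def update_note(conn, note_id, current_user_id, new_body):
    """Update a note only if it belongs to the current user.

    The ownership check lives in the same WHERE clause as the lookup,
    and the '?' placeholders mean user input is never spliced into SQL.
    """
    cur = conn.execute(
        "UPDATE notes SET body = ? WHERE id = ? AND user_id = ?",
        (new_body, note_id, current_user_id),
    )
    return cur.rowcount == 1  # False: no such note, or not this user's note

assert update_note(conn, 1, 1, "edited") is True     # owner may edit
assert update_note(conn, 1, 2, "hijacked") is False  # User B can't touch User A's row
```

Writing the check this way costs one extra column in the WHERE clause, which is the whole point: done as a habit, it adds essentially zero time.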
What will you do if/when it's in the hands of 100K users, you're working as hard as you can to keep up with bug reports, and some 14-year-old figures out how to exploit a security hole to do something that will make everyone angry with you? Security is really, really hard to back-patch; some would say impossible. If it's not there at the beginning, you will either never have it or you will have to do a complete rewrite.

MVP stands for minimum viable product. It really is about focusing on the features users need most; it is not about shipping crap. Eric Ries said on HN just two weeks ago [0] that this is a common misunderstanding. Security is a non-functional requirement [1], not a feature. It is not something you can choose to prioritize or not. Fortunately, getting the basics right at the application level is not that hard; there are already many useful tips in this thread.

[0]: https://news.ycombinator.com/item?id=9369642
[1]: http://en.wikipedia.org/wiki/Non-functional_requirement

This is a really interesting argument. I had it in my head to disprove your thesis, but the more I thought about it, the more I agree with you. I feel, though, that the line between functional and non-functional requirements is not as well defined when dealing with user data. There is an expectation (even if unrealistic) that the confidentiality, availability, and integrity of user data will be preserved. Twitter, though, can serve as a counterpoint to part of this argument (availability). Are there examples of successful MVPs that dealt with user data and failed the confidentiality or integrity requirements?

> Are there examples of successful MVPs that dealt with user data and failed the confidentiality or integrity requirements?

I think there are many examples of successful products that had and/or have security issues. Think of all those apps that transferred user data over insecure connections.
The problem with those non-functional requirements is that they are not all equally important, and their importance varies from product to product. They are often ill-defined and hard to fully formalize. Nevertheless, I think there are obvious "industry standards" (update your stuff, encrypt at least your connections). Programmers and managers are people and mistakes happen, but ignoring security altogether is negligent, and one should be held accountable in the case of damages. Stuff like Sony's ten-year-old Apache getting hacked simply must not happen. The federal privacy laws in Germany are quite good in that area [0]; they explain well how you have to handle other people's data:

[0]: http://www.gesetze-im-internet.de/englisch_bdsg/englisch_bds...

Security is a stance, not a feature. Being caught vulnerable and on the back foot can be /really/ hard to recover from.
Do risk-based security, and be realistic about remote risks.
It sounds like there are already concerns about identified risks. It really depends a lot on the type of information you store on people. Is it very personal/sensitive? Is there money involved?

It all comes down to what bugs and holes you will release with and what type of data you will be handling. If you will be handling sensitive personal information, then you need to make sure it is safe. The last thing you want is for your new startup to be easily exposed and take a beating. If someone might just game the site and get something that won't affect anyone else, then you might ignore the fix for a while. But you should try to build whatever you're building with a relative safety net. You cannot have bugs and holes in your system that will let attackers take over everything, because then it will be game over for you. No one will trust you afterwards.

At the very least, record what's going on (ship your logs off to a well-secured system that's basically write-only, with minimal services exposed). Additionally, protect your data well enough that a breach would not cause catastrophic loss. A startup went under after attackers got hold of its AWS access keys and, when the startup refused to pay the ransom, wiped out everything in all of its accounts. There's a huge difference between being terribly stupid and being realistically aware of the pros and cons.

It's a balance. To paraphrase Steve Yegge: if you dial security up to 100 and accessibility (or in your case, a working MVP) down to 0, you die. If you dial working up to 100 and security down to 0, you can win. If you have no idea whether this MVP has any promise, I'd seek the answer to that and do the barest minimum of security, provided you have the willingness to do it right immediately after getting any traction. That can be easy to say and hard to do, of course.

Frameworks will give you security best practices and a faster shipping time.
Of course, frameworks aren't bulletproof; but if your understanding of security isn't up to date, a framework will provide some shielding against SQL injection, CSRF, etc., and will almost certainly help you ship a product faster, since you'll write less boilerplate.

With all due respect, doesn't everyone think their product might go viral? It probably won't. You'll have time to fix the security after you launch.

That's a fucking terrifying suggestion.

It depends. What sort of security issue is at stake here? It depends on so many things, and you've provided no context. The premise of the question is flawed.
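To make the framework point concrete: CSRF protection is one of those things frameworks quietly handle for you. Here's a rough, simplified sketch of the kind of per-session token a framework generates and verifies, using only Python's standard library. The key, session ID, and function names are all illustrative, and real frameworks add more (token rotation, form/header binding):

```python
import hashlib
import hmac
import secrets

# Hypothetical application secret; a framework loads this from config.
SECRET_KEY = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive a CSRF token tied to one session, via HMAC-SHA256."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, submitted: str) -> bool:
    """Check a submitted token in constant time to avoid timing leaks."""
    return hmac.compare_digest(csrf_token(session_id), submitted)

token = csrf_token("session-abc123")       # embedded in the rendered form
assert verify_csrf("session-abc123", token)     # legitimate POST passes
assert not verify_csrf("session-abc123", "forged")  # cross-site forgery fails
```

The value of a framework is that this check runs on every state-changing request without you remembering to write it.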
Unfortunately I cannot find the corresponding paragraph in the part where the punishments are listed. Of course, someone has to drag you to court anyway before anything happens, and unfortunately:

> Where personal data are processed or used automatically, the internal organization of authorities or enterprises is to be arranged in such a way that it meets the specific requirements of data protection. In particular, measures suited to the type of personal data or data categories to be protected shall be taken,
>
> 1. to prevent unauthorized persons from gaining access to data processing systems with which personal data are processed or used (access control),
> 2. to prevent data processing systems from being used without authorization (access control),
> 3. to ensure that persons entitled to use a data processing system have access only to the data to which they have a right of access, and that personal data cannot be read, copied, modified or removed without authorization in the course of processing or use and after storage (access control),
> 4. to ensure that personal data cannot be read, copied, modified or removed without authorization during electronic transmission or transport, and that it is possible to check and establish to which bodies the transfer of personal data by means of data transmission facilities is envisaged (transmission control),
> 5. to ensure that it is possible to check and establish whether and by whom personal data have been input into data processing systems, modified or removed (input control),
> 6. to ensure that, in the case of commissioned processing of personal data, the data are processed strictly in accordance with the instructions of the principal (job control),
> 7. to ensure that personal data are protected from accidental destruction or loss (availability control),
> 8. to ensure that data collected for different purposes can be processed separately.
>
> One measure in accordance with the second sentence Nos. 2 to 4 is in particular the use of the latest encryption procedures.
> Such offenses shall be prosecuted only if a complaint is filed. Complaints may be filed by the data subject, the Federal Commissioner for Data Protection and Freedom of Information and the supervisory authority.

How is it handled in the US?