The GDPR blog post
I got a few dozen GDPR emails today, some from companies I didn't know existed. This law is a fantastic development for end users/consumers.
Yup. Today I opened my fridge wondering if I'll see a note about an updated privacy policy inside.
It's ironic that the law has been in force for the last 2 years, but companies only woke up last week. A lot of those emails are purely informational, with no (clearly marked) link to a consent panel, so I assume that my ignoring them means they won't be allowed to spam me anymore.
You're joking, but my fridge did hand me a GDPR notice this morning.
Wow. That's the funniest thing I've seen this week.
Beats the schadenfreude I'm feeling over some of the IoT lightbulbs no longer working for EU customers - a problem which actually affected a friend of mine. See: https://twitter.com/internetofshit/status/999619364541394944.
EDIT: 2 friends now. I wonder how many more bought those lightbulbs...
Anyone who bought these lightbulbs should return them to the retailer for a full refund.
The Yeelights are typically bought on e.g. AliExpress. I have mine with the local-network developer functionality enabled; here's hoping they will still work with that, at least.
That'd be an entertaining help desk conversation. "No, the product is still functional, but my country made it too much of a hassle for them to operate in my region."
That's fairly incredible. A friend bought a Samsung smart TV, but after reading the EULA (which of course they spring on you after powering it on rather than informing you at the point of sale) decided to forgo the smarts in the TV. Those things track just about everything; your fridge apparently is still relatively mellow.
Did it open without you clicking 'ok'?
Yeah, I think it pertained to the Samsung network account only - for secondary features like calendar sync, app control, etc.
I meant the fridge ;)
Sure enough! But we don't discuss important things in the kitchen from now on.
Funniest thing I saw this year. Brave new world.
It's a common misconception, but consent is only one of the 6 lawful bases for processing specifically recognized by the GDPR, and a company may contact you when any one of them is applicable. For example, if you have an existing business relationship via a contract (you bought something), or the business has a legal obligation to inform you of changes, or under the nebulous "legitimate interests" basis on behalf of the business.
Those emails which ask you to click to continue receiving marketing are a red flag that those companies did not have any legal basis previously, or they're just cargo-culting other companies even though they already have a perfectly valid basis to keep in touch (like you being an actual customer). Check out your favorite big-company SaaS signup today (like Jira); you will typically still not see any explicit consent checkboxes, because with an existing customer relationship they are not needed.
If you're in the US, it's often not a bad bet to backburner compliance with an ill-conceived, fuzzily defined law that goes into effect 2+ years in the future. Chances are good that a new crop of House of Representatives members will have been voted in during the intervening time, possibly flipping the majority, and the provisions of the statute in question will either be reversed, endlessly delayed, or mangled out of all recognition from their original form.
From that perspective, this whole trainwreck makes considerably more sense.
We have a ton more parties, and the majority are not religiously/fanatically opposed to each other. Also, they tend to shift over time; nobody battens down the hatches the way the Democrats or the Republicans do. They have to be flexible to survive.
In the US one of the sides could start killing people in the street and they'd still end up in Congress...
What this means is that for many topics it's easier to reach consensus and the opposing party, now in power, is much more likely to continue a policy the general population likes.
> so I assume that my ignoring them means they won't be allowed to spam me anymore.
In my case, many of them assumed silence as accepting the new terms.
> We encourage you to take the time to review our revised Privacy Policy and Terms of Use. By continuing to use Microverse on or after May 25th, 2018, you acknowledge our updated Privacy Policy and agree to our updated Terms of Use.
> What do I have to do? You don't need to do anything as these changes will automatically apply to you. If you don't want to accept the changes, you can unsubscribe below and we will remove you from our database.
> Opt out: If you have not already opted out of receiving marketing communications from Bugsnag and would like to, you can do so at any time.
> We are clarifying that all of our users, no matter where they are located, may contact us at any time to review the personal data that we have of theirs, request that we delete that data, or withdraw their consent to receive promotional announcements from us.
> The updated Privacy Policy automatically comes into effect for all Envato users on 25 May 2018. So your continued use of the Envato sites from that date will be subject to it.
That is implied in some of the e-mails - that is, they're asking you for explicit opt-in permission to keep mailing you. Mind you, the old mandatory 'unsubscribe' link was often adequate, but I don't mind this either.
The flood of emails is a nice reminder of how many services you're signed up with, too. Some even with multiple e-mail addresses.
I was particularly surprised to see names I don't even recognize. Turns out that some of my one-off online purchases were handled by companies with names completely different from the names of the shops they run online.
They're mostly incompetent: https://www.theguardian.com/technology/2018/may/21/gdpr-emai...
The rest know well that they got your address without your consent and are trying to get that consent from you now so that they can legally keep your data.
I got so many emails from services someone accidentally used my email address to sign up for. Very eye-opening.
To be honest, I thought most companies just took it as an excuse to send me one final piece of spam.
So, I'm really not trying to start a fight, please read this with curious intent.
I personally don't really feel like keeping my email is a violation of my privacy. If they're not "processing" it (that feels like code for "data mining"), is this really required? I mean, my email address is literally a public means of contacting me. It's kind of fun that they decided to use a one-way hash, but this story doesn't make me feel like the internet has really been improved.
The problem is identification of physical persons. Your e-mail is public, but it also identifies you as a person. This is important, because it allows for correlating different data sets.
Touch Surgery sounds like an honest company, so for them this was just some extra burden. But the same law prevents ShadyAdtechCo from getting datasets from several companies and joining them on the e-mail column to build a profile of you, without your explicit, informed consent in several places.
Wouldn't then a hash of your email also identify you as a person? The companies can still build a profile of you if they just agree to use the same hashing function :/
Or even if ShadyAdtechCo just knows what the hashing function is, and has a list of plaintext email addresses to test against – perhaps obtained from one of the datasets they're joining against, or even from crawling the web.
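To make that join concrete: a minimal sketch with hypothetical data, assuming plain unsalted SHA-256 as the shared function (none of this is from the article; the dataset names and addresses are made up).

    import hashlib

    def h(email: str) -> str:
        # Both parties only need to agree on one unsalted function.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # Hypothetical datasets from two different companies, keyed by hash.
    dataset_a = {h("alice@example.com"): {"purchases": ["lamp", "rug"]}}
    dataset_b = {h("alice@example.com"): {"interests": ["politics"]}}

    # Joining on the hash column is a plain dictionary merge.
    profiles = {k: {**v, **dataset_b.get(k, {})} for k, v in dataset_a.items()}

    # A crawled plaintext address list de-pseudonymizes the keys outright.
    crawled = ["alice@example.com", "bob@example.com"]
    names = {h(e): e for e in crawled}
    print(names)  # maps each hash back to the address that produced it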
Hashing should be done with salt for precisely that reason.
If you mean a static salt, that could help mitigate against hacks (if the attacker has access to the database but not the code), but where adtech is concerned it's probably more realistic to assume that the datasets they're using were disclosed willingly. If you mean using a different salt for each address, that could work for some use cases, but it wouldn't work for the use case described in the blog post, since Touch Surgery needs to be able to look up whether a given address is in the database (to see whether they've previously declined an invitation).
It's really no problem to do this. We're using a variation on this: https://unix.stackexchange.com/questions/158400/etc-shadow-h.... The output of crypt (where the input is an email address) is pretty useless if we did suffer a data breach. They'd have to hash every known email address with that salt in order to figure out who had declined an invite from us.
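For illustration, a minimal sketch of that kind of scheme - not their actual code (they describe crypt(3)); a slow KDF with one fixed salt stands in here, and the salt value and function names are made up:

    import hashlib

    # Assumption: one application-wide salt, generated once and kept in
    # config. It must stay fixed, or lookups stop matching old entries.
    SALT = b"example-static-salt"

    def email_digest(email: str) -> str:
        normalized = email.strip().lower().encode("utf-8")
        # A deliberately slow KDF: given a breached table, the attacker
        # must re-hash every known email address against this one salt.
        return hashlib.pbkdf2_hmac("sha512", normalized, SALT, 100_000).hex()

    declined_invites = set()

    def record_decline(email: str) -> None:
        declined_invites.add(email_digest(email))

    def has_declined(email: str) -> bool:
        # Deterministic output is what makes this membership check work.
        return email_digest(email) in declined_invites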
What is the salt based on?
Why is it reasonable to assume they were disclosed willingly? That sounds like a startling assumption and the reverse of the one I'd make.
Without good evidence, you must assume they were disclosed unwillingly.
Couldn't the salt be unique to the requesting account? I would assume that just because a user declined an invitation from one person, they might still want to accept an invitation from another.
IANAL, but specifying how you hash the e-mails (algorithm + salt generation) would be part of your GDPR compliance process.
We'll have to wait till someone gets sued into oblivion before we know that.
No one will get sued under the GDPR. That's not how it works.
Article 79 says that each data subject that considers that their rights under GDPR have been violated by a controller or processor has the right to an effective judicial remedy, which may be brought before a court in either the Member State where the data subject resides, or where the controller or processor has an establishment.
This is distinct from administrative and non-judicial remedies.
That sure sounds like suing.
An email address identifies an email address, not a person. More than a few times I've signed up to newsletters with a distribution-list address that maps to n actual people behind the group, who may or may not be fixed members of that group.
You could track c_sharp_enthusiasts@myCompany.com, but that is not identifying a person.
In many cases it will identify a person. I have two email addresses, a personal one and a home one, and I don't use any others. In my case this absolutely will identify me, and I would argue that in the vast majority of cases this will be true as well. You're unusual relative to the general population in using distribution-list addresses, and they have to start somewhere. I would also argue that this is exactly the piece of information that is most commonly used to merge datasets, and that's probably why the legislation includes it.
No one but us computer supernerds makes that distinction. On the consumer market, an email address is an excellent way to identify a person over a very, very, very long time. No one but the tech-literate change their email too often.
No idea if deleting the email is required under GDPR, but for a taste of why it might be, imagine scaling up a bit. If they keep one email in this scenario, they're keeping one personal fact: email A is affiliated with email B. Suppose that process runs at scale for a while, and ends up with a database of 10 million email addresses averaging 100 connections. That becomes a digital map of society, with lots of private facts hidden in it -- a valuable/dangerous pile of surveillance to leave lying around for no reason, even if they're not actively using it.
There's a big difference between one fact and a billion, of course. But that's what keeps happening on the internet -- what feels like one small harmless thing turns out not to be harmless at scale, with no real warning that you're crossing from one regime to the other.
> If they're not "processing" it (that feels like code for "data mining") is this really required? I mean my email address is literally a public means of contacting me.
It's also a means of identifying you. It's not just the piece of data itself; even if it's only your mail address that is stored, it can be sensitive information due to context. Say a list of all members of a company's mailing list is leaked in a data breach. You might not mind too much if your publicly known mail address shows up on Amazon's mailing list, but you might care if it shows up on the leaked mailing list of transexual-midget-porn.com.
What if someone else registered using his email (some websites, for example, don't send a confirmation email if you change it in your profile or add an additional address), or the leaked data is fake? How does that protect him?
You can't store the e-mail address without consent, and getting consent would probably involve sending a confirmation email.
But providing an email so it can be associated with your profile implies giving consent, as you don't have to do that.
GDPR requires explicit consent if you're using consent as your legal basis for storing that contact email.
Keeping your email without a use case is a violation of your privacy.
How would you feel if you had to give your home address to the baker to buy a pastry?
I see what you're saying, but I also feel like the two are very different to me.
I had a recruiter call me up with what I suspect was a made-up role. At the end of the call he casually dropped in the line, "OK, well, is it OK if I get back in touch when something more suitable comes in?"
It was conspicuous. I asked if he'd asked me that because of GDPR. He said yes. I said no.
> I would be very wary of a company who claims this legislation is onerous.
... and elsewhere ...
> On the other hand it also was not very hard for us. We are not a creepy company.
> This is not to say that preparing for GDPR didn’t take us 100s of hours. It did.
A company that it didn't affect much spent hundreds of hours? I think it would be reasonable to call that onerous.
A different and fairer question would be whether that time was justified.
> A company that it didn't affect much spent hundreds of hours? I think it would be reasonable to call that onerous.
100 hours is 12.5 working days. That is not much to protect your users' data.
That's the issue with GDPR: the regulatory burden for Facebook is the same as it is for a small company.
At $dayjob we are at hundreds of thousands of dollars in staff time and legal fees (mostly updating and reviewing existing contracts). We don’t do anything shady with user data, and already have a robust data security program due to our industry.
A family member’s small business which packages meats for the grocery is similarly burdened to the tune of hundreds of thousands.
That’s a huge waste repeated millions of times over around the world. They could have just targeted this at the big web companies and Adtech firms with some simple qualifiers. This law isn’t really much good for consumers, but it’s very good for lawyers.
Also see:
> We engaged a dedicated GDPR consultant
Even in Europe (at least in Spain), mainstream journalists and pundits are generally misstating the effects and contents of GDPR.
I wish not-so-hot takes like this were more widely read and, along with sane enforcement, contributed to the sorely needed education of the general population on these topics.
A colleague sent me this; lots of funny variants: https://gdprhallofshame.com/
Warning though: that site is not gdpr-compliant
Have you reported it to authorities?
no
Why? Is there a company behind this webpage?
It doesn't have to be a company. It processes data, and it's not a strictly personal site; it's all over the internet, in fact.
> sponsored spontaneously by the amazing Raygun.
Why is it not gdpr-compliant?
privacy policy, cookies, analytics, opt-in
I agree it's a rather nice site. So refreshing not to have to wade through privacy guff popups.
Is there not any issue with having a hashed version of the email, given the entropy of an email address is quite small?
Maybe, but they have a good reason to keep that data, and they even go out of their way to "hide it" the best they can using a one-way function.
To save the information that a certain email address has explicitly withdrawn consent, they need to store it. The alternative is to send out a new email the next time someone adds them. I think the interpretation of GDPR for this particular instance of information storage is still open, but they have done everything possible to keep it safe. Should the list of hashes be leaked, the best an adversary can realistically do is check known emails against the list of hashes.
Yup, this is exactly what GDPR is aimed at. They thought about what they need, why they need it and have it documented.
You're right, but there are safer constructions to do this. Maybe this kind of knowledge will get more popular now that GDPR is mandating it :)
Active concern for me: GDPR will promote a bunch more homegrown looks-fine-but-actually-busted crypto schemes. I don't think GDPR will be used to enforce that even in the case of breach, and I'm not sure it should -- I think we should make better schemes available instead.
What is the 'safer construction' to do this? I'm looking for ideas and trying to solve a problem. My understanding of the GDPR, which is very basic, supports the view that hashing email addresses is at least questionable. On the other hand, if an email list is a core function, de-spamming seems valid.
I posted a comment with an alternative construction plus rationale: https://news.ycombinator.com/item?id=17153329
An appropriately tuned bloom filter would probably suffice.
A Bloom filter is an interesting approach, but the problem is that the attacker and you need the same property: to know if an email is in the set. If you could tell set membership with (effectively) perfect accuracy the Bloom filter may improve performance but not privacy.
I posted an alternative construction elsewhere in the thread.
The difference is that you may be willing to accept a much higher false-positive rate than your attacker can. This is the same idea behind the old "flip a coin, and then raise your hand if either the coin came up heads or you have [embarrassing problem]" method to statistically count everyone with the embarrassing problem, without disclosing anyone's status with certainty. That's the same property your truncated hash achieves.
A Bloom filter could also be designed accordingly. I'm guessing this post's grandparent was thinking of the filter's natural false-positive rate, or you could add deliberate noise.
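A tiny simulation of that coin-flip method, with made-up numbers (10% of respondents have the embarrassing trait), showing the aggregate rate is recoverable while no individual answer is conclusive:

    import random

    def answer(has_trait: bool) -> bool:
        # Raise your hand if the coin came up heads OR you have the trait.
        return random.random() < 0.5 or has_trait

    def estimate(answers):
        # P(hand raised) = 0.5 + 0.5 * p, so p = 2 * P(hand raised) - 1.
        p_hand = sum(answers) / len(answers)
        return max(0.0, 2 * p_hand - 1)

    population = [random.random() < 0.10 for _ in range(100_000)]
    print(estimate([answer(t) for t in population]))  # ~0.10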
If I had a service that I wanted people to use, where I only remember explicit opt-outs, I wouldn't want any false positives on the "have already opted out" question.
Let's say my list has 10^4 members, and there are 10^9 people worldwide. If I design for a 10^-4 false positive rate, then a list constructed by reverse-engineering my algorithm (whether it's a Bloom filter or a truncated hash or anything else) will be 91% false positives, 9% true positives. That's not a huge improvement, but I could imagine applications where someone judged it worth the ~one customer I inconvenience.
This raises fun questions of what it means to disclose a fact, when you're disclosing it probabilistically. Let's say that you tell me the yes/no answer to a question you consider private. I then generate a uniform random number X on [0, 1], and disclose (("you told me yes") || (X >= a)) for some agreed constant a.
If a = 1, then I've almost surely just disclosed your secret. If a = 0, then I've almost surely disclosed nothing. At what value of a do you start to care? That's a really messy question, depending on the social consequences of the information being disclosed (what fraction of innocent candidates would you reject to make sure your child's tutor isn't on the list of clients of a psychologist known for treating pedophiles?), and the other public information about you and about my population that an attacker can fuse to make a stronger estimate.
I don't think privacy-through-false-positives is a terribly effective tool. It's just the only possible tool for creating privacy when your rule is public (whether deliberately or after a breach)--so it's interesting to think about places where it could have some benefit.
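Checking the arithmetic of the 10^4-member example above, assuming the attacker tests 10^9 candidate addresses:

    list_size = 10**4    # addresses genuinely on the list
    candidates = 10**9   # addresses an attacker might test
    fp_rate = 10**-4     # designed false-positive rate

    false_pos = candidates * fp_rate                 # 100,000
    precision = list_size / (list_size + false_pos)
    print(precision)     # ~0.09: a "hit" is ~91% likely to be noise

    # Cost to the operator: legitimate addresses wrongly treated as
    # already opted out, roughly list_size * fp_rate ~= 1 person.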
Using a hash function for this is about as good as it's going to get. If we accept the conjecture of the existence of one-way functions, and use a cryptographically sound one-way function and implementation, it's provably the best we can do.
That's not provably the case at all. We can absolutely do better, for reasons the root of this thread raises. If that were true, why isn't SHA256 the state of the art in password storage? Reason: the input space is small enough to enumerate.
I posted a comment with an alternative construction plus rationale: https://news.ycombinator.com/item?id=17153329
I never said anything about SHA256. I talked about one-way functions.
The threat model is an adversary who gets unlimited access to the values stored for this purpose and knows the function used to compute them. He wants to check if a given email is in the set. A one-way function is provably the best way to be able to answer yes/no to the question of whether an email is in the set, with no false answers. I have not said anything about using computationally expensive one-way functions because that doesn't matter much even if the function takes 10 seconds to compute: he already knows which emails he wants to check.
You did say hash function.
Would you mind jotting down that proof?
A proper cryptographic hash function is a one-way function, if one-way functions exist at all.
But I'd frame your question the other way around. You do not want to store the emails in a form that leaks any data. For that we need a compressing function. My (unwritten) assumption was that if the adversary compromises the system to get the data, they'll get any secrets too. This means that an HMAC is no better than a cryptographic hash function.
I know that it is quite possible to create a system where this would be significantly harder than just a DB dump. But that is both significantly more difficult and more expensive. I'll admit that the formulation "provably the best we can do" should've had a big fat asterisk with a disclaimer about the threat model.
So, if an attacker has the data set and the secrets, and wants to determine whether a particular input is a member of the set: can you do better than a cryptographic hash function?
Can you write down in pseudocode exactly what you're suggesting? As in db.write(sha256(email)) or whatever.
It seems to me this counts only as pseudonymization and not anonymization. While the hash is not directly readable, it's still reversible with additional information (such as a large list of email addresses and knowledge of the hashing algorithm).
One of the GDPR notes says:
> [p]ersonal data which have undergone pseudonymization, which could be attributed to a natural person by the use of additional information, should be considered to be information on an identifiable natural person
Consider that you are running some kind of controversial/embarrassing site of a sexual/political/other sensitive nature. You keep hashes of people who once were users but unsubscribed, or something like that.
If that database is leaked, an attacker could re-hash a list of political figures, celebrities, or just some big list of well-known email addresses, and with this information find out who was a user of this sensitive site.
So to me it seems that pseudonymized/hashed emails still count as PII and have to be treated as such.
It cannot be reversed without a significant amount of effort (really, even when you say the "entropy is quite small" it's not actually as small as you would think) and is therefore probably reasonable. Worst case a regulating body will tell you that no, they do not think "this will take 1-10 years to reverse" is quite good enough and then you can work with them on a solution that would be good enough.
Minutes to days to reverse almost the whole list, depending on budget. It's not a real obstruction except to casual snooping.
Could you walk me through how you come to that conclusion? I admit my estimate was a very rough ballpark, but "minutes" seems so wildly out of line with it that I think I must be making a mistake somewhere.
A single AWS GPU server can hash trial passwords on the order of 100 GH/s, which puts a pretty low ceiling on "hashcat as a service" rental costs.
I'm assuming 10^12 tries per second is economical for any business.
There are about a million words, including all likely spellings of all but the rarest first and last names, so all 1- or 2-word addresses (firstname.lastname, etc.) come to about 10^12. Try those, plus short alphanumerics, for the 1,000 most common email domains -> 10^15 addresses.
Throw in every name in public leak databases that doesn't meet those patterns as well.
There's on the order of 1 million domains that are likely to be serving mail at all; try the billion most likely names for each of those for another 10^15.
This should capture almost every email address that isn't an intentionally obfuscated one-off, and adds up to less than an hour at 10^12/sec. There's a modest overhead to matching against a larger list, but it shouldn't matter in practice.
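Spelling out that arithmetic (the rates and counts are the rough assumptions from this thread, not measurements):

    rate = 10**12            # assumed guesses/sec with rented GPU time
    pattern_space = 10**15   # word pairs and names x 1,000 common domains
    long_tail = 10**15       # ~10^9 likely local parts x ~10^6 mail domains

    seconds = (pattern_space + long_tail) / rate
    print(seconds / 60)      # ~33 minutes, i.e. "less than an hour"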
A couple of hundred bucks spent on renting GPU instances can speed things up considerably.
The practical entropy of email addresses is indeed pretty small, lots of them are going to be first.last@company.example and a bunch more end in gmail.com or another popular provider.
If you can accept some level of false positives, you could make the hash too narrow to usefully reverse. For example, if only sixty people will ever subscribe or refuse to subscribe, a 24-bit hash is plenty to reject mistaken attempts to subscribe somebody who doesn't want in, but good luck guessing which Gmail user is "2ca24b".
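A minimal sketch of that truncation idea (hypothetical addresses; 24 bits = 6 hex characters):

    import hashlib

    def tag(email: str, hex_chars: int = 6) -> str:
        digest = hashlib.sha256(email.strip().lower().encode()).hexdigest()
        return digest[:hex_chars]  # 6 hex chars = 24 bits

    declined = {tag("someone@example.com")}

    # With ~60 entries in a 2^24 space, a random address false-matches
    # with probability ~60 / 2^24, about 4 in a million...
    print(tag("other@example.com") in declined)

    # ...while any single 24-bit tag is shared by millions of plausible
    # addresses, so a leaked tag can't be confidently reversed.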
Another problem is, what if the email address changes hands - maybe even the whole email domain changed ownership. You probably need a way for people to change their minds, as that then also covers the case where the person behind the address changed.
Yes. They don't quite define how the hash works in the post, but assuming it's something like SHA256(email), that's easy to enumerate.
There are ways to do this better. Let's say that there's one party and you're trying to figure out if you've seen an email address before. (That's the case in the article; there are also schemes where you and another entity can figure out whether you both saw the same email addresses - but that's not what we're discussing here.)
We already know how to take relatively low entropy things and store them securely to see if you've seen them before, for password storage! However, password storage works a little differently. You _know_ which entry you're checking against because you have a secret (password) but also an identifier (user name) -- so you can recompute against the same random key. This randomization means attackers need to try every password for every user. This doesn't work for us, because we just have an email, but it's close.
Three parts worth considering: KDF, PRF, and truncation. Firstly, your (deterministic, for reasons mentioned above) KDF turns your low-entropy input into a higher-entropy key. But (again, for reasons mentioned above) attackers still just have to try every email should they compromise your database. You can fix that problem by also adding a PRF (pseudorandom function) that you rate-limit vigorously. Think of a PRF as a keyed hash -- the usual example is HMAC-SHA256. If you're capable of keeping PRF key material safe but might leak a database dump (not unreasonable), the PRF forces the attack to be online: an attacker can only validate guesses as long as they have access to the PRF, and the PRF comes with audit trails and rate limits.
Finally, you can choose to truncate the output. Because the output space of your PRF will be much, much larger than the input space of email addresses, a match out of the PRF gives you almost perfect certainty that you've seen the email address before. That goes for you, and an attacker. If, let's say, you have another way to validate if you've seen the user before (but it's expensive, say, you have an encrypted offline dataset but it's AES-GCM'd and you can't afford to decrypt the entire thing every time), truncation gives you a neat way to _probabilistically_ say if you've seen an address before.
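A rough sketch of that three-part construction. The parameters, the KDF constant, and the key handling are all assumptions - in practice the PRF key would live in an HSM/KMS behind rate limits and audit logs, not in a source constant:

    import hashlib
    import hmac

    PRF_KEY = b"kept-outside-the-database"  # assumption: absent from DB dumps

    def stretch(email: str) -> bytes:
        # KDF step: deterministic and slow; same address -> same key.
        return hashlib.pbkdf2_hmac(
            "sha256", email.strip().lower().encode(), b"app-constant", 100_000
        )

    def tag(email: str, out_bytes: int = 4) -> bytes:
        # PRF step: without PRF_KEY, guesses can't be validated offline.
        mac = hmac.new(PRF_KEY, stretch(email), hashlib.sha256).digest()
        # Truncation step: 32 bits still means "almost certainly seen",
        # but one leaked tag matches many plausible addresses.
        return mac[:out_bytes]

    seen = {tag("invitee@example.com")}
    print(tag("invitee@example.com") in seen)  # True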
> You can fix that problem by also adding a PRF (pseudorandom function) that you rate-limit vigorously. Think of a PRF as a keyed hash -- the usual example is HMAC-SHA256. If you're capable of keeping PRF key material safe but might leak a database dump (not unreasonable), the PRF forces the attack to be online: an attacker can only validate guesses as long as they have access to the PRF, and the PRF comes with audit trails and rate limits.
That particular part assumes security through the adversary not knowing the implementation of the security model's components, a.k.a. security through obscurity. Rule no. 1 in security: always assume that the adversary knows exactly how everything is implemented and can do all of it themselves.
What? A PRF having a secret key is not “security by obscurity.” I am documenting how the entire process works and where the security properties come from.
Perhaps use bcrypt to be on the safe side. With a correct bcrypt configuration, brute-forcing becomes infeasible.
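A sketch with the third-party bcrypt package. One caveat the parent doesn't mention: bcrypt normally generates a random salt per hash, which would break membership lookups, so the salt here is generated once and reused (and should be stored as carefully as a key):

    import bcrypt  # pip install bcrypt

    # Generate once and persist; reusing it keeps hashes deterministic,
    # so set-membership lookups keep working.
    FIXED_SALT = bcrypt.gensalt(rounds=12)

    def slow_digest(email: str) -> bytes:
        return bcrypt.hashpw(email.strip().lower().encode(), FIXED_SALT)

    declined = {slow_digest("someone@example.com")}
    print(slow_digest("someone@example.com") in declined)  # True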
I honestly can't see this being something that would ever result in enforcement action.
They have a legitimate business interest in not spamming someone if people try to sign you up multiple times, and since the email address is hashed, all they can use it for is to determine if they've sent you an invite before (and potentially when they did so, or when you declined the invitation).
Maybe they could get in trouble if they also retain information on who is trying to send you invites and creating a graph and a shadow profile based on this type of information, but it sounds pretty clear that this isn't something they're doing or are interested in doing.
So far I have received over 300 GDPR emails. When am I supposed to read all this? How do I track it? How can I track what each company stores about me? Do I feel this has in any way improved the safety of my data? I don't think so.
In theory, if you don't reply, all these companies should stop using your data and quite likely delete it. Sounds like improved safety to me.
It sounds pretty tedious to sift through 300+ emails to find everyone you want to keep using your data and go through whatever process they have for replying.
Why do you want them to keep using your data?
Why wouldn't I? With the exception of one or two emails, they've all been from companies/services I signed up to originally.
No this is not correct.
I think I may agree with you, but to say something like that you also need to say why you don't think it's correct, otherwise it's pretty unhelpful.
Sorry, should've mentioned: this is discussed in other comments in this thread.
You could say the same about all the terms & conditions links that you studiously ignored when signing up for these services...
300?! You must subscribe to a lot of things...
Then again, I've used my throwaway account to subscribe to a lot of things, yet I feel like I've only gotten around 20 GDPR mails so far. Why don't I have more? It's interesting (and scary) how many sites I've not dealt with for years still have my data.
I guess we're all free to ignore those emails, if you don't really care that they have your data (as has been the case until today).
I have reported as spam all the GDPR emails I got from unknown companies. I never asked to receive all this shit (250 GDPR messages just this week).
American entrepreneurs who are proponents of GDPR are experiencing some serious Stockholm Syndrome. Or possibly they're just faking their love for GDPR to virtue signal.
Love how I got a popup asking me to sign up with a fb/ggle account, stating "To make Medium work, we log user data and share it with service providers."
We’re still not GDPR compliant and don’t plan to be. So far so good.
Same here.
"To make Medium work, we log user data and share it with processors. To use Medium, you must agree to our Privacy Policy, including cookie policy."
No, Medium, I must NOT agree to your privacy policy and your cookie policy, because to use and share my data you need my FREE consent. AND you can NOT deny me reading an article because I withhold consent, because then the consent is not FREE, and the processing is NOT strictly necessary for the service.
Medium: either you allow me to read blog posts on your webserver without FORCING me to allow you to collect my data, or you don't. Choose. But stop fucking annoying me with lying banners.
I think it's more likely that "to make Medium make money," they engage in tracking for advertising purposes.
Medium works perfectly well for my purposes without that banner being displayed. I can open up developer tools and delete that node.
If I don't click agree, does that mean that this information isn't collected? Because tracking cookies are still placed.
Now what is interesting is that I don't remember being asked for consent for them to place a cookie to log the number of articles I read in a month as part of their sign-up funnel.
> Now what is interesting is that I don't remember being asked for consent for them to place a cookie to log the number of articles I read in a month as part of their sign-up funnel.
They could probably make this compliant by storing the counter in your local storage and never sending it anywhere - just having a piece of JS that essentially does: if (Number(localStorage.getItem("visits")) > 6) { displaySignupPopup(); }
Ah, when I used to bother with Proxomitron (https://www.proxomitron.info/), I could rewrite anything that went "over the wire" because it acts as an HTTP proxy listening on localhost. I remember modifying JavaScript lines, so adding my own code was possible...
One could add an SSL library and basically MITM HTTPS connections, but I never tried that.
I block JS wherever I can, though.
> AND you can NOT deny me reading an article without giving consent
They can't, but they can, for example, ask for a fee to read the article. They don't deny you reading it, but they don't have to give it to you for free either.
Either you share your data, so they can make money to operate the site, or you don't, but then the content is not free.
I expect some sites will choose this route.
> They can't, but they can, for example, ask for a fee to read the article. They don't deny you reading it, but they don't have to give it to you for free either.
Of course they can charge a fee. They should.
> Either you share your data, so they can make money to operate the site, or you don't, but then the content is not free.
No. My data is not a medium of exchange. The GDPR makes that VERY clear. I cannot pay with my data. Full stop. There is money for that.
You can pay with your data if you consent to it. It's your data, your choice. But if you don't consent then prepare your credit card for payment.
Most people then will choose the free version.
As y0ghur7_xxx is saying, you cannot pay with your data, just as you cannot take a loan with an interest rate of 1000%, since that would be usury. Even if you signed a contract agreeing to such terms, they don't count as they are not legal.
Contracts don't make laws. Laws make laws. You cannot pay with your data.
> You can pay with your data if you consent to it.
No, I cannot. I can pay with money. User data is not money.
You are wrong. The entire point of the law is to stop this being an option.
Why is it a problem that you have to agree to be monetized to read a stupid blog post? If you don't like those terms, go read something else. If you refuse their terms, it seems obvious to me that they should be able to refuse to serve you. Maybe I'm missing something here, but this sounds like asking for a free lunch.
> Why is it a problem that you have to agree to be monetized to read a stupid blog post?
Do you want my opinion or what the gdpr says?
My opinion is that my data is not a medium of exchange. We (I) usually use money for that.
The gdpr says
"When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract."
So they can politely ask me for my data to read the blog post, and I can politely refuse to give it to them. And if my data is not necessary for letting me read the blog post (it is not) then they have to let me read it anyway.
> If you don't like those terms, go read something else.
That is what I did. I left the site. But this is not what I am angry about. They are lying by saying that I MUST consent to their terms.
> If you refuse their terms, it seems obvious to me that they should be able to refuse to serve you. Maybe I'm missing something here, but this sounds like asking for a free lunch.
User data is not money. User data is user data. I am not asking for free lunch.
> My opinion is that my data is not a medium of exchange. We (I) usually use money for that.
So, you decline the exchange (as you did).
Why should the law forbid me from making such exchange, if I want to?
A popup (probably what used to be cookie warning) on Medium says:
> Medium uses browser cookies to give you the best possible experience. To make Medium work, we log user data and share it with processors. To use Medium, you must agree to our Privacy Policy.
I must agree to logging user data and sharing it with processors?
EDIT: come to think of it, it might be a new, GDPR-specific, dark pattern. I can use the site without clicking "I agree", and the existence of that button sort of implies the consent is not assumed. The wording of the message ("you must agree") is just trying to bait consent.
EDIT2: I just read[0] that the biggest sites in my country are treating closing the GDPR popup as giving consent to everything. This definitely does not sound like explicit, informed consent. I sincerely hope it'll land them in a world of hurt.
--
[0] - (PL link) https://zaufanatrzeciastrona.pl/post/klikasz-x-w-komunikacie...
The most hilarious part of that is that you really don't need to. I've added a blacklist for everything (first party JS, styling, images) on medium.com and everything continues to work just fine. I don't remember specifically why I did that, but it's probably a dark pattern (such as a modal that pops up after one paragraph of reading) that annoyed me in the past.
Hm, they must have changed how the site works. A while ago, I decided to simply skip any medium.com link posted to HN, because a) it was a crap experience reading on mobile, b) it was a crap experience reading with javascript disabled and c) with javascript enabled, the site was too annoying
When I say "it works fine" I mean "the text is on the page and in a reasonable font". There is a huge amount of wasted space in terms of crap content (banners, etc - all mangled because there is no JS) there, which would be annoying on mobile. This experience (in my mind) competes with websites that will not show the content at all without JS, which is not a high bar.
Some complaints have been filed against Google and Facebook for this practice today.
"Processors" need theoretically not be advertising/tracking networks, but could also e.g. be payment processors. That is something that I could imagine classifies as necessary.
Consent must be given for a specific purpose. It is debatable whether a generic "we need to share your data with our processors" is sufficient. I doubt it.
Yeah, I guess if it were a payment processor or something like that, consent would not even be necessary - so if they're asking for consent, it's probably for something else, in which case the generic message might well be insufficient.
Isn't that the quid pro quo, though? I don't feel obliged to accept their shitty privacy policy, and in return they are not obliged to serve me their often equally shitty content.
> I don't feel obliged to accept their shitty privacy policy, and in return they are not obliged to serve me their often equally shitty content.
Yes and no. Yes, in the sense that you can argue that. No, in the sense that the GDPR just says "no, you cannot ask people to pay with personal information". So either they must show me the article even if I opt out of giving my information, or they must make reading the article conditional upon something else (say, paying them). They CANNOT make it conditional upon my consent to use my personal data, because that's just coercing me into clicking "yes", which is exactly what the GDPR is supposed to curb.
Trading nonessential data sharing for the ability to use a service is forbidden under GDPR. I'd be fine with them geoblocking me, but the way it is now, this popup sounds non-compliant, and also manipulative.
I don't agree. What's the next step? Will they ban me? Is it my responsibility to reach out to them and tell them I disagree? Or am I just expected to never go to Medium again?
To be GDPR compliant, everything needs to be opt-in (except for the stuff that is critical to functionality). If you refuse to answer their questions, they need to assume that means "no". From this you can see that it's not your responsibility to tell them anything and you can still use their website just fine.
Nope, the site works fine if you disable cookies. Once Facebook and Google fail I'm sure they'll be next.
I didn't click "I Agree" anyway. If they processed the data, I guess they're in violation now.
That said, it's not the first time I've seen something like that this week. I wonder if some companies aren't simply testing if they can get away with it.
Yes, they say
To make Medium work, we log user data and share it with processors. To use Medium, you must agree to our Privacy Policy, including cookie policy.
However it seems to work just fine without cookies - when I load the site in lynx, and reject all cookies, it loads just fine.
> this is going to be another ridiculous Cookie Law
Given the number of ugly popups I had to click within the last few days, it already is.
I never added the cookie law notice to any of our websites and apps and never had a single problem. We operate in the EU. Pretty small scale. We did nothing for GDPR.