This seems as good a place as any to challenge some of the simplifications that are often given in defence of the GDPR.
Not the OP, but it's pretty straightforward for most people (including the author of TFA). You need to identify what private information you collect.
Fair enough.
You need to decide what lawful basis you are using to collect that data. If you have no lawful basis, you have to stop collecting that data.
Right, but probably the most practically relevant basis for anything non-trivial will be legitimate interests, which of course involves balancing tests. Even today, just a week before this all comes into effect, there is little guidance about where regulators will find that balance.
If you are using consent as your lawful basis, you need to get consent in an opt-in manner. You need to record what statement you have shown to the user and any consent that you receive.
But this is retrospective and stronger than the previous requirement. Even if you have always been transparent about your intentions and acquired genuine opt-in from willing users, you are now likely to be on the wrong side of the GDPR if you can't produce the exact wording that was on your web site or double opt-in email a decade ago. The most visible effect of the GDPR so far seems to be an endless stream of emails begging people to opt in to continue receiving things, even where people had almost certainly genuinely opted in already before.
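For what it's worth, the record-keeping side is mechanically simple going forward, even if it can't fix the past. A minimal sketch of what recording consent alongside the exact wording shown might look like (all names here are hypothetical, and this is not legal advice):

```python
import json
import hashlib
from datetime import datetime, timezone

def record_consent(store, user_id, statement_text, granted):
    """Append an immutable consent event, preserving the exact wording shown.

    `store` is any append-only list here; in practice it would be a database
    table. All names are hypothetical.
    """
    event = {
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "statement_text": statement_text,  # the exact wording displayed to the user
        "statement_sha256": hashlib.sha256(statement_text.encode()).hexdigest(),
        "granted": granted,                # True = opt-in given, False = withdrawn
    }
    store.append(json.dumps(event))
    return event

log = []
record_consent(log, "user-42", "May we email you our monthly newsletter?", True)
```

The point of keeping the statement text (or at least a hash of a versioned statement) with each event is exactly the problem described above: being able to show, years later, what the person actually agreed to.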
For legitimate interest (which is essentially the same as the laws currently on the books) you need to be able to exclude processing the data if someone objects.
Not quite. There also appears to be a balancing aspect here, though with some additional complications involving direct marketing, kids, and various other specific circumstances.
Take a common example of analytics for a web site. These may include personal data because of things like IP addresses or being tied to a specific account. Typically these have relatively low risk of harm for data subjects, but if for example a site deals with sensitive subject matter then that won't necessarily be the case either.
A business might have a demonstrable interest in retaining that data for a considerable period in order to protect itself against fraud, violation of its terms, or other obviously serious risks. Maybe the regulators will consider that those interests outweigh the risk to an individual's privacy if their IP address is retained for several years, at least in some cases. Maybe they will find differently if it's the web site for a drug treatment clinic than if it's an online gaming site.
Even if the subject matter isn't sensitive, where does the line get drawn? A business that offers a lot of free material on its site to attract interest from visitors might itself have a legitimate interest in seeing who is visiting the site and tracking conversion flows that could involve several channels over a period of months. This is arguably less important than protecting against something like fraud, but nevertheless the whole model that provides the free material may only be viable if the conversions are good enough. But equally, maybe it's not strictly necessary for the operation of the site and whatever services it offers for real money, so should the visitor's interest in not having their IP address floating around in someone's analytics database outweigh the site that is offering free content in exchange for little else in return?
That's just one simple, everyday example of the ambiguity involved here, and as far as I'm aware the regulator in my country has yet to offer any guidance in this area. Would any of the GDPR's defenders here like to give a black and white statement about this example and when the processing will or won't be legal under the new regulations?
The other lawful bases are very unlikely to show up in most organisations.
I would think the basis that you have to comply with some other law is also likely to be quite common. It will immediately cover various personal data about identifying customers and recording their transactions for accounting purposes, for example. But again, since that will include the proof of location requirements for VAT purposes in some cases, how much evidence is a merchant required to keep to cover themselves on that front, and when does it cross into keeping too much under GDPR?
The other main problem is that if you want to use something other than the contract basis, you need to build something that allows the user to exercise their rights.
And once again, those rights are significantly stronger under the GDPR, particularly around erasure or objecting to processing. Setting up new systems that comply may not be too difficult, but what about legacy systems that were not unreasonable at the time but don't allow for isolated deletion of personal data? To my knowledge, there is still a lot of ambiguity around how far "erasure" actually goes, particularly regarding unstructured data such as emails or personal notes kept by staff while dealing with some issue, or potentially long-lived data in archives that are available but no longer in routine use. And then you get all the data that is built incrementally, from source control systems to blockchain, where by construction it may be difficult or impossible to selectively erase partial data in the middle.
Not to put too fine a point on it, personally I highly approve of this. I really couldn't care less if somebody's business model is destroyed because it is now too expensive to collect information that you don't need to do the job.
But what if an online service's business model relies on processing profile data for purposes such as targeting ads to be viable, and regulators decide that a subject's right to object to that processing outweighs its necessity to the financial model?
It's easy to say a lot of people might not like being tracked, but on the other hand, if services like Google and Facebook all disappeared in the EU as a result of the GDPR, I'm not sure how popular it would be. There are two legitimate sides to this debate, and neither extreme is obviously correct.
Thank you! This post starts to show some of the huge complexities that the GDPR creates for businesses trying to understand what the terms of the law actually mean.
One point is that the meaning of a law is often settled not by its language but by the rulings in the lawsuits that arise around it, and that is what most companies and lawyers are waiting for: what will the courts decide when those lawsuits happen?
The biggest issue that I have heard of (I'm no expert) is what the right to be forgotten actually means. Does it mean all your backups are now illegal, since you are retaining the customer's information after they asked you to remove their records?
I think some of the fear that smaller businesses have is that this will encourage lawsuits until people understand how the courts will rule on each item.
I think the parent's reply is a good one. We could probably debate some of the finer points, but I think when we get some time to see how it all shakes out in the end we'll have a better vantage point.
I can't find it right now (and I have to get back to work), but there is a reasonableness requirement for requests. So things like backups might be covered by that. I wish there was some direction on that because it's a problem for me at work as well.
My opinion is that the directive's view is that all personal data retention should be temporary. There should be a defined point where the personal data is deleted. Either that's when it's no longer necessary for the contract, or when you no longer have a legitimate interest in it, or when the user asks for the removal.
Up to this point, most of us have been building databases with the intent of retaining the information indefinitely. So we never thought about this. Although I'm a fan of this law, I admit that it's going to be troublesome transitioning from where we were to where we need to go.
And as the parent briefly stated, immutable databases are going to be a serious problem.
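The "defined deletion point" idea upthread can be sketched as a retention schedule: every record carries the purpose it was collected for and an explicit expiry, and a periodic job deletes whatever has lapsed. Names here are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def purge_expired(records, now=None):
    """Drop personal data whose retention period has lapsed.

    Each record carries the purpose it was collected for and an explicit
    `expires_at`, set when the data was stored. Returns the survivors.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if r["expires_at"] > now]

now = datetime.now(timezone.utc)
records = [
    {"user_id": 1, "purpose": "contract",  "expires_at": now + timedelta(days=365)},
    {"user_id": 2, "purpose": "analytics", "expires_at": now - timedelta(days=1)},
]
records = purge_expired(records, now)
# only the unexpired contract record survives
```

The hard part isn't the code; it's deciding what the expiry should be for each purpose, which is exactly where the guidance is still thin.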
I think the UK agency had some text on erasure and backups, and it basically boiled down to this:
If a data subject requests their data to be erased, you should remove their data from active systems so that it is no longer being processed, but you don't have to remove it from backups or other passive systems. You should however store some sort of marker so that if you need to restore data from backups, the data subject's data will be re-erased or otherwise stopped from entering active systems again.
And if a data subject asks, you have to tell them how long you store your backups of their personal data.
I think that's perfectly reasonable. And if your backup retention policy is "forever", now might be a good time to re-evaluate that policy.
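The marker approach described above can be sketched as an erasure list that is consulted whenever a backup is restored. Hypothetical names, and obviously not a statement of what any regulator actually requires:

```python
erased_subjects = set()  # durable "tombstone" list, kept alongside the live system

def erase(live_db, subject_id):
    """Remove a subject from active systems and remember the request."""
    live_db.pop(subject_id, None)
    erased_subjects.add(subject_id)

def restore_from_backup(backup):
    """Re-apply erasure requests so backed-up data never re-enters active systems."""
    return {sid: data for sid, data in backup.items() if sid not in erased_subjects}

live = {"alice": {"email": "a@example.com"}, "bob": {"email": "b@example.com"}}
backup = dict(live)            # snapshot taken before the erasure request
erase(live, "alice")
restored = restore_from_backup(backup)
assert "alice" not in restored and "bob" in restored
```

The tombstone list itself has to outlive the backups it guards, which is a detail worth thinking through before relying on this pattern.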
Neither the UK nor the EU previously had any general provision for a right to erasure. At EU level, considerable waves were made when the "right to be forgotten" ruling was issued, but that came from a court that was considering a specific case.
> I think some of the fear that smaller businesses have is that this will encourage lawsuits until people understand how the courts will rule on each item.
That concern really is unfounded, though. The primary means of enforcement of the GDPR will be action by national data protection regulators. It isn't some carte blanche for trigger-happy lawyers to start suing every business that gets a little detail wrong or anything like that.
The general concern that the picture is unclear until something happens to clarify it is, unfortunately, much better founded.
> But what if an online service's business model relies on processing profile data for purposes such as targeting ads to be viable, and regulators decide that a subject's right to object to that processing outweighs its necessity to the financial model?
forgive my frank language, but too fucking bad.
edit: my rights always outweigh your profits. Sorry.