Archive for the ‘95 directive’ Category

Google’s legal battle to continue in the Supreme Court

Posted on July 30th, 2015



Google was yesterday granted permission by the UK Supreme Court to appeal against the Court of Appeal’s decision in the Vidal-Hall case (previously reported on here). In particular, the Supreme Court will consider the issue of whether the claimants can bring a claim for compensation under section 13 of the Data Protection Act 1998 even if they have not suffered actual financial loss – the milestone aspect of the original ruling.

The Court of Appeal ruling on this issue was thought to greatly expand the scope for data protection claims to be brought in the UK, as it opened the gates for claimants to bring DPA breach claims based on distress alone.

“The Supreme Court has granted permission in part for Google to appeal the Court of Appeal of England and Wales decision in a case relating to a dispute over the collection of user information through cookies via use of the Apple Safari browser,” the Court’s announcement stated.

In media reports, Google welcomed the outcome, saying: “We are pleased that the Supreme Court has agreed to consider key issues in this complex case.”

The announcement by the Supreme Court can be found here. The hearing is still likely to be many months away, but watch this space… the appeal will determine a very important question for defendants operating websites and other businesses within the UK.

Q: Have we just passed a new EU data protection law? A: Not yet!

Posted on June 16th, 2015



For those of you keeping tabs on EU data protection developments, today’s exciting news was that the Council of the EU has reached a “general approach” on Europe’s proposed General Data Protection Regulation, with the twin aims of enhancing Europeans’ data protection rights and increasing business opportunities in the Digital Single Market.

And what a lot of people have had to say about it! Some say it’s going to “kill off Europe’s cloud computing industry” (story here) while others describe it as “a brazen effort to destroy Europe’s world leading approach to data protection and privacy” (story here). It’s rather remarkable to note that both industry and civil liberties groups seem equally downcast about the new proposals, albeit for entirely opposing reasons.

But what these prophecies of doom all overlook is that we don’t have a new data protection law yet. In fact, far from it – we’re still only at the draft stage! And until we have agreed the final text of the new law, it’s very difficult to predict where exactly we will land on many of the issues.

For those of you struggling to understand timelines and where exactly we are in the process, here’s how things stand:

1. The European Commission (in simple terms, the executive branch of EU government) proposed a new EU data protection law in 2012 – this is the “General Data Protection Regulation”.

2. The EU Parliament (for our US audience, think the House of Representatives) and the Council of the EU (think the US Senate) each then got to review and table amendments to the draft legislation through various committee proceedings – the aim being for each institution to come up with its own preferred draft of the law.

3. The EU Parliament put forward its proposed “version” in March 2014, favouring strict protection of individuals’ rights. Today’s development is that the Council of the EU has finally (and reluctantly) put forward its own proposed “version”, with a greater leaning towards risk-based application of data protection rules. This comes some three years after the law was originally proposed by the Commission – progress has not been quick.

4. What happens next is that the Parliament, the Council and the Commission will now enter three-way “trilogue” negotiations (explained here). These are scheduled to begin on 24 June and their ultimate aim is to produce a final negotiated text that all three institutions agree on. Then, and only then, will the General Data Protection Regulation become law.

5. But, wait a minute! Even when the new law does get adopted, it’s unlikely to take effect for a further two years (unless this two-year lead-in period is negotiated out during the trilogue). So, even assuming things go swimmingly and the three institutions agree on the language of the law this year, it is still very unlikely to become effective before the middle of 2017 – and, given the rate of progress to date, 2018 frankly seems more realistic.

What all this means is that today was certainly a big day for EU data protection, but there’s still a long road to travel down. There are some things that seem almost certain to make it into the final text (application of EU data protection rules to any worldwide business servicing EU citizens, extension of liability to data processors, some notion of a one-stop shop, greater fines etc.), but many that still remain open to debate (mandatory DPOs, the role of consent, etc.).

Stay tuned, and we’ll keep you posted once we have a better assessment of the likely final text of the law. In the meantime, enjoy speculating along with everyone else but remember that, until the law is adopted, it’s just that – speculation!

Handling government data requests under Processor BCR

Posted on June 2nd, 2015



Earlier today, the Article 29 Working Party published some new guidance on Processor BCR. There’s no reason you would have noticed this, unless you happen to be a BCR applicant or regularly visit the Working Party’s website, but the significance of this document cannot be overstated: it has the potential to shape the future of global data transfers for years to come.

That’s a bold statement to make, so what is this document – Working Party Paper WP204 “Explanatory Document on the Processor Binding Corporate Rules” – all about? Well, first off, the name kind of gives it away: it’s a document setting out guidance for applicants considering adopting Processor BCR (that’s the BCR that supply-side companies – particularly cloud-based companies – are all rushing to adopt). Second, it’s not a new document: the Working Party first published it in 2013.

The importance of this document now is that the Working Party have just updated and re-published it to provide guidance on one of the most contentious and important issues facing Processor BCR: namely how Processor BCR companies should respond to government requests for access to data.

Foreign government access to data – the EU view

To address the elephant in the room, ever since Snowden, Europe has expressed very grave concerns about the ‘adequacy’ of protection for European data exported internationally – and particularly to the US. This, in turn, has led to repeated attempts by Europe to whittle away at the few mechanisms that exist for lawfully transferring data internationally, from the European Commission threatening to suspend Safe Harbor through to the European Parliament suggesting that Processor BCR should be dropped from Europe’s forthcoming General Data Protection Regulation (a suggestion that, thankfully, has fallen by the wayside).

By no means the only concern, but certainly the key concern, has been access to data by foreign government authorities. The view of EU regulators is that EU citizens’ data should not be disclosed to foreign governments or law enforcement agencies unless strict mutual legal assistance protocols have been followed. They rightly point out that EU citizens have a fundamental right to protection of their personal data, and that simply handing over data to foreign governments runs contrary to this principle.

By contrast, the US and other foreign governments say that prompt and confidential access to data is often required to prevent crimes of the very worst nature, and that burdensome mutual legal assistance processes often don’t allow access to data within the timescales needed to prevent these crimes. The legitimate but conflicting views of both sides lead to the worst kind of outcome: political stalemate.

The impact of foreign government access to data on BCR

In the meantime, businesses have found themselves trapped in a ‘no man’s land’ of legal uncertainty – the children held responsible for the sins of their parent governments. Applicants wishing to pursue Processor BCR have particularly found themselves struggling to meet its strict rules concerning government access to data: namely that any “request for disclosure should be put on hold and the DPA competent for the controller and the lead DPA for the BCR should be clearly informed about it.” (see criterion 6.3, available here)

You might fairly think: “Why not just do this? If a foreign government asks you to disclose data, why not just tell them you have to put it on hold until a European DPA sanctions – or declines – the disclosure?” The problem is that reality is seldom that straightforward. In many jurisdictions (and, yes, I’m particularly thinking of the US) putting a government data disclosure order “on hold” and discussing it with a European DPA is simply not possible.

This is because companies are typically prohibited under foreign laws from discussing such disclosure orders with ANYONE, whether or not a data protection authority, and the penalties for doing so can be very severe – up to and including jail time for company officers. And let’s not forget that, in some cases, the disclosure order can be necessary to prevent truly awful offences – so whatever the principle to be upheld, sometimes the urgency or severity of a particular situation will simply not allow for considered review and discussion.

But that leaves companies facing a catch-22. If they receive one of these orders, they can be in breach of foreign legal requirements for not complying with it; but if they do comply with it, they risk falling foul of European data protection rules. And, if you’re a Processor BCR applicant, you might rightly be wondering how on earth you can possibly give the kind of commitment that the Working Party expects of you under the Processor BCR requirements.

How the Working Party’s latest guidance helps

To their credit, the Working Party have acknowledged this issue and this is why their latest publication is so important. They have updated their BCR guidance to note that “in specific cases the suspension and/or notification [to DPAs of foreign government data access requests] are prohibited”, including for example “a prohibition under criminal law to preserve the confidentiality of a law enforcement investigation”. In these instances, they expect BCR applicants to use “best efforts to obtain the right to waive this prohibition in order to communicate as much information as it can and as soon as possible”.

So far, so good. But here’s the kicker: they then say that BCR applicants must be able to “demonstrate” that they exercised these “best efforts” and, whatever the outcome, provide “general information on the requests it received to the competent DPAs (e.g. number of applications for disclosure, type of data requested, requester if possible, etc.)” on an annual basis.

And therein lies the problem: how does a company “demonstrate” best efforts in a scenario where a couple of NSA agents turn up on its doorstep brandishing a sealed FISA order and requiring immediate access to data? You can imagine that gesticulating wildly probably won’t cut it in the eyes of European regulators.

And what about the requirement to provide “general information” on an annual basis, including the “number of applications for disclosure”? In the US, FISA orders may only be reported in buckets of 1,000 orders – so, even if a company received only one or two requests in a year, the most precise disclosure it could make is that it received between 0 and 999 requests, making government access to its data seem far more voluminous than it really was.
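To make the arithmetic concrete, here is a minimal sketch (in Python; the band width of 1,000 is taken from the reporting rules described above, and the function name is purely illustrative) of how banded reporting flattens very different request volumes into the same disclosure:

```python
# A minimal sketch of banded transparency reporting. The band width of
# 1,000 reflects the FISA reporting rules described above; the function
# name is illustrative.
BAND_WIDTH = 1000

def reporting_band(request_count: int) -> str:
    """Return the only range a company may publish for a raw request count."""
    lower = (request_count // BAND_WIDTH) * BAND_WIDTH
    upper = lower + BAND_WIDTH - 1
    return f"{lower}-{upper}"

# A single request is reported in exactly the same band as 999 requests:
assert reporting_band(1) == "0-999"
assert reporting_band(999) == "0-999"
assert reporting_band(1000) == "1000-1999"
```

The effect is precisely the distortion described above: a company that received one request and a company that received 999 end up publishing the same figure.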

I don’t want problems, I want solutions!!!

So, if you’re a Processor BCR applicant, what do you do? You want to see through your BCR application to show your strong commitment to protecting individuals’ personal data, and you certainly don’t want to use a weaker solution, like Model Clauses or Safe Harbor, that won’t carry equivalent protections. But, at the same time, you recognize the reality that there will be circumstances where you are compelled to disclose data and that there will be very little you can do – or tell anyone – in those circumstances.

Here’s my view:

• First off, you need a documented government data access policy. It’s unforgivable in this day and age, particularly in light of everything we have learned in the past couple of years, not to have some kind of written policy around how to handle government requests for data. More importantly, having a policy – and sticking to it – is all part and parcel of demonstrating your “best efforts” when handling government data requests.
• Second, the policy needs to identify the business stakeholders who will have responsibility for managing the request – and, as a minimum, this needs to include the General Counsel and, ideally, the Chief Privacy Officer (or equivalent). They will represent the wall of defense that prevents government overreach in data access requests and advise when requests should be challenged for being overly broad or inappropriately addressed to the business, rather than to its customers.
• Third, don’t make it easy for the government. If they want access to your data, make them work for it. It’s your responsibility as the custodian of the data to protect your data subjects’ rights. To that end, ONLY disclose data when LEGALLY COMPELLED to do so – if access to the data really is that important, then governments can typically get a court order in a very short timeframe. Do NOT voluntarily disclose data in response to a mere request, unless there really are very compelling reasons for doing so – and reasons that you fully document and justify.
  • Fourth, even if you are under a disclosure order, be prepared to challenge it. That doesn’t necessarily mean taking the government to court each and every time, but at least question the scope of the order and ask whether – bearing in mind any BCR commitments you have undertaken – the order can be put on hold while you consult with your competent DPAs. The government may not be sympathetic to your request, particularly in instances of national security, but that doesn’t mean you shouldn’t at least ask.
• Fifth, follow the examples of your peers and consider publishing annual transparency reports, à la Google, Microsoft and Yahoo. While there may be prohibitions against publishing the total numbers of national security requests received, the rules will typically be more relaxed when publishing aggregate numbers of criminal data requests. This, in principle, seems like a good way of fulfilling your annual reporting responsibility to data protection authorities and – in fact – goes one step further: providing transparency to those who matter most in this whole scenario, the data subjects.
So why does the Working Party’s latest opinion matter so much? It matters because it’s a vote of confidence in the Processor BCR system and an unprecedented recognition by European regulatory authorities that there are times when international businesses really do face insurmountable legal conflicts.

Had this opinion not come when it did, the future of Processor BCR would have been dangerously undermined; faced with the prospect of Safe Harbor’s slow and painful demise and the impracticality of Model Clauses, many businesses would have been left without a realistic data export solution, further entrenching a kind of regulatory ‘Fortress Europe’ mentality.

The Working Party’s guidance, while still leaving challenges for BCR applicants, works hard to strike that hard-to-find balance between protecting individuals’ fundamental rights and the need to recognize the reality of cross-jurisdictional legal constraints – and, for that, they should be commended.

    EU data exports – choosing the least worst option?

Posted on April 3rd, 2015



    Data, as anyone doing privacy on a global scale will tell you, knows no boundaries.  It can be collected in country A, routed through countries B, C, and D, and come to rest on servers in country E.  Those servers are likely then maintained by a third party in country F, with subcontracted support from another third party provider in country G.
All that is well and good, but what do you do when country A happens to be within Europe, and any one or more of countries B through G are outside of Europe?  Europe’s aging Data Protection Directive tells you that if any of that data is personal in nature, then its transfer outside of Europe is forbidden.  Forbidden, that is, unless you have an “adequate” data export solution in place.
    So the good news is that you can export data internationally if you have an “adequate” solution in place, and the even better news is that there’s not one solution but three!  Phew!  Your choices are either:
    • sign up to the US-EU Safe Harbor Framework – a voluntary privacy framework for US-based importers of data, 
• execute so-called EU Model Clauses (also known as Standard Contractual Clauses) – standard form, non-negotiable data export agreements approved by the European Commission, or
    • implement Binding Corporate Rules – a binding organizational data governance policy framework reviewed and approved by European data protection authorities.
So far, so good.  But then comes the problem: each of these solutions suffers from serious drawbacks – Model Clauses are commercially impractical, Safe Harbor is mistrusted by European customers and regulators, and BCR are subject to a lengthy regulatory approval process that puts off well-intentioned businesses that would otherwise be willing to adopt them.
    A perilous future for Safe Harbor?
    To illustrate the issue, today roughly 4,000 US businesses rely on Safe Harbor to import personal data from Europe.  However, following the Snowden revelations, European legislators and regulators are increasingly reluctant to recognize the validity of Safe Harbor – believing that it no longer (or perhaps, never) provides “adequate” protection for data and, as such, should be suspended or revoked.  See here and here, for example.
Indeed, one case currently before the European Court of Justice may well decide Safe Harbor’s fate once and for all.  In Schrems v the Irish Data Protection Commissioner (Case C-362/14), one of the points the Court has to consider is whether Safe Harbor does in fact provide “adequate” protection for European data exports; if it decides the answer is no, then Safe Harbor could well be over.
But, frankly, whether or not that happens is largely an academic point.  Many US importers already find EU customers will refuse to contract with them if they rely on Safe Harbor.  And, even if it survives this court case, the European Commission has been threatening for some time to suspend Safe Harbor.  With this level of ongoing uncertainty, it’s inevitable that businesses are looking to the other options available to them.
    The problem with Model Clauses

    In nearly all cases, they next turn to Model Clauses as the solution to their data export woes.  On one level, doing so makes a lot of sense: Model Clauses are the darling of the regulatory community (after all, they created them), contain robust data protection terms, and so are often considered a ‘guaranteed compliant’ solution for the customers that use them.
The reality, though, is something different.  Model Clauses neither provide the protection for data that customers and regulators think they do, nor are they actually complied with in practice – more often than not, they’re signed, put in a drawer and forgotten about.  For data importing vendors, they are also woefully impractical – containing subcontracting controls that are unrealistic, excessive audit rights, and no liability limitations.  And, to add to all this, where lengthy international subcontracting chains are involved, exporters and importers will often be looking at an extremely complicated web of Model Clause contracts to prepare and sign.
    Taking that all into account, what right-minded person would really want to entrust any transfer of data to something so complicated and unworkable in practice?
    Which leaves BCRs
With Safe Harbor on its last legs, and Model Clauses suffering from all manner of problems, the final remaining solution available to data importers is Binding Corporate Rules.  In themselves, BCRs are a fine solution and often thought of (rightly) as the gold standard for data exports from the EU – after all, they have to get reviewed and signed off by European regulators.
    Further, the business adopting BCRs gets to draft them in a way that reflects the particular characteristics and needs of their organization and, once in place, BCRs can be self-managed by the business with minimal ongoing maintenance and regulatory oversight. The consequence of this is that they significantly reduce administrative burden and, for large organizations, even cost as compared with model clauses.
    But their single biggest drawback is the lack of any simple approval or self-certification process.  Adopting BCR, as anyone who’s been through the process knows, is not quick or straightforward.  While the end result is undoubtedly positive, the regulatory approval process typically takes around 18 months from start to finish.  Many organizations, faced with pressing data export needs, simply don’t have the time to hang around and so turn to quicker, off-the-shelf solutions.  
    So what do you do?
    The simple reality right now is that Europe has no good solution for facilitating international data exports, which is in stark contrast to increasingly globalized movements and storage of data.  Yet, be that as it may, data export compliance is an important component of European privacy law, and one that will not get any simpler in the short- to mid-term.
    Businesses are therefore left to consider what will be the most appropriate solution for their needs.  For US businesses, that will still often be Safe Harbor, but on the understanding that this cannot be relied upon as an “exclusive” solution for all their data exports needs and that, in many cases, they still need to be prepared to sign Model Clauses with important customers who insist on them.
    What is the most appropriate data export strategy for an international business then?  Here’s my suggestion:
1. If you’re a US business, rely on Safe Harbor to the extent you can.
2. Where you can’t, or if you are sending data to other non-EU countries, use Model Clauses (there’s really very little alternative).
3. But, to provide a more effective longer term solution, start the process now of preparing for and adopting BCR.  Once implemented, these will ultimately be a far more efficient solution that can replace the awkward pairing of Safe Harbor and Model Clause solutions.
    So while there’s no good solution, with some careful strategizing and forward thinking, you may at least get to a place that is – for want of a better word – adequate.

    Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015



    A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Service effective from 30 January 2015, a European Task Force, led by the Data Protection Agencies of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs”.

    The main findings of the report can be summarised as follows:

    Tracking through social plug-ins

    The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt-out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

    As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like-button. Moreover, this cookie is placed regardless of whether the non-user has clicked on that Like button or not. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not requested to consent to the placing of such cookie, this can also be considered a violation of the E-privacy Directive.

    Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.

    Other data protection issues identified

    In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

• Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that for many data uses, the only choice for users is to simply “take-it-or-leave-it”. This is considered to be a violation of the principle that in order for consent to be valid, it should be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP 187).
• Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties”, which gives them “a false sense of control”.
    • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.

    Conclusion

    The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the object of discussions in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘like’ button).

    The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been put high on the agenda in Belgium. It seems the Belgian Privacy Commission is more determined than ever to show that its enforcement strategy has changed. This can also be seen in the context of recent muscular declarations from the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

    Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss their privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes the Privacy Commission is asking for.

This could potentially lead to Facebook being prosecuted, although it is more likely that it will be forced to accept a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

Will no doubt be continued…

    EU privacy reform: are we nearly there yet?

Posted on February 7th, 2015



One thing everyone agrees on is that the EU needs new data protection rules. The current rules, now some 20 years old, are getting long in the tooth. Adopted at a time when having household Internet access was still a rare thing (remember those 56kbps dial-up modems, anyone?), there’s a collective view across all quarters that they need updating for the 24/7 connected world in which we now live.

    The only problem is this: we can’t agree what those new rules should look like. That shouldn’t really be a surprise – Europe is politically, culturally, economically and linguistically diverse, so it would be naive to think that reaching consensus on such an important and sensitive topic would be quick or easy.

    Nevertheless, whether through optimism, politicization, or plain naivety, there have been repeated pronouncements over the years that adoption of the new rules is imminent. Since the initial publication of the EU’s draft General Data Protection Regulation in January 2012, data protection pundits have repeatedly predicted it would all be done and dusted in 2012, 2013, 2014 and now – no surprises – in 2015.

    The truth is we’re a way off yet, as this excellent blog from the UK Deputy Information Commissioner highlights. Adoption of the new General Data Protection Regulation ultimately requires agreement to be reached, first, individually by each of the European Parliament and the Council of the EU on their respective preferred amendments to the original draft proposals; and then, second, collectively between the Parliament, the Council and the Commission via three-way negotiations (so-called “trilogue” negotiations).

    As at the date of this post, the Parliament has reached consensus on its preferred amendments to the draft, but the Council’s deliberations in this respect are still ongoing. That means the individual positions of both institutions have not yet been finalised, the trilogue negotiations have not yet begun, and so an overall agreed upon text is not yet even close. There’s still a mountain to climb.

Not that progress hasn’t been made – it has, but there’s still a long way to go and it’s very unlikely the new law will pass in 2015. Even when it does, the expectation is that it will be a further two years until it takes effect. In other words, don’t expect the new rules to bite any time before 2018 – six years after they were originally proposed.

    Why so long? Designing privacy rules fit for the 21st century is a difficult task, and the difficulty stems from the inherent subjectivity of privacy as a right. When thinking about what protections should exist, a natural consideration is what “expectation” of privacy individuals have. And therein lies the problem: no two people have the same expectations: what you expect and I expect are likely very different. Amplify those differences onto a national stage, and it becomes quickly apparent why discussions over new pan-European rules have become so protracted.

    How, then, to progress the debate through to conclusion?

    First, European lawmakers need to listen to the views of all stakeholders in the legislative process without prejudice or pre-judging their value. It’s far too simplistic to dismiss consumer advocates’ proposals as ‘impractical’, and equally disingenuous to label all industry concerns as just ‘lobbying’. Every side to the debate raises important points that deserve careful consideration. Insufficiently strong privacy protections will come at an expense to society, our human rights and our dignity; but, conversely, excessively strict regulation will impede innovation, hamper technological progress and restrict economic growth. A balance needs to be found, and ignoring salient points made by any side to the debate comes at a cost to us all.

    Once lawmakers accept this, then they must also accept compromise and not simply ‘dig in’ to already fortified positions. Any agreement requires compromise – whether a verbal agreement between friends, a written contract between counterparties, or even legislative agreement over new laws like the General Data Protection Regulation. At present, however, there is too much bluster, quarreling and entrenchment, where reason, level-headedness and compromise should prevail.

    When it comes to new data protection rules, a compromise – one that benefits all stakeholders of the information economy – is there to be struck: we just have to find it.

    WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014



On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling”), and in part II the WP29 provides a list of common criteria that the European Regulators would take into account when considering right to be forgotten (“RTBF”) related complaints from individuals.

    The Opinion is in line with the Ruling but it further elaborates on certain legal and practical aspects of it and it offers, as a result, an invaluable insight into European Regulators’ vision of the future of the RTBF.

    Some of the main ‘take-aways’ are highlighted below:

    Territorial scope

    One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

    Material scope

    The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search engine controllers that receive right to be forgotten requests?

    What will happen in practice?

    In the Opinion, the WP29 advises that:

    • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
    • Search engines must follow national data protection laws when dealing with requests.
    • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
    • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
    • Search engines are encouraged to publish their de-listing criteria.
    • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
• The WP29 also advises that search engines should not inform the original publishers of the de-listed material that certain pages have been de-listed in response to a RTBF request.

    You thought consent applies only to cookies?! Then guess again!

Posted on December 13th, 2014



    Imagine this: you walk into a big department store. You pick up a pair of running shoes and take them to the counter to purchase. The store has thousands of visitors every day, so to the sales assistant, you’re just another nameless face in the crowd.

    As you’re buying the shoes, the sales assistant hands you a note. On it is written some kind of seemingly meaningless number “Hteushrbt6123987!”. You ask the sales assistant what this means. “Oh,” he says, “it’s just a way for us to remember that you like sports equipment. This number is unique to you, so we make a note of it and record the fact that you like running shoes. Next time you come in, we’ll ask you for the number and look it up on our systems. That’ll tell us that you like running shoes, so we’ll then show you other sports products we think may interest you.”

    Slightly bemused, you pocket the paper, leave the store and return home. But, sometime later, you return to the store. As you enter, another shop assistant asks you if the store has ever given you a piece of paper with a number on it. You root around in your pockets, find the note, and hand it over. The shop assistant examines it, and taps away on a little handheld device he’s carrying. “Ah!” he says, “Number Hteushrbt6123987! You like running shoes, don’t you? Maybe you’d like to see some other running gear we have in stock? We have some new running vests in, you know – let me show you!”

    If such a thing existed, this is how cookie-based targeted advertising would work in the offline world. The note handed to you by the shop assistant represents, of course, a cookie: a piece of information stored with you that enables you (and so your shopping preferences) to be recognized next time you visit the shop so that the merchant can show you products it thinks will interest you – all without knowing your real name, address or other directly identifying details.
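For the technically inclined, here is a minimal sketch of that mechanic as a toy, in-memory store (all names are illustrative assumptions, not any real ad-tech API): the merchant issues an opaque token and records preferences against it, without ever learning who you are.

```python
# A toy sketch of cookie-style recognition: an opaque token stands in
# for the visitor, and preferences are recorded against the token alone.
import secrets

preferences = {}  # opaque token -> set of observed interests

def issue_token():
    """Hand the visitor their 'note': a random identifier, nothing more."""
    token = secrets.token_urlsafe(12)
    preferences[token] = set()
    return token

def record_interest(token, interest):
    preferences[token].add(interest)

def recommend(token):
    """On a return visit, look the token up and tailor what is shown."""
    return preferences.get(token, set())

note = issue_token()                    # first visit: you are handed the note
record_interest(note, "running shoes")  # the store records what you bought
print(recommend(note))                  # next visit: {'running shoes'}
```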

Depending on your personal preferences, you may think this is great (“They showed me stuff I wanted but without needing to know my personal details!”) or creepy (“They may not know my name, but that number is all they need to track and surveil me!”). That’s a debate that fiercely divides opinion in the privacy community.

    Imagining fingerprinting in the offline world

    But cookies aren’t the only way to identify someone. Imagine if instead of being handed a note, the sales assistant instead jotted down some of your personal characteristics: your age, height, weight and gender; the color of your hair (and whether you have any hair at all!); whether or not you wear glasses; your nationality and so on. We’re all unique, so if the sales assistant recorded enough of these details, the store wouldn’t need your name or to give you a number – they could recognize you simply from the information they’d collected about you: “Ah, yes, you’re the 6 foot, 36 year old dark-haired British male, weighing 180 pounds and wearing glasses, who likes running shoes. Let me show you our latest sportswear items!”

    In privacy terms, we call a uniquely defining aggregation of personal characteristics a ‘fingerprint’. Perhaps you have heard the term ‘device fingerprinting’ discussed as an alternative technology to cookies in the online world? In an online context, websites can collect device characteristics about the desktop or mobile based device visiting them – such as its IP address, browser type, screen resolution, installed font pack and so on. Gather enough of these details and you have a ‘device fingerprint’.
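To make this concrete, here is a minimal sketch (with illustrative attribute names, not any real browser API): hash enough individually unremarkable characteristics together and you get a stable identifier, with nothing at all stored on the visitor’s device.

```python
# A minimal sketch of device fingerprinting: individually unremarkable
# attributes, hashed together, act as a stable identifier. The attribute
# names below are illustrative assumptions.
import hashlib

def device_fingerprint(attributes):
    """Derive a stable identifier from observable device characteristics."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

fp = device_fingerprint({
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "2560x1440",
    "fonts": "Arial,Helvetica,Verdana",
})
print(fp)  # the same characteristics yield the same identifier on every visit
```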

    Fingerprinting and consent

    Over the past few years, some businesses have been swinging away from using cookies and towards using other tracking technologies, like device fingerprinting, because of concerns about EU “cookie consent” requirements. The thinking goes that if website cookies require consent, then a ‘cookieless’ technology like device fingerprinting should avoid the need for consent.

    For online businesses, the attractions are obvious: no more ugly cookie banners, no cumbersome user consent experiences, no more paying third party cookie compliance vendors. That logic may seem sound; unfortunately, it’s wrong.

    This is because “cookie consent” is a misnomer: it isn’t about cookies at all – it’s about online tracking, in whatever form that takes. This is clear both from the wording of Article 5(3) of the e-Privacy Directive (which creates the consent requirement but never uses the term “cookie”, referring instead to “information”) and from recent guidance on device fingerprinting published by the Article 29 Working Party (here). The long and short of it is that when an online service tracks its visitors by any means – cookies, device fingerprinting, LSOs, pixels, scripts or any other technology – consent requirements will apply.

    Choosing a consent strategy

    What’s less clear is what form that consent needs to take – namely, whether consent needs to be obtained on an opt-in basis (i.e. the assistant asks you if it’s ok to hand you the piece of paper with the number on it) or whether it can be implied if the visitor doesn’t opt-out (i.e. the assistant hands you the note with the number, and tells you to throw it away if you don’t want it). Because of this complexity, we keep a table of these different opt-in and opt-out standards around the EU, which you can see here.

    Deciding on the correct consent strategy for your online operations can be tricky, and depends on a number of factors including the necessity of the tracking you do, the context in which you do it, and the countries across which you operate (do you, for example, want a ‘one size fits all’ consent standard across all website operations or a country-by-country approach to consent based on local legal requirements and risk?)

    But, whatever you do, don’t do nothing. That would be like having the shop assistant reach over the till to superglue the number to you while your back was turned.

    And none of us would want a world where that would be acceptable.

    A New ISO Standard for Cloud Computing

Posted on November 5th, 2014



The summer of 2014 saw another ISO standard published by the International Organization for Standardization (ISO). ISO27018:2014 is a voluntary standard governing the processing of personal data in the public cloud.

With the catchy title of “Information technology – Security techniques – Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors” (“ISO27018”), it is perhaps not surprising that this long-awaited standard is yet to slip off the tongue of every cloud enthusiast.  European readers may have assumed references to PII meant this standard was aimed firmly at the US – wrong!

    What is ISO27018?

    ISO27018 sets out a framework of “commonly accepted control objectives, controls and guidelines” which can be followed by any data processors processing personal data on behalf of another party in the public cloud.

ISO27018 has been crafted by ISO to have broad application: from large organisations to small, and from public entities to governments and non-profits.

    What is it trying to achieve?

Negotiations in cloud deals which involve the processing of personal data tend to be heavily influenced by the customer’s perceptions of heightened data risk and sometimes very real challenges to data privacy compliance. This is a hurdle for many cloud adopters, as they relinquish control over data and rely on the actions of another (and sometimes those under its control) to maintain adequate safeguards. In Europe, until we see the new Regulation perhaps, a data processor has no statutory obligations when processing personal data on behalf of another. ISO27018 goes some way towards imposing on the processor a level of responsibility for the personal information it processes.

    ISO27018’s introductory pages call out its objectives:

    • It’s a tool to help the public cloud provider to comply with applicable obligations: for example there are requirements that the public cloud provider only processes personal information in accordance with the customer’s instructions and that they should assist the customer in cases of data subject access requests;
• It’s an enabler of transparency, allowing the provider to demonstrate why their cloud services are well governed: imposing good governance obligations on the public cloud provider around its information security organisation (eg the segregation of duties) and objectives around human resource security prior to (and during) employment, and encouraging programmatic awareness and training. Plus it echoes the asset management and access control elements of other ISO standards (see below);
• It will assist the customer and vendor in documenting contractual obligations: by addressing typical contractually imposed accountability requirements – data breach notification, imposing adequate confidentiality obligations on individuals handling data, flowing down technical and organisational measures to sub-processors, and requiring the documentation of data location. This said, a well-advised customer may wish to delve deeper, as this is not a full replacement for potential data controller to processor controls; and
• It offers the public cloud customer a mechanism to exercise audit and compliance rights: with ISO27018’s potential application across disparate cloud environments, it remains to be seen whether a third party could certify compliance against some of the broader data control objectives contained in ISO27018. However, regular review and reporting and/or conformity assessments may provide a means of vendor or third-party verification (potentially of more use where shared and/or virtualised server environments practically frustrate direct audit of data, systems and data governance practices by the customer).

    ISO27018 goes some way towards delivering these safeguards. It is also a useful tool for a customer to evaluate the cloud services and data handling practices of a potential supplier. But it’s not simple and it’s not a substitute for imposing compliance and control via contract.

    A responsible framework for public cloud processors

Privacy laws around the world prescribe nuanced, and sometimes no, obligations upon those who determine the manner in which personal information is used. Though ISO27018 is not specifically aimed at the challenges posed by European data protection laws, or those of any other jurisdiction for that matter, it is flexible enough to accommodate many of the inevitable variances. It cannot fit all current rules and may not fit future ones. However, in building in this flexibility, it loses some of its potential bite to generality.

Typically, entities adopting ISO27001 (Information security management) are seeking to protect their own data assets, but it is increasingly a benchmark standard for data management and handling among cloud vendors. ISO27018 builds upon ISO27002 (Information technology – Security techniques – Code of practice for information security controls), reflecting its controls but adapting them for the public cloud: mapping back to ISO27002 obligations where they remain relevant and supplementing these controls where necessary by prescribing additional controls for public cloud service provision (as set out separately in Annex A to ISO27018). As you may therefore expect, ISO27018 explicitly anticipates that a personal information controller would be subject to wider obligations than those specified here, which are aimed at processors.

    Adopting ISO27018

Acknowledging that the standard cannot be all-encompassing, and that the flavours of cloud are wide and varied, ISO27018 calls for an assessment to be made across applicable personal information “protection requirements”.  Specifically, it calls for the organisation to:

    • Assess the legal, statutory, regulatory and contractual obligations of it and its partners (noting particularly that some of these may mandate particular controls (for example preserving the need for written contractual obligations in relation to data security under Directive (95/46/EC) 7th Principle));
• Complete a risk assessment across its business strategy and information risk profile; and
• Factor in corporate policies (which may, at times, go further than the law for reasons of principle, global conformity or because of third party influences).

    What ISO27018 should help with

    ISO27018 offers a reference point for controllers who wish to adopt cloud solutions run by third party providers. It is a cloud computing information security control framework which may form part of a wider contractual commitment to protect and secure personal information.

As we briefly explained in an earlier post on our tech blog, the European Union has also spelled out its desire to promote uniform standard setting in cloud computing. ISO27018 could satisfy the need for a broadly applicable, auditable data management framework for public cloud provision. But it’s not EU-specific and lacks some of the rigour an EU-based customer may seek.

    What ISO27018 won’t help with

    ISO27018 is not an exhaustive framework. There are a few obvious flaws:

• It’s been designed for use in conjunction with the information security controls and objectives set out in ISO27002 and ISO27001, which provide general information security frameworks. This is a high threshold for small or emerging providers (many of which do not meet all these controls or certify to these standards today). So it is more accessible for large enterprise providers, but something to weigh up – the more controls there are, the more ways there are to slip up;
• While it may be used as a benchmark for security, coupled with contractual commitments to meet and maintain selected elements of ISO27018, it won’t be relevant to all cloud solutions and compliance situations (though some will use it as if it were);
• It perpetuates the use of the PII moniker which, already holding a specific US legal connotation (i.e. narrower application), is now used in a more widely defined context under ISO27018 (in fact, PII under ISO27018 is closer to the definition of personal data under EU Directive 95/46/EC). This could confuse stakeholders in multi-national deals, and the corresponding use of PII in the full title of ISO27018 potentially misleads as to the standard’s applicability and use cases;
• ISO27018 is of no use in situations where the cloud provider is (or assumes the role of) data controller, and it assumes all data in the cloud is personal data (so watch this space for ISO27017 (coming soon), which will apply to any data (personal or otherwise)); and
    • For EU based data controllers, other than constructing certain security controls, ISO27018 is not a mechanism or alternative route to legitimise international data transfers outside of the European Economic Area. Additional controls will have to be implemented to ensure such data enjoys adequate protection.

    What now?

    ISO27018 is a voluntary standard and not law and it won’t entirely replace the need for specific contractual obligations around processing, accessing and transferring personal data. In a way its ultimate success can be gauged by the extent of eventual adoption. It will be used to differentiate, but it will not always answer all the questions a well-informed cloud adaptor should be asking.

It may be used in whole or in part and may be asserted and used alongside or as part of contractual obligations, information handling best practice, or simply as a benchmark which a business will work towards. Inevitably there will be those who treat the Standard as if it were law, without thought about what they are seeking to protect against and what potential wrongs they are seeking to right.  If so, they will not reap the value of this kind of framework.

    What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



    In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

    Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was a more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of personal data in the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual – which, by the way, is very difficult to obtain at the required standard – leaves companies processing IoT data nowhere to go, is likely to unnecessarily stifle innovation, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

    Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary in several situations, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

    Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

    Our approach

    Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

    1. The IoT is global. The law is not.
    2. The law is changing, in Europe and around the world.
    3. The law is actively enforced, with increasing international cooperation.
    4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
    5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.

    This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

    1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
    2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
    3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
4. Understand what IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals (a short pseudonymisation sketch follows at the end of this list).
    5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
    7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
    8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
    9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
    10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
    11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
    12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
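On point 4, a worked example may help show the difference between pseudonymisation and anonymisation in practice. Here is a minimal sketch of keyed pseudonymisation (the key and device ID are illustrative assumptions); note that because whoever holds the key can re-identify the device, regulators may well still treat the output as personal data.

```python
# A minimal sketch of keyed pseudonymisation for an IoT identifier.
# Records remain linkable to one another, but only a holder of SECRET_KEY
# can recompute the mapping and re-identify the device - which is why
# pseudonymised data may still count as personal data.
import hashlib
import hmac

SECRET_KEY = b"store-this-separately-and-rotate-it"  # illustrative value

def pseudonymise(device_id):
    """Same device in -> same token out; re-identification needs the key."""
    return hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()[:20]

reading = {"device": pseudonymise("AA:BB:CC:DD:EE:FF"), "temp_c": 21.4}
print(reading)  # the raw MAC address never enters the analytics data set
```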