Archive for the ‘Data as an asset’ Category

How to build a data protection compliance program from scratch

Posted on December 14th, 2015



It’s a daunting task.  You’re the newly appointed data privacy person in your organisation – either because you applied for the role or because someone “volunteered” you for it – and now you have to build out a data protection compliance program.  Worldwide.  From scratch.  What do you do?

Well, first off, take comfort in knowing that you’re not alone.  Thanks to a flurry of recent data privacy activity in the EU (Right to be Forgotten, Safe Harbor, GDPR) and beyond (the emergence of Asia-Pac privacy regimes, the Canadian Anti-Spam law, high profile US data breaches), the need for data protection compliance has hit the C-suite agenda like never before.  Execs everywhere are turning to folk like you to solve the problem.

And some more good news: there’s very little you can do wrong.  If you’re being tasked with building out a global data protection compliance program, odds are your organisation doesn’t have much of a program currently.  So every step you take, no matter how small, is a step in the right direction.  Better still, with a little forethought, not only will you NOT go wrong, you will deliver SIGNIFICANT benefits in terms of compliance, risk reduction and brand enhancement.

Here’s how you go about it:

1.  Decide what kind of organisation you want to be.

It sounds so simple, but this step is key.  What is your data protection strategy?  Is it legally-driven (goal = legal compliance), risk-driven (goal = risk reduction) or ethics-driven (goal = do the right thing)?

This crucial decision will be dependent on many factors, including the nature of your organisation (a mature, regulated business may have very different goals from your Silicon Valley start-up), your values as an organisation (what does your Code of Conduct say?), how much top-level support you have, what your competitors do, available budget, privacy ‘crises’ the business has experienced in recent history, and your personal beliefs as the organisation’s data privacy evangelist – to name just a few.

These aren’t exclusive strategies, either – often the “right” approach will be some combination of the three, but perhaps with a particular leaning towards one goal in particular.  In any event, the decision taken at this point will inform every subsequent action you take, so consider wisely.

In addition to this, you need to identify your baseline privacy standards – i.e. the privacy framework against which you will benchmark your compliance.  Will you use the EU Data Protection Directive, the US Fair Information Practice Principles, or perhaps something with more of an international flavour – like BCR, CBPR or the OECD Principles?

Remember, this is about deciding your baseline – depending on where you operate geographically, you may need to raise yourself above this baseline in some countries, but you at least need a baseline in the first place to bring some kind of global consistency to the way your organisation protects data.

2.  Find out what kind of organisation you are today.

Before you can embark on putting in place compliance controls, you need to do a little fact-finding.  Among the things you need to find out are:

  • what data you process today, how, why and where;
  • who are your internal data privacy ‘champions’ (you’ll need them) and your data privacy ‘trolls’ (you’ll need to win them over);
  • what policies, procedures, guidance and training, if any, you already have – and what kind of state they’re in; and
  • the level of awareness that exists within the organisation to date about the importance of data protection compliance.

Depending on the size of your organisation, this can be a challenging task, so identify others who can support you in this process – the data privacy ‘champions’ mentioned above, whether business unit leaders, country managers, or just internal privacy enthusiasts.  Only once you are armed with this information will you be ready to determine what you need to do next.
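A practical way to capture the first of those fact-finding items – what data you process today, how, why and where – is a simple, structured data inventory. Here is a minimal illustrative sketch in Python; the field names and sample values are hypothetical, not a prescribed standard.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingRecord:
    """One entry in a hypothetical data inventory (fields are illustrative)."""
    data_category: str    # what data is processed, e.g. "customer contact details"
    purpose: str          # why it is processed
    systems: List[str]    # how / where it is held
    countries: List[str]  # where it is processed or transferred to
    owner: str            # the internal 'champion' accountable for it

inventory = [
    ProcessingRecord(
        data_category="customer contact details",
        purpose="order fulfilment and support",
        systems=["CRM", "helpdesk"],
        countries=["UK", "US"],
        owner="Head of Customer Operations",
    ),
]
```

Even a spreadsheet with these columns is enough to support the gap analysis described in the next step.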

Which leads nicely onto the next point…

3.  Work out how to become the kind of organisation you want to be.

The next stage is a gap analysis.  You know what you are today, you know what you want to become, so work out the gaps.  Once you’ve identified the gaps, then you’ll be ready to start putting in place the measures necessary to fill them.

When performing this gap analysis, be careful to prioritise though.  Not all gaps carry equal importance – some will pose significant risks, either to individuals directly or in terms of organisational risk, and these should be addressed first (for example, you may discover that sensitive personal information is being shared internally, or even worse, externally, in an uncontrolled fashion).  Those that are less significant (say, not having sorted out your website privacy policy in a while) should be pushed lower down the priority list.

When you’ve identified your gaps, then the real fun begins – you need to figure out how to plug those gaps!  That will entail a combination of many activities, typically including things like creating a compliance team, adopting new policies, instituting training, building out Privacy by Design processes, creating supplier due diligence standards, designing new contract templates, and more.  If needs be, look to peers in similar organisations or call upon external experts for guidance.

4.  Become the organisation you want to be.

You know what I’m going to say next: you’ve figured out how to plug the gaps, so plug them already!  Transform your organisation from where you are today to where you want to be.

5.  Rinse, wash and repeat.

A privacy professional’s work is never done.  It’s important to remember that a compliance program is just that: a program, not a project.  That means it must undergo review to ensure that it remains valid and up-to-date, and that it works well in practice – and, if not, it needs changing.  You must institute regular audits to ensure this is the case.

Metrics can help here.  You can assess the success of the compliance program you have instituted through a number of potential metrics – for example, privacy awareness among staff, number of privacy complaints reported, data breaches suffered, and so on.

These metrics will not only help you assess the ongoing success of your program, but also help you demonstrate ROI to your sponsors and executives.  And, once you’ve done that, you get to begin all over again!

EU proposes new consumer rights for the return of data exchanged for digital content

Posted on December 10th, 2015



We’ve previously commented in some depth on the EU’s Digital Single Market proposals, most of which are currently out to consultation. The European Commission today set out new plans for two proposals under this DSM strategy to better protect consumers who shop online across the EU and help businesses expand their online sales. There’s more detail on the ecommerce issues at our sister Tech Blog.

The online context

In a nutshell, the EU is concerned that EU-based online consumers enjoy a variety of different online rights from country to country, and that this significantly complicates compliance for eVendors. It creates real difficulties for any eVendor looking to address all EU markets with its services. In particular, there are no consistent consumer rights around the supply of “digital content” (a term not even recognised in the laws of some Member States).

The EU proposal for a Digital Content Directive

One of today’s proposals from the European Commission is a draft for a new Directive on the supply of digital content, e.g. streaming music, online games, apps or e-books (see the text here) (the “draft Directive”). We’re told the “proposals will tackle the main obstacles to cross-border e-commerce in the EU: legal fragmentation in the area of consumer contract law and resulting high costs for businesses – especially SMEs – and low consumer trust when buying online from another country.”

But what’s this got to do with “data”?

“That’s all ecommerce,” you say, “and this is a data privacy blog.” Well, in today’s digital economy, information about individuals is often as valuable as money. Digital content is often supplied in exchange for the consumer giving access to personal or other data. In the draft Directive this is somewhat clumsily termed “use of the counter-performance other than money”. With this in mind, and with the desire to treat the exchange of data in the same way as the exchange of money, Articles 12 to 16 of the draft Directive address consumer rights in digital content contracts established in exchange for data.

An eVendor must cease data use upon contract termination

Importantly, under the draft Directive proposals, if an EU consumer has obtained digital content or a digital service in exchange for data or personal data, the new rules clarify that the eVendor should stop using that data if the contract is ended. What’s more, the eVendor should return it!

In cases of a lack of conformity with the contract, the consumer shall be entitled to have digital content they’ve “purchased” (or participated in the “use of the counter-performance other than money”!) brought into conformity with the contract free of charge. If this can’t be done (and subject to some other provisions I’ll spare you here), the consumer may be entitled either to a proportionate reduction in price or to terminate the contract – Article 12.

There are similar proposals in the event termination rights are exercised in respect of digital content provided and then modified by the eVendor over a period of time. If a subsequent modification adversely impacts the access to, or use of the content, then the consumer has a termination right in certain prescribed circumstances – Article 15.

There are also similar termination rights proposed in respect of long-term contracts (lasting more than 12 months) – Article 16.

When a contract for digital content terminates

What’s more, in any of the above circumstances, where the consumer terminates the contract for digital content that has been entered into in exchange for data instead of money:

  • The eVendor “shall take all measures which could be expected” to cease use of (1) any data which the consumer has provided in exchange for the digital content; and (2) any other data collected by the eVendor in relation to the supply of the digital content (including any content provided by the consumer, but with the exception of content which has been generated jointly by the consumer and others who continue to make use of the content); and
  • The eVendor shall provide the consumer with technical means to “retrieve all content provided by the consumer and any other data produced or generated through the consumer’s use of the digital content to the extent that data has been retained by the eVendor“. What’s more the consumer “shall be entitled to retrieve the content free of charge, without significant inconvenience, in reasonable time and in a commonly used data format unless this is impossible, disproportionate or unlawful“.

There is no distinction between personal data and other data, so the proposed rules are quite pervasive. The Recitals to the draft Directive state “[f]ulfilling the obligation to refrain from using data should mean in the case when the counter-performance consists of personal data, that the supplier should take all measures in order to comply with data protection rules by deleting it or rendering it anonymous in such a way that the consumer cannot be identified by any means likely reasonably to be used either by the supplier or by any other person.” This reads as a positive obligation to delete, not purely a reactive step should the consumer request it.

This is HUGE! For any eVendor, isolating and stopping the use of discrete data sets relating to an individual consumer is hard enough. Designing and perfecting a mechanism to trace and then return any and all data sets specific to a customer is something else. This is a data identification and portability conundrum of extreme proportions. As above, the draft expressly applies to any data (and not just personal data).
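To make the scale of that conundrum a little more concrete, here is a minimal Python sketch of what the “retrieve and cease use” obligations might look like in practice. It assumes, purely for illustration, that the eVendor already maintains a per-consumer index of every data set it holds – which, as noted above, is precisely the hard part.

```python
import json
from typing import Dict, List


class ConsumerDataStore:
    """Hypothetical per-consumer index of the data sets an eVendor holds."""

    def __init__(self) -> None:
        # consumer_id -> {data_set_name: [records]}
        self.records: Dict[str, Dict[str, List[dict]]] = {}

    def export_for_consumer(self, consumer_id: str) -> str:
        """Let the consumer retrieve everything held about them in a
        commonly used format (JSON here), as the draft Directive envisages."""
        return json.dumps(self.records.get(consumer_id, {}), indent=2)

    def cease_use(self, consumer_id: str) -> None:
        """On termination, stop using (here, delete) every data set linked to
        the consumer. A real system would also need to reach backups,
        analytics copies and data shared with processors."""
        self.records.pop(consumer_id, None)


store = ConsumerDataStore()
store.records["c-123"] = {
    "registration": [{"name": "A. Reader", "email": "reader@example.com"}],
    "survey_responses": [{"q1": "yes"}],
}

portable_copy = store.export_for_consumer("c-123")  # return the data first...
store.cease_use("c-123")                            # ...then stop using it
```

The difficulty, of course, is not the final delete but building and maintaining that per-consumer index across every system in the first place.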

In context, say I download a free eBook in return for my personal details and perhaps the completion of an online survey. The book reads well, but at chapter 7 I can no longer advance the pages, and the eVendor cannot cure this despite my demands. As a consumer, I’ll have a right to terminate. At that point the eVendor of the book must stop using my details and cease using the data from my survey. Additionally, all that data must be identified and returned! Thankfully, the eVendor would not have to identify and cease to use certain metadata relating to how fast, when and on which devices I read the eBook (see below – it seems that’s outside the draft Directive’s scope). If I’m honest, for a free eBook I’m not sure I care about the return of my data (but an Austrian student with a good legal background and time on his or her hands will!).

When would the rules apply?

This is a first draft proposal and will undoubtedly be subject to intense lobbying and debate in the coming months. Even once passed, as a directive, it would take up to 24 months to incorporate the rules into the local law of Member States.

The accompanying impact assessment stressed that in particular the draft Directive should cover services which allow the creation, processing or storage of data. “While there are numerous ways for digital content to be supplied, such as transmission on a durable medium, downloading by consumers on their devices, web-streaming, allowing access to storage capabilities of digital content or access to the use of social media, this Directive should apply to all digital content independently of the medium used for its transmission“. The Directive does not cover services performed with a significant element of human intervention or contracts governing specific sectorial services such as healthcare, gambling or financial services.

For now, the draft Directive should apply only to contracts where the eVendor “requests and the consumer actively provides data, such as name and e-mail address or photos, directly or indirectly to the supplier for example through individual registration or on the basis of a contract which allows access to consumers’ photos“.

This Directive should not apply to situations where:

  • the eVendor “collects data necessary for the digital content to function in conformity with the contract, for example geographical location where necessary for a mobile application to function properly, or for the sole purpose of meeting legal requirements, for instance where the registration of the consumer is required for security and identification purposes by applicable laws”;
  • data collected is “strictly necessary for the performance of the contract or for meeting legal requirements and the supplier does not further process them in a way incompatible with this purpose“;
  • the eVendor collects information, “including personal data, such as the IP address, or other automatically generated information such as information collected and transmitted by a cookie, without the consumer actively supplying it, even if the consumer accepts the cookie“; and
  • the consumer is “exposed to advertisements exclusively in order to gain access to digital content“.

What about other privacy rules (and presumably the GDPR)?

Article 3 of the draft Directive clarifies that in case of conflict between the Directive and another EU act, the other EU act takes precedence. In particular, it clarifies that the Directive is without prejudice to the rules on data protection.

In terms of general proposed scope, the draft Directive “covers the supply of all types of digital content“. It also covers “digital content supplied not only for a monetary payment but also in exchange for (personal and other) data provided by consumers, except where the data have been collected for the sole purpose of meeting legal requirements“.

You thought you had enough new law to deal with.

Mark Webber – Partner, Silicon Valley, California mark.webber@fieldfisher.com

The Digital Single Market: Has Europe bitten off more than it can chew?

Posted on May 8th, 2015



You may have read a lot of chatter about the European Commission’s Digital Single Market (DSM) over the last two days. The reaction in the blogosphere has already been a mix of optimism, hope, consternation, cynicism… and general Brussels fatigue.

What is the DSM?

In a nutshell, it is a strategy that seeks to create a true ‘single market’ within the EU – that is, a market where there is total free movement of goods, persons, services and capital; where individuals and businesses can seamlessly and fairly access online services, regardless of where in the EU they are situated.

Theoretically, EU citizens will finally be able to use their mobile phones across Europe without roaming charges, and access the same music, movies and sports events online at the same price wherever they are.

Whatever the public reaction, there is no doubt that the DSM is a highly ambitious strategy. It sets out wide-ranging legislative initiatives across a vast range of issues: copyright, e-commerce, geo-blocking, competition, cross-border shipment, data protection and telecoms regulation.

Much has already been written about these proposals and the Fieldfisher team has written this great summary of all the legislative proposals.

For the readers of this blog, we’d like to focus only on those proposals that relate to privacy and data protection.

Privacy & Data Protection issues

In our view, data issues lie at the heart of these reforms and there are 4 key initiatives that impact directly on these rights:

1. Review of data collection practices by online platforms

As part of the DSM, the Commission is proposing a “comprehensive analysis” of online platforms in general, including anything from search engines and social media sites to e-commerce platforms, app stores and price comparison sites.

One of the concerns of the Commission is that online platforms generate, accumulate and control an enormous amount of data about their customers and use algorithms to turn this into usable information. One study it looked at, for example, concluded that 12% of search engine results were personalized, mainly by geo-location, prior search history, or by whether the user was logged in or out of the site.

The Commission found that there was a worrying lack of awareness by consumers about the data collection practices of online platforms: they did not know what data about their online activities was being collected and how it was being used. In the Commission’s view, this not only interfered with the consumers’ fundamental rights to privacy and data protection, it also resulted in an asymmetry between market actors.

As platforms can exercise significant influence over how various players in the market are remunerated, the Commission has decided to gather “comprehensive evidence” about how online platforms use the information they acquire, how transparent they are about these practices and whether they seek to promote their own services to the disadvantage of competitors. Proposals for reform will then follow.

2. Review of the e-Privacy Directive

The e-Privacy Directive is currently a key piece of privacy legislation within the EU – governing the rules for cookie compliance, location data and electronic marketing amongst other things.

Not a huge amount has been said about this review in the DSM documents. All that we know at this stage is that the Commission plans to review the e-Privacy Directive after the adoption of the General Data Protection Regulation, with a focus on “ensuring a high level of protection for data subjects and a level playing field for all market players”. For instance, the Commission has said that it will review the e-Privacy Directive to ensure “coherence” with the new data protection provisions, and consider whether it should apply to a much wider set of service providers. It further says that the rules relating to online tracking and geo-location will be re-evaluated “in light of the constant evolution of technology” (Staff Working Document, p. 47).

3. Cloud computing and big data reforms

Cloud computing and big data services haven’t escaped the grasp of the Commission either. The Commission sees these types of services as central to the EU’s competitiveness. European companies are lagging significantly behind in their adoption and development of cloud computing and big data analytics services.

In its report, the Commission has diagnosed a number of key reasons for this lag:

  • EU businesses and consumers still do not feel confident enough to adopt cross-border cloud services for storing or processing data because of concerns relating to security, compliance with privacy rights, and data protection more generally.
  • Contracts with cloud providers often make it difficult for customers to terminate or unsubscribe and to port their data to a different cloud provider.
  • Data localization requirements within Member States create barriers to cross-border data transfers, limiting competitive choice between providers and raising costs by forcing businesses to store data on servers physically located inside a particular country.

The Commission is therefore proposing to remove what it sees as a series of “technical and legislative barriers” – such as rules restricting the cross-border storage of data within the EU, the fragmented rules relating to copyright, the lack of clarity over the rights to use data, the lack of open and interoperable systems, and the difficulty of data portability between services.

4. Step up of cyber-security reforms

Cyber threats have led to significant economic losses, huge disruptions in services, violations of citizens’ fundamental rights and a breakdown in public trust in online activities. The Commission proposes to step up its efforts to reduce cybersecurity threats by requiring a more “joined up” approach by the EU industry to stimulate take up of more secure solutions by enterprises, public authorities and citizens. In addition, it seeks a “more effective law enforcement response” to online criminal activity.

Too ambitious…? 

The above is just the tip of the iceberg of the reforms that are being proposed. Outside of privacy and data protection issues, the DSM Strategy includes initiatives such as harmonizing copyright laws, extending media regulation to all online platforms, and prohibiting unjustified geo-blocking.

As with all ambitious reforms of this kind in the EU, there will be vocal critics on both sides, and a huge degree of political scrutiny. The timetable for completion is either the end of 2015 or the end of 2016 but, no doubt, it will be years before any legislation is actually signed off and transposed into national law.

In an industry which changes at such a rapid speed – week after week, month after month – the real danger of EU reform is that such legislation can already be conceptually outdated by the time it is brought into force and a whole new set of problems may, by then, have emerged.

But whatever the eventual outcome of these legislative initiatives, it is clear that there is an important, wider debate to be had about the global digital market: Why is the rest of the world so behind the US? What is the secret to the US’ success and dominance? Do these proposals really go to the heart of the problem? Such questions merit a post, if not a treatise, of their own. We should perhaps show some admiration towards the European Commission for trying to tackle these deep and knotty issues head on.

 

 

Vidal-Hall v Google: A new dawn for data protection claims?

Posted on April 15th, 2015



The landmark judgment of the Court of Appeal in Vidal Hall & Ors v Google Inc may signal the dawn of a new beginning for data protection litigants. Prior to this case, the law in England was relatively settled: that in order to incur civil liability under the Data Protection Act 1998, the claimant had to establish at least some form of pecuniary damage (unless the processing related to journalism, art or literature). The wording of section 13(2) appeared unequivocal on this point and it frequently proved to be a troublesome hurdle for claimants – and a powerful shield for defendants.

The requirement, however, was always the source of some controversy and the English courts have tried in recent years to dilute the strictness of the rule.

Then enter Ms Vidal-Hall & co: three individuals who allege that Google has been collecting private information about their internet usage from their Safari browser without their knowledge or consent. Claims were brought under the tort of misuse of private information and under s.13 of the DPA, though there was no claim for pecuniary loss.

This ruling concerned only the very early stages of litigation – whether the claimants were permitted even to serve the claim on Google which, being based in California, was outside the jurisdiction. Permission was granted by the Court of Appeal and the case will now proceed through the English courts.

Three key rulings lie at the heart of this judgment:

  • There is now no need to establish pecuniary damage to bring a claim under the DPA. Distress alone is sufficient.
  • It is “arguable” that browser generated information (BGI) constitutes “personal data” under the DPA.
  • Misuse of private information should be classified as a tort for the purposes of service out of the jurisdiction.

We take each briefly in turn:

(1) Damages for distress alone are sufficient

The Court of Appeal disapplied the clear wording of domestic legislation on the grounds that the UK Act could not be interpreted compatibly with Article 23 of the EU Directive, and Articles 7, 8 and 47 of the EU Charter of Fundamental Rights. It held that the main purpose of the Data Protection Directive was to protect privacy, rather than economic rights, and it would be “strange” if it could not compensate those individuals who had suffered emotional distress but no pecuniary damage, when distress was likely to be the primary form of damage where there was a contravention.

It is too early to say whether this ruling will in practice open the door to masses of litigation – but there is no doubt that a significant obstacle that previously stood in the way of DPA claimants has now been unambiguously lifted by the Court of Appeal.

(2) Browser-generated information may constitute “personal data”

A further interesting, though less legally ground-breaking, ruling was that the BGI data in this case was arguably “personal data” under the DPA. The Court of Appeal did not decide the issue, but held that there was at least a “serious issue to be tried”.

Google had argued that: (a) the BGI data on its own was anonymous as it did not name or identify any individual; and (b) it kept the BGI data segregated from other data it held from which an individual might be identifiable (e.g. Gmail accounts). Thus, it was not personal data.

In response to Google’s points, the Court considered that it was immaterial that the BGI data did not name the user – what was relevant was that the data comprised detailed browsing histories and the use of a DoubleClick cookie (a unique identifier which enabled the browsing histories to be linked to a specific device/machine). Taking those two elements together, it was “possible” to equate an individual user with the particular device, thus potentially bringing the data under the definition of “personal data”.

The Court further considered it immaterial that Google in practice segregated the BGI data from other data in its hands. What mattered was whether Google had the other information actually within its possession which it “could” use to identify the data subject, “regardless of whether it does so or not”.

(3) Misuse of private information is a tort

Finally, there was the confirmation that the misuse of private information is a tort for the purposes of service out of the jurisdiction. Not a huge point for our readers, but it will mean that claimants who bring claims under this cause of action will more easily obtain service out of the jurisdiction against foreign defendants.

A turning point…?

So the judgment certainly leaves much food for thought and is a significant turning point in the history of data protection litigation. There may also be a wider knock-on effect within the EU as other Member States that require proof of pecuniary damage look to the English judgment as a basis for opening up pure distress claims in their own jurisdictions.

The thing to bear in mind is that the ruling concerned only the very early stages of litigation – there is still a long road ahead in this thorny case and a great many legal and factual issues that still need to be resolved.

Cookie droppers may be watching this space with a mixture of fear and fascination.

 

 

Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015



A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Services effective from 30 January 2015, a European Task Force, led by the Data Protection Agencies of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs“.

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt-out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like-button. Moreover, this cookie is placed regardless of whether the non-user has clicked on that Like button or not. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not requested to consent to the placing of such cookie, this can also be considered a violation of the E-privacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is simply to “take it or leave it”. This is considered to be a violation of the principle that, in order for consent to be valid, it should be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP 187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties”, which gives them “a false sense of control”.
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.

Conclusion

The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the object of discussions in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘like’ button).

The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been put high on the agenda in Belgium. It seems the Belgian Privacy Commission is more determined than ever to show that its enforcement strategy has changed. This also sits in the context of recent forceful declarations from the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss their privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that the Commission will not hesitate to take legal action against Facebook if the latter refuses to implement the changes for which the Privacy Commission is asking.

This could potentially lead to Facebook being prosecuted, although it is more likely that it will be forced to accept a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

Will no doubt be continued…

 

 

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014



In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that the very concept of the Internet of Things (“IoT”) is somewhat ill-defined, making the point that there is no settled definition of IoT and that, even if there were, the definition would only change. What’s more, IoT will mean different things to different people and refer to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety, and data privacy / security. Equally, with a number of open questions about how the IoT will work and how devices will communicate and identify each other, there is also a lack of standards and industry-wide co-operation around IoT.

Because the IoT is frequently based around data use, often with potentially intrusive applications in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in IoT are diverse, with a range of regulatory mandates and sometimes with defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by IoT, and a few are reaching out to the industry as a whole to consult and stimulate discussion. We’re more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It’s possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand increases for a scarce resource, what kind of spectrum allocation is “fair” and “optimal”, and is some machine-to-machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and exchange and identification protocols may themselves be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because the continued interconnectivity of multiple devices raises potential issues.

In issuing a new Consultation, “Promoting investment and innovation in the Internet of Things”, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:

  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which require resilience and security (the corresponding issue of privacy is also recognised);
  • the need for each connected device to have an assigned name or identifier, and just how those addresses should be determined and potentially how they would be assigned; and
  • understanding its own potential role as the UK’s regulator in an area (connectivity) key to the evolution of IoT.

In a varied and quite accessible paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology”. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while the Article 29 Working Party wades in early with a warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time, and in 2013 the Commission reported back on the Conclusions of its Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it’s impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that its development must “respect the many privacy and security challenges which can be associated with IoT”.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities” and “smart transport”, or M2M (“machine to machine”) developments. Yet the principles and recommendations in the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, not to mention proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within some applications of IoT will not always be easy. Security is another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and legal responsibilities reaching beyond the data controller to the data processor, each exerting its own force over IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone: with its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently with Congress and the “Black Box Privacy Protection Act” with the House of Representatives. Each now apparently has a low chance of actually passing, but they would respectively regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission who hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet whose feeds were not secure) there’s no doubt the evolution of IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet”.

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; with little to say in the legal sense, perhaps even now I’ve spoken too soon? What is clear is that we’re already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into the existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, we have to ask the right questions to build a picture of the technology and the way it communicates, and to figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data or processes, or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto, California mark.webber@fieldfisher.com

The legal and practical realities of “personal data”

Posted on September 3rd, 2014



Are IP addresses personal data?  It’s a question I’m so frequently asked that I thought I’d pause for a moment to reflect on how the scope of “personal data” has changed since the EU Data Protection Directive’s adoption in 1995.

The Directive itself defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.

That’s not the beginning and the end of the story though.  Over the years, various regulatory guidance has been published that has further shaped what we understand by the term “personal data”.  This guidance has taken the form of papers published by the Article 29 Working Party (most notably Opinion 4/2007 on the Concept of Personal Data) and by national regulators like the UK’s Information Commissioner’s Office (see here).  Then throw in various case law that has touched on this issue, like the Durant case in the UK and the European Court of Justice rulings in Bodil Lindqvist (Case C-101/01) and the Google Right to Be Forgotten case (C-131/12), and it’s apparent that an awful lot of time has been spent thinking about this issue by an awful lot of very clever people.

The danger, though, is that the debate over what is and isn’t personal data can often get so weighted down in academic posturing, that the practical realities of managing data often get overlooked.  When I’m asked whether or not data is personal, it’s typically a loaded question: the enquirer wants to know whether the data in question can be retained indefinitely, or whether it can be withheld from disclosures made in response to a subject access request, or whether it can be transferred internationally without restriction.  If the data’s not personal, then the answer is: yes, yes and yes.  If it is personal, then the enquirer needs to start thinking about how to put in place appropriate compliance measures for managing that data.

There are, of course, data types that are so obviously personal that it would be churlish to pretend otherwise: no one could claim that a name, address or telephone number isn’t personal.  But what should you do when confronted with something like an IP address, a global user ID, or a cookie string?  Are these data types “personal”?  If you’re a business trying to operationalise a privacy compliance program, an answer of “maybe” just doesn’t cut it.  Nor does an answer of “err on the side of caution and treat it as personal anyway”, as this can lead to substantial engineering and compliance costs in pursuit of a vague – and possibly even unwarranted – benefit.

So what should you do?  Legal purists might start exploring whether these data types “relate” to an “identified or identifiable person”, as per the Directive.  They might note that the Directive mentions “direct or indirect” identification, including by means of an “identification number” (an obvious hook for arguing an IP address is personal data).  They might explore the content, purpose or result of the data processing, as proposed by the Article 29 Working Party, or point out that these data types “enable data subjects to be ‘singled out’, even if their real names are not known.”  Or they might even argue the (by now slightly fatigued) argument that these data types relate to a device, not to a person – an argument that may once have worked in a world where a single computer was shared by a family of four, but that now looks increasingly weak in a world where your average consumer owns multiple devices, each with multiple unique IDs.

There is an alternative, simpler test though: ask yourself why this data is processed in the first place and what the underlying individuals would therefore expect as a consequence.  For example: Is it collected just to prevent online fraud or is it instead being put to use for targeting purposes? Depending on your answer, would individuals therefore expect to receive a bunch of cookie strings in response to a subject access request?  How would they feel about you retaining their IP address indefinitely if it was held separately from other personal identifiers?

The answers to these questions will of course vary depending on the nature of the business you run – it’s difficult to imagine a Not For Profit realistically being expected to disclose IP addresses contained in web server logs in response to a subject access request, but it’s perhaps not a huge stretch, say, for a targeted ad platform.  The point is simply that trying to apply black and white boundaries to what is, and isn’t, personal will, in most cases, prove an unhelpful exercise and be wholly devoid of context.  That’s why Privacy Impact Assessments are so important as a tool to assess these issues and propose measured, proportionate responses to them.

The debate over the scope of personal data is far from over, particularly as new technologies come online and regulators and courts continue to publish decisions about what they consider to be personal.  But, faced with practical compliance challenges about how to handle data in a day-to-day context, it’s worth stepping back from legal and regulatory guidance alone.  Of course, I wouldn’t for a second advocate making serious compliance decisions in the absence of legal advice; it’s simply that decisions based on legal merit alone risk not giving due consideration to data subject trust.

And what is data protection about, if not about trust?

 

Anonymisation is great, but don’t undervalue pseudonymisation

Posted on April 26th, 2014



Earlier this week, the Article 29 Working Party published its Opinion 05/2014 on Anonymisation Techniques.  The opinion describes (in quite some technical detail) the different anonymisation techniques available to data controllers, their relative values, and makes some good practice suggestions – noting that “Once a dataset is truly anonymised and individuals are no longer identifiable, European data protection law no longer applies“.

This is a very significant point – data, once truly anonymised, is no longer subject to European data protection law.  This means that EU rules governing how long that data can be kept for, whether it can be exported internationally and so on, do not apply.  The net effect of this should be to incentivise controllers to anonymise their datasets, shouldn’t it?

Well, not quite.  Because the truth is that many controllers don’t anonymise their data, but use pseudonymisation techniques instead.  

Difference between anonymisation and pseudonymisation

Anonymisation means transforming personal information into data that “can no longer be used to identify a natural person … [taking into account] ‘all the means likely reasonably to be used’ by either the controller or a third party.  An important factor is that the processing must be irreversible.”  Using anonymisation, the resulting data should not be capable of singling any specific individual out, of being linked to other data about an individual, nor of being used to deduce an individual’s identity.

Conversely, pseudonymisation means “replacing one attribute (typically a unique attribute) in a record by another.  The natural person is therefore still likely to be identified indirectly.”  In simple terms, pseudonymisation means replacing ‘obviously’ personal details with another unique identifier, typically generated through some kind of hashing, encryption or tokenisation function.  For example, “Phil Lee bought item x” could be pseudonymised to “Visitor 15364 bought item x”.
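As a purely illustrative example of the kind of hashing or tokenisation function involved, here is a minimal Python sketch using a keyed hash (HMAC); the key and the “Visitor” prefix are assumptions made for the example, not a recommended scheme.

```python
import hashlib
import hmac

SECRET_KEY = b"store-this-key-separately-and-rotate-it"  # illustrative only


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).
    The same input always yields the same pseudonym, so records can still be
    linked over time, but the name itself no longer appears in the record."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return "Visitor " + digest.hexdigest()[:8]


# "Phil Lee bought item x" becomes something like "Visitor 3fa29c1d bought item x"
record = f"{pseudonymise('Phil Lee')} bought item x"
```

Anyone holding the key (or able to replay likely inputs) can rebuild the mapping, which is exactly why this is pseudonymisation rather than anonymisation.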

The Working Party is at pains to explain that pseudonymisation is not the same thing as anonymisation: “Data controllers often assume that removing or replacing one or more attributes is enough to make the dataset anonymous.  Many examples have shown that this is not the case…” and “pseudonymisation when used alone will not result in an anonymous dataset”.

The value of pseudonymisation

The Working Party lists various “common mistakes” and “shortcomings” of pseudonymisation but curiously, given its prevalence, fails to acknowledge the very important benefits it can deliver, including in terms of:

  • Individuals’ expectations: The average individual sees a very big distinction between data that is directly linked to them (i.e. associated with their name and contact details) and data that is pseudonymised, even if not fully anonymised.  In the context of online targeted advertising, for example, website visitors are very concerned about their web browsing profiles being collected and associated directly with their name and address, but less so with a randomised cookie token that allows them to be recognised, but not directly identified.
  • Data value extraction:  For many businesses, anonymisation is just not an option.  The data they collect typically has a value whose commercialisation, at an individual record level, is fundamental to their business model.  So what they need instead is a solution that enables them to extract value at a record level but also that respects individuals’ privacy by not storing directly identifying details, and pseudonymisation enables this.
  • Reversibility:  In some contexts, reversibility of pseudonymised data can be very important.  For example, in the context of clinical drug trials, it’s important that patients’ pseudonymised trial data can be reversed if needed, say, to contact those patients to alert them to an adverse drug event (see the sketch after this list).  Fully anonymised data in this context would be dangerous and irresponsible.
  • Security:  Finally, pseudonymisation improves the security of data held by controllers.  Should that data be compromised in a data breach scenario, the likelihood that underlying individuals’ identities will be exposed and that they will suffer privacy harm as a result is considerably less.
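To illustrate the reversibility point from the list above, here is a minimal sketch of reversible pseudonymisation using a separately held token vault; the class and identifiers are hypothetical and simplified (a real deployment would persist and protect the vault, not keep it in memory).

```python
import secrets
from typing import Dict


class TokenVault:
    """Hypothetical reversible pseudonymisation: pseudonyms are random tokens,
    and the mapping back to the real identifier is held apart from the trial
    data so that re-identification is possible when it is justified."""

    def __init__(self) -> None:
        self._id_to_token: Dict[str, str] = {}
        self._token_to_id: Dict[str, str] = {}

    def tokenise(self, patient_id: str) -> str:
        if patient_id not in self._id_to_token:
            token = "subject-" + secrets.token_hex(6)
            self._id_to_token[patient_id] = token
            self._token_to_id[token] = patient_id
        return self._id_to_token[patient_id]

    def re_identify(self, token: str) -> str:
        """Reverse the pseudonym – used only under controlled conditions,
        for example to alert a patient to an adverse drug event."""
        return self._token_to_id[token]


vault = TokenVault()
pseudonym = vault.tokenise("patient-001")  # store this with the trial data
original = vault.re_identify(pseudonym)    # reverse only when it is needed
```

Unlike the one-way hash above, the vault makes reversal possible by design, which is precisely the property a clinical trial may need.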

It would be easy to read the Working Party’s Opinion and conclude that pseudonymisation ultimately serves little purpose, but this would be a foolhardy conclusion to draw.  Controllers for whom anonymisation is not possible should never be disincentivised from implementing pseudonymisation as an alternative – not doing so would be to the detriment of their security and to their data subjects’ privacy.

Instead, pseudonymisation should always be encouraged as a minimum measure intended to facilitate data use in a privacy-respectful way.  As such, it should be an essential part of every controller’s privacy toolkit!

Incentivising compliance through tangible benefits

Posted on September 29th, 2013 by



The secret of compliance is motivation. That motivation does not normally come from the pleasure and certainty derived from ticking all possible boxes on a compliance checklist. Having said that, I have come across sufficiently self-disciplined individuals who seem to make a virtue out of achieving the highest degree of data privacy compliance within their organisations – but they are quite exceptional. In truth, it is very difficult for any organisation – big or small, in the private or public sector – to get its act together simply out of fear of non-compliance with the law. Putting effective policies and procedures in place is never the result of a sheer drive to avoid regulatory punishment. Successful legal compliance is, more often than not, the result of presenting dry and costly legal obligations as something else – in particular, something that provides tangible benefits.

The fact that personal information is a valuable asset is demonstrated daily. Publicly quoted corporate powerhouses whose business models are entirely dependent on people’s data evidence the present. Innovative and fast-growing businesses in the tech, digital media, data analytics, life sciences and several other sectors show us the future. In all cases, the consistent message coming not just from boardrooms, but from users, customers and investors, is that data fuels success and opportunity. Needless to say, most of that data is linked to each of us as individuals and, therefore, its use has implications in one way or another for our privacy. So, when looked at from the point of view of an organisation which wishes to exploit that data, regulating data privacy equates to regulating the exploitation of an asset.

The term ‘exploitation’ instinctively brings to mind negative connotations. When talking about personal information, whose protection – as is well known – is regarded as a fundamental human right in the EU, the term is especially problematic. The insinuation that something of such an elevated legal rank is being indiscriminately used to someone’s advantage makes everyone feel uncomfortable. But what about the other meaning of the word? Exploitation is also about making good use of something by harnessing its value. Many responsible and successful businesses, governments and non-profit organisations look at exploiting their assets as a route to sustainability and growth. Exploiting personal information does not need to be negative and, in fact, greater financial profits and popular support – and ultimately, success – will come from responsible but effective ways of leveraging that asset.

For that reason, it is possible to argue that the most effective way of regulating the exploitation of data as an asset is to prove that responsible exploitation brings benefits that organisations can relate to. In other words, policy making in the privacy sphere should emphasise the business and social benefits – for the private and public sector respectively – of achieving the right level of legal compliance. The rest is likely to follow much more easily and all types of organisations – commercial or otherwise – will endeavour to make the right decisions about the data they collect, use and share. Right for their shareholders, but also for their customers, voters and citizens. The message for policy makers is simple: bring compliance with the law closer to the tangible benefits that motivate decision makers.

This article was first published in Data Protection Law & Policy in September 2013 and is an extract from Eduardo Ustaran’s forthcoming book The Future of Privacy, which is due to be published in November 2013.

Global protection through mutual recognition

Posted on July 23rd, 2013 by



At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation on the other hand is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s data security measures, which vary depending on the category of data, or Germany’s rules requiring certain language to be included in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and, in the EU directive, it may not be expressly named as a principle, but it is consistently present everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

This article was first published in Data Protection Law & Policy in July 2013.