Archive for the ‘Data as an asset’ Category

Vidal-Hall v Google: A new dawn for data protection claims?

Posted on April 15th, 2015



The landmark judgment of the Court of Appeal in Vidal-Hall & Ors v Google Inc may signal a new dawn for data protection litigants. Prior to this case, the law in England was relatively settled: in order to incur civil liability under the Data Protection Act 1998, the claimant had to establish at least some form of pecuniary damage (unless the processing related to journalism, art or literature). The wording of section 13(2) appeared unequivocal on this point and it frequently proved to be a troublesome hurdle for claimants – and a powerful shield for defendants.

The requirement, however, was always the source of some controversy and the English courts have tried in recent years to dilute the strictness of the rule.

Then enter Ms Vidal-Hall & co: three individuals who allege that Google has been collecting private information about their internet usage from their Safari browser without their knowledge or consent. Claims were brought under the tort of misuse of private information and under s.13 of the DPA, though there was no claim for pecuniary loss.

This ruling concerned only the very early stages of litigation – whether the claimants were permitted even to serve the claim on Google which, being based in California, was outside the jurisdiction. Permission was granted by the Court of Appeal and the case will now proceed through the English courts.

Three key rulings lie at the heart of this judgment:

  • There is now no need to establish pecuniary damage to bring a claim under the DPA. Distress alone is sufficient.
  • It is “arguable” that browser generated information (BGI) constitutes “personal data” under the DPA.
  • Misuse of private information should be classified as a tort for the purposes of service out of the jurisdiction.

We take each briefly in turn:

(1) Damages for distress alone are sufficient

The Court of Appeal disapplied the clear wording of domestic legislation on the grounds that the UK Act could not be interpreted compatibly with Article 23 of the EU Directive, and Articles 7, 8 and 47 of the EU Charter of Fundamental Rights. It held that the main purpose of the Data Protection Directive was to protect privacy, rather than economic rights, and it would be “strange” if it could not compensate those individuals who had suffered emotional distress but no pecuniary damage, when distress was likely to be the primary form of damage where there was a contravention.

It is too early to say whether this ruling will in practice open the door to masses of litigation – but there is no doubt that a significant obstacle that previously stood in the way of DPA claimants has now been unambiguously lifted by the Court of Appeal.

(2) Browser-generated information may constitute “personal data”

A further interesting, though less legally ground-breaking, ruling was that the BGI data in this case was arguably “personal data” under the DPA. The Court of Appeal did not decide the issue, but held that there was at least a “serious issue to be tried”.

Google had argued that: (a) the BGI data on its own was anonymous as it did not name or identify any individual; and (b) it kept the BGI data segregated from other data it held from which an individual might be identifiable (e.g. Gmail accounts). Thus, it was not personal data.

In response to Google’s points, the Court considered that it was immaterial that the BGI data did not name the user – what was relevant was that the data comprised detailed browsing histories and the use of a DoubleClick cookie (a unique identifier which enabled the browsing histories to be linked to a specific device/machine). Taking those two elements together, it was “possible” to equate an individual user with the particular device, thus potentially bringing the data under the definition of “personal data”.
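
To make the Court’s reasoning concrete, here is a minimal, purely illustrative Python sketch (the log entries and field names are invented): records that contain no name, but share a unique cookie identifier, can be grouped into the browsing profile of one specific device.

```python
from collections import defaultdict

# Hypothetical browser-generated information (BGI): no names, but each
# record carries the unique ID set by a DoubleClick-style cookie.
bgi_log = [
    {"cookie_id": "abc123", "url": "https://example.org/health/anxiety"},
    {"cookie_id": "abc123", "url": "https://example.org/jobs/vacancies"},
    {"cookie_id": "xyz789", "url": "https://example.org/news/sport"},
]

# Grouping on the cookie ID "singles out" one device: the resulting
# profile may be detailed enough to relate to an identifiable person.
profiles = defaultdict(list)
for record in bgi_log:
    profiles[record["cookie_id"]].append(record["url"])

print(profiles["abc123"])  # the browsing history of one particular device
```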

The Court further considered it immaterial that Google in practice segregated the BGI data from other data in its hands. What mattered was whether Google had the other information actually within its possession which it “could” use to identify the data subject, “regardless of whether it does so or not”.

(3) Misuse of private information is a tort

Finally, there was the confirmation that the misuse of private information is a tort for the purposes of service out of the jurisdiction. Not a huge point for our readers, but it will mean that claimants who bring claims under this cause of action will more easily obtain service out of the jurisdiction against foreign defendants.

A turning point…?

So the judgment certainly leaves much food for thought and is a significant turning point in the history of data protection litigation. There may also be a wider knock-on effect within the EU as other Member States that require proof of pecuniary damage look to the English judgment as a basis for opening up pure distress claims in their own jurisdictions.

The thing to bear in mind is that the ruling concerned only the very early stages of litigation – there is still a long road ahead in this thorny dispute and many legal and factual issues that still need to be resolved.

Cookie droppers may be watching this space with a mixture of fear and fascination.


Belgian research report claims Facebook tracks the internet use of everyone

Posted on April 1st, 2015



A report published by researchers at two Belgian universities claims that Facebook engages in massive tracking of not only its users but also people who have no Facebook account. The report also identifies a number of other violations of EU law.

When Facebook announced, in late 2014, that it would revise its Data Use Policy (DUP) and Terms of Service effective from 30 January 2015, a European Task Force, led by the Data Protection Agencies of the Netherlands, Belgium and Germany, was formed to analyse the new policies and terms.

In Belgium, the State Secretary for Privacy, Bart Tommelein, had urged the Belgian Privacy Commission to start an investigation into Facebook’s privacy policy, which led to the commissioning of the draft report that has now been published. The report concludes that Facebook is acting in violation of applicable European legislation and that “Facebook places too much burden on its users. Users are expected to navigate Facebook’s complex web of settings in search of possible opt-outs”.

The main findings of the report can be summarised as follows:

Tracking through social plug-ins

The researchers found that whenever a user visits a non-Facebook website, Facebook will track that user by default, unless he or she takes steps to opt out. The report concludes that this default opt-out approach is not in line with the opt-in requirements laid down in the E-privacy Directive.

As far as non-users of Facebook are concerned, the researchers’ findings confirm previous investigations, most notably in Germany, that Facebook places a cookie each time a non-user visits a third-party website which contains a Facebook social plug-in such as the Like button. Moreover, this cookie is placed regardless of whether the non-user has clicked on the Like button. Considering that Facebook does not provide any of this information to such non-users, and that the non-user is not asked to consent to the placing of the cookie, this too can be considered a violation of the E-privacy Directive.

Finally, the report found that both users and non-users who decide to use the opt-out mechanism offered by Facebook receive a cookie during this very opt-out process. This cookie, which has a default duration of two years, enables Facebook to track the user or non-user across all websites that contain its social plug-ins.
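
The mechanics the researchers describe can be illustrated with a short, hedged Python sketch (the domain, cookie name and function are all invented): a server mints a unique identifier with a roughly two-year lifetime and scopes it to its whole domain, so the cookie accompanies every later request to pages embedding the social plug-ins – even when it is set during the opt-out flow itself.

```python
from datetime import datetime, timedelta, timezone
from http.cookies import SimpleCookie
import secrets

def mint_optout_response_cookie() -> str:
    """Illustrative only: build a Set-Cookie header of the kind the
    report describes - a unique ID valid for about two years, sent to
    every site that embeds the provider's social plug-ins."""
    cookie = SimpleCookie()
    cookie["uid"] = secrets.token_hex(8)  # unique, trackable identifier
    expires = datetime.now(timezone.utc) + timedelta(days=2 * 365)
    cookie["uid"]["expires"] = expires.strftime("%a, %d %b %Y %H:%M:%S GMT")
    cookie["uid"]["domain"] = ".social-example.com"  # whole domain scope
    cookie["uid"]["path"] = "/"
    return cookie.output()

print(mint_optout_response_cookie())
```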

Other data protection issues identified

In addition to a number of consumer protection law issues, the report also covers the following topics relating to data protection:

  • Consent: The researchers are of the opinion that Facebook provides only very limited and vague information and that, for many data uses, the only choice for users is simply to “take it or leave it”. This is considered a violation of the principle that, in order for consent to be valid, it must be freely given, specific, informed and unambiguous, as set out in the Article 29 Working Party’s Opinion on consent (WP 187).
  • Privacy settings: The report further states that the current default settings (opt-out mechanism) remain problematic, not least because “users cannot exercise meaningful control over the use of their personal information by Facebook or third parties”, which gives them “a false sense of control”.
  • Location data: Finally, the researchers consider that Facebook should offer more granular in-app settings for the sharing of location data, and should provide more detailed information about how, when and why it processes location data. It should also ensure it does not store the location data for longer than is strictly necessary.

Conclusion

The findings of this report do not come as a surprise. Indeed, most of the alleged areas of non-compliance have already been the subject of discussion in past years and some have already been investigated by other privacy regulators (see e.g. the German investigations around the ‘Like’ button).

The real question now surrounds what action the Belgian Privacy Commission will take on the basis of this report.

On the one hand, data protection enforcement has lately been put high on the agenda in Belgium. The Belgian Privacy Commission seems more determined than ever to show that its enforcement strategy has changed. This can also be seen in the context of recent muscular declarations from the State Secretary for Privacy that companies like Snapchat and Uber must be investigated to ensure they comply with EU data protection law.

Facebook, on the other hand, questions the authority of the Belgian Privacy Commission to conduct such an investigation, stating that only the Irish DPA is competent to discuss their privacy policies. Facebook has also stated that the report contains factual inaccuracies and expressed regret that the organisation was not contacted by the researchers.

It will therefore be interesting to see how the discussions between Facebook and the Belgian Privacy Commission develop. The President of the Belgian Privacy Commission has declared a number of times that it will not hesitate to take legal action against Facebook if the latter refuses to implement the changes for which the Privacy Commission is asking.

This could potentially lead to Facebook being prosecuted, although it is more likely that it will be forced to accept a criminal settlement. In 2011, following the Privacy Commission’s investigation into Google Street View, Google agreed to pay EUR 150,000 as part of a criminal settlement with the public prosecutor.

Will no doubt be continued…


PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014



In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that even the concept of the Internet of Things (“IoT”) is itself somewhat ill-defined, making the point that there is no settled definition of the IoT and that, even if there were, the definition would only change. What’s more, the IoT will mean different things to different people and will describe something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be one any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy/security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other and so on, there is also a lack of standards and industry-wide co-operation around the IoT.

Because IoT applications are frequently based around data use and can be intrusive in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around the IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in the IoT are diverse, with a range of regulatory mandates and sometimes with defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by the IoT and a few are reaching out to the industry as a whole to consult and stimulate discussion. We’re more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It’s possible that, in navigating these challenges within our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many will do so wirelessly (whether via short-range or wide-area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated;
  • Equally, as demand increases on a scarce resource, what kind of spectrum allocation is “fair” and “optimal”, and is some machine-to-machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?;
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and the exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of the IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because the continued interconnectivity of multiple devices raises potential issues.

In issuing a new consultation, “Promoting investment and innovation in the Internet of Things”, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:

  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability requirements placed upon networks, which demand resilience and security (the corresponding issue of privacy is also recognised);
  • the need for each connected device to have an assigned name or identifier, and just how those addresses should be determined and potentially how they would be assigned; and
  • understanding Ofcom’s own potential role as the UK’s regulator in an area (connectivity) key to the evolution of the IoT.

In a varied and quite penetrable paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology“. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while Working Party 29 wades in with an early warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time and in 2013 the European Commission reported back with its Conclusions of the Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it’s impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must “respect the many privacy and security challenges which can be associated with IoT“.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities” and “smart transportation”, or M2M (“machine to machine”) developments. Yet the principles and recommendations of the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one might expect, the IoT can and will challenge the privacy notions of transparency and consent, not to mention proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within some applications of the IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and with legal responsibilities falling on data processors as well as data controllers – each exerting its own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone. With its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently before Congress and the “Black Box Privacy Protection Act” before the House of Representatives. Each now apparently has a low chance of actually passing, but they would respectively regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet, whose feeds were not secure), there’s no doubt the evolution of the IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet“.

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; in the legal sense there was little to say – though perhaps even now I’ve spoken too soon? What is clear is that we’re already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to the IoT, we have to ask the right questions to build a picture of the technology and the way it communicates, and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com

The legal and practical realities of “personal data”

Posted on September 3rd, 2014



Are IP addresses personal data?  It’s a question I’m so frequently asked that I thought I’d pause for a moment to reflect on how the scope of “personal data” has changed since the EU Data Protection Directive’s adoption in 1995.

The Directive itself defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.

That’s not the beginning and the end of the story though.  Over the years, various regulatory guidance has been published that has further shaped what we understand by the term “personal data”.  This guidance has taken the form of papers published by the Article 29 Working Party (most notably Opinion 4/2007 on the Concept of Personal Data) and by national regulators like the UK’s Information Commissioner’s Office (see here).  Then throw in various case law that has touched on this issue, like the Durant case in the UK and the European Court of Justice rulings in Bodil Lindqvist (Case C-101/01) and the Google Right to Be Forgotten case (C-131/12), and it’s apparent that an awful lot of time has been spent thinking about this issue by an awful lot of very clever people.

The danger, though, is that the debate over what is and isn’t personal data can often get so weighted down in academic posturing, that the practical realities of managing data often get overlooked.  When I’m asked whether or not data is personal, it’s typically a loaded question: the enquirer wants to know whether the data in question can be retained indefinitely, or whether it can be withheld from disclosures made in response to a subject access request, or whether it can be transferred internationally without restriction.  If the data’s not personal, then the answer is: yes, yes and yes.  If it is personal, then the enquirer needs to start thinking about how to put in place appropriate compliance measures for managing that data.

There are, of course, data types that are so obviously personal that it would be churlish to pretend otherwise: no one could claim that a name, address or telephone number isn’t personal.  But what should you do when confronted with something like an IP address, a global user ID, or a cookie string?  Are these data types “personal”?  If you’re a business trying to operationalise a privacy compliance program, an answer of “maybe” just doesn’t cut it.  Nor does an answer of “err on the side of caution and treat it as personal anyway”, as this can lead to substantial engineering and compliance costs in pursuit of a vague – and possibly even unwarranted – benefit.

So what should you do?  Legal purists might start exploring whether these data types “relate” to an “identified or identifiable person”, as per the Directive.  They might note that the Directive mentions “direct or indirect” identification, including by means of an “identification number” (an obvious hook for arguing that an IP address is personal data).  They might explore the content, purpose or result of the data processing, as proposed by the Article 29 Working Party, or point out that these data types “enable data subjects to be ‘singled out’, even if their real names are not known.”  Or they might even make the (by now slightly fatigued) argument that these data types relate to a device, not to a person – an argument that may once have worked in a world where a single computer was shared by a family of four, but that now looks increasingly weak in a world where your average consumer owns multiple devices, each with multiple unique IDs.

There is an alternative, simpler test though: ask yourself why this data is processed in the first place and what the underlying individuals would therefore expect as a consequence.  For example: Is it collected just to prevent online fraud or is it instead being put to use for targeting purposes? Depending on your answer, would individuals therefore expect to receive a bunch of cookie strings in response to a subject access request?  How would they feel about you retaining their IP address indefinitely if it was held separately from other personal identifiers?

The answers to these questions will of course vary depending on the nature of the business you run – it’s difficult to imagine a not-for-profit realistically being expected to disclose IP addresses contained in web server logs in response to a subject access request, but it’s perhaps not a huge stretch, say, for a targeted ad platform.  The point is simply that trying to apply black and white boundaries to what is, and isn’t, personal will, in most cases, prove an unhelpful exercise and be wholly devoid of context.  That’s why Privacy Impact Assessments are so important as a tool to assess these issues and propose measured, proportionate responses to them.

The debate over the scope of personal data is far from over, particularly as new technologies come online and regulators and courts continue to publish decisions about what they consider to be personal.  But, faced with practical compliance challenges about how to handle data in a day-to-day context, it’s worth stepping back from legal and regulatory guidance alone.  Of course, I wouldn’t for a second advocate making serious compliance decisions in the absence of legal advice; it’s simply that decisions based on legal merit alone risk not giving due consideration to data subject trust.

And what is data protection about, if not about trust?


Anonymisation is great, but don’t undervalue pseudonymisation

Posted on April 26th, 2014



Earlier this week, the Article 29 Working Party published its Opinion 05/2014 on Anonymisation Techniques.  The opinion describes (in quite some technical detail) the different anonymisation techniques available to data controllers, their relative values, and makes some good practice suggestions – noting that “Once a dataset is truly anonymised and individuals are no longer identifiable, European data protection law no longer applies“.

This is a very significant point – data, once truly anonymised, is no longer subject to European data protection law.  This means that EU rules governing how long that data can be kept for, whether it can be exported internationally and so on, do not apply.  The net effect of this should be to incentivise controllers to anonymise their datasets, shouldn’t it?

Well, not quite.  Because the truth is that many controllers don’t anonymise their data, but use pseudonymisation techniques instead.  

Difference between anonymisation and pseudonymisation

Anonymisation means transforming personal information into data that “can no longer be used to identify a natural person … [taking into account] ‘all the means likely reasonably to be used’ by either the controller or a third party.  An important factor is that the processing must be irreversible.”  Using anonymisation, the resulting data should not be capable of singling any specific individual out, of being linked to other data about an individual, nor of being used to deduce an individual’s identity.

Conversely, pseudonymisation means “replacing one attribute (typically a unique attribute) in a record by another.  The natural person is therefore still likely to be identified indirectly.”  In simple terms, pseudonymisation means replacing ‘obviously’ personal details with another unique identifier, typically generated through some kind of hashing, encryption or tokenisation function.  For example, “Phil Lee bought item x” could be pseudonymised to “Visitor 15364 bought item x”.
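
As a minimal sketch of that idea (Python, with an invented secret key; a real deployment would need careful key management), a direct identifier can be replaced by a keyed hash:

```python
import hashlib
import hmac

SECRET_KEY = b"example-only-keep-this-secret"  # hypothetical key

def pseudonymise(name: str) -> str:
    """Swap a direct identifier for a stable token.

    The same input always yields the same token, so records remain
    linkable at the individual level - which is precisely why this is
    pseudonymisation rather than anonymisation."""
    digest = hmac.new(SECRET_KEY, name.encode("utf-8"), hashlib.sha256)
    return "Visitor " + str(int(digest.hexdigest()[:8], 16) % 100000)

record = {"customer": "Phil Lee", "purchase": "item x"}
record["customer"] = pseudonymise(record["customer"])
print(record)  # e.g. {'customer': 'Visitor 15364', 'purchase': 'item x'}
```

Note that anyone holding the key (or able to test likely names against it) can re-link the tokens – which is exactly the Working Party’s point below.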

The Working Party is at pains to explain that pseudonymisation is not the same thing as anonymisation: “Data controllers often assume that removing or replacing one or more attributes is enough to make the dataset anonymous.  Many examples have shown that this is not the case…” and “pseudonymisation when used alone will not result in an anonymous dataset”.

The value of pseudonymisation

The Working Party lists various “common mistakes” and “shortcomings” of pseudonymisation but curiously, given its prevalence, fails to acknowledge the very important benefits it can deliver, including in terms of:

  • Individuals’ expectations: The average individual sees a very big distinction between data that is directly linked to them (i.e. associated with their name and contact details) and data that is pseudonymised, even if not fully anonymised.  In the context of online targeted advertising, for example, website visitors are very concerned about their web browsing profiles being collected and associated directly with their name and address, but less so with a randomised cookie token that allows them to be recognised, but not directly identified.
  • Data value extraction:  For many businesses, anonymisation is just not an option.  The data they collect typically has a value whose commercialisation, at an individual record level, is fundamental to their business model.  So what they need instead is a solution that enables them to extract value at a record level but that also respects individuals’ privacy by not storing directly identifying details, and pseudonymisation enables this.
  • Reversibility:  In some contexts, reversibility of pseudonymised data can be very important.  For example, in the context of clinical drug trials, it’s important that patients’ pseudonymised trial data can be reversed if needed, say, to contact those patients to alert them to an adverse drug event (see the sketch after this list).  Fully anonymised data in this context would be dangerous and irresponsible.
  • Security:  Finally, pseudonymisation improves the security of data held by controllers.  Should that data be compromised in a data breach scenario, the likelihood that underlying individuals’ identities will be exposed and that they will suffer privacy harm as a result is considerably less.
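
Picking up the Reversibility point above, here is a hedged Python sketch (all names invented) of the ‘token vault’ pattern: pseudonyms are random, and a separately guarded mapping lets an authorised party re-identify a record when, say, a trial participant must be alerted.

```python
import secrets

class TokenVault:
    """Reversible pseudonymisation: random tokens, plus a guarded
    token -> identity mapping retained for authorised re-identification."""

    def __init__(self):
        self._forward = {}   # identity -> token
        self._reverse = {}   # token -> identity

    def pseudonymise(self, identity: str) -> str:
        if identity not in self._forward:
            token = "P-" + secrets.token_hex(4)
            self._forward[identity] = token
            self._reverse[token] = identity
        return self._forward[identity]

    def reidentify(self, token: str) -> str:
        # In practice this path would be access-controlled and audited.
        return self._reverse[token]

vault = TokenVault()
t = vault.pseudonymise("Patient 007, Jane Doe")
print(t)                    # e.g. 'P-3f9a12bc'
print(vault.reidentify(t))  # 'Patient 007, Jane Doe'
```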

It would be easy to read the Working Party’s Opinion and conclude that pseudonymisation ultimately serves little purpose, but this would be a foolhardy conclusion to draw.  Controllers for whom anonymisation is not possible should never be disincentivised from implementing pseudonymisation as an alternative – not doing so would be to the detriment of their security and to their data subjects’ privacy.

Instead, pseudonymisation should always be encouraged as a minimum measure intended to facilitate data use in a privacy-respectful way.  As such, it should be an essential part of every controller’s privacy toolkit!

Incentivising compliance through tangible benefits

Posted on September 29th, 2013



The secret of compliance is motivation. That motivation does not normally come from the pleasure and certainty derived from ticking all possible boxes on a compliance checklist. Although, having said that, I have come across sufficiently self-disciplined individuals who seem to make a virtue out of achieving the highest degree of data privacy compliance within their organisations. However, this is quite exceptional. In truth, it is very difficult for any organisation – big or small, in the private or public sector – to get its act together simply out of fear of non-compliance with the law. Putting effective policies and procedures in place is never the result of a sheer drive to avoid regulatory punishment. Successful legal compliance is, more often than not, the result of presenting dry and costly legal obligations as something else. In particular, something that provides tangible benefits.

The fact that personal information is a valuable asset is demonstrated daily. Publicly quoted corporate powerhouses whose business model is entirely dependent on people’s data evidence the present. Innovative and fast growing businesses in the tech, digital media, data analytics, life sciences and several other sectors show us the future. In all cases, the consistent message coming not just from boardrooms, but from users, customers and investors, is that data fuels success and opportunity. Needless to say, most of that data is linked to each of us as individuals and, therefore, its use has implications in one way or another for our privacy. So, when looked at from the point of view of an organisation which wishes to exploit that data, regulating data privacy equates regulating the exploitation of an asset.

The term ‘exploitation’ instinctively brings to mind negative connotations. When talking about personal information, whose protection – as is well known – is regarded as a fundamental human right in the EU, the term exploitation is especially problematic. The insinuation that something of such an elevated legal rank is being indiscriminately used to someone’s advantage makes everyone feel uncomfortable. But what about the other meaning of the word? Exploitation is also about making good use of something by harnessing its value. Many responsible and successful businesses, governments and non-profit organisations look at exploiting their assets as a route to sustainability and growth. Exploiting personal information does not need to be negative and, in fact, greater financial profits and popular support – and ultimately, success – will come from responsible, but effective ways of leveraging that asset.

For that reason, it is possible to argue that the most effective way of regulating the exploitation of data as an asset is to prove that responsible exploitation brings benefits that organisations can relate to. In other words, policy making in the privacy sphere should emphasise the business and social benefits – for the private and public sector respectively – of achieving the right level of legal compliance. The rest is likely to follow much more easily and all types of organisations – commercial or otherwise – will endeavour to make the right decisions about the data they collect, use and share. Right for their shareholders, but also for their customers, voters and citizens. The message for policy makers is simple: bring compliance with the law closer to the tangible benefits that motivate decision makers.

This article was first published in Data Protection Law & Policy in September 2013 and is an extract from Eduardo Ustaran’s forthcoming book The Future of Privacy, which is due to be published in November 2013.

Global protection through mutual recognition

Posted on July 23rd, 2013



At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation on the other hand is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s applicable data security measures depending on the category of data or Germany’s rules to include certain language in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and in the EU directive, it may not be expressly named as a principle, but it is consistently everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

This article was first published in Data Protection Law & Policy in July 2013.

The conflicting realities of data globalisation

Posted on June 17th, 2013



The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data is travelling every second that goes by. But data is always on the move and we don’t even need to leave home to be contributing to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some more public than others.

The Internet is global by definition. Or more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly assembled at destination. That concept developed very quickly into a borderless network and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

This trend has led visionaries like Google’s Eric Schmidt to affirm that, thanks to mobile technology, the number of digitally connected people will more than triple – going from the current 2 billion to 7 billion people – very soon. That implies more than three times the amount of data generated today. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world’s professionals.

As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on the current and forthcoming legislative restrictions on international data flows, which are precisely among the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future ‘adequacy findings’ by the European Commission as well as Safe Harbor will be negatively affected. We can assume that if the European Commission decides to have a go at seeking a re-negotiation of Safe Harbor, this will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons in a way that no blanket disclosures are allowed. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

The conflicting realities of data globalisation could not be more striking. On the one hand, every day practice shows that data is geographically neutral and simply flows across global networks to make itself available to those with access to it. On the other, it is going to take a fair amount of convincing to show that any restrictions on international data flows should be both measured and realistic. To address these conflicting realities we must therefore acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. So since we cannot stop the technological evolution of our time and the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

This article was first published in Data Protection Law & Policy in June 2013.

Big data means all data

Posted on April 19th, 2013



There is an awesomeness factor in the way data about our digital comings and goings is being captured nowadays.  That awesomeness is such that it cannot even be described in numbers.  In other words, the concept of big data is not about size but about reach.  In the same way that the ‘wow’ of today’s computer memory will turn into a ‘so what’ tomorrow, references to terabytes of data are meaningless to define the power and significance of big data.  The best way to understand big data is to see it as a collection of all possible digital data.  Absolutely all of it.  Some of it will be trivial and most of it will be insignificant in isolation, but when put together its significance becomes clearer – at least to those who have the vision and astuteness to make the most of it.

Take transactional data as a starting point.  One purchase by one person is meaningful up to a point – so if I buy a cookery book, the retailer may be able to infer that I either know someone who is interested in cooking or I am interested in cooking myself.  If many more people buy the same book, apart from suggesting that it may be a good idea to increase the stock of that book, the retailer as well as other interested parties – publishers, food producers, nutritionists – could derive some useful knowledge from those transactions.  If I then buy cooking ingredients, the price of those items alone will give a picture of my spending bracket.  As the number of transactions increases, the picture gets clearer and clearer.  Now multiply the process for every shopper, at every retailer and every transaction.  You automatically have an overwhelming amount of data about what people do with their money – how much they spend, on what, how often and so on.  Is that useful information?  It does not matter, it is simply massive and someone will certainly derive value from it.  
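
A toy Python sketch (entirely invented data) of the aggregation just described – each additional transaction sharpens the picture of a given shopper:

```python
from collections import defaultdict

# Hypothetical transaction stream: (shopper_id, item, price)
transactions = [
    ("s1", "cookery book", 18.99),
    ("s1", "saffron", 7.50),
    ("s2", "cookery book", 18.99),
    ("s1", "stand mixer", 399.00),
]

profiles = defaultdict(lambda: {"total_spend": 0.0, "items": []})
for shopper, item, price in transactions:
    profiles[shopper]["total_spend"] += price   # spending bracket emerges
    profiles[shopper]["items"].append(item)     # interests emerge

# Multiply this across every shopper, retailer and transaction and the
# profiles become an overwhelming record of what people do with money.
print(profiles["s1"])
```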

That’s just the purely transactional stuff.  Add information about when people turn on their mobile phones, switch on the hot water or check their e-mail, which means of transportation they use to go where and when they enter their workplaces – all easily recordable.  Include data about browsing habits, app usage and means of communication employed.  Then apply a bit of imagination and think about this kind of data gathering in an Internet of Things scenario, where offline everyday activities are electronically connected and digitally managed.  Now add social networking interactions, blogs, tweets, Internet searches and music downloads.  And for good measure, include some data from your GPS, hairdresser and medical appointments, online banking activities and energy company.  When does this stop?  It doesn’t.  It will just keep growing.  It’s big data and it’s happening now in every household, workplace, school, hospital, car, mobile device and website.

What has happened in an uncoordinated but consistent manner is that all those daily activities have become a massive source of information which someone, somewhere is starting to make use of.  Is this bad?  Not necessarily.  So far, we have seen pretty benign and very positive applications of big data – from correctly spelt Internet searches and useful shopping recommendations to helpful traffic-free driving directions and even predictions in the geographical spread of contagious diseases.  What is even better is that, data misuses aside, the potential of this hugemongous amount of information is as big as the imagination of those who can get their hands on it, which probably means that we have barely started to scratch the surface of it all.

Our understanding of the potential of big data will improve as we become more comfortable and familiar with its dimensions but even now, it is easy to see its economic and social value.  But with value comes responsibility.  Just as those who extract and transport oil must apply utmost care to the handling of such precious but hazardous material, those who amass and manipulate humanity’s valuable data must be responsible and accountable for their part.  It is not only fair but entirely right that the greater the potential, the greater the responsibility, and that anyone entrusted with our information should be accountable to us all.  It should not be up to us to figure out and manage what others are doing with our data.  Frankly, that is simply unachievable in a big data world.  But even if we cannot measure the size of big data, we must still find a way to apportion specific and realistic responsibilities for its exploitation.


This article was first published in Data Protection Law & Policy in April 2013.

Smart Meters – new data access and privacy rules for the energy sector

Posted on February 21st, 2013



The Department of Energy and Climate Change (DECC) carried out numerous studies and soundings in preparation for the rollout of smart energy meters to over 30 million UK homes between 2014 and 2019, but the most polemical press coverage was elicited by the consultation in Spring 2012 on the data access and privacy issues raised by the valuable energy consumption data (Consumption Data) generated by these new metering devices. Some newspapers cited warnings of “cyber attacks by foreign hackers” and “a spy in every home”, and there was much interest in the concerns highlighted in a report published in June by the European Data Protection Supervisor that the most granular real-time Consumption Data could reveal details such as the daily habits of household members or even tell burglars when a house was unoccupied.

The UK government’s response to this consultation, published on 12th December 2012, sheds considerable light on the data protection compliance measures that must be put in place by energy companies, network operators and others who access Consumption Data such as ‘switching’ websites and energy services suppliers. These requirements will apply alongside (and in addition to) those already set out in the Data Protection Act 1998. The measures will be implemented via amendments to the licence conditions adhered to by energy suppliers (enforced by Ofgem) and a new Smart Energy Code overseen by a dedicated Smart Energy Code Panel. A central information hub controlled by a body known as the Data and Communications Company (DCC) will enable remote access to Consumption Data for suppliers and third parties that have agreed to be bound by the Code.

Background: The aim of the UK government’s smart meters programme is to give consumers real-time information about their energy consumption in the hope that this will help to control costs and eliminate estimated energy bills, on top of the environmental and cost-saving side effects of the behavioural changes such information may encourage. In the long term, it is hoped that smart energy data will lead to fluctuating, real-time energy pricing, enabling consumers to see how expensive it will be to use gas or electricity at any given time of day.

Key rules: There are some key elements to the new framework which apply differently to energy suppliers (such as British Gas and EDF Energy), network operators (companies that own and lease the infrastructure for delivering gas and electricity to premises) and “third parties”, such as switching websites and energy companies when they are not acting in their capacity as supplier to the relevant household.

A crucial aspect of the rules that applies to all parties is the requirement to obtain explicit, opt-in consent before using Consumption Data for any marketing purposes. For other uses, third parties will always need opt-in consent to remotely access Consumption Data of any level of granularity, whereas energy suppliers will be required to obtain opt-in consent only in order to remotely access the most detailed level of Consumption Data (relating to a period of less than one day).
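
To make the distinction concrete, here is a simplified, non-authoritative Python sketch of the consent rules as summarised above (the names are invented, and it deliberately ignores network operators and other regulated purposes):

```python
def optin_consent_required(requester: str, granularity_hours: float,
                           marketing_use: bool) -> bool:
    """Simplified reading of the consent rules described above.

    requester: 'supplier' or 'third_party'
    granularity_hours: the period each Consumption Data reading covers
    marketing_use: whether the data will be used for marketing purposes
    """
    if marketing_use:
        return True                    # all parties: opt-in for marketing
    if requester == "third_party":
        return True                    # any granularity, remote access
    if requester == "supplier":
        return granularity_hours < 24  # only sub-daily data needs opt-in
    raise ValueError("unmodelled requester type")

assert optin_consent_required("third_party", 24, marketing_use=False)
assert not optin_consent_required("supplier", 24, marketing_use=False)
assert optin_consent_required("supplier", 0.5, marketing_use=False)
```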

From a consumer protection perspective, perhaps the most important safeguards introduced by the Stage 1 draft of the Smart Energy Code published in November 2012 are the requirements on third parties requesting Consumption Data from the DCC to:

(a)  take measures to verify that the relevant household member has solicited the services connected with the third party’s data request;

(b)  self-certify that the necessary consent has been obtained; and

(c)   provide reminders to consumers about the Consumption Data being collected at appropriate, regular intervals.

Privacy Impact Assessments: In line with Privacy by Design principles promoted by data protection authorities globally, the UK government has developed its own Privacy Impact Assessment to assess and anticipate the potential privacy risks of the smart metering programme as a whole. The idea is that the government’s PIA will be an “umbrella document” and every data controller wishing to access Consumption Data is expected to carry out its own PIA before the new framework comes into force (likely to be this summer). The European Commission is also developing a template PIA for this purpose.

Apart from helping to identify risks to customers and potential company liabilities, PIAs are lauded by the UK Information Commissioner as the best way to protect brand reputation, shape communication strategies and avoid expensive “bolt-on” solutions.

Conclusions: Research carried out as part of the UK government’s Data Access and Privacy consultation showed that the overwhelming concern of consumers questioned was that smart meter data would lead to an increase in direct marketing communications. Many participants did not identify the potential for misuse of Consumption Data until it was explained to them. The less obvious nature of the potential for privacy intrusion of this new data underlines the fact that consent is not a panacea in the case of smart meters (despite the considerable focus on this in the consultation responses).

So, clear and comprehensive information is key. As part of preparing for compliance, companies planning to access Consumption Data should build clear messaging into all customer-facing procedures, including those in respect of all in-person, online and call centre interaction. And whilst some of the finer details of the new rules are yet to be ironed out, it’s clear that every organisation concerned will be expected to digest the details of the new framework now and be fully prepared – including by completing Privacy Impact Assessments – in time for when the regulatory framework comes into force, expected to be June 2013.

A longer version of this article was first published in Data Protection Law & Policy in February 2013.