Archive for the ‘Data as an asset’ Category

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014 by



In Part 1 of this piece I posed the question: the Internet of Things – what is it? I argued that the concept of the Internet of Things ("IoT") is itself somewhat ill-defined, making the point that there is no settled definition of IoT and that, even if there were, the definition would only change. What's more, IoT will mean different things to different people and will come to mean something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be one any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy / security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other and so on, there is also a lack of standards and industry-wide co-operation around IoT.

With IoT frequently based around data use, and with potentially intrusive applications in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in IoT are diverse, with a range of regulatory mandates and, in some cases, defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by IoT and a few are reaching out to the industry as a whole to consult and stimulate discussion. We're more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It’s possible that in navigating these challenges and our current matrix of laws and principles that we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many will do so wirelessly (whether via short-range or wide-area comms or mobile). The key is that these exchanges don't interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated;
  • Equally, as demand increases for a scarce resource, what kind of spectrum allocation is "fair" and "optimal", and is some machine-to-machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and exchange and identification protocols themselves may be controlled by an interested party or parties or released on an "open" basis. Regulation may need to step in to promote economic advance via speedy adoption or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be difficult. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because continued interconnectivity between multiple devices raises potential issues.

In issuing a new consultation, "Promoting investment and innovation in the Internet of Things", Ofcom (the UK's communications regulator) kicked off its own learning exercise to identify potential policy concerns around:

  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which must offer resilience and security (the corresponding issue of privacy is also recognised);
  • the need for each connected device to have an assigned name or identifier, and questioning just how those addresses should be determined and potentially how they would be assigned; and
  • understanding Ofcom's own potential role as the UK's regulator in an area (connectivity) key to the evolution of IoT.

In a varied and quite penetrable paper, Ofcom's consultation recognises what many will be shouting: its published view "is that industry is best placed to drive the development, standardisation and commercialisation of new technology". However, it goes on to recognise that "given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards."

Europe muses while Working Party 29 wades in early, warning about privacy

IoT adoption has been on Europe's "Digital Agenda" for some time, and in 2013 the European Commission reported back on its own Conclusions of the Internet of Things public consultation. There is also the "Connected Continent" initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with "growth" while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it's impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must "respect the many privacy and security challenges which can be associated with IoT".

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities”, “smart transportations”, as well as M2M (“machine to machine”) developments. Yet, the principles and recommendations their Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the "main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context". What's more, the Working Party "supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific."

The Fieldfisher team will shortly publish its thoughts on, and explanation of, this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU's data privacy principles within some applications of IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on "profiling" and legal responsibilities reaching beyond the data controller to the data processor, each exacting its own force over IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone. With its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the "We Are Watching You Act" currently before Congress and the "Black Box Privacy Protection Act" before the House of Representatives. Each now apparently has a low chance of actually passing, but they would respectively regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or 'black boxes', in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC's own words: "[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area." Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around "building trust" and "maximising consumer benefits through consumer control". With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet whose feeds were not secure), there's no doubt the evolution of IoT is on the FTC's radar.

FTC Chairwoman Edith Ramirez commented that "The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet".

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; in the legal sense there was little to say, though perhaps even now I've spoken too soon. What is clear is that we're already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into the existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, we have to ask the right questions to build a picture of the technology, the way it communicates and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today's IoT challenge.

Mark Webber – Partner, Palo Alto, California mark.webber@fieldfisher.com

The legal and practical realities of “personal data”

Posted on September 3rd, 2014 by



Are IP addresses personal data?  It’s a question I’m so frequently asked that I thought I’d pause for a moment to reflect on how the scope of “personal data” has changed since the EU Data Protection Directive’s adoption in 1995.

The Directive itself defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity“.

That’s not the beginning and the end of the story though.  Over the years, various regulatory guidance has been published that has further shaped what we understand by the term “personal data”.  This guidance has taken the form of papers published by the Article 29 Working Party (most notably Opinion 4/2007 on the Concept of Personal Data) and by national regulators like the UK’s Information Commissioner’s Office (see here).  Then throw in various case law that has touched on this issue, like the Durant case in the UK and the European Court of Justice rulings in Bodil Lindqvist (Case C-101/01) and the Google Right to Be Forgotten case (C-131/12), and it’s apparent that an awful lot of time has been spent thinking about this issue by an awful lot of very clever people.

The danger, though, is that the debate over what is and isn't personal data can get so weighed down in academic posturing that the practical realities of managing data get overlooked.  When I'm asked whether or not data is personal, it's typically a loaded question: the enquirer wants to know whether the data in question can be retained indefinitely, or whether it can be withheld from disclosures made in response to a subject access request, or whether it can be transferred internationally without restriction.  If the data's not personal, then the answer is: yes, yes and yes.  If it is personal, then the enquirer needs to start thinking about how to put in place appropriate compliance measures for managing that data.

There are, of course, data types that are so obviously personal that it would be churlish to pretend otherwise: no one could claim that a name, address or telephone number isn’t personal.  But what should you do when confronted with something like an IP address, a global user ID, or a cookie string?  Are these data types “personal”?  If you’re a business trying to operationalise a privacy compliance program, an answer of “maybe” just doesn’t cut it.  Nor does an answer of “err on the side of caution and treat it as personal anyway”, as this can lead to substantial engineering and compliance costs in pursuit of a vague – and possibly even unwarranted – benefit.

So what should you do?  Legal purists might start exploring whether these data types "relate" to an "identified or identifiable person", as per the Directive.  They might note that the Directive mentions "direct or indirect" identification, including by means of an "identification number" (an obvious hook for arguing an IP address is personal data).  They might explore the content, purpose or result of the data processing, as proposed by the Article 29 Working Party, or point out that these data types "enable data subjects to be 'singled out', even if their real names are not known."  Or they might even make the (by now slightly fatigued) argument that these data types relate to a device, not to a person – an argument that may once have worked in a world where a single computer was shared by a family of four, but that now looks increasingly weak in a world where your average consumer owns multiple devices, each with multiple unique IDs.

There is an alternative, simpler test though: ask yourself why this data is processed in the first place and what the underlying individuals would therefore expect as a consequence.  For example: Is it collected just to prevent online fraud or is it instead being put to use for targeting purposes? Depending on your answer, would individuals therefore expect to receive a bunch of cookie strings in response to a subject access request?  How would they feel about you retaining their IP address indefinitely if it was held separately from other personal identifiers?

The answers to these questions will of course vary depending on the nature of the business you run – it's difficult to imagine a Not For Profit realistically being expected to disclose IP addresses contained in web server logs in response to a subject access request, but perhaps not a huge stretch, say, for a targeted ad platform.  The point is simply that trying to apply black and white boundaries to what is, and isn't, personal will, in most cases, prove an unhelpful exercise and be wholly devoid of context.  That's why Privacy Impact Assessments are so important as a tool to assess these issues and propose measured, proportionate responses to them.

The debate over the scope of personal data is far from over, particularly as new technologies come online and regulators and courts continue to publish decisions about what they consider to be personal.  But, faced with practical compliance challenges about how to handle data in a day-to-day context, it’s worth stepping back from legal and regulatory guidance alone.  Of course, I wouldn’t for a second advocate making serious compliance decisions in the absence of legal advice; it’s simply that decisions based on legal merit alone risk not giving due consideration to data subject trust.

And what is data protection about, if not about trust?

 

Anonymisation is great, but don’t undervalue pseudonymisation

Posted on April 26th, 2014 by



Earlier this week, the Article 29 Working Party published its Opinion 05/2014 on Anonymisation Techniques.  The opinion describes (in quite some technical detail) the different anonymisation techniques available to data controllers, assesses their relative values and makes some good practice suggestions – noting that "Once a dataset is truly anonymised and individuals are no longer identifiable, European data protection law no longer applies".

This is a very significant point – data, once truly anonymised, is no longer subject to European data protection law.  This means that EU rules governing how long that data can be kept for, whether it can be exported internationally and so on, do not apply.  The net effect of this should be to incentivise controllers to anonymise their datasets, shouldn’t it?

Well, not quite.  Because the truth is that many controllers don’t anonymise their data, but use pseudonymisation techniques instead.  

Difference between anonymisation and pseudonymisation

Anonymisation means transforming personal information into data that “can no longer be used to identify a natural person … [taking into account] ‘all the means likely reasonably to be used’ by either the controller or a third party.  An important factor is that the processing must be irreversible.”  Using anonymisation, the resulting data should not be capable of singling any specific individual out, of being linked to other data about an individual, nor of being used to deduce an individual’s identity.

Conversely, pseudonymisation means “replacing one attribute (typically a unique attribute) in a record by another.  The natural person is therefore still likely to be identified indirectly.”  In simple terms, pseudonymisation means replacing ‘obviously’ personal details with another unique identifier, typically generated through some kind of hashing, encryption or tokenisation function.  For example, “Phil Lee bought item x” could be pseudonymised to “Visitor 15364 bought item x”.
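
By way of illustration only, here is a minimal Python sketch of that kind of tokenisation step using a keyed hash – the key, field names and token format are all hypothetical, and a real deployment would need careful key management:

```python
import hmac
import hashlib

# Hypothetical secret key, held separately from the dataset. With a keyed
# hash (HMAC), the pseudonyms cannot be recomputed by anyone without the key.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonymous token."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    # Derive a short numeric visitor ID from the digest, e.g. "Visitor 15364".
    return "Visitor " + str(int(digest.hexdigest()[:8], 16) % 100000)

record = {"customer": "Phil Lee", "bought": "item x"}
record["customer"] = pseudonymise(record["customer"])
print(record)  # e.g. {'customer': 'Visitor 15364', 'bought': 'item x'}
```

Note that the same individual always maps to the same token, so records can still be linked and the person 'singled out' – which is precisely why this is pseudonymisation rather than anonymisation.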

The Working Party is at pains to explain that pseudonymisation is not the same thing as anonymisation: "Data controllers often assume that removing or replacing one or more attributes is enough to make the dataset anonymous.  Many examples have shown that this is not the case…" and "pseudonymisation when used alone will not result in an anonymous dataset".

The value of pseudonymisation

The Working Party lists various “common mistakes” and “shortcomings” of pseudonymisation but curiously, given its prevalence, fails to acknowledge the very important benefits it can deliver, including in terms of:

  • Individuals’ expectations: The average individual sees a very big distinction between data that is directly linked to them (i.e. associated with their name and contact details) and data that is pseudonymised, even if not fully anonymised.  In the context of online targeted advertising, for example, website visitors are very concerned about their web browsing profiles being collected and associated directly with their name and address, but less so with a randomised cookie token that allows them to be recognised, but not directly identified.
  • Data value extraction:  For many businesses, anonymisation is just not an option.  The data they collect typically has a value whose commercialisation, at an individual record level, is fundamental to their business model.  So what they need instead is a solution that enables them to extract value at a record level but also that respects individuals’ privacy by not storing directly identifying details, and pseudonymisation enables this.
  • Reversibility:  In some contexts, reversibility of pseudonymised data can be very important.  For example, in the context of clinical drug trials, it's important that patients' pseudonymised trial data can be reversed if needed, say, to contact those patients to alert them to an adverse drug event (a minimal sketch of this appears after the list).  Fully anonymised data in this context would be dangerous and irresponsible.
  • Security:  Finally, pseudonymisation improves the security of data held by controllers.  Should that data be compromised in a data breach scenario, the likelihood that underlying individuals' identities will be exposed, and that they will suffer privacy harm as a result, is considerably lower.
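
As a rough illustration of the reversibility point above, here is a minimal Python sketch of a tokenisation 'vault' in which the token-to-identity mapping is held separately from the research dataset. All names are hypothetical, and a real clinical system would add access controls, auditing and collision handling:

```python
import secrets

class TokenVault:
    """A hypothetical tokenisation vault. The mapping from token back to the
    real identifier is stored apart from the trial dataset, so pseudonymised
    records can be reversed by authorised staff if, say, an adverse drug
    event means a patient must be contacted."""

    def __init__(self):
        self._token_to_id = {}

    def pseudonymise(self, patient_id: str) -> str:
        # Issue a random, meaningless token in place of the identifier.
        token = "PT-" + secrets.token_hex(4)
        self._token_to_id[token] = patient_id
        return token

    def reidentify(self, token: str) -> str:
        # Access to this lookup should be tightly controlled and audited.
        return self._token_to_id[token]

vault = TokenVault()
trial_record = {"patient": vault.pseudonymise("patient-00123"),
                "observation": "adverse reaction to drug A"}

# Later, an authorised clinician reverses the pseudonym to reach the patient:
print(vault.reidentify(trial_record["patient"]))  # patient-00123
```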

It would be easy to read the Working Party’s Opinion and conclude that pseudonymisation ultimately serves little purpose, but this would be a foolhardy conclusion to draw.  Controllers for whom anonymisation is not possible should never be disincentivised from implementing pseudonymisation as an alternative – not doing so would be to the detriment of their security and to their data subjects’ privacy.

Instead, pseudonymisation should always be encouraged as a minimum measure intended to facilitate data use in a privacy-respectful way.  As such, it should be an essential part of every controller’s privacy toolkit!

Incentivising compliance through tangible benefits

Posted on September 29th, 2013 by



The secret of compliance is motivation. That motivation does not normally come from the pleasure and certainty derived from ticking all possible boxes on a compliance checklist. Having said that, I have come across sufficiently self-disciplined individuals who seem to make a virtue out of achieving the highest degree of data privacy compliance within their organisations. However, this is quite exceptional. In truth, it is very difficult for any organisation – big or small, in the private or public sector – to get its act together simply out of fear of non-compliance with the law. Putting effective policies and procedures in place is never the result of a sheer drive to avoid regulatory punishment. Successful legal compliance is, more often than not, the result of presenting dry and costly legal obligations as something else. In particular, something that provides tangible benefits.

The fact that personal information is a valuable asset is demonstrated daily. Publicly quoted corporate powerhouses whose business model is entirely dependent on people's data evidence the present. Innovative and fast-growing businesses in the tech, digital media, data analytics, life sciences and several other sectors show us the future. In all cases, the consistent message coming not just from boardrooms, but from users, customers and investors, is that data fuels success and opportunity. Needless to say, most of that data is linked to each of us as individuals and, therefore, its use has implications in one way or another for our privacy. So, when looked at from the point of view of an organisation which wishes to exploit that data, regulating data privacy equates to regulating the exploitation of an asset.

The term ‘exploitation’ instinctively brings to mind negative connotations. When talking about personal information, whose protection – as is well known – is regarded as a fundamental human right in the EU, the term exploitation is especially problematic. The insinuation that something of such an elevated legal rank is being indiscriminately used to someone’s advantage makes everyone feel uncomfortable. But what about the other meaning of the word? Exploitation is also about making good use of something by harnessing its value. Many responsible and successful businesses, governments and non-profit organisations look at exploiting their assets as a route to sustainability and growth. Exploiting personal information does not need to be negative and, in fact, greater financial profits and popular support – and ultimately, success – will come from responsible, but effective ways of leveraging that asset.

For that reason, it is possible to argue that the most effective way of regulating the exploitation of data as an asset is to prove that responsible exploitation brings benefits that organisations can relate to. In other words, policy making in the privacy sphere should emphasise the business and social benefits – for the private and public sector respectively – of achieving the right level of legal compliance. The rest is likely to follow much more easily and all types of organisations – commercial or otherwise – will endeavour to make the right decisions about the data they collect, use and share. Right for their shareholders, but also for their customers, voters and citizens. The message for policy makers is simple: bring compliance with the law closer to the tangible benefits that motivate decision makers.

This article was first published in Data Protection Law & Policy in September 2013 and is an extract from Eduardo Ustaran’s forthcoming book The Future of Privacy, which is due to be published in November 2013.

Global protection through mutual recognition

Posted on July 23rd, 2013 by



At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation on the other hand is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s applicable data security measures depending on the category of data or Germany’s rules to include certain language in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and in the EU directive, it may not be expressly named as a principle, but it is consistently everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

This article was first published in Data Protection Law & Policy in July 2013.

The conflicting realities of data globalisation

Posted on June 17th, 2013 by



The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data is travelling every second that goes by. But data is always on the move, and we don't even need to leave home to be contributing to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some of it more public than the rest.

The Internet is global by definition. Or more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly assembled at destination. That concept developed very quickly into a borderless network and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

This trend has led visionaries like Google's Eric Schmidt to affirm that, thanks to mobile technology, the number of digitally connected people will more than triple – going from the current 2 billion to 7 billion people – very soon. That is more than three times the amount of data generated today. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world's professionals.

As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on current and forthcoming legislative restrictions on international data flows, which are precisely one of the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future 'adequacy findings' by the European Commission as well as Safe Harbor will be negatively affected. We can assume that if the European Commission decides to have a go at seeking a re-negotiation of Safe Harbor, this will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons in a way that permits no blanket disclosures. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

The conflicting realities of data globalisation could not be more striking. On the one hand, every day practice shows that data is geographically neutral and simply flows across global networks to make itself available to those with access to it. On the other, it is going to take a fair amount of convincing to show that any restrictions on international data flows should be both measured and realistic. To address these conflicting realities we must therefore acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. So since we cannot stop the technological evolution of our time and the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

This article was first published in Data Protection Law & Policy in June 2013.

Big data means all data

Posted on April 19th, 2013 by



There is an awesomeness factor in the way data about our digital comings and goings is being captured nowadays.  That awesomeness is such that it cannot even be described in numbers.  In other words, the concept of big data is not about size but about reach.  In the same way that the ‘wow’ of today’s computer memory will turn into a ‘so what’ tomorrow, references to terabytes of data are meaningless to define the power and significance of big data.  The best way to understand big data is to see it as a collection of all possible digital data.  Absolutely all of it.  Some of it will be trivial and most of it will be insignificant in isolation, but when put together its significance becomes clearer – at least to those who have the vision and astuteness to make the most of it.

Take transactional data as a starting point.  One purchase by one person is meaningful up to a point – so if I buy a cookery book, the retailer may be able to infer that I either know someone who is interested in cooking or I am interested in cooking myself.  If many more people buy the same book, apart from suggesting that it may be a good idea to increase the stock of that book, the retailer as well as other interested parties – publishers, food producers, nutritionists – could derive some useful knowledge from those transactions.  If I then buy cooking ingredients, the price of those items alone will give a picture of my spending bracket.  As the number of transactions increases, the picture gets clearer and clearer.  Now multiply the process for every shopper, at every retailer and every transaction.  You automatically have an overwhelming amount of data about what people do with their money – how much they spend, on what, how often and so on.  Is that useful information?  It does not matter, it is simply massive and someone will certainly derive value from it.  

That’s just the purely transactional stuff.  Add information about at what time people turn on their mobile phones, switch on the hot water or check their e-mail, which means of transportation they use to go where and when they enter their workplaces – all easily recordable.  Include data about browsing habits, app usage and means of communication employed.  Then apply a bit of imagination and think about this kind of data gathering in an Internet of Things scenario, where offline everyday activities are electronically connected and digitally managed.  Now add social networking interactions, blogs, tweets, Internet searches and music downloads.  And for good measure, include some data from your GPS, hairdresser and medical appointments, online banking activities and energy company.  When does this stop?  It doesn’t.  It will just keep growing.  It’s big data and is happening now in every household, workplace, school, hospital, car, mobile device and website.

What has happened in an uncoordinated but consistent manner is that all those daily activities have become a massive source of information which someone, somewhere is starting to make use of.  Is this bad?  Not necessarily.  So far, we have seen pretty benign and very positive applications of big data – from correctly spelt Internet searches and useful shopping recommendations to helpful traffic-free driving directions and even predictions in the geographical spread of contagious diseases.  What is even better is that, data misuses aside, the potential of this humongous amount of information is as big as the imagination of those who can get their hands on it, which probably means that we have barely started to scratch the surface of it all.

Our understanding of the potential of big data will improve as we become more comfortable and familiar with its dimensions but even now, it is easy to see its economic and social value.  But with value comes responsibility.  Just as those who extract and transport oil must apply utmost care to the handling of such precious but hazardous material, those who amass and manipulate humanity’s valuable data must be responsible and accountable for their part.  It is not only fair but entirely right that the greater the potential, the greater the responsibility, and that anyone entrusted with our information should be accountable to us all.  It should not be up to us to figure out and manage what others are doing with our data.  Frankly, that is simply unachievable in a big data world.  But even if we cannot measure the size of big data, we must still find a way to apportion specific and realistic responsibilities for its exploitation.

 

This article was first published in Data Protection Law & Policy in April 2013.

Smart Meters – new data access and privacy rules for the energy sector

Posted on February 21st, 2013 by



The Department of Energy and Climate Change (DECC) carried out numerous studies and soundings in preparation for the rollout of smart energy meters to over 30 million UK homes between 2014 and 2019, but the most polemical press coverage was elicited by the consultation in Spring 2012 on the data access and privacy issues raised by the valuable energy consumption data (Consumption Data) generated by these new metering devices. Some newspapers cited warnings of “cyber attacks by foreign hackers” and “a spy in every home”, and there was much interest in the concerns highlighted in a report published in June by the European Data Protection Supervisor that the most granular real-time Consumption Data could reveal details such as the daily habits of household members or even tell burglars when a house was unoccupied.

The UK government’s response to this consultation, published on 12th December 2012, sheds considerable light on the data protection compliance measures that must be put in place by energy companies, network operators and others who access Consumption Data such as ‘switching’ websites and energy services suppliers. These requirements will apply alongside (and in addition to) those already set out in the Data Protection Act 1998. The measures will be implemented via amendments to the licence conditions adhered to by energy suppliers (enforced by Ofgem) and a new Smart Energy Code overseen by a dedicated Smart Energy Code Panel. A central information hub controlled by a body known as the Data and Communications Company (DCC) will enable remote access to Consumption Data for suppliers and third parties that have agreed to be bound by the Code.

Background: The aim of the UK government’s smart meters programme is to give consumers real-time information about their energy consumption in the hope that this will help to control costs and eliminate estimated energy bills, on top of the environmental and cost-saving side effects of the behavioural changes such information may encourage. In the long term, it is hoped that smart energy data will lead to fluctuating, real-time energy pricing, enabling consumers to see how expensive it will be to use gas or electricity at any given time of day.

Key rules: There are some key elements to the new framework which apply differently to energy suppliers (such as British Gas and EDF Energy), network operators (companies that own and lease the infrastructure for delivering gas and electricity to premises) and "third parties" such as switching websites and energy companies when they are not acting in their capacity as a supplier to the relevant household.

A crucial aspect of the rules that applies to all parties is the requirement to obtain explicit, opt-in consent before using Consumption Data for any marketing purposes. For other uses, third parties will always need opt-in consent to remotely access Consumption Data of any level of granularity, whereas in order to remotely access the most detailed level of Consumption Data (relating to a period of less than one day), energy suppliers will also be required to obtain opt-in consent.

From a consumer protection perspective, perhaps the most important safeguards introduced by the Stage 1 draft of the Smart Energy Code published in November 2012 are the requirements on third parties requesting Consumption Data from the DCC to:

(a)  take measures to verify that the relevant household member has solicited the services connected with the third party’s data request;

(b)  self certify that the necessary consent has been obtained; and

(c)   provide reminders to consumers about the Consumption Data being collected at appropriate, regular intervals.

Privacy Impact Assessments: In line with Privacy by Design principles promoted by data protection authorities globally, the UK government has developed its own Privacy Impact Assessment to assess and anticipate the potential privacy risks of the smart metering programme as a whole. The idea is that the government’s PIA will be an “umbrella document” and every data controller wishing to access Consumption Data is expected to carry out its own PIA before the new framework comes into force (likely to be this summer). The European Commission is also developing a template PIA for this purpose.

Apart from helping to identify risks to customers and potential company liabilities, PIAs are lauded by the UK Information Commissioner as the best way to protect brand reputation, shape communication strategies and avoid expensive “bolt-on” solutions.

Conclusions: Research carried out as part of the UK government’s Data Access and Privacy consultation showed that the overwhelming concern of consumers questioned was that smart meter data would lead to an increase in direct marketing communications. Many participants did not identify the potential for misuse of Consumption Data until it was explained to them. The less obvious nature of the potential for privacy intrusion of this new data underlines the fact that consent is not a panacea in the case of smart meters (despite the considerable focus on this in the consultation responses).

So, clear and comprehensive information is key. As part of preparing for compliance, companies planning to access Consumption Data should build clear messaging into all customer-facing procedures, including those in respect of all in-person, online and call centre interaction. And whilst some of the finer details of the new rules are yet to be ironed out, it’s clear that every organisation concerned will be expected to digest the details of the new framework now and be fully prepared – including by completing Privacy Impact Assessments – in time for when the regulatory framework comes into force, expected to be June 2013.

A longer version of this article was first published in Data Protection Law & Policy in February 2013.

 

Big Data at risk

Posted on February 1st, 2013 by



“The amount of data in our world has been exploding, and analysing large data sets — so-called Big Data — will become a key basis of competition, underpinning new waves of productivity growth, innovation and consumer surplus”.  Not my words, but those of the McKinsey Global Institute (the business and economics research arm of McKinsey) in a report that evidences like no other the value of data for future economic growth.  However, that value will be seriously at risk if the European Parliament accepts the proposal for a pan-European Regulation currently on the table.

Following the publication by the European Commission last year of a proposal for a General Data Protection Regulation aimed at replacing the current national data protection laws across the EU, at the beginning of 2013, Jan Philipp Albrecht (Rapporteur for the LIBE Committee, which is leading the European Parliament’s position on this matter) published his proposed revised draft Regulation.  

Albrecht’s proposal introduces a wide definition of ‘profiling’, which was covered by the Commission’s proposal but not defined.  Profiling is defined in Albrecht’s proposal as “any form of automated processing of personal data intended to evaluate certain personal aspects relating to a natural person or to analyse or predict in particular that natural person’s performance at work, economic situation, location, health, personal preferences, reliability or behaviour“. 

Neither the Commission’s original proposal nor Albrecht’s proposal define “automated processing”.  However, the case law of the European Court of Justice suggests that processing of personal data by automated means (or automated processing) should be understood by contrast with manual processing.   In other words, automated processing is processing carried out by using computers whilst manual processing is processing carried out manually or on paper.  Therefore, the logical conclusion is that the collection of information via the Internet or from transactional records and the placing of that information onto a database — which is the essence of Big Data — will constitute automated processing for the purposes of the definition of profiling in Albrecht’s proposal.

If we link to that the fact that, in a commercial context, all that data will typically be used first to analyse people’s technological comings and goings, and then to make decisions based on perceived preferences and expected behaviours, it is obvious that most activities involving Big Data will fall within the definition of profiling.

The legal threat is therefore very clear given that, under Albrecht's proposal, any data processing activities that qualify as 'profiling' will be unlawful by default unless those activities are:

*      necessary for entering into or performing a contract at the request of the individual – bearing in mind that “contractual necessity” is very strictly interpreted by the EU data protection authorities to the point that if the processing is not strictly necessary from the point of view of the individuals themselves, it will not be regarded as necessary;

*      expressly authorised by EU or Member State law – which means that a statutory provision has to specifically allow such activities; or

*      with the individual’s consent – which must be specific, informed, explicit and freely given, taking into account that under Albrecht’s proposal, consent is not valid where the data controller is in a dominant market position or where the provision of a service is made conditional on the permission to use someone’s data.

In addition, there is a blanket prohibition on profiling activities involving sensitive personal data, discriminatory activities or children's data.

So the outlook is simple: either the European Parliament figures out how to regulate profiling activities in a more balanced way or Big Data will become No Data.

 

Killing the Internet

Posted on January 25th, 2013 by



The beginning of 2013 could not have been more dramatic for the future of European data protection.  After months of deliberations, veiled announcements and guarded statements, the rapporteur of the European Parliament's committee responsible for taking forward the ongoing legislative reform has revealed his position loudly and clearly.  Jan Albrecht's proposal is by no means the final say of the Parliament, but it is an indication of where an MEP who has thought long and hard about what the new data protection law should look like stands.  The reactions have been equally loud.  The European Commission has calmly welcomed the proposal, whilst some Member States' governments have expressed serious concerns about its potential impact on the information economy.  Amongst the stakeholders, opinions vary quite considerably – Albrecht's approach is praised by regulators whilst industry leaders have massive misgivings about it.  So who is right?  Is this proposal the only possible way of truly protecting our personal information, or have the bolts been tightened too much?

There is nothing more appropriate than a dispassionate legal analysis of some key elements of Albrecht's proposal to reveal the truth: if the current proposal were to become law today, many of the most popular and successful Internet services we use daily would become automatically unlawful.  In other words, there are some provisions in Albrecht's draft proposal that, when combined, would not only cripple the Internet as we know it but would also stall one of the most promising building blocks of our economic prosperity: the management and exploitation of personal information.  Sensationalist?  Consider this:

*     Traditionally, European data protection law has required that in order to collect and use personal data at all, one has to meet a lawful ground for processing.  The European Commission had intended to carry on with this tradition whilst ensuring that the so-called 'legitimate interests' ground, which permits data uses that do not compromise the fundamental rights and freedoms of individuals, remained available.  Albrecht proposes to replace this balancing exercise with a list of what qualifies as a legitimate interest and a list of what doesn't.  The combination of both lists has the effect of ruling out any data uses which involve either data analytics or simply the processing of large amounts of personal data, so the obvious outcome is that the application of the 'legitimate interests' ground to common data collection activities on the Internet is no longer possible.

*     Albrecht's aim of relegating reliance on the 'legitimate interests' ground to very residual cases is due to the fact that he sees individuals' consent as the primary basis for all data uses.  However, the manner and circumstances under which consent may be obtained are strictly limited.  Consent is not valid if the recipient is in a dominant market position.  Consent for the use of data is not valid either if presented as a condition of the terms of a contract and the data is not strictly necessary for the provision of the relevant service.  All that means that if a service is offered for free to the consumer – like many of the most valuable things on the Internet – but the provider of that service is seeking to rely on the value of the information generated by the user to operate as a business, there will not be a lawful way for that information to be used.

*     To finish things off, Albrecht delivers a killing blow through the concept of 'profiling'.  Defined as automated processing aimed at analysing things like preferences and behaviour, it covers what has become the pillar of e-commerce and is set to change the commercial practices of every single consumer-facing business going forward.  However, under Albrecht's proposal, such practices are automatically banned and only permissible with the consent of the individual, which, as shown above, is pretty much mission impossible.

The collective effect of these provisions is truly devastating.  This is not an exaggeration.  It is the outcome of a simple legal analysis of a proposal deliberately aimed at restricting activities seen as a risk to people.  The decision that needs to be made now is whether such a risk is real or perceived and, in any event, sufficiently great to merit curtailing the development of the most sophisticated and widely used means of communication ever invented. 

 
This article was first published in Data Protection Law & Policy in January 2013.