Archive for the ‘Mobile telecoms’ Category

Do lifestyle apps and wearable devices collect “health data”?

Posted on February 16th, 2015 by

The European Commission (the “Commission”) recently asked the Article 29 Working Party (“WP29”) to clarify the definition of “health data” in relation to lifestyle/wellbeing apps and wearable devices.  In its response the WP29 took a cautious approach, one that may lead to compliance difficulties for app developers and device manufacturers.

Defining “health data”

Wearable devices such as Fitbit, a wristband which collects data about a user’s movements and sleep patterns, are becoming increasingly popular.  Not all lifestyle apps collect health data, and the WP29 sensibly notes that the proposed use of such data and its scale should be factors in determining whether it is sensitive.  If the data are retained on the device, deleted after a set period or kept separate from other data sets, they are unlikely to be considered health data.

The difficulty of defining health data is that non-health data can become health data depending on the duration of collection, the proposed use, or combination with other data sets.  A pedometer that measures the number of steps a user takes each day does not, in isolation, collect health data, but in combination with body mass index and mood data it could build up a picture of an individual’s health state.

However, the WP29 then defines health data so broadly that it is likely to capture almost any data collected through a lifestyle app or wearable device.  Health data self-evidently includes medical records, information about diagnosis or treatment, and medical history.  The group notes that national legislators, courts and data protection authorities have found that it also includes whether an individual wears glasses, whether he smokes or drinks, his IQ, any allergies, any support groups he attends, any tax deductions for home alterations and his medical product purchase history.  The data do not have to be collected on a medical device and do not have to indicate poor health.
When conclusions about an individual’s health status are drawn as a result of combining non-health data, they become health data, even if the conclusions are incorrect.  The danger for users of wearable technology is that they may be unaware that their data are being transmitted from their device or combined with other data sets.

The WP29 goes on to say that under the draft General Data Protection Regulation (the “draft Regulation”), the definition will include test samples and any information on an individual’s physical state or disease risk.  Data from apps measuring heart rates or tobacco consumption would thus be included.

The role of consent

The WP29 concludes that explicit consent is the best justification for the processing of health data.  The app developer or device manufacturer will thus have to provide clear information on the well-defined purpose(s) of the processing, whether it will be covered by professional confidentiality and whether it will be combined with other data sets.  The group suggests providing examples of the consequences of combining the data, the likely purposes of processing and the types of third party processors.

This echoes last year’s global apps sweep (discussed in a previous blog post) coordinated by the Global Privacy Enforcement Network (“GPEN”), which found that most app developers fail to provide sufficient information about their processing activities.  However, an obligation to provide even more information would appear to contradict GPEN’s recent open letter asking app stores to make concise privacy policies a mandatory requirement for all apps.  In a similar vein, a UK government report from November 2014 argued that consent obtained through lengthy, incomprehensible and unread terms and conditions is arguably meaningless.  To give an example, a Fitbit user who has bought the device intentionally to track his fitness regime is likely well aware that the app is collecting his location data and heart rate.
This raises the question of whether the act of purchasing the product could itself imply consent, provided sufficient information is included in the packaging.  The WP29 feels that updated consent is required each time the data are processed for a further purpose.  This will cause a headache for developers, who must provide sufficiently well-defined purposes but will want to avoid seeking fresh consent if they later decide on a new purpose.  It is also hard to see how a piece of wearable technology with little or no interface could request consent at a later stage.

Pseudonymised exemption?

The WP29 took the opportunity to reiterate its concerns about the Council’s suggestion that pseudonymised data should be subject to lighter touch regulation.  The Council feels this is justified, provided safeguards are put in place requiring the highest technical standards and measures to prevent re-identification.

The group also called for the draft Regulation to require specific consent to the further processing of health data for research purposes.  Researchers may feel it necessary to list all possible purposes, which could frighten data subjects away from allowing their data to be used in important scientific research.

The guidance in practice

The WP29 stated that the recent Commission consultation on mHealth found there was “great interest” in strong privacy tools and strengthened enforcement of data protection rules.  In fact, fewer than half of respondents felt strong privacy and security tools are required, and fewer than half asked for increased transparency of information.  Some warned of the risks of overregulation.

The misuse of health data has potentially serious and irreversible consequences.  For that reason it was awarded protected status by the European legislators.
However, given the popularity of wearable devices and their likely proliferation (the Commission’s mHealth Green Paper predicts that personal sensor data will increase from 10% of all stored data to an astonishing 90% within the next decade), a more practical approach could be taken going forward.  In guiding data controllers who wish to take advantage of this information windfall, the WP29 needs to strike a balance between protecting the potentially sensitive data collected by wearable devices and avoiding overly strict regulatory controls that will be difficult to implement in practice and may unnecessarily encumber the user’s experience.

Global apps sweep: should developers be worried?

Posted on October 24th, 2014 by

A recent sweep with participation from 26 data protection authorities (“DPAs”) across the world revealed that a high proportion of mobile apps are accessing large amounts of personal data without meeting their data privacy obligations.

How did they do?

Not well. Out of the 1,200 apps surveyed, 85% failed to clearly explain how they were collecting and using personal information, 59% did not display basic privacy information and one in three requested excessive personal information. Another common finding was that many apps fail to tailor privacy information to the small screen.

The Information Commissioner’s Office (“ICO”), as the UK’s DPA, surveyed 50 apps, including many household names, and produced results in line with the global figures.

Rare examples of good practice included pop-up notifications asking permission prior to additional data being collected and basic privacy information with links to more detailed information for users who wish to know more.

What are they told to do?

As a result, Ofcom and ICO have produced some guidance for users on how to use apps safely. It is written in consumer-friendly language and contains straightforward advice on some common pitfalls such as checking content ratings or always logging out of banking apps.

Contrast this with the 25-page guidance from ICO aimed at app developers, which has drawn some criticism for being overly lengthy and complex. Rather than participating in research and pointing to long guidance documents, it would be more effective to promote simple rules (e.g. requiring pop-up notifications) and to hold app stores accountable for non-compliance. However, site tests demonstrate that users are irritated enough by constant pop-ups to stop using a site, so developers are reluctant to implement them.

Why is it important?

The lack of compliance is all the more alarming if read in conjunction with Ofcom research which surveyed a range of UK app users. Users viewed the apps environment as a safer, more contained space than browser-based internet access. Many believed apps were discrete pieces of software with little interconnectivity, and were unaware of the virus threat or of the fact that apps can continue to run in the background. There was implicit trust in established brands and recognised app stores, which users felt must monitor and vet all apps before selling them. Peer recommendation also played a significant role in deciding whether to download an app.

This means little, if any, attention is paid to privacy policies and permission requests. Users interviewed generally felt full permission had to be granted before using the app and were frustrated by their inability to accept some permissions and refuse others.

So what’s the risk for developers?

ICO has the power to fine those companies who breach the relevant laws up to £500,000. The threshold for issuing a fine is high, however, and this power has not yet been used in the context of mobile apps. Having said this, we know that ‘internet and mobile’ are one of ICO’s priority areas for enforcement action.

Perhaps a more realistic and potentially more damaging risk is the reputational and brand damage associated with being named and shamed publicly. When a lower level of harm has been caused, ICO is more likely to seek undertakings that the offending company will change its practices. As we know, ICO publishes its enforcement actions on its website. For a company whose business model relies on processing data and on peer recommendations as the main way to grow its user base and its brand, the trust of its users is paramount and hard to rebuild once lost.

ICO has said it will be contacting app developers who need to improve their data collection and processing practices. The next stage for persistent offenders would be enforcement action.

Developers would be wise to pay attention. Even if enforcement action is not in itself a concern, ICO’s research showed almost half of app users have decided against downloading an app due to privacy concerns. If that’s correct, privacy matters to mobile app users and could make or break a new app.

Update 12 December 2014

The 26 DPAs who took part in the global sweep have since written to seven major app stores including those of Apple, Google and Microsoft.  In their letter of 9 December they urge the marketplaces to make links to privacy policies mandatory, rather than optional, for those apps that collect personal data.

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014 by

In Part 1 of this piece I posed the question: the Internet of Things – what is it? I argued that even the concept of the Internet of Things (“IoT”) itself is somewhat ill-defined, making the point that there is no definition of IoT and that, even if there were, the definition would only change. What’s more, IoT will mean different things to different people and will refer to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be one any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy/security. Equally, with a number of open questions about how the IoT will work and how devices will communicate and identify each other, there is also a lack of standards and industry-wide co-operation around IoT.

Because IoT applications are frequently based around data use and can be intrusive in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in IoT are diverse, with a range of regulatory mandates and sometimes with defined roles confined to specific sectors. Some of these regulators are waking up to the potential issues posed by IoT, and a few are reaching out to the industry as a whole to consult and stimulate discussion. We’re more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It’s possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of them will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated;
  • Equally, as demand for a scarce resource increases, what kind of spectrum allocation is “fair” and “optimal”, and is some machine to machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and the exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be tested. In a decentralised world the thorny issues of obtaining and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this nascent regulatory debate, because continued interconnectivity between multiple devices raises potential issues.

In issuing a new consultation, “Promoting investment and innovation in the Internet of Things”, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:
  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which require resilience and security (the corresponding issue of privacy is also recognised);
  • a need for each connected device to have an assigned name or identifier and questioning just how those addresses should be determined and potentially how they would be assigned; and
  • understanding its own potential role as the UK’s regulator in an area (connectivity) key to the evolution of IoT.

In a varied and quite penetrable paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology”. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while Working Party 29 wades in with an early warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time and in 2013 it reported back on its own Conclusions of the Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it’s impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must “respect the many privacy and security challenges which can be associated with IoT”.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn’t even consider B2B applications and more global issues like “smart cities”, “smart transportation” and M2M (“machine to machine”) developments. Yet the principles and recommendations of the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It’s one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on, and explanation of, this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within the application of some IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and with legal responsibilities extending beyond the data controller to the data processor, each exerting its own force over IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone. With its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently before Congress and the “Black Box Privacy Protection Act” before the House of Representatives. Each now apparently has a low chance of actually passing, but they would, respectively, regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet whose feeds were not secure), there’s no doubt the evolution of IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet”.

No specific law, but plenty of applicable laws

My gut instinct to hold back on IoT commentary had served me well enough; with little to say in the legal sense, perhaps even now I’ve spoken too soon? What is clear is that we’re already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, we have to ask the right questions to build a picture of the technology, the way it communicates and figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto California

Part 1: Cutting through the Internet of Things hyperbole

Posted on October 15th, 2014 by

I’ve held back writing anything about the Internet of Things (or “IoT“) because there are so many developments playing out in the market. Not to mention so much “noise”.

Then something happened: “It’s Official: The Internet Of Things Takes Over Big Data As The Most Hyped Technology” read a Forbes headline. “Big data”, last week’s darling, is condemned to the “Trough of Disillusionment” while Gartner moves IoT to the very top of its 2014 emerging technologies Hype Cycle. Something had to be said.

The key point for me is that the IoT is “emerging”. What’s more, few are entirely sure where they are on this uncharted journey of adoption. IoT has reached an inflexion point, a point at which businesses and others realise that identifying with the Internet of Things may drive sales, shareholder value or mere kudos. We all want a piece of this pie.

In Part 1 of this two part exploration of IoT, I explore what the Internet of Things actually is.

IoT – what is it?

Applying Gartner’s parlance, one thing is clear: when any tech theme hits the “Peak of Expectations”, the “Trough of Disillusionment” will follow because, as with any emerging technology, it will be some time until there is pervasive adoption of IoT. In fact, for IoT, Gartner says widespread adoption could be 5 to 10 years away. However, this inflexion point is typically the moment when the tech industry’s big guns ride into town and, just as with cloud (remember some folk trying to trade mark the word?!), this will only drive further development and adoption. But also further hype.

The world of machine to machine (“M2M”) communications involved the connection of different devices which previously did not have the ability to communicate. For many, the Internet of Things is something more. As Ofcom (the UK’s communications regulator) set out in its UK consultation, IoT is a broader term, “describing the interconnection of multiple M2M applications, often enabling the exchange of data across multiple industry sectors”.

“The Internet of Things will be the world’s most massive device market and save companies billions of dollars” shouted Business Week in October 2014, happy to maintain the hype but also acknowledging in its opening paragraph that IoT is “beginning to grow significantly”. No question, IoT is set to enable large numbers of previously unconnected devices to connect and then communicate, sharing data with one another. Today we are mainly contemplating rather than experiencing this future.

But what actually is it?

The emergence of IoT is driving some great debate. When assessing what IoT is and what it means for business models, for the law and for commerce generally, there are arguably more questions than answers. In an exploratory piece in ZDNet, Richie Etwaru called out a few of these unanswered questions and prompted some useful debate and feedback. The top three questions he raised were:

  • How will things be identified? – believing we have to get to a point where there are standards for things to be sensed and connected;
  • What will the word trust mean to “things” in IoT? – making the point we need to redefine trust in edge computing; and
  • How will connectivity work? – asking whether there is something like IoTML (an Internet of Things Markup Language) to enable trust and facilitate this communication.

None of these questions is new, but his piece reinforces that we don’t quite know what IoT is or how some of its technical questions will be addressed. It’s likely that standardisation, or industry practice and adoption around certain protocols, will answer some of these questions in due course. As a matter of public policy we may see law makers intervene to shape some of these standards or drive particular kinds of adoption. There will be multiple answers to the “what is IoT?” question for some time, and I suspect different flavours and business models will come to the fore. Remember when every cloud seminar spent the first 15 minutes defining cloud models and reiterating extrapolations for the future size of the cloud market? Brace yourselves!

I’ve been making the same points about “cloud” for the past 5 years – like cloud, the IoT is a fungible concept. So, as with cloud, don’t assume IoT has a definitive meaning. As with cloud, don’t expect there to be any specific Internet of Things law (yet?). As Part 2 of this piece will discuss, law makers have spotted there’s something new which may need regulatory intervention to cultivate it for the good of all, but they’ve also realised that there’s something which may grow with negative consequences – something that may need to be brought into check. Privacy concerns particularly have raised their head early and we’ve seen early EU guidance in an opinion from the Article 29 Working Party, but there is still no specific IoT law. How can there be when there is still little definition?

Realities of a converged world

For some time we’ve been excited about the convergence of people, business and things. Gartner reminds us that “[t]he Internet of Things and the concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside already-digital entities”. In other words: a land of opportunity, but an ill-defined “blur” of technology and of what is real or merely conceptual within our digital age.

Of course the IoT world is also a world bumping up against connectivity, the cloud and mobility. Of course there are instances of IoT out there today. Or are there? As with anything emerging, the terminology and definition of the Internet of Things are emerging too. Yes, there is a pervasiveness of devices; yes, some of these devices connect and communicate; and yes, devices that were not necessarily designed to interact are communicating. But are these examples of the Internet of Things? Break these models down into constituent parts for applied legal thought, and does it necessarily matter?

Philosophical, but for a reason

My point? As with any complex technological evolution, as lawyers we cannot apply laws, negotiate contracts or assess risk or the consequences for privacy without a proper understanding of the complex ecosystem we’re applying these concepts to. Privacy consequences cannot be assessed in isolation, without considering how the devices, technology and data actually interact. Be aware that the IoT badge means nothing legally and probably conveys little factual information about “how” something works. It’s important to ask questions, and important not to assume.

In Part 2 of this piece I will discuss some early signs of how the law may be preparing to deal with these emerging trends. Of course the answer is that it probably already does, and that it probably has the flexibility to deal with many elements of IoT yet to emerge.

Data security breach notification: it’s coming your way!

Posted on July 2nd, 2013 by

Data breach notification laws have existed in the US for several years. California was the first state to introduce a data breach notification law in 2002, followed soon after by forty-five other US states. In 2012, the US Senate introduced a Data Security and Breach Notification Act which, if enacted, would establish a national data security and breach notification standard for the protection of consumers’ electronic personal information across the US.

In Europe, data breach notification has only drawn attention at a political and legislative level following recent press coverage of data breach scandals. Nevertheless, the numerous debates, initiatives and legislative proposals that have appeared in recent months are evidence of Europe’s growing interest in this topic, and recognition of the need to regulate. As an example, the EU Commission’s Directorate General for Communications Networks, Content and Technology (DG CONNECT) recently proposed to “explore the extension of security breach notification provisions, as part of the modernisation of the EU personal data protection regulatory framework” in its Digital Agenda for Europe (action 34).

From a legislative perspective, things have been moving forward rather steadily for several years. In 2009, the European legislator adopted a pan-European data breach notification requirement for the first time, under the amended ePrivacy directive 2002/58/EC (“ePrivacy directive”). True, the directive only applies to “providers of publicly available electronic communications services” (mainly telecom operators and ISPs), but in a limited number of EU Member States the ePrivacy directive was implemented with a much broader scope (e.g., Germany). In June 2013, the European Commission released a new regulation explaining the technical implementing measures for data breach notification by telecom operators and ISPs.

Following this first legislative step, the European Commission has recently made two further legislative proposals. The first, which has drawn the most attention, was the European Commission’s proposal of a new regulation to replace the current Data Protection Directive 95/46/EC. If adopted, this Regulation would introduce a general obligation for all data controllers, across business sectors, to notify the regulator in case of a breach without undue delay, and not later than 24 hours after having become aware of it. Companies would also have to report data breaches that could adversely affect individuals without undue delay. This Regulation would apply not only to organizations that are established on the territory of the EU, but also to those that are not established within the EU, but target EU citizens either by offering them goods and services, or by monitoring their behaviour.

Needless to say, in Brussels, stakeholders and lobbyists have been actively campaigning against the proposed data breach provisions for months on the grounds that they are unfriendly to business, cumbersome and impractical. Following the debates at the European Parliament and the Council of Ministers on the proposed Regulation, a less prescriptive, more business-friendly version of the data breach provisions may end up being adopted. Currently, discussions are ongoing in an attempt to limit the scope of the data breach requirements to breaches that are “likely to severely affect the rights and freedoms of individuals”. The deadline for reporting breaches could also be extended to 72 hours. At this point, it is impossible to predict with certainty what the final wording of those provisions will be. However, there does seem to be a consensus among the EU institutions and member states that, one way or another, a data breach notification requirement must be introduced in the Regulation.

Second, the European Commission has proposed a directive that aims to impose new measures to ensure a high common level of network and information security across the EU. The Directive concerns public administrations and market operators, namely “providers of information society services” (i.e., e-commerce platforms, internet payment gateways, social networks, search engines, cloud computing services, application stores) and “operators of critical infrastructure that are essential for the maintenance of vital economic and societal activities in the fields of energy, transport, banking, stock exchanges and health.” The Directive would require them to report significant cyber incidents (e.g., an electricity outage, the unavailability of an online booking engine, or the compromise of air traffic control due to an outage or a cyber attack) to a national competent authority.

So what does this tell companies?

First, that data security in general and data breach notification in particular are drawing more and more attention, and thus cannot be ignored. As was the case a few years ago in the US, data breach notification is bound to become one of the hottest legal issues in Europe in the coming years. The legal framework for data breach notification may still be a work-in-progress, but nevertheless it is becoming a reality in Europe. Second, companies should not wait until data breach laws come into force in Europe to start implementing an action plan for handling data breaches. While data breach notification may not yet be a legal requirement for all companies in Europe, the reputational damage caused by a single data breach should motivate companies to implement robust data breach handling procedures. Finally, data breach notification can be viewed as a competitive advantage that enables companies to be more forthcoming and transparent vis-à-vis clients and customers who entrust them with their personal data.

For more information on data security breach notification rules in France, view my article in English: “Complying with Data Breach Requirements in France” (first published in BNA’s World Data Protection Report); and in French: “La notification des violations de données à caractère personnel: analyse et décryptage” (first published in Lamy Droit de l’Immatériel).

The conflicting realities of data globalisation

Posted on June 17th, 2013 by

The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data is travelling every second that goes by. But data is always on the move and we don’t even need to leave home to be contributing to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some more public than others.

The Internet is global by definition. Or more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly assembled at destination. That concept developed very quickly into a borderless network and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

This trend has led visionaries like Google’s Eric Schmidt to affirm that thanks to mobile technology, the number of digitally connected people will more than triple – going from the current 2 billion to 7 billion people – very soon. That would mean more than three times the amount of data generated today. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world’s professionals.

As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on the current and forthcoming legislative restrictions on international data flows, which is precisely one of the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future ‘adequacy findings’ by the European Commission as well as Safe Harbor will be negatively affected. We can assume that if the European Commission decides to have a go at seeking a re-negotiation of Safe Harbor, this will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons in a way that allows no blanket disclosures. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

The conflicting realities of data globalisation could not be more striking. On the one hand, everyday practice shows that data is geographically neutral and simply flows across global networks to make itself available to those with access to it. On the other, it is going to take a fair amount of convincing to show that any restrictions on international data flows should be both measured and realistic. To address these conflicting realities we must therefore acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. So since we cannot stop the technological evolution of our time and the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

This article was first published in Data Protection Law & Policy in June 2013.

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013 by

Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, the privacy issues are a-plenty, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes, whatever else you care to think of—will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.


Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices come to be seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just like terms of service are now. And, as technology becomes ever more complicated, so it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post first published on the IAPP Privacy Perspectives here.

Privacy pointers for appreneurs

Posted on May 31st, 2013 by

While parts of the global economy are continuing to suffer serious economic shocks, an individual with a computer, internet access and the necessary know-how can join the increasing ranks of the appreneurs – people developing and hoping to make money from apps. Buoyed by the stories of wunderkinds such as 17-year-old Nick D’Aloisio, who sold his Summly app to Yahoo for around £18m earlier this year, many are seeking to become appillionaires! And undoubtedly a rosy future will beckon for those fortunate enough to hit on the right app at the right time.

As the popularity of mobile and tablet devices rises, the proliferation of apps will continue. But some apps will sink without a trace and some will become global hits. Amidst all the excitement, those developing apps would do well to consider certain essential privacy pointers in order to anticipate any potential obstacles to widespread adoption and in order to avoid any unwelcome regulatory attention down the road. These include:

1. Think Privacy from the beginning – design your app so that it shows an understanding of privacy issues from the start, e.g. by including settings that give individuals control over what data you collect about them, usually through providing an opt-out;

2. Tell individuals what you’re doing – include a notice setting out how you use their data, make sure that the notice is accessible and in a language that people can understand, and adopt a ‘surprise minimisation’ approach so that you can reasonably argue that individuals would not be surprised by the data you collect on them in a given context;

3. Decide whether you’re sharing the data you collect with anyone else – if so, make sure that there’s a good reason to share the data, tell individuals about the data sharing and check to see whether there are any rules that require you to obtain individuals’ consent before sharing their data, e.g. for marketing purposes;

4. Check to see whether you’re collecting special types of data – be aware that certain types of data (such as location data or health data) are considered more intrusive and you may need to obtain an individual’s consent before collecting this data;

5. Implement an implied consent solution when using cookies or other tracking technologies in the EU – the debate on how to comply with the EU cookie rule is pretty much over, since implied consent is increasingly being accepted by regulators (see Phil Lee’s recent blog).
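Purely by way of illustration – every name below is hypothetical, not a real API – the first four pointers might translate into an app’s settings model with privacy-protective defaults along these lines:

```python
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    """Hypothetical per-user privacy settings for a mobile app, reflecting
    the pointers above: user control, notice, sharing, and special data."""
    analytics_opt_out: bool = False     # pointer 1: control via an opt-out setting
    notice_acknowledged: bool = False   # pointer 2: accessible privacy notice
    share_with_partners: bool = False   # pointer 3: sharing is off unless chosen
    location_consent: bool = False      # pointer 4: consent for intrusive data

    def may_collect_location(self) -> bool:
        # Special categories of data (e.g. location) need prior consent.
        return self.location_consent

    def may_share(self) -> bool:
        # Sharing requires both notice and an affirmative sharing choice.
        return self.notice_acknowledged and self.share_with_partners


settings = PrivacySettings()
assert not settings.may_collect_location()  # nothing intrusive by default
assert not settings.may_share()             # no sharing by default
```

The design point is simply that the defaults do the privacy work: a user who never opens the settings screen is collected from and shared about as little as possible.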

While an initiative scrutinising app privacy policies and practices (similar to the ‘Internet Sweep Day’ we have seen initiated recently by the Global Privacy Enforcement Network) is probably some time off, appreneurs that can get privacy ‘right’ from the start will have a competitive advantage over those that do not.

Is BYOD secure for your company?

Posted on May 24th, 2013 by

Over the past years, BYOD has developed rapidly and has even become common practice within some companies. More and more employees are using their own electronic devices (e.g., smartphones and tablets) at work. The benefits for companies are indisputable in terms of cost savings, work productivity, and the functionalities that smart devices can offer to employees. However, BYOD can also pose a threat to the security of a company’s information network and systems when used without the proper level of security. On May 15, 2013, the French Agency for the Security of Information Systems (ANSSI) released a technical paper advising companies to implement stronger security measures when authorizing their employees to use electronic devices.

The Agency notes that the current security standards used by companies are insufficient to protect their professional data effectively. Electronic devices can store large amounts of data obtained directly (e.g., emails, agendas, contacts, photos, documents, SMS) or indirectly (navigation data, geolocation data, history). Some of this data may be considered sensitive by companies (e.g., access codes and passwords, security certifications) and may be used fraudulently to access business information stored on the company’s professional network. Thus, the use of electronic devices in the workplace carries a risk that business data may be modified, destroyed or disclosed unlawfully. In particular, the risk of a data security breach deriving from the use of an electronic device is quite high due to the numerous functionalities such devices offer. This risk is generally explained by the vulnerability of the information systems installed on electronic devices, but also by the behaviour of employees who are not properly informed about the risks.

The Agency acknowledges that it is unrealistic to expect a high level of security when using mobile devices, regardless of the security parameters applied. Nevertheless, the Agency recommends that companies implement certain security parameters in order to mitigate the risk of a security incident. These security parameters should be installed on the employee’s device within a single profile that he/she cannot modify. In addition to the technical measures, companies should also implement organizational measures, such as a security policy and an internal document explaining to employees the authorized uses of IT systems and devices. Finally, those security measures should be reassessed throughout the lifecycle of the electronic device (i.e., the inherent security of the device, the security of the information system before the device is used by the employee, the security conditions applied to the entire pool of electronic devices, and reinitializing devices before they are reassigned).

The twenty-one security measures that are outlined in the Agency’s paper are categorized as follows:

– access control: renewal of the password every three months; automatic lock-down of the device after five minutes; use of a PIN code when sensitive data are stored on the device; limit the number of attempts to unlock the device;

– security of applications: prohibit the ‘by default’ use of the online application store; prohibit the unauthorized installation of applications; block the geolocation functionality for applications that do not need it; switch off the geolocation functionality when not in use; install security patches on a regular basis;

– security of data and communications:  wireless connections (e.g., Bluetooth, Wi-Fi) must be deactivated when not used; avoid connecting to unknown wireless networks when possible; apply robust encryption to the internal storage of the device; sensitive data must be shared by using encrypted communication channels in order to maintain the confidentiality and integrity of the data; 

– security of the information system: automatically upgrade information systems on a regular basis by installing security patches; if needed, reinitialize the device entirely once per year.
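As a minimal sketch only – the Agency’s paper prescribes parameters, not code, and the specific attempt limit below is an assumption since the paper only says to limit attempts – the access-control measures above might be checked like this:

```python
from datetime import datetime, timedelta

# Thresholds drawn from the access-control measures listed above
PASSWORD_MAX_AGE = timedelta(days=90)    # renew the password every three months
AUTO_LOCK_AFTER = timedelta(minutes=5)   # lock the device after five minutes idle
MAX_UNLOCK_ATTEMPTS = 5                  # illustrative limit on unlock attempts


def password_expired(last_changed: datetime, now: datetime) -> bool:
    """True when the password is older than the renewal period."""
    return now - last_changed > PASSWORD_MAX_AGE


def should_lock(last_activity: datetime, now: datetime) -> bool:
    """True when the device has been idle long enough to auto-lock."""
    return now - last_activity >= AUTO_LOCK_AFTER


def lockdown_required(failed_attempts: int) -> bool:
    """True when too many failed unlock attempts have accumulated."""
    return failed_attempts >= MAX_UNLOCK_ATTEMPTS


now = datetime(2013, 5, 24, 12, 0)
assert password_expired(now - timedelta(days=91), now)
assert not should_lock(now - timedelta(minutes=2), now)
assert lockdown_required(5)
```

In practice these checks would be enforced by a mobile device management profile rather than application code, which is precisely why the Agency insists the profile not be modifiable by the employee.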

The Agency explains that these security parameters are incompatible with a BYOD policy involving the combined use of an electronic device for both private and professional purposes. The Agency recommends that professional devices be used exclusively for that purpose (meaning that employees should have a separate device for private use), and, if the same device is used both professionally and privately, that the two environments be separated effectively.

The Agency’s paper is available (in French) by clicking on the following link: NP_Ordiphones_NoteTech[1]

The familiar perils of the mobile ecosystem

Posted on March 18th, 2013 by

I had not heard the word ‘ecosystem’ since school biology lessons.  But all of a sudden, someone at a networking event dropped the ‘e’ word and these days, no discussion about mobile communications takes place without the word ‘ecosystem’ being uttered in almost every sentence.  An ecosystem is normally defined as a community of living things helping each other out (some more willingly than others) in a relatively contained environment.  The point of an ecosystem is that completely different organisms – each with different purposes and priorities – are able to co-exist in a more or less harmonious but eclectic way.  The parallel between that description and what is happening in the mobile space is evident.  Mobile communications have evolved around us to take on a life of their own, separate from traditional desktop-based computing and web browsing.  Through the interaction of very different players, our experience of communications on the go via smart devices has become an intrinsic part of our everyday lives.

Mobile apps in particular have penetrated our devices and lifestyles in the most natural of ways.  Studies suggest that the average smartphone user downloads 37 apps.  The fact that the term ‘app’ was listed as Word of the Year in 2010 by the American Dialect Society is quite telling.  Originally conceived to provide practical functions like calendars, calculators and ring tones, mobile apps now bring us anything that can be digitised and has a role to play in our lives.  In other words, our use of technology has never been as close and personal.  Our mobile devices are an extension of ourselves, and mobile apps are an accurate tool to record our every move (in some cases, literally!).  As a result, the way in which we use mobile devices tells a very accurate story of who we are, what we do and what we are about.  Conspiracy theories aside, it is a fact that smartphones are the perfect surveillance device and most of us don’t even know it!

Policy makers and regulators throughout the world are quickly becoming very sensitive to the privacy risks of mobile apps.  Enforcement is the loudest expression of that nervousness, but the proliferation of guidance on complying with the law in relation to the development, provision and operation of apps has been an equally clear sign of the level of concern.  Regulators in Canada, the USA and, more recently, Europe have voiced serious concerns about such risks.  The close and intimate relationship between the (almost always on) devices and their users is widely seen as an aggravating factor in the potential for snooping, data collection and profiling.  Canadian regulators are particularly concerned about the lightning speed of the app development cycle and the ability to reach hundreds of thousands of users within a very short period of time.  Another generally shared concern is the fragmentation between the many players in the mobile ecosystem – telcos, handset manufacturers, operating system providers, app stores, app developers, app operators and of course anybody else who wants a piece of the rich mobile cake – and the complexity this adds.

All of that appears to compromise undisputed traditional principles of privacy and data protection: transparency, individuals’ control over their data and purpose limitation.  It is easy to see why that is the case.  How can we even attempt to understand – let alone control – all of the ways in which the information generated by our non-stop use of apps may potentially be used when all such uses are not yet known, the communication device is undersized and our eagerness to start using the app acts as a blindfold?  No matter how well intended the regulators’ guidance may be, it is always going to be a tall order to follow, particularly when the expectations of those regulators in terms of the quality of the notice and consent are understandably high.  In addition, the bulk of the guidance has been targeted at app developers, a key but often relatively powerless player in the whole ecosystem.  Why is the enthusiastic but humble app developer the focus of the compliance guidelines when some of the other parties – led by the operator of the app, which is probably the most visible party to the user – play a much greater role in determining which data will be used and by whom?

Thanks to their ubiquity, physical proximity to the user and personal nature, mobile communications and apps pose a massive regulatory challenge to those who make and interpret privacy rules, and an even harder compliance conundrum to those who have to observe them.  That is obviously not a reason to give up and efforts must be made by anyone who plays a part to contribute to the solution.  People are entitled to use mobile technology in a private, productive and safe way.  But we must acknowledge that this new ecosystem is so complex that granting people full control of the data generated by such use is unlikely to be viable.  As with any other rapidly evolving technology, the privacy perils are genuine but attention must be given to all players and, more importantly, to any mechanisms that allow us to distinguish between legitimate and inappropriate uses of data.  Compliance with data protection in relation to apps should be about giving people what they want whilst avoiding what they would not want.

This article was first published in Data Protection Law & Policy in March 2013.