Archive for the ‘Data security’ Category

A New ISO Standard for Cloud Computing

Posted on November 5th, 2014 by



The summer of 2014 saw another standard published by the International Organization for Standardization (ISO). ISO27018:2014 is a voluntary standard governing the processing of personal data in the public cloud.

With the catchy title of "Information technology – Security techniques – Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors" ("ISO27018"), it is perhaps not surprising that this long awaited standard is yet to slip off the tongue of every cloud enthusiast.  European readers may have assumed that references to PII meant this standard was framed firmly around the US – wrong!

What is ISO27018?

ISO27018 sets out a framework of “commonly accepted control objectives, controls and guidelines” which can be followed by any data processors processing personal data on behalf of another party in the public cloud.

ISO27018 has been crafted by ISO to have broad application, from large organisations to small, and from public entities to government and non-profit bodies.

What is it trying to achieve?

Negotiations in cloud deals which involve the processing of personal data tend to be heavily influenced by the customer's perceptions of heightened data risk and sometimes very real challenges to data privacy compliance. This is a hurdle for many cloud adopters as they relinquish control over data and rely on the actions of another (and sometimes those under its control) to maintain adequate safeguards. In Europe, until we see the new Regulation perhaps, a data processor has no statutory obligations when processing personal data on behalf of another. ISO27018 goes some way towards imposing on the processor a level of responsibility for the personal information it processes.

ISO27018’s introductory pages call out its objectives:

  • It’s a tool to help the public cloud provider to comply with applicable obligations: for example there are requirements that the public cloud provider only processes personal information in accordance with the customer’s instructions and that they should assist the customer in cases of data subject access requests;
  • It’s an enabler of transparency allowing the provider to demonstrate why their cloud services are well governed: imposing good governance obligations on the public cloud provider around its information security organisation (eg the segregation of duties) and objectives around human resource security prior to (and during employment) and encouraging programmatic awareness and training. Plus it echoes the asset management and access controls elements of other ISO standards (see below);
  • It will assist the customer and vendor in documenting contractual obligations: by addressing typical contractually imposed accountability requirements; data breach notification, imposing adequate confidentiality obligations on individuals touching the data and flowing down technical and organisational measures to sub-processors, as well as requiring the documentation of data location. This said, a well advised customer may wish to delve deeper as this is not a full replacement for potential data controller to processor controls; and
  • It offers the public cloud customer a mechanism to exercise audit and compliance rights: with ISO27018's potential application across disparate cloud environments, it remains to be seen whether a third party could certify compliance against some of the broader data control objectives contained in ISO27018. However, regular review and reporting and/or conformity reviews may provide a means for vendor or third party verification (potentially of more use where shared and/or virtualised server environments practically frustrate direct audit by the customer of data, systems and data governance practices).

ISO27018 goes some way towards delivering these safeguards. It is also a useful tool for a customer to evaluate the cloud services and data handling practices of a potential supplier. But it’s not simple and it’s not a substitute for imposing compliance and control via contract.

A responsible framework for public cloud processors

Privacy laws around the world prescribe nuanced, and sometimes no, obligations upon those who determine the manner in which personal information is used. Though ISO27018 is not specifically aimed at the challenges posed by European data protection laws, or those of any other jurisdiction for that matter, it is flexible enough to accommodate many of the inevitable variances. It cannot fit all current rules and may not fit future ones. However, in building in this flexibility, it loses some of its potential bite to generality.

Typically entities adopting ISO27001 (Information security management) are seeking to protect their own data assets, but it is increasingly a benchmark standard for data management and handling among cloud vendors. ISO27018 builds upon ISO27002 (Information technology – Security techniques – Code of practice for information security controls), reflecting its controls but adapting these for the public cloud by mapping back to ISO27002 obligations where they remain relevant and supplementing them where necessary by prescribing additional controls for public cloud service provision (as set out separately in Annex A to ISO27018). As you may therefore expect, ISO27018 explicitly anticipates that a personal information controller would be subject to wider obligations than those specified here, which are aimed at processors.

Adopting ISO27018

Acknowledging that the standard cannot be all-encompassing, and that the flavours of cloud are wide and varied, ISO27018 calls for an assessment to be made across applicable personal information "protection requirements".  It calls for the organisation to:

  • Assess the legal, statutory, regulatory and contractual obligations of it and its partners (noting particularly that some of these may mandate particular controls (for example preserving the need for written contractual obligations in relation to data security under the 7th Principle of Directive 95/46/EC));
  • Complete a risk assessment across its business strategy and information risk profile; and
  • Factor in corporate policies (which may, at times, go further than the law for reasons of principle, global conformity or because of third party influences).

What ISO27018 should help with

ISO27018 offers a reference point for controllers who wish to adopt cloud solutions run by third party providers. It is a cloud computing information security control framework which may form part of a wider contractual commitment to protect and secure personal information.

As we briefly explained in an earlier post in our tech blog, the European Union has also spelled out its desire to promote uniform standard setting in cloud computing. ISO27018 could satisfy the need for a broadly applicable, auditable data management framework for public cloud provision. But it's not EU specific and lacks some of the rigour an EU based customer may seek.

What ISO27018 won’t help with

ISO27018 is not an exhaustive framework. There are a few obvious flaws:

  • It's been designed for use in conjunction with the information security controls and objectives set out in ISO27002 and ISO27001, which provide general information security frameworks. This is a high threshold for small or emerging providers (many of which do not meet all these controls or certify to these standards today). It is therefore more accessible for large enterprise providers, but something to weigh up – the more controls there are, the more ways there are to slip up;
  • While it may be used as a benchmark for security, coupled with contractual commitments to meet and maintain selected elements of ISO27018, it won't be relevant to all cloud solutions and compliance situations (though some will use it as if it were);
  • It perpetuates the use of the PII moniker which, already holding specific US legal connotation (i.e. narrower application), is now used in a more widely defined context under ISO27018 (in fact PII under ISO27018 is closer to the definition of personal data under EU Directive 95/46/EC). This could confuse the stakeholders in multi-national deals, and the corresponding use of PII in the full title to ISO27018 potentially misleads around the standard's potential applicability and use cases;
  • ISO27018 is of no use in situations where the cloud provider is (or assumes the role of) data controller, and it assumes all data in the cloud is personal data (so watch this space for ISO27017 (coming soon) which will apply to any data (personal or otherwise)); and
  • For EU based data controllers, other than constructing certain security controls, ISO27018 is not a mechanism or alternative route to legitimise international data transfers outside of the European Economic Area. Additional controls will have to be implemented to ensure such data enjoys adequate protection.

What now?

ISO27018 is a voluntary standard, not law, and it won't entirely replace the need for specific contractual obligations around processing, accessing and transferring personal data. In a way its ultimate success can be gauged by the extent of eventual adoption. It will be used to differentiate, but it will not always answer all the questions a well-informed cloud adopter should be asking.

It may be used in whole or in part and may be asserted and used alongside or as part of contractual obligations, information handling best practice or simply as a benchmark which a business will work towards. Inevitably there will be those who treat the Standard as if it were the law without thought about what they are seeking to protect against and what potential wrongs they are seeking to right.  If so, they will not reap the value of this kind of framework.

 

What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014 by



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details'; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.

 

It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the 'principle of self-determination of data'.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection (a rough sketch of what such a granular consent record might look like follows this list);
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device;
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.
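
To make the granularity point more concrete, here is a minimal illustrative sketch (in Python) of what a granular consent record might look like: per user, per device, per data category and per purpose, with a user-chosen collection frequency and a withdrawal mechanism that returns the affected records so that other actors in the supply chain can be told. All field names and structure are assumptions made purely for illustration; nothing here is prescribed by the WP29 Opinion.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical sketch only - field names and structure are illustrative
# assumptions, not requirements taken from the WP29 Opinion.

@dataclass
class ConsentRecord:
    user_id: str
    device_id: str                 # the specific "thing" this consent covers
    data_category: str             # e.g. "heart_rate", "location"
    purpose: str                   # the specific processing consented to
    collection_interval_secs: int  # user-chosen time/frequency of collection
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None


@dataclass
class ConsentLedger:
    records: List[ConsentRecord] = field(default_factory=list)

    def grant(self, record: ConsentRecord) -> None:
        self.records.append(record)

    def withdraw(self, user_id: str, device_id: str, data_category: str) -> List[ConsentRecord]:
        """Mark matching consents as withdrawn and return them, so that other
        actors in the IoT supply chain can be notified of the withdrawal."""
        affected = []
        for r in self.records:
            if (r.user_id, r.device_id, r.data_category) == (user_id, device_id, data_category) and r.is_active():
                r.withdrawn_at = datetime.utcnow()
                affected.append(r)
        return affected
```

A real implementation would also need to cover notice, proof of consent and the other expectations discussed above; the point of the sketch is simply that granularity implies structured, per-device and per-purpose records rather than a single blanket flag.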

 

Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014 by



In Part 1 of this piece I posed a question: the Internet of Things – what is it? I argued that even the concept of the Internet of Things ("IoT") itself is somewhat ill-defined, making the point that there is no settled definition of IoT and, even if there were, that the definition would only change. What's more, IoT will mean different things to different people and will speak to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK (and nor will there be any time soon)). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy / security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other etc., there is also a lack of standards and industry wide co-operation around IoT.

Frequently based around data use and with potentially intrusive application in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, UK and the rest of Europe, the regulatory bodies with an interest in IoT are diverse, with a range of regulatory mandates and sometimes with a defined role confined to specific sectors. Some of these regulators are waking up to potential issues posed by IoT and a few are reaching out to the industry as a whole to consult and stimulate discussion. We're more likely to see piecemeal regulation addressing specific issues than something all encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It's possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand increases, with a scarce resource what kind of spectrum allocation is "fair" and "optimal", and is some machine to machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an "open" basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply act as an honest broker in a competitive world; and
  • In some applications of IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of consent and reaffirming consent will be challenging. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because continued interconnectivity between multiple devices raises potential issues.

In issuing a new Consultation, "Promoting investment and innovation in the Internet of Things", Ofcom (the UK's communications regulator) kicked off its own learning exercise to identify potential policy concerns around:
  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability demands placed upon networks, which require resilience and security (the corresponding issue of privacy is also recognised);
  • a need for each connected device to have an assigned name or identifier and questioning just how those addresses should be determined and potentially how they would be assigned; and
  • understanding its own potential role as the UK's regulator in an area (connectivity) key to the evolution of IoT.

In a varied and quite penetrable paper, Ofcom's consultation recognises what many will be shouting: its published view "is that industry is best placed to drive the development, standardisation and commercialisation of new technology". However, it goes on to recognise that "given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards."

Europe muses while Working Party 29 wades in early, warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time and in 2013 it reported back on its own Conclusions of the Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth.   The usual dichotomy is playing out equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its own Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it's impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that the development must "respect the many privacy and security challenges which can be associated with IoT".

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

This Opinion doesn't even consider B2B applications and more global issues like "smart cities" and "smart transportation", as well as M2M ("machine to machine") developments. Yet the principles and recommendations in the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It's one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU's data privacy principles within some applications of IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on "profiling" and legal responsibilities extending beyond the data controller to the data processor, exerting their own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone; with its focus on activity specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the "We Are Watching You Act" currently with Congress and the "Black Box Privacy Protection Act" with the House of Representatives. Each now apparently has a low chance of actually passing but, if passed, they would respectively regulate monitoring by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or 'black boxes', in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013 and is itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC's own words: "[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area." Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around "building trust" and "maximising consumer benefits through consumer control". With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet whose feeds were not secure), there's no doubt the evolution of IoT is on the FTC's radar.

FTC Chairwoman Edith Ramirez commented that "The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet".

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary has served me well enough. In the legal sense there has been little to say – perhaps even now I've spoken too soon? What is clear is that we're already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to IoT, the right questions, asked to build a picture of the technology and the way it communicates, and to figure out the commercial realities and relative risks posed by these interactions.

Whether the internet of customers, the internet of people, data, processes or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today's IoT challenge.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com

Part 1: Cutting through the Internet of Things hyperbole

Posted on October 15th, 2014 by



I’ve held back writing anything about the Internet of Things (or “IoT“) because there are so many developments playing out in the market. Not to mention so much “noise”.

Then something happened: “It’s Official: The Internet Of Things Takes Over Big Data As The Most Hyped Technology” read a Forbes headline. “Big data”, last week’s darling, is condemned to the “Trough of Disillusionment” while Gartner moves IoT to the very top of its 2014 emerging technologies Hype Cycle. Something had to be said.

The key point for me is that the IoT is "emerging". What's more, few are entirely sure where they are on this uncharted journey of adoption. IoT has reached an inflexion point – a point where businesses and others realise that identifying with the Internet of Things may drive sales, shareholder value or merely kudos. We all want a piece of this pie.

In Part 1 of this two part exploration of IoT, I explore what the Internet of Things actually is.

IoT – what is it?

Applying Gartner's parlance, one thing is clear: when any tech theme hits the "Peak of Expectations" the "Trough of Disillusionment" will follow because, as with any emerging technology, it will be some time until there is pervasive adoption of IoT. In fact, for IoT, Gartner says widespread adoption could be 5 to 10 years away. However, this inflexion point is typically the moment when the tech industry's big guns ride into town and, just as with cloud (remember some folk trying to trade mark the word?!), this will only drive further development and adoption. But also further hype.

The world of machine to machine (“M2M“) communications involved the connection of different devices which previously did not have the ability to communicate. For many, the Internet of Things is something more, as Ofcom (the UK’s communications regulator) set out in its UK consultation, IoT is a broader term, “describing the interconnection of multiple M2M applications, often enabling the exchange of data across multiple industry sectors“.

"The Internet of Things will be the world's most massive device market and save companies billions of dollars" shouted Business Week in October 2014, happy to maintain the hype but also acknowledging in its opening paragraph that IoT is "beginning to grow significantly". No question, IoT is set to enable large numbers of previously unconnected devices to connect and then communicate, sharing data with one another. Today we are mainly contemplating rather than experiencing this future.

But what actually is it?

The emergence of IoT is driving some great debate. When assessing what IoT is and what it means for business models, the law and commerce generally, arguably there are more questions than there are answers. In an exploratory piece in ZDNet, Richie Etwaru called out a few of these unanswered questions and prompted some useful debate and feedback. The top three questions raised by Richie were:

  • How will things be identified? – believing we have to get to a point where there are standards for things to be sensed and connected;
  • What will the word trust mean to “things” in IoT? – making the point we need to redefine trust in edge computing; and
  • How will connectivity work? – Is there something like IoTML (The Internet of Things Markup Language) to enable trust and facilitate this communication?

None of these questions are new, but his piece reinforces that we don't quite know what IoT is and how some of its technical questions will be addressed. It's likely that standardisation, or industry practice and adoption around certain protocols and practices, will answer some of these questions in due course. As a matter of public policy we may see law makers intervene to shape some of these standards or drive particular kinds of adoption. There will be multiple answers to the "what is IoT?" question for some time. I suspect in time different flavours and business models will come to the fore. Remember when every cloud seminar spent the first 15 minutes defining cloud models and reiterating extrapolations for the future size of the cloud market? Brace yourselves!

I've been making the same points about "cloud" for the past 5 years – like cloud, the IoT is a fungible concept. So, as with cloud, don't assume IoT has a definitive meaning. As with cloud, don't expect there to be any specific Internet of Things law (yet?). As Part 2 of this piece will discuss, law makers have spotted there's something new which may need regulatory intervention to cultivate it for the good of all, but they've also realised that there's something which may grow with negative consequences – something that may need to be brought into check. Privacy concerns particularly have raised their head early and we've seen early EU guidance in an opinion from the Article 29 Working Party, but there is still no specific IoT law. How can there be when there is still little definition?

Realities of a converged world

For some time we've been excited about the convergence of people, business and things. Gartner reminds us that "[t]he Internet of Things and the concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside already-digital entities". In other words: a land of opportunity, but an ill-defined "blur" of technology and of what is real and what is merely conceptual within our digital age.

Of course the IoT world is also a world bumping up against connectivity, the cloud and mobility. Of course there are instances of IoT out there today. Or are there? As with anything that's emerging, the terminology and definition of the Internet of Things is emerging too. Yes there is a pervasiveness of devices, yes some of these devices connect and communicate, and yes devices that were not necessarily designed to interact are communicating, but are these examples of the Internet of Things? Break these models down into their constituent parts for applied legal thought and does it necessarily matter?

Philosophical, but for a reason

My point? As with any complex technological evolution, as lawyers we cannot apply laws, negotiate contracts or assess risk or the consequences for privacy without a proper understanding of the complex ecosystem we’re applying these concepts to. Privacy consequences cannot be assessed in isolation and without considering how the devices, technology and data actually interact. Be aware that the IoT badge means nothing legally and probably conveys little factual information around “how” something works. It’s important to ask questions. Important not to assume.

In Part 2 of this piece I will discuss some early signs of how the law may be preparing to deal with all these emerging trends. Of course the answer is that it probably already does, and it probably has the flexibility to deal with many elements of IoT yet to emerge.

Creating a successful data retention policy

Posted on April 22nd, 2014 by



With the excitement generated by the recent news that the European Court of Justice has, in effect, struck down the EU's Data Retention Directive (see our earlier post here), now seems as good a time as any to re-visit the topic of data retention generally.

Whereas the Data Retention Directive required ISPs and telcos to hold onto communications metadata, the Data Protection Directive is sector-blind and pulls in exactly the opposite direction: put another way, it requires all businesses not to hold onto personal data for longer than is “necessary”.

That’s the kind of thing that’s easy for a lawyer to say, but difficult to implement in practice.  How do you know if it’s “necessary” to continue holding data?  How long does “necessary” last?  How do you explain to internal business stakeholders that what they consider “necessary” (i.e. commercially desirable) is not the same thing as what the law considers “necessary”?

Getting the business on-side

For any CPO, compliance officer or in-house lawyer looking to create their company’s data retention policy, you’ll need to get the business on-side.  Suggesting to the business that it deletes valuable company data after set periods of time may not initially be well-received but, for your policy to be a success, you’ll ultimately need the business’s support.

To get this buy-in, you need to communicate the advantages of a data retention policy and, fortunately, these are numerous.  Consider, for example:

  • Reduced IT expenditure:  By deleting data at defined intervals, you reduce the overall amount of data you’ll be storing.  That in turn means you need fewer systems to host that data, less archiving, back-ups and offsite storage, making significant cost savings and keeping your CFO happy.
  • Improved security:  It seems obvious, but it’s amazing how often this is overlooked.  The less you hold, the less – frankly – you have to lose.  Nobody wants to be making a data breach notification to a regulator AND explaining why they were continuing to hold on to 20 year old records in the first place.
  • Minimised data disclosures:  Most businesses are familiar with the rights individuals have to request access to their personal information, as well as the attendant business disruption these requests can cause.  As with the above point, the less data you hold, the less you’ll need to disclose in response to one of these requests (meaning the less effort – and resource – you need to put into finding that data).  This holds true for litigation disclosure requests too.
  • Legal compliance:  Last, but by no means least, you need a data retention policy for legal compliance – after all, it’s the law not to hold data for longer than “necessary”.  Imagine a DPA contacting you and asking for details of your data retention policy.  It would be a bad place to be in if you didn’t have something ready to hand over.  

Key considerations

Once you have persuaded the business that creating a data retention policy is a good idea, the next task is then to go off and design one!  This will involve input from various internal stakeholders (particularly IT staff) so it’s important you approach them with a clear vision for how to address some of the critical retention issues.

Among the important points to consider are:

  • Scope of the policy:  What data is in-scope?  Are you creating a data retention policy just for, say, HR data or across all data processed by the business?  There’s a natural tension here between achieving full compliance and keeping the project manageable (i.e. not biting off more than you can chew).  It may be easier to “prove” that your policy works on just one dataset first and then roll it out to additional, wider datasets later.
  • One-size-fits-all vs. country-by-country approach:  Do you create a policy setting one-size-fits-all retention limits across all EU (possibly worldwide) geographies, or set nationally-driven limits with the result that records kept for, say, 6 years in one country must be deleted after just two in another?  Again, the balance to be struck here is between one of compliance and risk versus practicality and ease of administration.
  • Records retention vs. data retention:  Will your policy operate at the "record" level or the "data" level?  The difference is this: a record (such as a record of a customer transaction) may comprise multiple data elements (e.g. name, cardholder number, item purchased, date etc.).  A crucial decision then is whether your policy should operate at the "record" level (so that the entire customer transaction record is deleted after [x] years) or at the "data" level (so that, e.g., the cardholder number is deleted after [x] years but other data elements are kept for a longer period).  This is a point where it is particularly important to discuss with IT stakeholders what is actually achievable (a rough sketch of how such rules might be expressed in code follows this list).
  • Maximum vs minimum retention periods:  Apart from setting maximum data retention periods, there may be  commercial, legal or operational reasons for the business to want to set minimum retention periods as well – e.g. for litigation defence purposes.  At an early stage, you’ll need to liaise with colleagues in HR, IT, Accounting and Legal teams to identify whether any such reasons exist and, if so, whether these should be reflected in your policy.
  • Other relevant considerations:  What other external factors will impact the data retention policy you design? Aside from legal and commercial requirements, is the business subject to, for example, sector-specific rules, agreements with local Works’ Councils, or even third party audit requirements (e.g. privacy seal certifications – particularly common in Germany)?  These factors all need to be identified and their potential impact on your data retention policy considered at an early stage.   
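
As foreshadowed in the "Records retention vs. data retention" point above, here is a minimal sketch (in Python) of how the record-level versus data-level choice and country-by-country periods might be expressed once the policy decisions have been taken. The retention periods, countries and field names are invented purely for illustration and are not recommendations.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative assumptions only: periods, countries and field names are made up.
RETENTION_RULES = {
    # country -> record-level limit plus per-field ("data" level) overrides
    "UK": {"record": timedelta(days=6 * 365),
           "fields": {"cardholder_number": timedelta(days=2 * 365)}},
    "DE": {"record": timedelta(days=2 * 365), "fields": {}},
}


def apply_retention(record: dict, country: str, now: Optional[datetime] = None) -> Optional[dict]:
    """Return None if the whole record has exceeded its retention period,
    otherwise return a copy with any expired individual fields redacted."""
    now = now or datetime.utcnow()
    rules = RETENTION_RULES[country]
    age = now - record["created_at"]
    if age > rules["record"]:
        return None                      # record-level deletion
    redacted = dict(record)
    for field_name, max_age in rules["fields"].items():
        if field_name in redacted and age > max_age:
            redacted[field_name] = None  # data-level deletion of one element
    return redacted


# Example: a transaction roughly three years old at the time of review
tx = {"created_at": datetime(2011, 3, 1), "name": "A. Customer",
      "cardholder_number": "4111-XXXX-XXXX-1111", "item": "Widget"}
review_date = datetime(2014, 4, 22)
print(apply_retention(tx, "UK", review_date))  # record kept, card number redacted
print(apply_retention(tx, "DE", review_date))  # None - whole record deleted
```

The rules table is, in effect, what the finished policy document should produce: a defensible, per-country, per-record-type schedule that IT can actually execute.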

Getting it right at the beginning means that the subsequent stages of your data retention policy design and roll out should become much smoother – you’ll get the support you need from the business and you’ll have dealt with the difficult questions in a considered, strategic way upfront rather than in a piecemeal (and likely, inconsistent) fashion as the policy evolves.

And with so much to benefit from adopting a retention policy, why would you wait any longer?

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014 by



The Article 29 Working Party ("WP 29") has recently issued an Opinion on Personal Data Breach Notification (the "Opinion"). The Opinion focuses on the interpretation of the criteria under which individuals should be notified about breaches that affect their personal data.

Before we analyse the take aways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC, (the “e-Privacy Directive“) and  applies to providers of publicly available electronic communications services. In some EU member states (for example, in Germany), this requirement has been extended to controllers in other sectors or to all  controllers. Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation“), (see our blog regarding the Regulation here), which  sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals of breaches that are likely to adversely affect their personal data or privacy without undue delay, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (but not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures to render that data unintelligible to any person who is not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing seven practical scenarios of breaches that would meet the 'adverse effect' test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying the breach to individuals altogether.

From the Opinion, it is worth highlighting:

The test. The 'adverse effect' test is interpreted broadly to include 'secondary effects'. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, as not all 'potential' consequences are 'likely' to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible. Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exception to notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key); hashing (with a strong key); back-ups; physical and logical access controls; and regular monitoring of vulnerabilities.
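
By way of illustration only, here is a minimal Python sketch of the two "unintelligibility" controls the Opinion singles out – encryption and keyed hashing – applied to a single personal data field. It uses the third-party cryptography package and the standard library's hmac module; key management, rotation and storage (the hard part in practice) are deliberately out of scope.

```python
import hashlib
import hmac
import os

from cryptography.fernet import Fernet  # third-party 'cryptography' package


def encrypt_field(value: str, key: bytes) -> bytes:
    """Symmetric encryption: the stored value is unintelligible without the key."""
    return Fernet(key).encrypt(value.encode("utf-8"))


def pseudonymise_field(value: str, secret: bytes) -> str:
    """Keyed hash (HMAC): a stable pseudonym that cannot be reversed and cannot
    be re-derived by anyone who does not hold the secret."""
    return hmac.new(secret, value.encode("utf-8"), hashlib.sha256).hexdigest()


key = Fernet.generate_key()
secret = os.urandom(32)

email = "jane.doe@example.com"
stored_ciphertext = encrypt_field(email, key)
stored_pseudonym = pseudonymise_field(email, secret)

# Only a holder of the key can recover the original value.
assert Fernet(key).decrypt(stored_ciphertext).decode("utf-8") == email
```

Whether measures like these actually engage the exemption is for the regulator to assess on the facts; the sketch simply shows what "rendering data unintelligible" tends to mean at the implementation level.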

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall under three categories, namely availability, integrity and confidentiality breaches. Applying this model may help controllers analyse a breach too.

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement that applies to all controllers (regardless of the sector they are in) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved. Under it, controllers would be subject to strict notification requirements both to data protection regulators and individuals. This Opinion provides some insight into  how the  European regulators may interpret these requirements under the General Data Protection Regulation.

Therefore, controllers will be well-advised to prepare for what is coming their way (see previous blog here). Focus should be on the application of security measures (in order to prevent a breach and the adverse effects on individuals once a breach has occurred) and on putting procedures in place to effectively manage breaches. Start today – burying your head in the sand is just no longer an option.

Progress update on the EU Cybersecurity Strategy

Posted on March 13th, 2014 by



Background

On 28 February 2014, the European Commission hosted a “High Level Conference on the EU Cybersecurity Strategy” in Brussels.  The conference provided an opportunity for EU policy-makers, industry representatives and other interested parties to assess the progress of the EU Cybersecurity Strategy, which was adopted by the European Commission on 7 February 2013.

Keynote speech by EU Digital Agenda Commissioner Neelie Kroes

The implementation of the EU Cybersecurity Strategy comes at a time when public and private actors face escalating cyber threats.  During her keynote speech at the conference, Commissioner Kroes reiterated the dangers of weak cybersecurity measures by asserting that "without security, there is no privacy".

She further highlighted the reputational and financial impact of cyber threats, commenting that over 75% of small businesses and 93% of large businesses have suffered a cyber breach, according to a recent study.  However, Commissioner Kroes also emphasised that effective EU cybersecurity practices could constitute a commercial advantage for the 28 Member State bloc in an increasingly interconnected global marketplace.

Status of the draft EU Cybersecurity Directive

The EU Cybersecurity Strategy’s flagship legal instrument is draft Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (“draft EU Cybersecurity Directive”).  In a nutshell, the draft EU Cybersecurity Directive seeks to impose certain mandatory obligations on “public administrations” and “market operators” with the aim of harmonising and strengthening cybersecurity across the EU. In particular, it includes an obligation to report security incidents to the competent national regulator.

The consensus at the conference was that further EU institutional reflection is required on some aspects of the draft EU Cybersecurity Directive, such as (1) the scope of obligations, i.e., which entities are included as "market operators"; (2) how Member State cooperation would work in practice; (3) the role of the National Competent Authorities ("NCAs"); and (4) the criminal dimension and the notification requirement to law enforcement authorities by NCAs.  The scope of obligations is a particularly contentious issue as EU decision-makers consider whether to include certain entities, such as software manufacturers, hardware manufacturers and internet platforms, within the scope of the Directive.

The next few months will be a crucial period for the legislative passage of the draft law.  Indeed, the European Parliament voted on 13 March 2014 in the Plenary session to adopt its draft Report on the Directive.  The Council will now spend March – May 2014 working on the basis of the Parliament's report to achieve a Council "common approach".  The dossier will then likely be revisited after the European Parliament elections in May 2014.  The expected timeline for adoption remains "December 2014" but various decision-making scenarios are possible depending on the outcome of the elections.

Once adopted, Member States will have 18 months to transpose the Directive into national law (meaning an approximate deadline of mid-2016).  As a minimum harmonisation Directive, Member States could go beyond the provisions of the adopted Directive with their national transpositions, for instance, by reinstating internet platforms within the definition of a “market operator”. 

One of the challenges for organizations will be achieving compliance with possibly conflicting notification requirements between the draft EU Cybersecurity Directive (i.e., obligation to report security incidents to the competent national regulator), the existing ePrivacy Directive (i.e., obligation for telecom operators to notify personal data breaches to the regulator and to individuals affected) and, if adopted, the EU Data Protection Regulation (i.e., obligation for all data controllers to notify personal data security breaches to the regulator and to individuals affected).  So far, EU legislators have not provided any guidance as to how these legal requirements would coexist in practice.

Industry’s perspective on the EU Cybersecurity Strategy

During the conference, representatives from organisations such as Belgacom and SWIFT highlighted the real and persistent threat facing companies. Calls were made for international coordination on cybersecurity standards and laws to avoid conflicting regulatory requirements.  Interventions also echoed the earlier sentiments of Commissioner Kroes in that cybersecurity offers significant growth opportunities for EU industry. 

Business spoke of the need to “become paranoid” about the cyber threat and implement “security by design” to protect data.  Finally, trust, collaboration and cooperation between Member States, public and private actors were viewed as essential to ensure EU cyber resilience.

The Privacy Regulatory Bear Market and playing political football with business

Posted on January 23rd, 2014 by



2014 has kicked off in very dramatic fashion on the privacy law regulatory enforcement front; the French data protection regulator, CNIL, has just fined Google €150,000 for alleged Privacy Policy failings, an amount described as ‘pocket money’ by the EU politician who is in charge of toughening up European data protection law, European Commissioner Reding; the FTC, the US consumer protection regulator, has just taken disciplinary action against a number of US companies that have breached the ‘Safe Harbor’ agreement between the EU and US on the export of personal data from Europe to the US. So, what’s going on here?

Looking at the bigger picture of privacy law enforcement, penalties and sanctions, the climate has been getting worse for businesses year-on-year; the cycle of tougher regulatory responses to privacy problems began around 2006. The regulatory rhetoric has also been getting stronger and darker over the cycle.

The bigger picture tells us that there is a ‘Regulatory Bear Market’ right at the beating heart of the international privacy law system. Like a financial bear market, this is the consequence of negative sentiment, pessimism and a loss of confidence, in the sense that privacy law regulators are downbeat about the performance of businesses when it comes to compliance with their privacy law obligations. This leads to negative and adverse outcomes, including the imposition of large financial penalties and negative rhetoric in press statements, television appearances and guidance and policy documents.

In Europe, the most visible fruit of the Regulatory Bear Market is the current law reform process led by Commissioner Reding, which will toughen up data protection law in ways that most businesses have not yet adjusted to. For instance, fines of up to 5% of the annual worldwide turnover of the business may be imposed. Translating this threatened change into real monetary values has been hard up until now, but Commissioner Reding has just said that the Google fine might be as much as $1bn under the new regime, a staggering sum, which is sure to water the eyes of Chief Financial Officers everywhere.

If that wasn't bad enough, it seems that the business community may be forced to pay the price for the government failings revealed by Edward Snowden. There is plenty of evidence out there already to suggest that the corporate world is becoming the football in the political game that is being played out between the EU, other countries and the US as a result of Snowden's disclosures.

One piece of evidence is the ‘Euro Cloud’ idea, which seems to be very popular in certain parts of the European Parliament. The idea is that, in order to prevent US snooping on online activities and electronic communications, the personal data of European citizens should be kept in European data centres. Regardless of whether Euro Cloud could ever stop snooping, which many experts doubt, the key significance of the idea is that businesses will have to change their business models because of the actions of governments over which they have had no control. The capital cost of doing this will be borne by business, not by the politicians who back the idea or the governments carrying out the snooping. The underlying threat, of course, is that businesses that do not play ball will face sanctions. Governments commit the crimes, businesses pay the fines.

Another example is the FTC action mentioned earlier. How very convenient it is to make examples of businesses at exactly the time when, due to the Snowden disclosures, the Safe Harbor data export rules that they are accused of breaching are being re-examined by EU politicians for fitness for purpose. It might look to some observers as if the US regulator is willing to sacrifice some US companies on the altar of European political opinion simply to sate the lust for blood.

The corporate world has always been the football in critical political games, and business leaders will be resigned to this as a natural and inevitable facet of being in business. What they may not have factored into their business plans and balance sheets is that the game is now being played out over personal data and privacy. If not, they need to re-adjust quickly; otherwise the Regulatory Bear Market will bite them.

FTC in largest-ever Safe Harbor enforcement action

Posted on January 22nd, 2014 by



Yesterday, the Federal Trade Commission (“FTC”) announced that it had agreed to settle with 12 US businesses for alleged breaches of the US Safe Harbor framework. The companies involved were from a variety of industries and each handled a large amount of consumer data. But aside from the surprise of the large number of companies involved, what does this announcement really tell us about the state of Safe Harbor?

This latest action suggests that the FTC is ramping up its Safe Harbor enforcement in response to recent criticisms from the European Commission and European Parliament about the integrity of Safe Harbor (see here and here) – particularly given that one of the main criticisms about the framework was its historic lack of rigorous enforcement.

Background to the current enforcement

So what did the companies in question do? The FTC’s complaints allege that the companies involved ‘deceptively claimed they held current certifications under the U.S.-EU Safe Harbor framework’. Although participation in the framework is voluntary, if you publicise that you are Safe Harbor certified then you must, of course, maintain an up-to-date Safe Harbor registration with the US Department of Commerce and comply with your Safe Harbor commitments.

Key compliance takeaways

In this instance, the FTC alleges that the businesses involved had claimed to be Safe Harbor certified when, in fact, they weren’t. The obvious message here is don’t claim to be Safe Harbor certified if you’re not!  

The slightly more subtle compliance takeaway for businesses that are correctly Safe Harbor certified is that they should have in place processes (a rough sketch follows the list below) to ensure:

  • that they keep their self-certifications up-to-date by filing timely annual re-certifications;
  • that their privacy policies accurately reflect the status of their self-certification – and if their certifications lapse, that there are processes to adjust those policies accordingly; and
  • that the business is fully meeting all of its Safe Harbor commitments in practice – there must be actual compliance, not just paper compliance.
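As a rough illustration of the kind of internal process the first two bullets describe, the sketch below flags a lapsed self-certification and an out-of-date privacy policy claim. The one-year renewal window, the function names and the data structure are all assumptions made for illustration; they are not drawn from the FTC’s complaints or from the Safe Harbor framework documents.

```python
from datetime import date, timedelta
from typing import List, Optional

# A minimal sketch, assuming a business tracks the date of its last Safe Harbor
# re-certification and whether its public privacy policy still claims certification.
# The one-year renewal window and all names here are illustrative assumptions.

CERTIFICATION_PERIOD = timedelta(days=365)

def certification_lapsed(last_certified: date, today: Optional[date] = None) -> bool:
    """True if the annual re-certification window has passed without renewal."""
    today = today or date.today()
    return today - last_certified > CERTIFICATION_PERIOD

def compliance_flags(last_certified: date, policy_claims_certification: bool) -> List[str]:
    """Issues to escalate to the compliance team before a regulator spots them."""
    flags = []
    if certification_lapsed(last_certified):
        flags.append("Re-certification overdue: file a renewal with the Department of Commerce.")
        if policy_claims_certification:
            flags.append("Privacy policy still claims certification despite the lapse: update it.")
    return flags

# Example: certification filed 400 days ago while the policy still advertises it.
print(compliance_flags(date.today() - timedelta(days=400), policy_claims_certification=True))
```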

The “Bigger Picture” for European data exports

Despite this decisive action by the FTC, European concerns about the integrity of Safe Harbor are likely to persist. If anything, this latest action may serve only to reinforce concerns that some US businesses are either falsely claiming to be Safe Harbor certified or are not fully living up to their Safe Harbor commitments.

The service provider community, and especially cloud businesses, will likely feel this pressure most acutely.  Many customers already perceive Safe Harbor to be “unsafe” for data exports and are insisting that their service providers adopt other EU data export compliance solutions.  So what other solutions are available?

While model contracts have the benefit of being a ‘tried and tested’ solution, the suite of contracts required for global data exports is simply unpalatable to many businesses. The better solution is, of course, Binding Corporate Rules (BCR) – a voluntary set of self-regulatory policies, adopted by a business, that satisfy EU data protection standards and are submitted to, and authorised by, European DPAs. Since 2012, service providers have been able to adopt processor BCR, and those that do find that this provides them with a greater degree of flexibility to manage their internal data processing arrangements while, at the same time, continuing to afford a high degree of protection for the data they process.

It’s unlikely that Safe Harbor will be suspended or disappear – far too many US businesses are dependent upon it for their EU/CH to US data flows.  However, the Safe Harbor regime will likely change in response to EU concerns and, over time, will come under increasing amounts of regulatory and customer pressure.  So better to consider alternative data export solutions now and start planning accordingly rather than find yourself caught short!

 

Cyber: Safety first!

Posted on November 12th, 2013 by



In case you haven’t noticed, the European Institutions (as well as the UK Government and those on the other side of the pond) have been ramping up their digital agendas in recent months, each seeking to instil the importance of cyber security in citizens and businesses alike.

It’s all about raising cyber security awareness, but essentially the message is this: companies must understand their systems and data, and must take a proportionate, risk-based approach to keeping them secure. They must build resilient networks and communications systems, and protect our critical infrastructures. As the threats to this landscape continue to increase, consumer trust declines in step, so what matters is demonstrating the ability and agility to counter those threats and a genuine commitment to data and cyber security. Ultimately, that is what will build trust.

Raising cyber security awareness has no doubt been assisted somewhat by the recent “Snowden revelations” but it is very easy to get distracted by all the sensationalist headlines.  Despite what goes on in the law enforcement and intelligence worlds, we shouldn’t lose sight of the importance of building trust and building a strong and resilient digital economy.

This week in Germany the 2nd Cyber Security Summit took place, with a notable keynote speech given by Neelie Kroes (Vice-President of the European Commission responsible for the Digital Agenda) about how to make Europe the world’s safest online environment. A copy of the speech is available here.

Ms Kroes highlighted three trends that have appeared in the digital age. Firstly, the recognition that the online world provides us all with huge benefits – let’s face it, we all use and rely on technologies every minute of every day. But with these benefits comes the second trend: risks. Cyber attacks, data loss, identity theft – the list goes on.

The third trend is that these risks ultimately lead to significant costs (both in mitigating risk and in dealing with the problems that result). Indeed, Ms Kroes points to the frequency of data security breaches suffered each year and says that the resulting costs (particularly for major incidents) “could amount to over a quarter of a trillion dollars”.

That, I’m sure, isn’t an exaggeration. The UK Information Commissioner can fine companies up to £500,000, and businesses must be shuddering at the thought of the €100m / 5% of annual worldwide turnover (AWWT) fines proposed under the latest draft of the EU Data Protection Regulation. But that is just the fines themselves; what about all the other costs? The reality is that there are all sorts of other expenses, such as outlays for detection of breaches, escalation, notification, after-event mitigation, containment and response, not to mention legal and other professional fees.
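By way of illustration only, the back-of-the-envelope sketch below adds the ICO’s current maximum fine to a set of entirely hypothetical figures for the other cost categories just mentioned. Every number apart from the £500,000 cap is invented for the example, but it shows how the fine itself can end up being a minority share of the total bill.

```python
# Every figure below is hypothetical and purely for illustration; only the £500,000
# ICO maximum is a real cap, and the actual costs of an incident will vary widely.

illustrative_breach_costs_gbp = {
    "regulatory fine (current ICO maximum)": 500_000,
    "detection and escalation": 150_000,
    "notification": 200_000,
    "containment and response": 300_000,
    "after-event mitigation": 250_000,
    "legal and other professional fees": 400_000,
}

total = sum(illustrative_breach_costs_gbp.values())
fine_share = illustrative_breach_costs_gbp["regulatory fine (current ICO maximum)"] / total
print(f"Illustrative total cost: £{total:,} (the fine is only {fine_share:.0%} of it)")
```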

But with all that in mind, let’s also go back to Neelie Kroes’ “first trend” and think about all the benefits the digital world can offer. Let’s make sure we can reap those benefits by building effective cyber defences into our business strategies. That’s going to involve some investment, but it will also provide a level of protection against many of the significant costs associated with a security incident. And perhaps most importantly of all, demonstrating that you are ahead of the game will help build trust: a vital commodity in today’s digital world.