Archive for the ‘95 directive’ Category

WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014



On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling, but it elaborates further on certain legal and practical aspects and, as a result, offers invaluable insight into the European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of data subjects, and that de-listing decisions should therefore be implemented across all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if the search engine decides not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not indicate, in response to a particular query, that some results have been de-listed; WP29’s preference is that this information be provided generically instead.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.

 

You thought consent applies only to cookies?! Then guess again!

Posted on December 13th, 2014



Imagine this: you walk into a big department store. You pick up a pair of running shoes and take them to the counter to purchase. The store has thousands of visitors every day, so to the sales assistant, you’re just another nameless face in the crowd.

As you’re buying the shoes, the sales assistant hands you a note. On it is written some kind of seemingly meaningless number “Hteushrbt6123987!”. You ask the sales assistant what this means. “Oh,” he says, “it’s just a way for us to remember that you like sports equipment. This number is unique to you, so we make a note of it and record the fact that you like running shoes. Next time you come in, we’ll ask you for the number and look it up on our systems. That’ll tell us that you like running shoes, so we’ll then show you other sports products we think may interest you.”

Slightly bemused, you pocket the paper, leave the store and return home. But, sometime later, you return to the store. As you enter, another shop assistant asks you if the store has ever given you a piece of paper with a number on it. You root around in your pockets, find the note, and hand it over. The shop assistant examines it, and taps away on a little handheld device he’s carrying. “Ah!” he says, “Number Hteushrbt6123987! You like running shoes, don’t you? Maybe you’d like to see some other running gear we have in stock? We have some new running vests in, you know – let me show you!”

If such a thing existed, this is how cookie-based targeted advertising would work in the offline world. The note handed to you by the shop assistant represents, of course, a cookie: a piece of information stored with you that enables you (and so your shopping preferences) to be recognized next time you visit the shop so that the merchant can show you products it thinks will interest you – all without knowing your real name, address or other directly identifying details.
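For the technically curious, here is a minimal sketch of that note-passing mechanism in code. It is purely illustrative: the cookie name, the in-memory store and the helper functions are all invented for this example, and a real ad platform would persist preferences far more elaborately.

```typescript
// Minimal sketch of cookie-based preference tracking, mirroring the shop analogy.
// The "note" is an opaque visitor ID; the preferences live server-side, keyed by it.
import { randomUUID } from "crypto";

const preferences = new Map<string, string[]>(); // visitorId -> recorded interests

// First visit: hand the visitor their "note" (a Set-Cookie header).
function issueVisitorId(): { visitorId: string; setCookieHeader: string } {
  const visitorId = randomUUID(); // the "Hteushrbt6123987!" on the note
  preferences.set(visitorId, []);
  return { visitorId, setCookieHeader: `visitor_id=${visitorId}; Path=/; HttpOnly` };
}

// Record an interest against the ID, without ever learning who the person is.
function recordInterest(visitorId: string, interest: string): void {
  preferences.get(visitorId)?.push(interest);
}

// Return visit: read the cookie back and look up what was noted last time.
function lookupInterests(cookieHeader: string): string[] {
  const match = cookieHeader.match(/visitor_id=([^;]+)/);
  return match ? preferences.get(match[1]) ?? [] : [];
}

// First visit buys running shoes; on the second visit the store "remembers".
const { visitorId, setCookieHeader } = issueVisitorId();
recordInterest(visitorId, "running shoes");
console.log(lookupInterests(setCookieHeader)); // ["running shoes"]
```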

Depending on your personal preferences, you may think this is great (“They showed me stuff I wanted but without needing to know my personal details!”) or creepy (“They may not know my name, but that number is all they need to track and surveil me!”). That’s a debate that fiercely divides opinion in the privacy community.

Imagining fingerprinting in the offline world

But cookies aren’t the only way to identify someone. Imagine if, instead of being handed a note, the sales assistant jotted down some of your personal characteristics: your age, height, weight and gender; the color of your hair (and whether you have any hair at all!); whether or not you wear glasses; your nationality and so on. We’re all unique, so if the sales assistant recorded enough of these details, the store wouldn’t need your name or to give you a number – it could recognize you simply from the information it had collected about you: “Ah, yes, you’re the 6-foot, 36-year-old dark-haired British male, weighing 180 pounds and wearing glasses, who likes running shoes. Let me show you our latest sportswear items!”

In privacy terms, we call a uniquely defining aggregation of personal characteristics a ‘fingerprint’. Perhaps you have heard the term ‘device fingerprinting’ discussed as an alternative technology to cookies in the online world? In an online context, websites can collect characteristics of the desktop or mobile device visiting them – such as its IP address, browser type, screen resolution, installed font pack and so on. Gather enough of these details and you have a ‘device fingerprint’.
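Again purely by way of illustration, a device fingerprint is just a stable function of enough observed attributes: hash them together and you get an identifier without storing anything on the device. The attribute list below is an invented, simplified assumption; real fingerprinting scripts collect many more signals.

```typescript
// Sketch: deriving a 'device fingerprint' by hashing observable attributes.
import { createHash } from "crypto";

interface DeviceAttributes {
  ipAddress: string;
  userAgent: string;        // browser type and version
  screenResolution: string;
  installedFonts: string[];
  timezone: string;
}

// Nothing is stored client-side: the same attributes observed again
// on a later visit yield the same identifier.
function fingerprint(attrs: DeviceAttributes): string {
  const canonical = [
    attrs.ipAddress,
    attrs.userAgent,
    attrs.screenResolution,
    attrs.installedFonts.slice().sort().join(","), // order-insensitive
    attrs.timezone,
  ].join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

const id = fingerprint({
  ipAddress: "203.0.113.7",
  userAgent: "Mozilla/5.0 (...)",
  screenResolution: "2560x1440",
  installedFonts: ["Arial", "Garamond", "Helvetica"],
  timezone: "Europe/London",
});
console.log(id); // a stable identifier, with no cookie in sight
```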

Fingerprinting and consent

Over the past few years, some businesses have been swinging away from using cookies and towards using other tracking technologies, like device fingerprinting, because of concerns about EU “cookie consent” requirements. The thinking goes that if website cookies require consent, then a ‘cookieless’ technology like device fingerprinting should avoid the need for consent.

For online businesses, the attractions are obvious: no more ugly cookie banners, no cumbersome user consent experiences, no more paying third party cookie compliance vendors. That logic may seem sound; unfortunately, it’s wrong.

This is because “cookie consent” is a misnomer: it isn’t about cookies at all – it’s about online tracking, in whatever form that takes. This is clear both from the wording of Article 5(3) of the e-Privacy Directive (which creates the consent requirement but never uses the term “cookie”, referring instead to “information”) and from recent guidance on device fingerprinting published by the Article 29 Working Party (here). The long and short of it is that when an online service tracks its visitors by any means – cookies, device fingerprinting, LSOs, pixels, scripts or any other technology – consent requirements will apply.

Choosing a consent strategy

What’s less clear is what form that consent needs to take – namely, whether consent needs to be obtained on an opt-in basis (i.e. the assistant asks you if it’s ok to hand you the piece of paper with the number on it) or whether it can be implied if the visitor doesn’t opt-out (i.e. the assistant hands you the note with the number, and tells you to throw it away if you don’t want it). Because of this complexity, we keep a table of these different opt-in and opt-out standards around the EU, which you can see here.

Deciding on the correct consent strategy for your online operations can be tricky, and depends on a number of factors including the necessity of the tracking you do, the context in which you do it, and the countries across which you operate (do you, for example, want a ‘one size fits all’ consent standard across all website operations, or a country-by-country approach to consent based on local legal requirements and risk?).
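To make the opt-in/opt-out distinction concrete, here is a hedged sketch of a per-country consent gate. The country assignments are placeholders, not statements of the current legal position in any Member State.

```typescript
// Sketch: gating a tracking call on the applicable consent model.
type ConsentModel = "opt-in" | "opt-out";

// Placeholder mapping only; the real table changes and needs local advice.
const consentModelByCountry: Record<string, ConsentModel> = {
  NL: "opt-in",
  GB: "opt-out",
};

function mayTrack(countryCode: string, userChoice?: boolean): boolean {
  // Unknown country: fall back to the stricter opt-in model.
  const model = consentModelByCountry[countryCode] ?? "opt-in";
  if (model === "opt-in") return userChoice === true; // silence means no
  return userChoice !== false;                        // opt-out: silence means yes
}

console.log(mayTrack("NL"));        // false: opt-in and no consent given
console.log(mayTrack("GB"));        // true: opt-out and no objection
console.log(mayTrack("GB", false)); // false: user opted out
```

The mechanics are trivial; the design decision that matters is the fallback: where you cannot tell which rule applies, defaulting to the stricter opt-in model is the safer choice.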

But, whatever you do, don’t do nothing. That would be like having the shop assistant reach over the till to superglue the number to you while your back was turned.

And none of us would want a world where that would be acceptable.

A New ISO Standard for Cloud Computing

Posted on November 5th, 2014



The summer of 2014 saw another ISO standard published by the International Organization for Standardization (ISO). ISO27018:2014 is a voluntary standard governing the processing of personal data in the public cloud.

With the catchy title of “Information technology – Security techniques – Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors” (“ISO27018”), it is perhaps not surprising that this long awaited standard is yet to slip off the tongue of every cloud enthusiast.  European readers may have assumed that references to PII meant this standard was aimed squarely at the US – wrong!

What is ISO27018?

ISO27018 sets out a framework of “commonly accepted control objectives, controls and guidelines” which can be followed by any data processors processing personal data on behalf of another party in the public cloud.

ISO27018 has been crafted by ISO to have broad application: from large organisations to small, and from public entities to governments and non-profits.

What is it trying to achieve?

Negotiations in cloud deals which involve the processing of personal data tend to be heavily influenced by the customer’s perceptions of heightened data risk and sometimes very real challenges to data privacy compliance. This is a hurdle for many cloud adopters as they relinquish control over data and rely on the actions of another (and sometimes those under its control) to maintain adequate safeguards. In Europe, until we see the new Regulation perhaps, a data processor has no statutory obligations when processing personal data on behalf of another. ISO27018 goes some way towards imposing responsibility on the processor for the personal information it processes.

ISO27018’s introductory pages call out its objectives:

  • It’s a tool to help the public cloud provider to comply with applicable obligations: for example there are requirements that the public cloud provider only processes personal information in accordance with the customer’s instructions and that they should assist the customer in cases of data subject access requests;
  • It’s an enabler of transparency allowing the provider to demonstrate why their cloud services are well governed: imposing good governance obligations on the public cloud provider around its information security organisation (eg the segregation of duties) and objectives around human resource security prior to (and during employment) and encouraging programmatic awareness and training. Plus it echoes the asset management and access controls elements of other ISO standards (see below);
  • It will assist the customer and vendor in documenting contractual obligations: by addressing typical contractually imposed accountability requirements; data breach notification, imposing adequate confidentiality obligations on individuals touching the data, and flowing down technical and organisational measures to sub-processors, as well as requiring the documentation of data location. This said, a well advised customer may wish to delve deeper, as this is not a full replacement for potential data controller to processor controls; and
  • It offers the public cloud customer a mechanism to exercise audit and compliance rights: with ISO27018’s potential application across disparate cloud environments, it remains to be seen whether a third party could certify compliance against some of the broader data control objectives contained in ISO27018. However, regular review and reporting and/or conformity reviews may provide a means for vendor or third party verification (potentially of more use where shared and/or virtualised server environments practically frustrate direct audit of data, systems and data governance practices by the customer).

ISO27018 goes some way towards delivering these safeguards. It is also a useful tool for a customer to evaluate the cloud services and data handling practices of a potential supplier. But it’s not simple and it’s not a substitute for imposing compliance and control via contract.

A responsible framework for public cloud processors

Privacy laws around the world prescribe nuanced, and sometimes no, obligations upon those who determine the manner in which personal information is used. Though ISO27018 is not specifically aimed at the challenges posed by European data protection laws, or those of any other jurisdiction for that matter, it is flexible enough to accommodate many of the inevitable variances. It cannot fit all current rules and may not fit future ones. However, in building in this flexibility, it trades away some of its potential bite for generality.

Typically, entities adopting ISO27001 (Information security management) are seeking to protect their own data assets, but it is increasingly a benchmark standard for data management and handling among cloud vendors. ISO27018 builds upon ISO27002 (Information technology – Security techniques – Code of practice for information security controls), reflecting its controls but adapting them for the public cloud: mapping back to ISO27002 obligations where they remain relevant, and supplementing them where necessary by prescribing additional controls for public cloud service provision (set out separately in Annex A to ISO27018). As you might therefore expect, ISO27018 explicitly anticipates that a personal information controller would be subject to wider obligations than those specified, which are aimed at processors.

Adopting ISO27018

Acknowledging that the standard cannot be all-encompassing, and that the flavours of cloud are wide and varied, ISO27018 calls for an assessment to be made across applicable personal information “protection requirements”.  ISO27018 calls for the organisation to:

  • Assess the legal, statutory, regulatory and contractual obligations of it and its partners (noting particularly that some of these may mandate particular controls (for example preserving the need for written contractual obligations in relation to data security under Directive (95/46/EC) 7th Principle));
  • Complete a risk assessment across its business strategy and information risk profile; and
  • Factor in corporate policies (which may, at times, go further than the law for reasons of principle, global conformity or because of third party influences).

What ISO27018 should help with

ISO27018 offers a reference point for controllers who wish to adopt cloud solutions run by third party providers. It is a cloud computing information security control framework which may form part of a wider contractual commitment to protect and secure personal information.

As we briefly explained in an earlier post on our tech blog, the European Union has also spelled out its desire to promote uniform standard setting in cloud computing. ISO27018 could satisfy the need for a broadly applicable, auditable data management framework for public cloud provision. But it’s not EU specific and lacks some of the rigour an EU based customer may seek.

What ISO27018 won’t help with

ISO27018 is not an exhaustive framework. There are a few obvious flaws:

  • It’s been designed for use in conjunction with the information security controls and objectives set out in ISO27002 and ISO27001, which provide general information security frameworks. This is a high threshold for small or emerging providers (many of which do not meet all these controls or certify to these standards today). It is therefore more accessible for large enterprise providers, but something to weigh up: the more controls there are, the more ways there are to slip up;
  • Even where it is used as a benchmark for security, coupled with contractual commitments to meet and maintain selected elements of ISO27018, it won’t be relevant to all cloud solutions and compliance situations (though some will use it as if it were);
  • It perpetuates the use of the PII moniker which, already carrying a specific US legal connotation (i.e. a narrower application), is now used in a more widely defined context under ISO27018 (in fact, PII under ISO27018 is closer to the definition of personal data under EU Directive 95/46/EC). This could confuse stakeholders in multi-national deals, and the corresponding use of PII in the full title of ISO27018 is potentially misleading as to the standard’s applicability and use cases;
  • ISO27018 is of no use in situations where the cloud provider is (or assumes the role of) data controller, and it assumes all data in the cloud is personal data (so watch this space for ISO27017 (coming soon), which will apply to any data, personal or otherwise); and
  • For EU based data controllers, other than constructing certain security controls, ISO27018 is not a mechanism or alternative route to legitimise international data transfers outside of the European Economic Area. Additional controls will have to be implemented to ensure such data enjoys adequate protection.

What now?

ISO27018 is a voluntary standard, not law, and it won’t entirely replace the need for specific contractual obligations around processing, accessing and transferring personal data. In a way, its ultimate success can be gauged by the extent of eventual adoption. It will be used to differentiate, but it will not always answer all the questions a well-informed cloud adopter should be asking.

It may be used in whole or in part, and may be asserted and used alongside or as part of contractual obligations, as information handling best practice, or simply as a benchmark which a business will work towards. Inevitably there will be those who treat the Standard as if it were the law, without thought about what they are seeking to protect against and what potential wrongs they are seeking to right.  If so, they will not reap the value of this kind of framework.

 

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk that their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data, and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain to the required standard), leaves companies processing IoT data nowhere to go, is likely to unnecessarily stifle innovation, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.

 

This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals (a pseudonymisation sketch follows this list).
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who  is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
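To illustrate step 4 above, here is a minimal sketch of keyed pseudonymisation of an IoT identifier. Everything in it is an assumption made for illustration: the field names, the environment-variable key handling, and the premise that an HMAC output is an acceptable pseudonym. Whether the result still counts as personal data is a legal judgment, not a technical given.

```typescript
// Sketch: pseudonymising a device identifier with a keyed hash (HMAC).
// Without the secret key the raw ID cannot be recovered, yet the same
// device always maps to the same pseudonym, so trend analysis still works.
import { createHmac } from "crypto";

// Assumption: the key is managed and stored separately from the data.
const PSEUDONYM_KEY = process.env.PSEUDONYM_KEY ?? "rotate-me-regularly";

interface Reading {
  deviceId: string;   // e.g. a MAC address, directly linkable to a device
  heartRate: number;
  takenAt: string;
}

function pseudonymise(reading: Reading): Reading {
  return {
    ...reading,
    deviceId: createHmac("sha256", PSEUDONYM_KEY)
      .update(reading.deviceId)
      .digest("hex"),
  };
}

console.log(
  pseudonymise({ deviceId: "AA:BB:CC:DD:EE:FF", heartRate: 72, takenAt: "2014-11-01T10:00:00Z" })
);
```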

 

What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned with the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see, for instance, the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the legal basis that should principally be relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing (see the sketch after this list). However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.
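As flagged at point 5, the granularity the WP29 describes can usefully be pictured as a data structure. The sketch below is a speculative illustration, with invented field names, of consent recorded along three axes: per device, per data category and per processing purpose.

```typescript
// Sketch: a granular IoT consent record along three axes.
interface ConsentRecord {
  deviceId: string | "*";     // a specific thing, or all of a user's things
  dataCategory: string | "*"; // e.g. "heart-rate", or any data
  purpose: string | "*";      // e.g. "analytics", or any processing
  granted: boolean;
  recordedAt: Date;           // later records can revoke earlier ones
}

// The most specific matching record wins; no matching record means no consent.
function hasConsent(records: ConsentRecord[], deviceId: string, category: string, purpose: string): boolean {
  const matches = records.filter(r =>
    (r.deviceId === deviceId || r.deviceId === "*") &&
    (r.dataCategory === category || r.dataCategory === "*") &&
    (r.purpose === purpose || r.purpose === "*"));
  if (matches.length === 0) return false;
  const specificity = (r: ConsentRecord) =>
    Number(r.deviceId !== "*") + Number(r.dataCategory !== "*") + Number(r.purpose !== "*");
  matches.sort((a, b) =>
    specificity(b) - specificity(a) || b.recordedAt.getTime() - a.recordedAt.getTime());
  return matches[0].granted;
}
```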

 

It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device;
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.

 

Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

Are DPA notifications obsolete?

Posted on October 27th, 2014



For almost 10 years I’ve been practising data protection law and advising multinational organizations on their strategic approach to global data processing operations. Usually, when it comes to complying with European data protection law, notifying the organization’s data processing activities to the national data protection authorities (DPAs) is one of the most burdensome exercises. It may look simple, but companies often underestimate the work involved.

As a reminder, Article 18 of the Data Protection Directive 95/46/EC requires data controllers (or their representatives in Europe) to notify the DPA prior to carrying out their processing operations. In practice, this means that they must file a notification with the DPA in each Member State in which they are processing personal data, specifying who the data controller is, the types of data collected, the purpose(s) for processing such data, whether any of that data is transferred outside the EEA, and how individuals can exercise their privacy rights.

In a perfect world, this would be a fairly straightforward process whereby organizations would simply file a single notification with the DPA in every Member State. But that would be too easy! The reality is that DPA notification procedures are not harmonized in Europe, which means that organizations must comply with the notification procedures of each Member State as defined by national law. As a result, each DPA has established its own notification rules, which impose a pre-established notification form, procedure and formalities on data controllers. Europe is not the only region to have notification rules. In Latin America, organizations must file a notification in Argentina, Uruguay and Peru. And several African countries (usually members of the “Francophonie”, such as Morocco, Senegal, Tunisia and the Ivory Coast) have also adopted data protection laws requiring data controllers to notify their data processing activities.

Failing to comply with this requirement puts your organization at risk with the DPAs, who in some countries have the power to conduct audits and inspections of an organization’s processing activities. If a company is found to be in violation of the law, some DPAs may impose sanctions (such as fines or public warnings) or order the data to be blocked or the data processing to cease immediately. Furthermore, companies may also be sanctioned by the national courts. For example, on October 8th, 2014, the labour chamber of the French Court of Cassation (the equivalent of the Supreme Court for civil and criminal matters) ruled that an employer could not use the data collected via the company’s messaging system as evidence to lay off one of its employees for excessive private use of that messaging service (i.e., due to the high number of private emails transiting via the messaging service), because the company had failed to notify the French Data Protection Authority (CNIL) prior to monitoring the use of the messaging service.

One could also argue that notifications may get scrapped altogether by the draft Data Protection Regulation (currently being discussed by the European legislator), so that companies will no longer be required to notify their data processing activities to the regulator. True, but don’t hold your breath! The draft Regulation is currently stuck in the Council of Ministers and, assuming it does get adopted by the European legislator, the most realistic date of adoption is 2016. Given that the text provides for a two-year transition period, the Regulation would not come into force before 2018. And at its last meeting, on October 3rd, 2014, the Council agreed to reach a partial general approach on the text of chapter IV of the draft Regulation on the understanding that “nothing is agreed until everything is agreed.”

So, are DPA notifications obsolete? The answer is clearly “no”. If you’re thinking: “why all the fuss? Do I really need to go through all this bureaucracy?” think again! The reason organizations must notify their data processing activities to the DPAs is simple: it’s the law. Until the Data Protection Regulation comes into force (and even then, some processing activities may still require the DPA’s prior approval), companies must continue to file their notifications. Doing so is a necessary component of any global privacy compliance project. It requires organizations to strategize their processing operations and to prioritize the jurisdictions in which they are developing their business. And failing to do so simply puts your organization at risk.

This article was first published in the IAPP’s Privacy Tracker on October 23rd, 2014.

Subject access requests and data retention: two sides of the same coin?

Posted on October 3rd, 2014



Over the past year or so, there’s been a decided upswing in the number of subject access requests made by individuals to organizations that crunch their data.  There are a number of reasons for this, but they’re principally driven by a greater public awareness of privacy rights in a post-Snowden era and following the recent Google “Right to be Forgotten” decision.

If you’re unfamiliar with the term “subject access request”, then in simple terms it’s a right enshrined in EU law for an individual to contact an organization and ask it (a) whether it processes any personal information about the individual in question, and (b) if so, to supply a copy of that information.

A subject access request is a powerful transparency tool for individuals: the recipient organization has to provide the requested information within a time period specified by law, and very few exemptions apply.  However, these requests often prove disproportionately costly and time-consuming for the organizations that receive them – think about how much data your organization holds, and then ask yourself how easy it would be to pull all that data together to respond to these types of requests.  Imagine, for example, all the data held in your CRM databases, customer support records, IT access logs, CCTV footage, HR files, building access records, payroll databases, e-mail systems, third party vendors and so on – picture that, and you get the idea.
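To picture the engineering involved, here is a deliberately simplified sketch of fanning a request out across source systems. The DataSource interface and the idea of matching on an email address are assumptions made for illustration; in practice, connector building, identity matching and exemption handling are where the real cost lies.

```typescript
// Sketch: fanning a subject access request out across data sources.
interface DataSource {
  name: string;
  findRecords(subjectEmail: string): Promise<object[]>;
}

async function fulfilSubjectAccessRequest(
  subjectEmail: string,
  sources: DataSource[], // CRM, HR files, access logs, CCTV index, vendors...
): Promise<Record<string, object[]>> {
  const disclosure: Record<string, object[]> = {};
  for (const source of sources) {
    // Each connector hides a very different system behind the same interface.
    disclosure[source.name] = await source.findRecords(subjectEmail);
  }
  return disclosure; // one bundle per system, ready for exemption review
}
```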

In addition, while many subject access requests are driven by a sincere desire for data processing transparency, some (inevitably) are made with legal mischief in mind – for example, the disgruntled former employee who makes a subject access request as part of a fishing expedition to try to find grounds for bringing an unfair dismissal claim, or the representative from a competitor business looking for grounds to complain about the recipient organization’s data compliance.  Because of these risks, organizations are often hesitant about responding to subject access requests in case doing so attracts other, unforeseen and unknown, liabilities.

But, if you’re a data controlling business facing this conundrum, don’t expect any regulatory sympathy.  Regulators can only enforce the law as it exists today, and this expects prompt, comprehensive disclosure.  Not only that, but the fact that subject access requests prove costly and resource intensive to address serves a wider regulatory goal: namely, applying pressure on organizations to reduce the amount of data they hold, consistent with the data protection principle of “data minimization”.

Therefore, considering that data storage costs are falling all the time and that, in a world of Big Data, data collection is growing at an exponential rate, subject access becomes one of the most important – if not the most important – tools regulators have for encouraging businesses to minimize the data they retain.  The more data you hold, the more data you have to disclose in response to a subject access request – and the more costly and difficult that is to do.  This, in turn, makes adopting a carefully thought-out data retention policy much more attractive, whatever other business pressures there may be to keep data indefinitely.  Retain data for just a year or two, and there’ll be an awful lot less you need to disclose in response to a subject access request.  At the same time, your organization will enhance its overall data protection compliance.
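Mechanically, a retention sweep can be as simple as the sketch below; the record types and retention periods are placeholder assumptions, and the hard part is agreeing the schedule they come from, not the code.

```typescript
// Sketch: finding records that have outlived their retention period.
interface StoredRecord {
  id: string;
  type: string;
  createdAt: Date;
}

// Placeholder schedule; real periods come from your retention policy.
const retentionDays: Record<string, number> = {
  "crm-contact": 730, // roughly two years
  "access-log": 90,
  "cctv-frame": 30,
};

function isExpired(record: StoredRecord, now = new Date()): boolean {
  const limit = retentionDays[record.type];
  if (limit === undefined) return false; // unknown type: flag for review instead
  const ageInDays = (now.getTime() - record.createdAt.getTime()) / 86_400_000;
  return ageInDays > limit;
}

// Everything filtered out here is data you never have to search,
// disclose or defend in response to a subject access request.
const toDelete = (records: StoredRecord[]) => records.filter(r => isExpired(r));
```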

So what does all this mean?  When considering your strategy for responding to subject access requests, don’t consider it in isolation; think also about how it dovetails with your data retention strategy.  If you’re an in-house counsel or CPO struggling to get business stakeholder buy-in to adopt a comprehensive data retention strategy, use subject access risk as a means of achieving this internal buy-in.  The more robust your data retention policies, the more readily you’ll be able to fulfill subject access requests within the timescales permitted by law and with less effort, reducing complaints and enhancing compliance.  Conversely, with weaker (or non-existent) data retention policies, your exposure will be that much greater.

Subject access and data retention are therefore really just two sides of the same coin – and you wouldn’t base your compliance on just a coin toss, would you?

German Federal Court further strengthens review platforms

Posted on September 24th, 2014



With the ever increasing relevance of online review platforms, the discussion about platforms’ red lines is becoming more and more heated in Germany. The Federal Court of Justice has now issued its second decision in this area within only a couple of months. This time, a medical practitioner demanded that his profile be deleted entirely from a review platform focusing on health care professionals, arguing on the basis of unlawful processing of his personal data.

The case concerned a typical review platform where users may search for information about health care professionals. Aside from the review content, information such as name, address, expertise, contact data and opening hours is accessible on the platform. Users have to register with their email address before posting a review.

The Federal Court dismissed the claim. The court held that the platform’s freedom of communication outweighs the claimant’s right to informational self-determination, which forms the constitutional basis for privacy rights under German law. According to the court, it is legitimate for the platform provider to publish the practitioner’s profile and the review content based on Sec. 29 of the German Data Protection Act. This result does not come as a surprise, as the Federal Court had already decided in a similar case back in 2008 that a teacher cannot request deletion from a review platform dedicated to teachers.

What is slightly more surprising is that the court made some remarks emphasizing that the practitioner would be “not insignificantly” burdened by the publication of reviews on the portal, as he may face adverse economic effects caused by negative reviews. However, the court gave even greater weight to the public’s interest in information about medical services, in particular as the publication would only concern the “social sphere” of the claimant, rather than his private or intimate sphere.

In July 2014, the Federal Court also dismissed a claim for disclosure of contact details of a reviewer who repeatedly posted defamatory statements on a review platform.

 

 

Challenges in global data residency laws – and how to solve them

Posted on September 13th, 2014



Whoever would have thought that, in a world where it seems nearly everything is connected, we would still have laws requiring that data be held within specific territories or regions?  Yet it seems that as more and more data moves online, is stored in the cloud, and gets transmitted all around the world and back in the blink of an eye, governments become ever more determined to introduce territorial restrictions limiting the movement of data.

The best known example of this is the EU’s Data Protection Directive, which forbids movement of personal data outside of Europe to territories that do not provide “adequate” data protection – or, in layman’s terms, territories that the EU doesn’t consider to be safe.  This rule dates back to a technological world where data sat in a single database on a single server, and legislators sought to guard against businesses moving data outside of the EU in an attempt to circumvent European data protection laws.  Against that backdrop, it was a very sensible rule to introduce.  Twenty years on from its adoption, it is starting to look a little long in the tooth.

The problem is that legislative and regulatory thinking hasn’t advanced a great deal in that time.  Within those communities, there’s still a perception that data can, somehow, be kept within a single territory or region and not accessed or transmitted beyond those boundaries – or that, if it must, then implementing a standard form data protection agreement (so-called “model clauses”) between the ‘data exporter’ and the ‘data importer’ somehow solves the problem.

But here’s the thing: it doesn’t.  Denying that international data movements are an integral and necessary part of the global data economy is like denying that the earth moves round the sun.  Spend any time dealing with cloud vendors, or social media platforms, or interest based advertising providers, and you’ll quickly learn that data gets stored in multiple geographic locations, often through chains of different subcontractors, and tens, hundreds and perhaps even thousands of different databases.  With that knowledge, legislating that data should be kept in-territory or in-region is at best pointless.  At worst, it’s economically disastrous.

More than that, thinking that a ‘one size fits all’ set of model clause terms will somehow prove relevant across the multiplicity of different online business models that exist out there – or (and let’s be honest) that businesses executing those terms can and will actually comply with them – is nothing but a bad case of denial.

But despite this, these so-called ‘data residency’ laws only seem to be growing in favour – spurred inevitably in part by post-Snowden mistrust of other countries’ data protection regimes and in part by misguided economic self-interest.  Beyond the 31 countries in the European Economic Area that have adopted data residency requirements, other countries including Israel, Russia, Switzerland and South Africa (in EMEA), Argentina, Canada, Mexico and Uruguay (in the Americas), and Australia, India, Malaysia, Singapore and South Korea (in APAC) all have their own data residency rules.

The great irony here is that these rules will not prevent international movements of data.  They won’t even hamper them to the slightest degree.  Data will move beyond boundaries just as it always has, only at an ever quicker and more voluminous rate.  All of which raises the question: if data residency rules are headed for a collision with the increasingly globalised use of data, what can businesses do to comply?

For any large multinational organisation, there really is only one solution: Binding Corporate Rules.  Model clauses contain too many stiff and unworkable provisions that any commercial organisation would be very hesitant to sign – and, once the business reaches any sort of global scale, the prospect of regularly signing exponential numbers of model clauses quickly becomes very unattractive indeed.  Safe harbor is a fine solution, but only for transfers of data from Europe and Switzerland to the US and, with the future of safe harbor currently in doubt, it doesn’t offer the longevity on which to build a robust compliance platform.

So that leaves Binding Corporate Rules, which are specifically designed for large multinationals moving large volumes of data and for whom safe harbor and model clauses are not options.  More than that, Binding Corporate Rules have a regulatory recognition that extends beyond Europe – being expressly recognised in many non-EU countries as a valid solution for overcoming strict national data residency rules (Canada, Israel, South Africa, Singapore and Switzerland all being good examples).  And even in territories where Binding Corporate Rules don’t have express regulatory recognition, they’re at least generally tolerated as compliant with local data export regimes.

In the current political climate, it’s highly unlikely that data residency rules will relax in the short- to mid-term.  At the same time, data protection rules are only set to get stricter and carry greater risk (interesting fact: in 2011 there were 76 countries with data protection laws; by 2013 there were 101; and there are currently another 24 countries with new incoming privacy laws). Businesses with any kind of global footprint need to prepare for this and build out their data governance programs accordingly, with Binding Corporate Rules offering the most widely recognised and future-proofed solution.

The legal and practical realities of “personal data”

Posted on September 3rd, 2014



Are IP addresses personal data?  It’s a question I’m so frequently asked that I thought I’d pause for a moment to reflect on how the scope of “personal data” has changed since the EU Data Protection Directive’s adoption in 1995.

The Directive itself defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”.

That’s not the beginning and the end of the story though.  Over the years, various regulatory guidance has been published that has further shaped what we understand by the term “personal data”.  This guidance has taken the form of papers published by the Article 29 Working Party (most notably Opinion 4/2007 on the Concept of Personal Data) and by national regulators like the UK’s Information Commissioner’s Office (see here).  Then throw in various case law that has touched on this issue, like the Durant case in the UK and the European Court of Justice rulings in Bodil Lindqvist (Case C-101/01) and the Google Right to Be Forgotten case (C-131/12), and it’s apparent that an awful lot of time has been spent thinking about this issue by an awful lot of very clever people.

The danger, though, is that the debate over what is and isn’t personal data can get so weighed down in academic posturing that the practical realities of managing data are often overlooked.  When I’m asked whether or not data is personal, it’s typically a loaded question: the enquirer wants to know whether the data in question can be retained indefinitely, or whether it can be withheld from disclosures made in response to a subject access request, or whether it can be transferred internationally without restriction.  If the data’s not personal, then the answer is: yes, yes and yes.  If it is personal, then the enquirer needs to start thinking about how to put in place appropriate compliance measures for managing that data.

There are, of course, data types that are so obviously personal that it would be churlish to pretend otherwise: no one could claim that a name, address or telephone number isn’t personal.  But what should you do when confronted with something like an IP address, a global user ID, or a cookie string?  Are these data types “personal”?  If you’re a business trying to operationalise a privacy compliance program, an answer of “maybe” just doesn’t cut it.  Nor does an answer of “err on the side of caution and treat it as personal anyway”, as this can lead to substantial engineering and compliance costs in pursuit of a vague – and possibly even unwarranted – benefit.

So what should you do?  Legal purists might start exploring whether these data types “relate” to an “identified or identifiable person”, as per the Directive.  They might note that the Directive mentions “direct or indirect” identification, including by means of an “identification number” (an obvious hook for arguing that an IP address is personal data).  They might explore the content, purpose or result of the data processing, as proposed by the Article 29 Working Party, or point out that these data types “enable data subjects to be ‘singled out’, even if their real names are not known.”  Or they might make the (by now slightly fatigued) argument that these data types relate to a device, not to a person – an argument that may once have worked in a world where a single computer was shared by a family of four, but that now looks increasingly weak in a world where your average consumer owns multiple devices, each with multiple unique IDs.

There is an alternative, simpler test though: ask yourself why this data is processed in the first place and what the underlying individuals would therefore expect as a consequence.  For example: Is it collected just to prevent online fraud or is it instead being put to use for targeting purposes? Depending on your answer, would individuals therefore expect to receive a bunch of cookie strings in response to a subject access request?  How would they feel about you retaining their IP address indefinitely if it was held separately from other personal identifiers?

The answers to these questions will of course vary depending on the nature of the business you run – it’s difficult to imagine a Not For Profit realistically being expected to disclose IP addresses contained in web server logs in response to a subject access request, but perhaps not a huge stretch, say, for a targeted ad platform.  The point is simply that trying to apply black and white boundaries to what is, and isn’t, personal will, in most cases, prove an unhelpful exercise wholly devoid of context.  That’s why Privacy Impact Assessments are so important as a tool to assess these issues and propose measured, proportionate responses to them.

The debate over the scope of personal data is far from over, particularly as new technologies come online and regulators and courts continue to publish decisions about what they consider to be personal.  But, faced with practical compliance challenges about how to handle data in a day-to-day context, it’s worth stepping back from legal and regulatory guidance alone.  Of course, I wouldn’t for a second advocate making serious compliance decisions in the absence of legal advice; it’s simply that decisions based on legal merit alone risk not giving due consideration to data subject trust.

And what is data protection about, if not about trust?