Archive for the ‘Legislative reform’ Category

WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014



On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European Regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling, but it elaborates further on certain legal and practical aspects and, as a result, offers invaluable insight into the European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting de-listing to the EU domains of a search engine cannot be considered sufficient to satisfactorily guarantee the rights of data subjects, and that de-listing decisions should therefore be implemented across all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not indicate, in response to a particular query, that some results have been de-listed. The WP29’s preference is that this information is provided only generically.
  • The WP29 also advises that search engines should not inform the original publishers that some of their pages have been de-listed in response to a RTBF request.

 

Spam texts: “substantially distressing” or just annoying?

Posted on November 11th, 2014



The Department for Culture, Media and Sport (“DCMS”) recently launched a consultation to reduce or even remove the threshold of harm the Information Commissioner’s Office (“ICO”) needs to establish in order to fine nuisance callers, texters or emailers.

Background

In 2010 ICO was given powers to issue Monetary Penalty Notices (“MPNs”, or fines to you and me) of up to £500,000 against companies that breach the Data Protection Act 1998 (“DPA”).  In 2011 these powers were extended to cover breaches of the Privacy and Electronic Communications Regulations 2003 (“PECR”), which sought to control the scourge of nuisance calls, texts and emails.

At present the standard ICO has to establish before issuing an MPN is a high one: that there was a serious, deliberate (or reckless) contravention of the DPA or PECR of a kind likely to cause substantial damage or substantial distress.  Whilst unsolicited marketing calls are certainly irritating, can they really be said to cause “substantial distress”?  A text about a PPI claim from a number you don’t know is certainly annoying, but could it seriously be considered to cause “substantial damage”?  Not exactly; and therein lies the problem.

Overturned

In the first big case where ICO used this power, it issued an MPN of £300,000 to an individual who had allegedly sent millions of spam texts about PPI claims to users who had not consented to receive them.  Upon appeal the Information Rights Tribunal overturned the fine.  The First Tier Tribunal found that whilst there was a breach of PECR (the messages were unsolicited, deliberate, sent with no opt-out link and for financial gain), the damage or distress caused could not be described as substantial.  Every mobile user knew what a PPI spam text meant and was unlikely to be concerned for their safety or to have false expectations of compensation.  A short tut of irritation and deletion of the message solved the problem.  The Upper Tribunal agreed: a few spam texts did not cause substantial damage or distress.  Interestingly, the judge pointed out that the “substantial” requirement had come from the UK government and was stricter than that required by the relevant EU Directive, and suggested the statutory test be revisited.

This does not however mean that ICO has been unable to use the power.  Since 2012 it has issued nine MPNs totalling £1.1m to direct marketers who breached PECR.  More emphasis is placed on the overall level of distress suffered by hundreds or thousands of victims, which can be considered substantial.  ICO concentrates on the worst offenders: cold callers who deliberately and repeatedly call numbers registered with the Telephone Preference Service (“TPS”, Ofcom’s “do not call” list) even when asked to stop, and those that attract hundreds of complaints.

In fact, in this particular case there were specific problems with the MPN document (this will not necessarily come as a surprise to those familiar with ICO MPNs).  The Tribunal criticised ICO for a number of reasons: not being specific about the Regulation contravened; omitting important factual information; including within the period of contravention time when ICO did not yet have the power to fine; and widening the claim from the initial few hundred complaints to the much larger body of texts that may have been sent.  Once all this was taken into consideration, only 270 unsolicited texts had been sent to 160 people.

Proposal

ICO has been very vocal about having its hands tied in this matter and has long pushed for a change in the law (which is consistent with ICO’s broader campaigning for new powers).  Nuisance calls are a cause of great irritation for the public, yet currently only the worst offenders can be targeted.  Statistics compiled by ICO and TPS showed that most of the nuisance is caused by a large number of companies each making a smaller number of calls.  Of 982 companies that TPS received complaints about, 80% received fewer than 5 complaints and only 20 companies received more than 25 complaints.

Following a select committee enquiry, an All Party Parliamentary Group and a backbench debate, DCMS has launched the consultation, which invites responses on whether the threshold should be lowered to “annoyance, inconvenience or anxiety”.  This would bring it in line with the threshold Ofcom must consider when fining telecoms operators for persistent misuse in the form of silent/abandoned calls.  ICO estimates that had this threshold been in place since 2012, a further 50 companies would have been investigated or fined.

The three options being considered are: to do nothing, to lower the threshold or to remove it altogether.  Both ICO and DCMS favour complete removal.  ICO would thus only need to prove a breach was serious and deliberate/reckless.

Comment

I was at a seminar last week with the Information Commissioner himself, Chris Graham, at which he announced the consultation.  It was pretty clear he is itching to get his hands on these new powers to tackle rogue callers/emailers/texters, but he emphasised that any new powers would still be used proportionately and in conjunction with other enforcement tools such as compliance meetings and enforcement notices.  Even the announcement of a new law should act as a deterrent: typically, whenever a large MPN is announced, the number of complaints about direct marketers falls the following month.

The consultation document is squarely aimed at unsolicited calls, texts and emails and is consistently stated to apply only to certain regulations of PECR.  There is no suggestion that the threshold be reduced for other breaches of PECR or the DPA.  It will be interesting to see how any reform will work in practice, as the actual threshold is contained within the DPA and so reform will require its amendment.

The consultation will run until 7 December 2014; the document can be found here.  Organisations that are concerned about these proposals now have an opportunity to make their voices heard.

DPA update: finally the end of enforced Subject Access Requests?

Posted on November 10th, 2014



Employers who force prospective employees to obtain a report from the police, via a subject access request, detailing any criminal history or investigation will soon themselves be committing a criminal offence.

Background

The Ministry of Justice recently announced that on 1 December 2014, section 56 of the Data Protection Act 1998 (“DPA”) will come into force across the UK.  It will make it a criminal offence for employers to demand that prospective employees obtain Subject Access Request (“SAR”) reports.

Some employers are concerned that s56 will make it an offence to undertake Disclosure & Barring Service (“DBS”, the new name for the Criminal Records Bureau) checks on prospective employees.  This is not the case.  In fact, s56 is designed to encourage the use of such checks and to prevent enforced SARs.

Purpose

The correct procedure for obtaining the criminal records of prospective employees is via the disclosure service provided by DBS or the Scottish equivalent, Disclosure Scotland (“DS”).  Whilst these services were being developed, employers could demand that applicants make SARs directly to the police and pass on the report.

The purpose of s56 was to close this loophole once the DBS/DS system had become fully operational.  For that reason s56 was inserted into the DPA but not enacted with the rest of its provisions.  It applies only to records obtained by the individual from the police using their s7 SAR rights.  SAR reports contain far more information than would be revealed under a DBS/DS check, such as police intelligence and spent convictions.  As a result the practice is frowned upon by the authorities: the police SAR form states that enforced SARs are exploitative and contrary to the spirit of the DPA, and the Information Commissioner’s Office (“ICO”) guidance on employment has long advised against the practice in stern wording (“Do not force applicants…”!).

Exemptions

The only exemptions to s56 are situations when the report is justified as being in the public interest or when required by law; the s28 national security exemption does not apply.

Opinion

There has been no specific guidance released on s56.  However, it is clear from the Written Ministerial Statement which announced the change in March 2014, and the ICO release which followed it, that the section is being brought into force to close the loophole.  ICO has publicly stated that it intends to prosecute infringers under the offence so as to encourage the correct use of the DBS/DS procedure and prevent enforced SARs.  s56 does nothing to prevent employers requesting DBS/DS checks on prospective employees in the usual way.

What this means in practice is that any employer who demands that a potential employee file a SAR with the police and provide the results will be committing a criminal offence, punishable by a potentially unlimited fine.  Instead, employers should use the DBS procedure (DS in Scotland) for background criminal checks.  This sneaky backdoor route to obtaining far more sensitive personal data than employers are entitled to – often harming the individual’s job prospects in the process – will be shut for good.  Non-compliant employers should take note.

Update 19 November 2014

In an informative webinar on this subject yesterday, ICO mentioned a delay in the commencement date.  When I queried this the official response was: “a technical issue encountered when finalising arrangements for introduction means there will be a delay to the date for commencing Section 56 of the Data Protection Act. The Government is working to urgently resolve this issue. There is no exact date as yet.”

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data, and that it can only be processed with the consent of the individual (which, by the way, is very difficult to obtain to the required standard), leaves companies processing IoT data nowhere to go, is likely to unnecessarily stifle innovation, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.

 

This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals (see the pseudonymisation sketch after this list).
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as not to unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
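
Step 4 above often turns on whether identifiers have been robustly pseudonymised. Purely as an illustration of one common technique (nothing here is prescribed by the WP29 Opinion, and the key and field names are hypothetical), a keyed hash can replace a raw device identifier while keeping records linkable for analytics:

```python
import hashlib
import hmac

# Hypothetical secret key: held separately from the data set and
# rotatable. Anyone holding it can re-link records to the device,
# which is why the output is pseudonymous rather than anonymous,
# and may therefore still be personal data on the WP29's reading.
SECRET_KEY = b"replace-with-a-securely-stored-random-key"

def pseudonymise_device_id(device_id: str) -> str:
    """Replace a raw device identifier with an HMAC-SHA256 digest."""
    return hmac.new(SECRET_KEY, device_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Two readings from the same device remain linkable for analytics,
# but the raw MAC address never reaches the analytics layer.
readings = [
    {"device": "AA:BB:CC:DD:EE:FF", "steps": 4200},
    {"device": "AA:BB:CC:DD:EE:FF", "steps": 3100},
]
for r in readings:
    r["device"] = pseudonymise_device_id(r["device"])
print(readings)
```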

 

What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also to the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.

 

It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device (see the sketch after this list);
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.
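
Recommendation 2.5 is easy to picture in code. As a minimal, purely illustrative sketch (the Opinion prescribes no implementation, and every name below is invented), a wearable might sum step counts on the device and transmit only a coarse daily total rather than the raw sensor stream:

```python
from datetime import date

class StepCounter:
    """Illustrative on-device aggregator: raw readings never leave
    the device; only a coarse daily total is transmitted."""

    def __init__(self) -> None:
        self._raw_readings = []  # kept in device memory only

    def record(self, steps_in_interval: int) -> None:
        self._raw_readings.append(steps_in_interval)

    def daily_summary(self) -> dict:
        # Aggregate locally, then discard the raw data, in line with
        # the WP29's advice to delete raw data once extracted.
        total = sum(self._raw_readings)
        self._raw_readings.clear()
        return {"date": date.today().isoformat(), "steps": total}

counter = StepCounter()
for reading in (120, 340, 75):  # simulated sensor intervals
    counter.record(reading)

payload = counter.daily_summary()  # the only data that leaves the device
print(payload)
```

The same aggregate-then-discard pattern also serves the advice in recommendation 1 above to delete raw data as soon as the required data has been extracted.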

 

Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

Are DPA notifications obsolete?

Posted on October 27th, 2014



For almost 10 years I’ve been practising data protection law and advising multinational organizations on their strategic approach to global data processing operations. Usually, when it comes to complying with European data protection law, notifying the organization’s data processing activities to the national data protection authorities (DPAs) is one of the most burdensome exercises. It may look simple, but companies often underestimate the work involved.

As a reminder, article 18 of the Data Protection Directive 95/46/EC requires data controllers (or their representatives in Europe) to notify the DPA prior to carrying out their processing operations. In practice, this means that they must file a notification with the DPA in each Member State in which they are processing personal data, specifying who the data controller is, the types of data that are collected, the purpose(s) for processing such data, whether any of that data is transferred outside the EEA, and how individuals can exercise their privacy rights.

In a perfect world, this would be a fairly straightforward process whereby organizations would simply file a single notification with the DPA in every Member State. But that would be too easy! The reality is that DPA notification procedures are not harmonized in Europe, which means that organizations must comply with the notification procedures of each Member State as defined by national law. As a result, each DPA has established its own notification rules, which impose a pre-established notification form, procedure and formalities on data controllers. Europe is not the only region to have notification rules. In Latin America, organizations must file notifications in Argentina, Uruguay and Peru. And several African countries (usually members of the “Francophonie”, such as Morocco, Senegal, Tunisia and the Ivory Coast) have also adopted data protection laws requiring data controllers to notify their data processing activities.

Failing to comply with this requirement puts your organization at risk with the DPAs, which in some countries have the power to conduct audits and inspections of an organization’s processing activities. If a company is found to be in violation of the law, some DPAs may impose sanctions (such as fines or public warnings) or order the data to be blocked or the data processing to cease immediately. Furthermore, companies may also be sanctioned by the national courts. For example, on October 8th, 2014, the labour chamber of the French Court of Cassation (the equivalent of the Supreme Court for civil and criminal matters) ruled that an employer could not use the data collected via the company’s messaging system as evidence to lay off one of its employees for excessive private use of that messaging service (i.e., due to the high number of private emails transiting via the messaging service), because the company had failed to notify the French Data Protection Authority (CNIL) prior to monitoring the use of the messaging service.

One could also argue that notifications may get scrapped altogether by the draft Data Protection Regulation (currently being discussed by the European legislator), so that companies will no longer be required to notify their data processing activities to the regulator. True, but don’t hold your breath! The draft Regulation is currently stuck in the Council of Ministers and, assuming it does get adopted by the European legislator, the most realistic date of adoption could be 2016. Given that the text provides a two-year transition period, the Regulation would not come into force before 2018. And at its last meeting, on October 3rd, 2014, the Council agreed to reach a partial general approach on the text of chapter IV of the draft Regulation on the understanding that “nothing is agreed until everything is agreed.”

So, are DPA notifications obsolete? The answer is clearly “no”. If you’re thinking: “why all the fuss? Do I really need to go through all this bureaucracy?” think again! The reason organizations must notify their data processing activities to the DPAs is simple: it’s the law. Until the Data Protection Regulation comes into force (and even then, some processing activities may still require the DPA’s prior approval), companies must continue to file their notifications. Doing so is a necessary component of any global privacy compliance project. It requires organizations to strategize their processing operations and to prioritize the jurisdictions in which they are developing their business. And failing to do so simply puts your organization at risk.

This article was first published in the IAPP’s Privacy Tracker on October 23rd, 2014.

PART 2 – The regulatory outlook for the Internet of Things

Posted on October 22nd, 2014



In Part 1 of this piece I posed the question: the Internet of Things – what is it? I argued that the very concept of the Internet of Things (“IoT”) is somewhat ill-defined, making the point that there is no settled definition of the IoT and that, even if there were, the definition would only change. What’s more, the IoT will mean different things to different people and will speak to something new each year.

For all the commentary, there is no specific IoT law today (sorry, there is no Internet of Things (Interconnectivity) Act in the UK, nor will there be any time soon). We are left applying a variety of existing laws across telecoms, intellectual property, competition, health and safety and data privacy/security. Equally, with a number of open questions about how the IoT will work, how devices will communicate and identify each other and so on, there is also a lack of standards and industry-wide co-operation around the IoT.

Because the IoT is frequently based around data use, and has potentially intrusive applications in the consumer space (think wearables, intelligent vehicles and healthtech), there is no doubt that convergence around the IoT will fan privacy questions and concerns.

An evolving landscape

This lack of definition, coupled with a nascent landscape of standards, interfaces and protocols, leaves many open questions about future regulation and the application of current laws. On the regulatory front there is little sign of actual law-making, or of which rules may evolve to influence our approach or analysis.

Across the US, the UK and the rest of Europe, the regulatory bodies with an interest in the IoT are diverse, with a range of regulatory mandates and sometimes with a defined role confined to specific sectors. Some of these regulators are waking up to the potential issues posed by the IoT, and a few are reaching out to the industry as a whole to consult and stimulate discussion. We are more likely to see piecemeal regulation addressing specific issues than something all-encompassing.

The challenge of new technology

Undoubtedly the Internet of Things will challenge law makers as well as those of us who construe the law. It is possible that, in navigating these challenges and our current matrix of laws and principles, we may influence the regulatory position as a result. Some obvious examples of where these challenges may come from are:

  • Adaptations to spectrum allocation. If more devices want to communicate, many of these will do so wirelessly (whether via short range or wide area comms or mobile). The key is that these exchanges don’t interfere with each other and that there is sufficient capacity available within the allocated spectrum. This may need to be regulated.
  • Equally, as demand increases for a scarce resource, what kind of spectrum allocation is “fair” and “optimal”, and is some machine-to-machine traffic more important than other traffic? With echoes of the net neutrality debate, the way this evolves will be interesting. Additionally, if market dominance emerges around one technology, will there be competition/anti-trust concerns?
  • The technologies surrounding the IoT will throw up intellectual property and licensing issues. The common standards and the exchange and identification protocols themselves may be controlled by an interested party or parties, or released on an “open” basis. Regulation may need to step in to promote economic advance via speedy adoption, or simply to act as an honest broker in a competitive world; and
  • In some applications of the IoT the concept of privacy will be challenged. In a decentralised world the thorny issues of obtaining and reaffirming consent will be difficult. This said, many IoT deployments will not involve personal information or identifiers. Plus, whatever the data, issues around security become more acute.

We have a good idea what issues may be posed, but we don’t yet know which will impose themselves sufficiently to force regulation or market intervention.

Consultation – what IoT means for the policy agenda

There have been some opening shots in this potential regulatory debate, because ever-greater interconnectivity between multiple devices raises potential issues.

In issuing a new Consultation, “Promoting investment and innovation in the Internet of Things”, Ofcom (the UK’s communications regulator) kicked off its own learning exercise to identify potential policy concerns around:

  • spectrum allocation and providing for potential demand;
  • understanding the robustness and reliability requirements placed upon networks which demand resilience and security (the corresponding issue of privacy is recognised also);
  • the need for each connected device to have an assigned name or identifier, and just how those addresses should be determined and potentially how they would be assigned; and
  • understanding Ofcom’s potential role as the UK’s regulator in an area (connectivity) key to the evolution of the IoT.

In a varied and quite penetrable paper, Ofcom’s consultation recognises what many will be shouting: its published view “is that industry is best placed to drive the development, standardisation and commercialisation of new technology”. However, it goes on to recognise that “given the potential for significant benefits from the development of the IoT across a range of industry sectors, [Ofcom] are interested in views on whether we should be more proactive; for example, in identifying and making available key frequency bands, or in helping to drive technical standards.”

Europe muses while Working Party 29 wades in early, warning about privacy

IoT adoption has been on Europe’s “Digital Agenda” for some time, and in 2013 the European Commission reported back on the Conclusions of its Internet of Things public consultation. There is also the “Connected Continent” initiative chasing a single EU telecoms market for jobs and growth. The usual dichotomy is playing out, equating technology adoption with “growth” while Europe wrestles with an urge to protect consumers and markets.

In just one such fight with this urge, in the past month the Article 29 Working Party (comprising the data privacy regulators of Europe) published its Opinion 8/2014 on the Recent Developments on the Internet of Things. Recognising that it is impossible to predict with any certainty the extent to which the IoT will develop, the group also calls out that its development must “respect the many privacy and security challenges which can be associated with IoT”.

Their Opinion focuses on three specific IoT developments:

  • Wearable Computing;
  • Quantified Self; and
  • Domotics (home automation).

The Opinion doesn’t even consider B2B applications and more global issues like “smart cities” and “smart transportation”, or M2M (“machine to machine”) developments. Yet the principles and recommendations in the Opinion may well apply outside its strict scope and cover these other developments in the IoT. It is one of our only guiding lights (and one which applies high standards of responsibility).

As one would expect, the Opinion identifies the “main data protection risks that lie within the ecosystem of the IoT before providing guidance on how the EU legal framework should be applied in this context”. What’s more the Working Party “supports the incorporation of the highest possible guarantees for individual users at the heart of the projects by relevant stakeholders. In particular, users must remain in complete control of their personal data throughout the product lifecycle, and when organisations rely on consent as a basis for processing, the consent should be fully informed, freely given and specific.”

The Fieldfisher team will shortly publish its thoughts on and explanation of this Opinion. As one may expect, the IoT can and will challenge the privacy notions of transparency and consent, let alone proportionality and purpose limitation. This means that accommodating the EU’s data privacy principles within some applications of the IoT will not always be easy. Security poses another tricky concept and conversation. Typically these are issues to be tackled at the design stage and not as a legal afterthought. Step forward the concept of privacy by design (a concept now recognised around the globe).

In time, who knows, we may even see the EU Data Protection Regulation pass and face enhanced privacy obligations in Europe, with a new focus on “profiling” and legal responsibilities extending beyond the data controller to the data processor, exerting their own force over the IoT.

The US is also alive to the potential needs of IoT

But Europe is not alone: with its focus on activity-specific laws or laws regulating specific industries, even the US may be addressing particular IoT concerns with legislation. Take the “We Are Watching You Act” currently before Congress and the “Black Box Privacy Protection Act” before the House of Representatives. Each now apparently has a low chance of actually passing, but they would respectively regulate surveillance by video devices in the home and force car manufacturers to disclose to consumers the presence of event data recorders, or ‘black boxes’, in new automobiles.

A wider US development possibly comes from the Federal Trade Commission, which hosted public workshops in 2013, itself interested in privacy and security in the connected world and the growing connectivity of devices. In the FTC’s own words: “[c]onnected devices can communicate with consumers, transmit data back to companies, and compile data for third parties such as researchers, health care providers, or even other consumers, who can measure how their product usage compares with that of their neighbors. The workshop brought together academics, business and industry representatives, and consumer advocacy groups to explore the security and privacy issues in this changing world. The workshop served to inform the Commission about the developments in this area.” Though there are no concrete proposals yet, 2014 has seen a variety of continued commentary around “building trust” and “maximising consumer benefits through consumer control”. With its first IoT enforcement action falling in 2013 (in respect of connected baby monitors from TRENDnet whose feeds were not secure), there is no doubt the evolution of the IoT is on the FTC’s radar.

FTC Chairwoman Edith Ramirez commented that “The Internet of Things holds great promise for innovative consumer products and services. But consumer privacy and security must remain a priority as companies develop more devices that connect to the Internet”.

No specific law, but plenty of applicable laws

My gut instinct to hold back on my IoT commentary had served me well enough; in the legal sense, with little to say, perhaps even now I have spoken too soon. What is clear is that we are already immersing ourselves in IoT projects, wearable device launches, health monitoring apps, intelligent vehicles and all the related data sharing. The application of law to the IoT needs some legal thought and, without specific legislation today, as for many other emerging technologies we must draw upon:

  • Our insight into the existing law and its current application across different legal fields; and
  • Rather than applying a rule specific to the IoT, asking the right questions to build a picture of the technology and the way it communicates, and figuring out the commercial realities and relative risks posed by these interactions.

Whether it is the internet of customers, the internet of people, data or processes, or even the internet of everything, applied legal analysis will get us far enough until we actually see some substantive law for the IoT. This is today’s IoT challenge.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com

UK to introduce emergency data retention measures

Posted on July 15th, 2014



The UK Prime Minister David Cameron announced last week that the Government is taking emergency measures to fast-track new legislation, the Data Retention and Investigatory Powers Bill, which will force communications service providers (i.e. telecommunications companies and internet service providers, together “CSPs”) to store communications data (including call and internet search metadata) for 12 months.

This announcement follows the CJEU’s ruling in April that the Data Retention Directive 2006/24/EC (the “Directive“), which requires companies to store communications data for up to two years, is invalid because it contravenes the right to privacy and data protection and the principle of proportionality under the EU Charter of Fundamental Rights (the “Charter“). The CJEU was particularly concerned about the lack of restrictions on how, why and when data could be used. It called for a measure which was more specific in terms of crimes covered and respective retention periods.

The PM said that the emergency law was necessary to protect existing interception capabilities, and that without it the Government would be less able to protect the country from paedophiles, terrorists and other serious criminals. Cameron said the new legislation will respond to the CJEU’s concerns and provide a clear legal basis for companies to retain such communications data. He also stressed that the new measures would cover the retention only of metadata, such as the time, place and frequency of communications, and would not cover the content of communications.  The emergency Bill is intended as a temporary measure and is to expire in 2016. The Government intends that the legislation will ensure that, in the short term, UK security and law enforcement agencies can continue to function whilst Parliament has time to examine the Regulation of Investigatory Powers Act 2000 (RIPA) and make recommendations on how it could be modernised and improved. Whilst Cameron stressed that the measures did not impose new obligations on CSPs and insisted they would not authorise new intrusions on civil liberties, the Bill faces criticism that it extends the already far-reaching interception rights under RIPA, and that, in light of the CJEU decision, the temporary measure also contravenes the Charter.

At present, in order to comply with their obligations under the Directive, CSPs already operate significant storage and retrieval systems to retain data from which they can derive no further use or revenue. If the draft Bill is enacted with little further amendment, the UK’s Secretary of State could be issuing new retention notices later this year. Those CSPs subject to retention obligations today will be reading carefully as these arrive. It is not yet clear whether the legislative burden and cost of compliance is likely to spread to additional CSPs not previously notified under the current retention regime. From the Bill’s drafting it appears this could conceivably happen. It is equally clear that there is no mechanism to recoup these costs other than from their general business operations.
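
To illustrate the sort of housekeeping a rolling 12-month retention window implies for a CSP (the Bill prescribes no implementation, and the schema and names below are invented for the purpose), a scheduled purge job over a metadata store might look like this sketch:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical metadata store: who communicated with whom, when and
# over which channel, but (as the Bill envisages) no content.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comms_metadata (
        id          INTEGER PRIMARY KEY,
        occurred_at TEXT NOT NULL,  -- ISO 8601 timestamp
        originator  TEXT NOT NULL,
        recipient   TEXT NOT NULL,
        channel     TEXT NOT NULL   -- e.g. 'call', 'sms', 'ip'
    )
""")

def purge_expired(connection: sqlite3.Connection,
                  retention_days: int = 365) -> int:
    """Delete metadata older than the retention window; return rows removed."""
    cutoff = (datetime.utcnow() - timedelta(days=retention_days)).isoformat()
    cur = connection.execute(
        "DELETE FROM comms_metadata WHERE occurred_at < ?", (cutoff,))
    connection.commit()
    return cur.rowcount

# Run on a schedule so that nothing outlives the 12-month window.
print(purge_expired(conn))
```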

Britain is the first EU country to seek to rewrite its laws to continue data retention since the CJEU decision, and the Government said it was in close contact with other European states on the issue.

By comparison, in Germany, when the Directive was initially implemented, the German courts took the view that the German implementation far exceeded the limits set by the German constitutional right of informational self-determination, in that it did not narrow down the scope of use of the retained data sufficiently, e.g. by not limiting it to the prosecution or prevention of certain severe criminal acts. In Germany’s new Telecommunication Act, enacted in 2012, the provisions pertaining to data retention were deleted and not replaced by the compulsory principles in the Directive. Treaty violation proceedings against Germany by the EU Commission ensued; however, those proceedings have now lost their basis entirely as a result of the CJEU ruling.

Meanwhile, the Constitutional Court of Austria last month declared that Austrian data retention laws were unconstitutional. Austria is the first EU Member State to annul data retention laws in response to the CJEU decision.  Austrian companies are now only obliged to retain data for specific purposes provided by law, such as billing or fault recovery.

Whether other EU countries will now follow the UK’s lead, potentially introducing a patchwork of data retention standards for CSPs throughout the EU, remains to be seen. If this happens, then equally uncertain is the conflict this will create between, on the one hand, nationally-driven data retention standards and, on the other, EU fundamental rights of privacy and data protection.

 

The Long Arm of the Law

Posted on May 9th, 2014



There’s a fair amount of indignation swilling around EU privacy regulators, politicians and policy makers following last year’s revelations about the NSA’s access to data on EU citizens. Hence the enthusiasm of some parties for the idea of building a European internet seemingly beyond the reach of any non-EU actors (a.k.a. the US Government). So when a US district judge recently rejected Microsoft’s opposition to a US warrant requiring it to disclose data held on a server in Ireland, it appeared to be an example of the overreaching influence of US law ignoring EU data privacy rules. However, a reading of the published court document setting out US Judge Francis’ decision does not obviously lend itself to this dichotomy. In fact, EU data protection and privacy law principles do not appear to have been discussed or taken into account as part of the decision.

What did the decision deal with?

Instead, the main focus of the decision was the extraterritorial reach of the search warrant issued against Microsoft under the US Stored Communications Act (SCA). The SCA governs the obligations of internet service providers to disclose information to, amongst others, the US Government. Microsoft argued that a US federal court can only issue warrants for the search and seizure of property within the territorial limits of the US. It followed, Microsoft said, that a warrant seeking access to information associated with a specific web-based email account stored at Microsoft premises in Ireland was seeking information beyond the reach of US law enforcement authorities.

Well, Judge Francis was having none of it. He assessed the structure of the SCA, its legislative history and the practical consequences of Microsoft’s view and dismissed Microsoft’s argument. He argued that it ‘has long been the law that a subpoena [which is what he argued the warrant was] requires the recipient to produce information in its possession, custody, or control regardless of the location of that information’. Furthermore, the legal authorities in Judge Francis’ opinion supported the notion that ‘an entity [that is] subject to jurisdiction in the United States, like Microsoft, may be required to obtain evidence from abroad in connection with a criminal investigation‘.

Although this District Court decision has gone against Microsoft, it is clear that Microsoft is in it for the long haul. Public pronouncements by Microsoft have indicated that it sees this decision as just one step in the process of challenging (and seeking to correct) the US Government’s view on their right to access data stored electronically outside the US.

What are the implications of the decision?

This decision seems to confirm the status quo for the moment as it relates to internet service providers. In other words, US ISPs with EU subsidiaries could reasonably take the view that they are required to comply with warrants and subpoenas from US law enforcement agencies relating to data held in the EU. A US ISP subsidiary with an EU parent should also think very carefully before challenging a requirement under US law to provide access to data held in the EU. Judge Francis did not clearly spell out that the reach of the law here only applies to US parent ISPs. Therefore it would seem that a US ISP subsidiary would need to be able to argue that the information held in the EU that was the subject of a warrant under the SCA was not in its possession, custody or control in order to deny access.

For cloud computing services more generally, the decision has not changed the general outlook. But given this reminder of the reach of US law, cloud providers with a US presence should be thinking about how to structure services for their EU customers. For instance, offering encryption solutions where the EU customer holds the encryption key would require US law enforcement authorities to approach the EU customer. Alternatively, using a corporate structure in which a US cloud company can argue that it does not have possession, custody or control over information held by its EU sister company would also make the strict enforcement of a warrant against the US company more difficult.
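
By way of illustration only (this describes no particular product; the store name is invented and the example assumes the widely used Python cryptography package), customer-held-key encryption means the provider only ever stores ciphertext, so a disclosure order served on the provider yields nothing readable without approaching the customer:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generated and held solely on the EU customer's own infrastructure.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

# Client side: encrypt before anything leaves the customer's estate.
document = b"Commercially sensitive EU customer data"
ciphertext = cipher.encrypt(document)

# Provider side: a stand-in for the cloud store. It holds only
# ciphertext and has no means of decrypting it.
cloud_store = {"doc-001": ciphertext}

# Only the key holder can recover the plaintext.
assert cipher.decrypt(cloud_store["doc-001"]) == document
```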

In any event, if Microsoft continues to pursue its challenge through the US courts, as it indicates it will, then it is possible that a higher court will take a more nuanced view, balancing perhaps US security concerns against the constraints of extraterritoriality and privacy. At some point in all of this, the US courts may well consider Microsoft’s obligations under EU data protection law in more detail. Whilst there is no definitive prohibition under current EU data protection law preventing Microsoft, or any other cloud provider, from complying with a US law enforcement request for access to personal data held in the EU, this is one of the critical issues being discussed as part of the reforms to the EU data protection regulatory framework following the Snowden revelations.

Microsoft evidently sees this as a fundamental issue of customer trust in its services. Just as Microsoft, Google, Facebook and others have argued in recent months that they want to be able to tell users when the US Government seeks access to those users’ information, so this move by Microsoft to challenge the US Government’s right to access data held overseas is part of a similar stand against Government powers. Whether or not Microsoft will be successful in its campaign remains to be seen, but cloud providers will doubtless watch this debate with interest given the repercussions it could have for defending themselves against similar requests from the US Government.


The ECJ finds the Data Retention Directive disproportionate

Posted on April 11th, 2014



The Data Retention Directive has always been controversial. Born as it was after the tragedies of the 2004 Madrid and 2005 London bombings, it has faced considerable criticism concerning its scope and lengthy debate over whether it is a measured response to the perceived threat.  It is therefore no surprise that over the years a number of constitutional courts in EU Member States have struck down the implementing legislation in their local law as unconstitutional (e.g. Romania and Germany).  But now the ECJ, having considered references from Irish and Austrian courts, has ruled that the Directive is invalid since it is disproportionate in scope and incompatible with the rights to privacy and data protection under the EU Charter of Fundamental Rights.

What did the ECJ object to?

The ECJ’s analysis focused on the extent of the Directive’s interference with the fundamental rights under Article 7 (right to privacy) and Article 8 (right to data protection) of the Charter. Any limitation of fundamental rights must be provided for by law, be proportionate, necessary and genuinely meet objectives of general interest. The ECJ considered that the Directive’s interference was ‘wide-ranging and…particularly serious’. Yet the ECJ conceded that the interference did not extend to obtaining knowledge of the content of communications and that its material objective – the fight against serious crime – was an objective of general interest. Consequently the key issue was whether the measures under the Directive were proportionate and necessary to fulfil the objective.

For the ECJ, the requirements under the Directive do not fulfil the strictly necessary test. In particular, the ECJ emphasised the ubiquitous nature of the retention – all data, all means, all subscribers and registered users. The requirements affect individuals indiscriminately without exception. Furthermore, there are no objective criteria determining the limits of national authorities to access and use the data. All in all the interference is not limited to what is strictly necessary and consequently the interference is disproportionate.

Of particular importance given the on-going EU-US debate about Safe Harbor and US authorities’ access to EU data, is that the ECJ was also worried that the Directive did not require the retained data to be held within the EU. This suggests that the ECJ expects global companies to devise locally based EU data retention systems regardless of the cost or inconvenience.

What are the implications of the ECJ judgment?

This is a hugely significant decision coming as it does after the revelations prompted by Edward Snowden about the access by western law enforcement agencies to masses of data concerning individuals’ use of electronic resources. Although the Advocate General in his opinion last year suggested that an invalidity ruling on the Directive be suspended to allow the EU time to amend the legislation, the ECJ has not adopted this approach. Therefore, to all intents and purposes, the Directive is no longer EU law.

This ECJ judgment effectively overrules any implementing legislation, such as the UK’s Data Retention Regulations. This does not mean that UK ISPs and Telcos won’t continue to collect and retain communications data for billing and other legitimate business purposes as permitted under the UK’s DPA and PEC Regs. But they no longer have to do so in compliance with the UK Data Retention Regulations. Indeed, there could be a risk that continuing to hold data in compliance with the retention periods under the Regulations is actually a breach of the data protection principle not to retain personal data for longer than is necessary.

What does this mean for Telcos/ISPs?

It has been reported that the UK Government has already responded to the ECJ decision by saying that it is imperative that companies continue to retain data. Clearly the UK and other EU Governments would become very nervous if companies suddenly started deleting copious amounts of data, given the impact this could have on intelligence gathering for detecting and preventing serious crime.  And in any event, in spite of what has happened at the ECJ, Telcos and ISPs are still required to comply with law enforcement disclosure requests concerning the communications data they retain.

Significantly, the ECJ did not rule that this kind of data collection and retention is never warranted. One of the ECJ’s main criticisms was that the Directive did not include clear and precise rules governing the scope and application of the measures, and did not include minimum safeguards. This suggests that the Directive could be redrafted (and relaunched) in a form that includes these rules and safeguards when requiring companies to retain communications data. Of course, this is likely to take some time. In the meantime UK companies could consider reverting to the retention periods set out in the voluntary code introduced under the Anti-terrorism, Crime and Security Act 2001.