Archive for the ‘Legislative reform’ Category

Progress update on the draft EU Cybersecurity Directive

Posted on February 27th, 2015

In a blog earlier this year we commented on the status of the European Union (“EU”) Cybersecurity Strategy. Given that the Strategy’s flagship piece of legislation, the draft EU Cybersecurity Directive, missed its proposed institutional adoption deadline of December 2014, and given EU citizens’ growing concerns about cybercrime, an update on EU legislative cybersecurity developments is overdue.

Background

As more of our lives are lived in a connected, digital world, the need for enhanced cybersecurity is evident. The cost of recent high-profile data breaches in the US involving Sony Pictures, JPMorgan Chase and Home Depot ran into hundreds of millions of dollars. A terrorist attack on critical infrastructure such as telecommunications or power supplies would be devastating. Some EU Member States have taken measures to improve cybersecurity, but there is wide variation across the 28-country bloc and little sharing of expertise.

These factors gave rise to the European Commission’s (the “Commission”) publication in February 2013 of proposed Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (the “proposed Directive”). The proposed Directive would impose minimum obligations on “market operators” and “public administrations” to harmonise and strengthen cybersecurity across the EU. Market operators would include energy suppliers, e-commerce platforms and application stores. The headline provision for businesses and organisations is the mandatory obligation to report security incidents to a national competent authority (“NCA”).

Where do things stand in the EU institutions on the proposed Directive?

On 13 March 2014 the European Parliament (the “Parliament”) adopted its report on the proposed Directive. It made a number of amendments to the Commission’s original text including:

  • the removal of “public administrations” and “internet enablers” (e.g. e-commerce platforms or application stores) from the scope of key compliance obligations;
  • the exclusion of software developers and hardware manufacturers;
  • the inclusion of a number of parameters to be considered by market operators to determine the significance of incidents and thus whether they must be reported to the NCA;
  • the enabling of Member States to designate more than one NCA;
  • the expansion of the concept of “damage” to include non-intentional force majeure damage;
  • the expansion of the list of critical infrastructure to include, for example, freight auxiliary services; and
  • the reduction of the burden on market operators, including a right to be heard or to be anonymised before any public disclosure, and a rule that sanctions would apply only where they intentionally failed to comply or were grossly negligent.

In May-October 2014 the Council of the European Union (the “Council”) debated the proposed Directive at a series of meetings. It was broadly in favour of the Parliament’s amendments but disagreed over some high-level principles. Specifically, in the interests of speed and efficiency, the Council preferred to use existing bodies and arrangements rather than setting up a new cooperation mechanism between Member States.

In keeping with the Council’s general approach to draft EU legislation intended to harmonise practices between Member States, the institution also advocated the adoption of future-proofed flexible principles as opposed to concrete prescriptive requirements. Further, it contended that Member States should retain discretion over what information to share, if any, in the case of an incident, rather than imposing mandatory requirements.

In October-November 2014 the Commission, Parliament and Council commenced trilogue negotiations on an agreed joint text. The institutions were unable to come to an agreement during the negotiations due to the following sticking points:

  1. Scope. Member States are seeking the ability to assess (against agreed criteria) whether specific market operators fall within scope, whereas the Parliament wants all market operators within defined sectors to be captured.
  2. Internet enablers. The Parliament wants all internet enablers apart from internet exchanges to be excluded, whereas some Member States in the Council (France and Germany particularly) want to include cloud providers, social networks and search engines.
  3. There was also disagreement on the extent of strategic and operational cooperation and the criteria for incident notification.

What is the timetable for adoption of the proposed Directive?

There is political desire on the part of the Commission to see the proposed Directive adopted as soon as possible. The Council has also stated that “the timely adoption of … the Cybersecurity Directive is essential for the completion of the Digital Single Market by 2015”.

Responsibility for enacting the reform now lies with the Latvian Presidency of the Council. On 30 January 2015, Latvian Transport Minister Anrijs Matiss stated that further trilogue negotiations would be held in March 2015, with the aim of adopting the proposed Directive by July 2015.

Once the Directive is adopted, Member States will have 18 months to enact national implementing legislation, so the new rules could be expected to take effect by early 2017.

How does the proposed Directive interact with other EU data privacy reforms?

In our previous blog we highlighted the difficulties market operators face in complying with the proposed Directive in view of the potentially conflicting notification requirements in the existing e-Privacy Directive and the proposed General Data Protection Regulation (the “proposed GDPR”).

Although the text of the proposed Directive does anticipate the proposed GDPR, obliging market operators to protect personal data and implement security policies “in line with applicable data protection rules”, there has still been no EU guidance on how these overlapping or conflicting notification requirements would operate in practice.

Furthermore, any debate over which market operators fall within the scope of the breach notification requirements of the proposed Directive would seem to become superfluous once the proposed GDPR, with mandatory breach notifications for all data controllers, comes into force.

Comment

Rather unsurprisingly, the Commission’s broad reform has been somewhat diluted in Parliament and Council. This is a logical result of Member States seeking to impose their own standards, protecting their own industries or harbouring doubts about the potential to harmonise practices where cybersecurity and infrastructure measures diverge markedly in sophistication and scope.

Nonetheless, the proposed Directive does still impose serious compliance obligations on market operators in relation to cybersecurity incident handling and notification.

At the risk of sounding somewhat hackneyed, for private and public sector bodies alike, cyber data breaches are no longer a question of “if” but “when”. Indeed, there is increasing awareness that a high level of security in one link of the chain is no use if it is not replicated across the rest. Whether the proposed Directive meets its aim of reducing weak links across the EU remains to be seen.

EU privacy reform: are we nearly there yet?

Posted on February 7th, 2015

One thing everyone agrees on is that the EU needs new data protection rules. The current rules, now some 20 years old and adopted at a time when household Internet access was still a rare thing (remember those 56kbps dial-up modems, anyone?), are getting long in the tooth, and there is a collective view across all quarters that they need updating for the 24/7 connected world in which we now live.

The only problem is this: we can’t agree what those new rules should look like. That shouldn’t really be a surprise – Europe is politically, culturally, economically and linguistically diverse, so it would be naive to think that reaching consensus on such an important and sensitive topic would be quick or easy.

Nevertheless, whether through optimism, politicization, or plain naivety, there have been repeated pronouncements over the years that adoption of the new rules is imminent. Since the initial publication of the EU’s draft General Data Protection Regulation in January 2012, data protection pundits have repeatedly predicted it would all be done and dusted in 2012, 2013, 2014 and now – no surprises – in 2015.

The truth is we’re a way off yet, as this excellent blog from the UK Deputy Information Commissioner highlights. Adoption of the new General Data Protection Regulation ultimately requires agreement to be reached, first, individually by each of the European Parliament and the Council of the EU on their respective preferred amendments to the original draft proposals; and then, second, collectively between the Parliament, the Council and the Commission via three-way negotiations (so-called “trilogue” negotiations).

As at the date of this post, the Parliament has reached consensus on its preferred amendments to the draft, but the Council’s deliberations in this respect are still ongoing. That means the individual positions of both institutions have not yet been finalised, the trilogue negotiations have not yet begun, and an overall agreed text is not even close. There’s still a mountain to climb.

Not that progress hasn’t been made – it has, but there’s still a long way to go and it’s very unlikely the new law will pass in 2015. Even when it does, the expectation is that it will be a further two years until it takes effect. In other words, don’t expect the new rules to bite any time before 2018 – six years after they were originally proposed.

Why so long? Designing privacy rules fit for the 21st century is a difficult task, and the difficulty stems from the inherent subjectivity of privacy as a right. When thinking about what protections should exist, a natural consideration is what “expectation” of privacy individuals have. And therein lies the problem: no two people have the same expectations: what you expect and I expect are likely very different. Amplify those differences onto a national stage, and it becomes quickly apparent why discussions over new pan-European rules have become so protracted.

How, then, to progress the debate through to conclusion?

First, European lawmakers need to listen to the views of all stakeholders in the legislative process without prejudice or pre-judging their value. It’s far too simplistic to dismiss consumer advocates’ proposals as ‘impractical’, and equally disingenuous to label all industry concerns as just ‘lobbying’. Every side to the debate raises important points that deserve careful consideration. Insufficiently strong privacy protections will come at an expense to society, our human rights and our dignity; but, conversely, excessively strict regulation will impede innovation, hamper technological progress and restrict economic growth. A balance needs to be found, and ignoring salient points made by any side to the debate comes at a cost to us all.

Once lawmakers accept this, then they must also accept compromise and not simply ‘dig in’ to already fortified positions. Any agreement requires compromise – whether a verbal agreement between friends, a written contract between counterparties, or even legislative agreement over new laws like the General Data Protection Regulation. At present, however, there is too much bluster, quarrelling and entrenchment, where reason, level-headedness and compromise should prevail.

When it comes to new data protection rules, a compromise – one that benefits all stakeholders of the information economy – is there to be struck: we just have to find it.

US and UK Regulators position themselves to meet the needs of the IoT market

Posted on January 30th, 2015

The Internet of Things (“IoT“) is set to enable large numbers of previously unconnected devices to communicate and share data with one another.

In an earlier posting I examined the potential future regulatory landscape for the IoT market and introduced the 2014 consultation on the Internet of Things by Ofcom (the UK’s communications regulator). This stakeholder consultation was issued to examine the emerging debate around increasing interconnectivity between multiple devices and to guide Ofcom’s regulatory priorities. Since the consultation was issued, the potential privacy issues associated with the IoT have continued to attract the most attention but, as yet, no IoT issues have led to any specific laws or legal change.

In two separate developments in January 2015, the UK and US Internet of Things markets were exposed to more advanced thinking and guidance around the legal challenges of the IoT.

UK IoT developments

Ofcom published its Report, “Promoting investment and innovation in the Internet of Things: Summary of responses and next steps” (27 January 2015), responding to the views gathered during the consultation, which closed in the autumn of 2014. In this Report Ofcom identifies several priority areas to focus on in order to support the growth of the IoT. These “next step” priorities are summarised across four core areas:

Spectrum availability: where Ofcom concludes that “existing initiatives will help to meet much of the short to medium term spectrum demand for IoT services. These initiatives include making spectrum available in the 870/915MHz bands and liberalising licence conditions for existing mobile bands. We also note that some IoT devices could make use of the spectrum at 2.4 and 5GHz, which is used by a range of services and technologies including Wi-Fi.” Ofcom goes on to recognise that, as IoT grows and the sector develops, there may be a renewed need to release more spectrum in the longer term.

Network security and resilience: where Ofcom holds the view that “as IoT services become an increasingly important part of our daily lives, there will be growing demands both in terms of the resilience of the networks used to transmit IoT data and the approaches used to securely store and process the data collected by IoT devices”. Working with other sector regulators where appropriate, Ofcom plans to continue existing security and resilience investigations and to extend its thinking to the world of the IoT.

Network addressing: where Ofcom, having previously feared numbering scarcity, now recognises that “telephone numbers are unlikely to be required for most IoT services. Instead IoT services will likely either use bespoke addressing systems or the IPv6 standard. Given this we intend to continue to monitor the progress being made by internet service providers (ISPs) in migrating to IPv6 connectivity and the demand for telephone numbers to verify this conclusion”; and

Privacy: in the particularly hot privacy arena there is nothing especially new within Ofcom’s preliminary conclusions. Ofcom concludes that “a common framework that allows consumers easily and transparently to authorise the conditions under which data collected by their devices is used and shared by others will be critical to future development of the IoT sector”. In a world where the UK’s Data Protection Act already applies, it was inevitable that Ofcom (which has no direct regulatory remit over privacy) would offer little further insight in this regard.

It is not surprising to read in the Report that responses highlighted data protection and privacy as potentially the “greatest single barrier to the development of the IoT”. The findings from the consultation foresee these privacy challenges acting as potential inhibitors to IoT adoption, and Ofcom acknowledges that the activities and guidance of the UK Information Commissioner (ICO) and other regulators will be pertinent to achieving clarity. Ofcom will be co-ordinating further cooperation and discussion with such bodies both nationally and internationally.

A measured approach to an emerging sector

Ofcom appears to be striking the right balance here for the UK. Ofcom suggests that future work with ICO and others could include examining some of the following privacy issues:

  • assessing the extent to which existing data protection regulations fully encompass the IoT;
  • considering a set of principles for the sharing of data within the IoT, drawing on principles of data minimisation and restrictions on how long data is stored;
  • forming a better understanding of consumer attitudes to sharing data and considering techniques to provide consumers “with the necessary information to enable them to make an informed decision on whether to share their data“; and
  • in the longer term, exploring the merit of a consumer education campaign exposing the potential benefits of the IoT to consumers.

The perceived need for more clarity around privacy and the IoT

International progress around self-regulation, standards and operational best practice will inevitably be slow. On the international stage, Ofcom suggests it will work with existing research groups (such as the ones hosted by BEREC amongst other EU regulators).

We of course already have insight from Working Party 29 in its September 2014 Opinion on the Internet of Things. The Fieldfisher privacy team expounded the Working Party’s regulatory mind-set in another of our Blogs. The Working Party has warned that the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics.

As with previous WP29 Opinions (think cloud, for example), the regulators in that Opinion have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. This contrasts with the more pragmatic FTC musings explained below: though it pursues a similar goal of protecting privacy, the EU approach is far more alarmist and potentially restrictive.

Hopefully, as practical and innovative assessments are made of technologies within the IoT, new pragmatic solutions to some of these privacy challenges will emerge. These might include standard “labels” for transparency notifications to consumers, industry protocols for data sharing coupled with associated controls, and more recognition from the regulators that swamping consumers with choices and information can sometimes amount to no choice at all (as citizens start to ignore a myriad of options and simply proceed with their connected lives, dismissing yet another pop-up or check-box). Certainly, with increasing device volumes and data uses in the IoT, consumers will continue to value their privacy. But if this myriad of devices lacks effective security, they will soon learn that both privacy and security count.

And in other news… US developments

Just as the UK’s regulators are turning their attention to the IoT, the Federal Trade Commission (FTC) published its own new report on the IoT in January 2015. Like Ofcom’s foray into the world of the IoT, the FTC’s steps in “Privacy & Security in a Connected World” are exploratory. To a degree, there is now more pragmatic and realistic guidance around best practices for making IoT services available in the US than we have today in Europe.

In this report the FTC recommends “a series of concrete steps that businesses can take to enhance and protect consumers’ privacy and security, as Americans start to reap the benefits from a growing world of Internet-connected devices.” Like Ofcom, it recognises that best practice steps need to emerge to ensure the potential of the IoT can be realised. This reads as an active invitation to those playing in the IoT to self-regulate and act as good data citizens. Given the surge in active enforcement by the FTC during 2014, this is worthy of attention for anyone engaged in the consumer-facing world of the IoT.

Since the Federal Trade Commission works for consumers to prevent fraudulent, deceptive and unfair business practices and to provide information to help spot, stop and avoid them, the FTC’s approach focusses more on the risks arising from a lack of transparency and excessive data collection than on the practical challenges the US IoT industry may encounter as the IoT and its devices create increasing demand on infrastructure and spectrum.

The report focuses on three core topics: (1) Security, (2) Data Minimisation and (3) Notice and Choice. Of particular note, the FTC report makes a number of recommendations for anyone building solutions or deploying devices in the IoT space:

  • build security into devices at the outset, rather than as an afterthought in the design process;
  • train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization;
  • ensure that when outside service providers are hired, that those providers are capable of maintaining reasonable security, and provide reasonable oversight of the providers;
  • when a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk (see the sketch after this list);
  • consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network;
  • monitor connected devices throughout their expected life cycle and, where feasible, provide security patches to cover known risks.
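
To make the “defense-in-depth” recommendation concrete, here is a minimal sketch in Python of several independent layers guarding a single risk: an unauthorised command reaching a connected device. The command allow-list, the keyed signature and the rate limit are our own illustrative assumptions, not content from the FTC report.

```python
# A minimal sketch of a "defense-in-depth" command handler for a
# hypothetical connected device. Each layer is independent, so a failure
# in one does not disable the others. The key, allow-list and rate limit
# are illustrative placeholders, not values from the FTC report.
import hashlib
import hmac
import time

DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"  # placeholder
ALLOWED_COMMANDS = {"set_temperature", "report_status"}       # placeholder
_last_accepted = 0.0

def handle_command(name: str, payload: bytes, signature: str) -> bool:
    """Return True only if every security layer passes."""
    global _last_accepted
    # Layer 1: reject anything not on the explicit command allow-list.
    if name not in ALLOWED_COMMANDS:
        return False
    # Layer 2: require a valid keyed signature over the payload.
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Layer 3: rate-limit accepted commands to blunt brute-force attempts.
    now = time.monotonic()
    if now - _last_accepted < 1.0:
        return False
    _last_accepted = now
    return True  # all layers passed; safe to act on the command
```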

With echoes of privacy by design and data minimisation, as well as recommendations to limit the collection and retention of information, suggestions to impose security obligations on outside contractors, and recommendations to consider notice and choice, it could transpire that the IoT space will be one where we see fewer differences in the application of US and EU best practice.

In addition to its report, the FTC also released a new publication designed to provide practical advice about how to build security into products connected to the Internet of Things. This publication, “Careful Connections: Building Security in the Internet of Things”, encourages “a risk-based approach” and suggests businesses active in the IoT “take advantage of best practices developed by security experts, such as using strong encryption and proper authentication”.
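
As an illustration of that last point, here is a minimal sketch, again in Python, of what “strong encryption and proper authentication” might look like for a device reporting telemetry: TLS with server certificate validation, plus a client certificate so the device itself is authenticated. The endpoint, port and certificate paths are placeholders of our own, not anything drawn from the FTC publication.

```python
# A minimal sketch of encrypted, mutually authenticated telemetry for a
# hypothetical device: TLS 1.2+, server certificate validation and a
# client certificate. Host, port and certificate paths are placeholders.
import json
import socket
import ssl

TELEMETRY_HOST = "telemetry.example.com"  # hypothetical ingestion endpoint
TELEMETRY_PORT = 8883

def send_reading(reading: dict) -> None:
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS
    # create_default_context already validates certificates and hostnames;
    # loading the device's own certificate lets the server verify the device.
    context.load_cert_chain(certfile="device-cert.pem", keyfile="device-key.pem")
    with socket.create_connection((TELEMETRY_HOST, TELEMETRY_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=TELEMETRY_HOST) as tls:
            tls.sendall(json.dumps(reading).encode("utf-8"))

send_reading({"device_id": "thermostat-01", "temp_c": 21.5})
```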

Where next?

Both reports indicate a consolidation in regulatory thinking around the much hyped world of the IoT. Neither report proposes concrete laws for the IoT and, if they are to come, such laws are some time off. The FTC even goes as far as saying “IoT-specific legislation at this stage would be premature”. However, it does actively “urge further self-regulatory efforts on IoT, along with enactment of data security and broad-based privacy legislation”. Obama’s new data privacy proposals are evidently seen as a complementary step toward US consumer protection. What is clear is that there are now emerging good practices and, among the regulators, a deeper understanding of the IoT, its potential and its risks.

On both sides of the Atlantic the US and UK regulators are operating a “wait and see” policy. In the absence of legislation, other potentially privacy-sensitive emerging technologies have seen self-regulatory programmes emerge within particular sectors or practices to help guide and standardise behaviour around norms. This can protect consumers while introducing an element of certainty around which business is able to innovate.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com


Guns and privacy have more in common than you think

Posted on January 13th, 2015

When speaking with US companies, how do you explain the importance that EU consumers place on their data protection rights?  Oftentimes, I do this by referring to the US right to bear arms.

Whether for or against guns, pretty much every American has a strong view on this issue.  And why wouldn’t they?  The right to bear arms is a constitutional right for US citizens.  Over in the EU, we have the Charter of Fundamental Rights – not quite a constitution, but pretty close to it.  This doesn’t enshrine a right to bear arms, but it does enshrine both a right to privacy (Art 7) and a right to data protection (Art 8) for all EU citizens.

So I start by explaining that Europeans have constitutional-like rights to privacy and data protection, and that they feel as strongly about these rights as Americans do about their second amendment rights.  Once I’ve drawn this analogy, US companies quickly grasp the ‘EU privacy issue’ and understand the need for comprehensive measures to address EU data protection compliance.

In fact, the analogy between guns and privacy doesn’t end there.  At the risk of extending the analogy to breaking point, it can also be applied to debates about government surveillance and gun control.

Consider this: in the EU, there’s widespread ongoing concern over excessive government surveillance of telephone and internet communications.  These concerns are fuelled largely by fears that the data collected might be used by governments to exert Orwellian control over their citizens.  As it happens, fear of an abusive government is also part of what drives many of the heated debates over US gun control: a fear that, by restricting citizens’ right to bear arms, a dystopian future government might in some way turn against a citizenry that has no ability to defend itself.

Not everyone feels this way though.  Some argue that allowing some level of government incursion into citizens’ civil liberties affords us greater protection, either by disrupting potential terrorist threats or by preventing accidental or deliberate gun deaths, and that these incursions are necessary in light of the present-day threats we face.  The issues are complex and, whether it comes to guns or privacy, the emotive arguments presented by both sides to the discourse often seem to present an insurmountable barrier to consensus.

Perhaps this is the way it should be, though.  When fundamental human or constitutional rights are at stake, they should attract impassioned debate – that’s the imperative of a democratic society.  Because debating these issues calls into question the very type of society we want to be:  are we a society that accepts a level of surveillance in return for greater assurance of physical safety?  Or should we be a society that protects freedom of communication at all cost?

There are no easy answers, and the debate will often be determined by cultural sensitivities and topical news events.  But, as difficult as consensus can sometimes seem, we witnessed one wonderfully positive example of it today.  Speaking at the Federal Trade Commission, President Obama announced four major new privacy initiatives in the US.  These included a federal data breach notification standard, easier access to credit scores, and new protections for student data.

Most critically, though, President Obama announced that federal consumer privacy legislation would be introduced by the end of February and called on Congress to make this new legislation “the law of the land”.  The new legislation will address data processing transparency, control, purpose limitation, security and accountability, across all sectors.  In other words, the White House acknowledges the need for federal data protection standards across the entirety of the US that will to a large degree mirror those that EU citizens enjoy today.  A form of transatlantic consensus, if you will.

So maybe there’ll come a time in the very near future where I won’t have to explain how passionately Europeans feel about their privacy because American consumers will also enjoy, and feel as strongly about, these rights.  Maybe consensus building on privacy issues, across continents if not across different schools of thought, is possible.  And maybe – no, certainly – continuing the dialogue to enshrine and protect our data protection rights worldwide is now more important and more achievable than ever.

WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014

On 26 November the Article 29 Working Party (“WP29”) issued WP225 (the “Opinion”). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (the “Ruling”), and in Part II the WP29 provides a list of common criteria that the European regulators will take into account when considering right to be forgotten (“RTBF”) complaints from individuals.

The Opinion is in line with the Ruling but further elaborates on certain legal and practical aspects of it, offering as a result an invaluable insight into the European regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive, and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search-engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.


Spam texts: “substantially distressing” or just annoying?

Posted on November 11th, 2014

The Department for Culture, Media and Sport (“DCMS”) recently launched a consultation to reduce or even remove the threshold of harm the Information Commissioner’s Office (“ICO”) needs to establish in order to fine nuisance callers, texters or emailers.

Background

In 2010 ICO was given powers to issue Monetary Penalty Notices (“MPNs”, or fines to you and me) of up to £500,000 on companies that breach the Data Protection Act 1998 (“DPA”).  In 2011 these powers were extended to cover breaches of the Privacy and Electronic Communications Regulations 2003 (“PECR”), which sought to control the scourge of nuisance calls, texts and emails.

At present the standard ICO has to establish before issuing an MPN is a high one: that there was a serious, deliberate (or reckless) contravention of the DPA or PECR which was of a kind likely to cause substantial damage or substantial distress.  Whilst unsolicited marketing calls are certainly irritating, can they really be said to cause “substantial distress”?  Getting a PPI claim text from a number you don’t know is certainly annoying, but could it seriously be considered “substantial damage”?  Not exactly; and therein lies the problem.

Overturned

In the first big case where ICO used this power, it issued an MPN of £300,000 to an individual who had allegedly sent millions of spam texts for PPI claims to users who had not consented to receive them.  Upon appeal the Information Rights Tribunal overturned the fine.  The First Tier Tribunal found that whilst there was a breach of PECR (the messages were unsolicited, deliberate, with no opt-out link and for financial gain), the damage or distress caused could not be described as substantial.  Every mobile user knew what a PPI spam text meant and was unlikely to be concerned for their safety or to have false expectations of compensation.  A short tut of irritation and then deleting the message solved the problem.  The Upper Tribunal agreed: a few spam texts did not cause substantial damage or distress.  Interestingly, the judge pointed out that the “substantial” requirement had come from the UK government and was stricter than that required by the relevant EU Directive, and he suggested the statutory test be revisited.

This does not, however, mean that ICO has been unable to use the power.  Since 2012 it has issued nine MPNs totalling £1.1m to direct marketers who breached PECR.  More emphasis is placed on the overall level of distress suffered by hundreds or thousands of victims, which can be considered substantial.  ICO concentrates on the worst offenders: cold callers who deliberately and repeatedly call numbers registered with the Telephone Preference Service (“TPS”, Ofcom’s “do not call” list) even when asked to stop, and those that attract hundreds of complaints.

In fact, in this particular case there were specific problems with the MPN document (this will not necessarily come as a surprise to those familiar with ICO MPNs).  The Tribunal criticised ICO for a number of reasons: not being specific about the Regulation contravened; omitting important factual information; including in the period of contravention a time when ICO did not yet have fining powers; and widening the claim from the initial few hundred complaints to the much larger body of texts that may have been sent.  Once all this was taken into consideration, only 270 unsolicited texts had been sent to 160 people.

Proposal

ICO has been very vocal about having its hands tied in this matter and has long pushed for a change in the law (consistent with its broader campaigning for new powers).  Nuisance calls are a cause of great irritation for the public, and currently only the worst offenders can be targeted.  Statistics compiled by ICO and TPS showed that most nuisance is caused by a large number of companies each making a smaller number of calls.  Of 982 companies that TPS received complaints about, 80% received fewer than 5 complaints and only 20 received more than 25.

Following a select committee enquiry, an All Party Parliamentary Group and a backbench debate, DCMS has launched the consultation, which invites responses on whether the threshold should be lowered to “annoyance, inconvenience or anxiety“.  This would bring it in line with the threshold Ofcom must consider when fining telecoms operators for persistent misuse for silent/abandoned calls. ICO estimates that had this threshold been in place since 2012, a further 50 companies would have been investigated/fined.

The three options being considered are: to do nothing, to lower the threshold or to remove it altogether.  Both ICO and DCMS favour complete removal.  ICO would thus only need to prove a breach was serious and deliberate/reckless.

Comment

I was at a seminar last week with the Information Commissioner himself, Chris Graham, at which he announced the consultation.  It was pretty clear he is itching to get his hands on these new powers to tackle rogue callers, emailers and texters, but he emphasised that any new powers would still be used proportionately and in conjunction with other enforcement actions such as compliance meetings and enforcement notices.  Even the announcement of a new law should act as a deterrent: typically, whenever a large MPN is announced, the number of complaints about direct marketers falls the following month.

The consultation document is squarely aimed at unsolicited calls, texts and emails and is consistently stated to apply only to certain regulations of PECR.  There is no suggestion that the threshold be reduced for other breaches of PECR or the DPA.  It will be interesting to see how any reform will work in practice, as the actual threshold is contained within the DPA, which will therefore require amendment.

The consultation will run until 7 December 2014; the document can be found here.  Organisations that are concerned about these proposals now have an opportunity to make their voices heard.

Update 27 February 2015

Following the consultation, DCMS announced that the majority of responses favoured the complete removal of the threshold.  As a result, from 6 April 2015 section 55A(1) of the DPA will be amended to remove the need to prove “substantial damage or substantial distress” in respect of regulations 19 to 24 of PECR.  ICO will still need to establish that the breach was serious and intentional or reckless; nevertheless, this reform removes a huge hurdle in the fight against spammers.

DPA update: finally the end of enforced Subject Access Requests?

Posted on November 10th, 2014

Employers who force prospective employees to obtain a Subject Access Request report from the police detailing any criminal history or investigation will soon themselves be committing a criminal offence.

Background

The Ministry of Justice recently announced that on 1 December 2014, section 56 of the Data Protection Act 1998 (“DPA”) will come into force across the UK.  It will make it a criminal offence for employers to demand prospective employees obtain Subject Access Request (“SAR”) reports.

Some employers are concerned that s56 will make it an offence to undertake Disclosure & Barring Service (“DBS”, the new name for the Criminal Records Bureau) checks on prospective employees.  This is not the case.  In fact s56 is designed to encourage the use of such checks and to prevent enforced SARs.

Purpose

The correct procedure to obtain criminal records of prospective employees is via the disclosure service provided by DBS or the Scottish equivalent, Disclosure Scotland (“DS”).  Whilst these services were in the process of being developed, employers could demand that applicants make SARs directly to the police and pass on the report.

The purpose of s56 was to close this loophole once the DBS/DS system had become fully operational.  For that reason s56 was inserted into the DPA but not brought into force with the rest of the provisions.  It applies only to records obtained by the individual from the police using their s7 SAR rights.  SAR reports contain far more information than would be revealed under a DBS/DS check, such as police intelligence and spent convictions.  As a result the practice is frowned upon by the authorities: the police SAR form states that enforced SARs are exploitative and contrary to the spirit of the DPA, and the Information Commissioner’s Office (“ICO”) guidance on employment has long advised against it in stern wording (“Do not force applicants…”!).

Exemptions

The only exemptions to s56 are situations when the report is justified as being in the public interest or when required by law; the s28 national security exemption does not apply.

Opinion

There has been no specific guidance released on s56.  However, it is clear from the Written Ministerial Statement which announced the change in March 2014, and from the ICO release which followed it, that the section is being brought into force to close the loophole.  ICO has publicly stated that it intends to prosecute infringers under the offence so as to encourage the correct use of the DBS/DS procedure and prevent enforced SARs.  s56 does nothing to prevent employers requesting DBS/DS checks on prospective employees in the usual way.

What this means in practice is that any employer who demands that a potential employee file an SAR with the police and provide the results will be committing a criminal offence, punishable by a potentially unlimited fine.  Instead, employers should use the DBS procedure (DS in Scotland) for background criminal checks.  This sneaky backdoor route to obtaining far more sensitive personal data than employers are entitled to – often harming the individual’s job prospects in the process – will be shut for good.  Non-compliant employers should take note.

Update 19 November 2014

In an informative webinar on this subject yesterday, ICO mentioned a delay in the commencement date.  When I queried this the official response was: “a technical issue encountered when finalising arrangements for introduction means there will be a delay to the date for commencing Section 56 of the Data Protection Act. The Government is working to urgently resolve this issue. There is no exact date as yet.”

Update 27 February 2015

The Government has since passed the necessary commencement order and so s56 will come into force from 10 March 2015.

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014

In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain at the required standard) leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.
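
To make that distinction concrete, here is a minimal sketch of one common pseudonymisation step: replacing a direct device identifier with a keyed hash before records enter an analytics set. Whether the result counts as anonymous under EU law depends on the overall re-identification risk, not on this mechanism alone; the field names and key handling are illustrative assumptions.

```python
# A minimal sketch of pseudonymising records before analysis: the raw
# device identifier is replaced with a keyed hash and then dropped.
# The key handling and field names are illustrative assumptions.
import hashlib
import hmac
import os

PSEUDONYMISATION_KEY = os.urandom(32)  # in practice, a managed secret

def pseudonymise(record: dict) -> dict:
    token = hmac.new(PSEUDONYMISATION_KEY,
                     record["device_id"].encode("utf-8"),
                     hashlib.sha256).hexdigest()
    # Keep the measurement and a stable token; the raw identifier never
    # enters the analytics data set.
    return {"device_token": token, "temp_c": record["temp_c"]}

print(pseudonymise({"device_id": "thermostat-01", "temp_c": 21.5}))
```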

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: it’s the loss of trust that a company will respect our privacy and that it will do its best to protect our information that results in serious enforcement action, pushes companies out of business or results in the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data, taking into account if, when and how it is aggregated, pseudonymised or anonymised, and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by allowing them to permit or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if as part of the service you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as to not unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014

The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned to the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details'; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because it is clear that this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legitimate basis of processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.


It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to be in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device (see the sketch after this list);
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.
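
To illustrate the local-aggregation recommendation above, here is a minimal sketch of a device that buffers raw sensor readings, transmits only a summary, and discards the raw values. The window size and summary fields are illustrative assumptions on our part, not anything prescribed by the WP29.

```python
# A minimal sketch of on-device aggregation: raw readings are buffered
# locally, only a summary leaves the device, and the raw values are
# discarded. Window size and summary fields are illustrative assumptions.
from statistics import mean
from typing import Optional

class LocalAggregator:
    def __init__(self, window_size: int = 60):
        self.window_size = window_size
        self.buffer: list = []

    def add_reading(self, value: float) -> Optional[dict]:
        """Buffer a reading; return a summary once the window is full."""
        self.buffer.append(value)
        if len(self.buffer) < self.window_size:
            return None
        summary = {
            "mean": mean(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
            "n": len(self.buffer),
        }
        self.buffer.clear()  # raw readings never leave the device
        return summary

agg = LocalAggregator(window_size=3)
for value in (20.9, 21.2, 21.4):
    summary = agg.add_reading(value)
print(summary)  # only this aggregate would be transmitted
```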


Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

Are DPA notifications obsolete?

Posted on October 27th, 2014

For almost 10 years I’ve been practising data protection law and advising multinational organizations on their strategic approach to global data processing operations. Usually, when it comes to complying with European data protection law, notifying the organization’s data processing activities to the national data protection authorities (DPAs) is one of the most burdensome exercises. It may look simple, but companies often underestimate the work involved.

As a reminder, article 18 of the Data Protection Directive 95/46/EC requires data controllers (or their representatives in Europe) to notify the DPA prior to carrying out their processing operations. In practice, this means that they must file a notification with the DPA in each Member State in which they are processing personal data, specifying who the data controller is, the types of data that are collected, the purpose(s) for which the data is processed, whether any of that data is transferred outside the EEA, and how individuals can exercise their privacy rights.

In a perfect world, this would be a fairly straightforward process whereby organizations would simply file a single notification with the DPA in every Member State. But that would be too easy! The reality is that DPA notification procedures are not harmonized in Europe, which means that organizations must comply with the notification procedures of each Member State as defined by national law. As a result, each DPA has established its own notification rules, imposing a pre-established notification form, procedure and formalities on data controllers. Europe is not the only region to have notification rules. In Latin America, organizations must file a notification in Argentina, Uruguay and Peru. And several African countries (usually members of the “Francophonie”, such as Morocco, Senegal, Tunisia and the Ivory Coast) have also adopted data protection laws requiring data controllers to notify their data processing activities.

Failing to comply with this requirement puts your organization at risk with the DPAs, who in some countries have the power to conduct audits and inspections of an organization’s processing activities. If a company is found to be in violation of the law, some DPAs may impose sanctions (such as fines or public warnings) or order the data to be blocked or the data processing to cease immediately. Furthermore, companies may also be sanctioned by the national courts. For example, on October 8th, 2014, the labour chamber of the French Court of Cassation (the equivalent of the Supreme Court for civil and criminal matters) ruled that an employer could not use the data collected via the company’s messaging system as evidence to lay off one of its employees for excessively using that messaging service for private purposes (i.e., due to the high number of private emails transiting via the messaging service), because the company had failed to notify the French Data Protection Authority (CNIL) before monitoring the use of the messaging service.

One could also argue that notifications may be scrapped altogether by the draft Data Protection Regulation (currently being discussed by the European legislator), so that companies would no longer be required to notify their data processing activities to the regulator. True, but don’t hold your breath for too long! The draft Regulation is currently stuck in the Council of Ministers and, assuming it does get adopted by the European legislator, the most realistic date of adoption could be 2016. Given that the text provides for a two-year grace period, the Regulation would not come into force before 2018. And at its last meeting, on October 3rd, 2014, the Council agreed to reach a partial general approach on the text of chapter IV of the draft Regulation on the understanding that “nothing is agreed until everything is agreed.”

So, are DPA notifications obsolete? The answer is clearly “no”. If you’re thinking: “why all the fuss? Do I really need to go through all this bureaucracy?” think again! The reason organizations must notify their data processing activities to the DPAs is simple: it’s the law. Until the Data Protection Regulation comes into force (and even then, some processing activities may still require the DPA’s prior approval), companies must continue to file their notifications. Doing so is a necessary component of any global privacy compliance project. It requires organizations to strategize their processing operations and to prioritize the jurisdictions in which they are developing their business. And failing to do so simply puts your organization at risk.

This article was first published in the IAPP’s Privacy Tracker on October 23rd, 2014.