Archive for the ‘Legislative reform’ Category

The EU DP Regulation is on its way…but when?

Posted on April 17th, 2015



As a privacy lawyer based in Brussels, I often get asked when the General Data Protection Regulation (the “DP Regulation”) will be adopted. People often look surprised or shocked when I tell them that it could take at least another year. Recently, there have been several announcements stating that the DP Regulation could be adopted before the end of 2015. While possible, this seems very unlikely due to the complex and lengthy legislative procedure of the European Union. Here’s an overview of how it all works…

The law-making procedure in Europe is enshrined in the EU’s founding treaties, the Treaty on European Union and the Treaty on the Functioning of the European Union, as last amended by the Lisbon Treaty, which entered into force on 1st December 2009. Essentially, three institutions are involved in the EU’s legislative procedure, and their powers and responsibilities are defined under the EU treaties:

the European Commission (the “Commission”): which represents the interests of the Union as a whole.

the European Parliament (the “Parliament”): which represents the EU’s citizens and is directly elected by them.

the Council of Ministers of the European Union (the “Council”): which represents the governments of the 28 EU Member States. The Presidency of the Council is shared by the Member States on a rotating basis every six months. Currently, the EU presidency is held by Latvia until June 30th and will then be passed on to Luxembourg.

Together, these three institutions produce, through the “Ordinary Legislative Procedure”, the policies and laws that apply throughout the EU. The main steps of the Ordinary Legislative Procedure are described below.

Step 1: Commission’s initial proposal

The Commission submits its legislative proposal simultaneously to the Parliament and the Council. The Commission did so with its proposal for a DP Regulation on 25th January 2012.

Step 2: 1st reading in the Parliament

The President of the Parliament refers the proposal to a parliamentary committee (in this case, the Civil Liberties, Justice and Home Affairs committee, more commonly referred to as the “LIBE committee”), which appoints a rapporteur (Jan Philipp Albrecht of the Greens/European Free Alliance group) who is responsible for drawing up a draft report containing amendments to the proposed text. The committee votes on this report and on any amendments to it tabled by other members. This is usually the moment when all the lobbying in Brussels takes place, which as we know was immensely important for this text (more than 4,000 proposed amendments!). The Parliament then discusses and votes on the legislative proposal in its plenary session on the basis of the committee report and amendments. The result is the Parliament’s position, which in the case of the DP Regulation was adopted on 12th March 2014. The Parliament’s 1st reading position is then forwarded to the Council.

Step 3: 1st reading in the Council

The Council can begin preparatory work in parallel with the 1st reading in Parliament, but it may only formally conduct its 1st reading based on the Parliament’s position. The Council can either accept the Parliament’s position, in which case the legislative act is adopted; or, where the Council does not adopt all the Parliament’s amendments or wants to introduce its own changes, it adopts a 1st reading position, which is sent to Parliament for a 2nd reading. This is currently where we stand with the DP Regulation. The Council is expected to adopt its amendments soon. However, it is worth noting that there is no time limit for the Council’s 1st reading, which explains why this stage can take a long time, particularly when the EU Member States disagree amongst themselves on some of the proposals (as is the case with the so-called “one-stop-shop” rule). The Commission may also decide at any time during the 1st reading to withdraw or alter its proposal, although this seems unlikely given the attention that the DP Regulation has drawn in Europe and abroad.

Step 4: 2nd reading in the Parliament

Upon receipt of the Council’s 1st reading position, the Parliament has three months (with a possible extension to four) to examine the Council’s position. The Council’s position goes first to the responsible committee (LIBE committee), which prepares a recommendation for the Parliament’s 2nd reading. In this case, the text to be amended is the Council’s 1st reading position rather than the Commission’s initial proposal.

The outcome of the 2nd reading can be that the Parliament:

– rejects the Council’s 1st reading position. This puts an end to the legislative procedure, which can only be re-launched if the Commission makes a new proposal. However, this has happened only once, in July 2005, on the software patents directive.

– fails to vote within the time limit and in that case, the text is deemed to have been adopted in accordance with the Council’s 1st reading position.

– approves the Council’s 1st reading position without any amendments.

– proposes amendments to the Council’s 1st reading position.

In principle, 2nd reading amendments in Parliament are admissible only if they seek to (a) wholly or partly restore the Parliament’s 1st reading position; (b) reach a compromise between the Parliament and the Council; (c) amend part of the Council’s text that was not included in, or differs in content from, the Commission’s original proposal; or (d) take account of a new fact or legal situation that has arisen since the 1st reading. However, if European parliamentary elections have taken place since the 1st reading – which is the case here – the President may decide that the restrictions do not apply. In theory, this broadens the scope of amendments that the Parliament could make on the Council’s 1st reading of the DP Regulation.
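For readers who think in flowcharts, the reading cycle described in Steps 1 to 4 can be reduced to a toy state machine. The sketch below is purely our own illustration (the function name and outcome strings are invented); it compresses the procedure into two decision points and deliberately ignores timing rules such as the absence of a deadline for the Council’s 1st reading:

```python
# Toy model of the first and second readings of the EU Ordinary
# Legislative Procedure. An illustrative simplification only -- not
# anything defined in the treaties.

def ordinary_legislative_procedure(council_accepts_ep_position,
                                   ep_second_reading_outcome):
    """Trace a proposal through the two decision points sketched above.

    council_accepts_ep_position: True if the Council adopts the
        Parliament's 1st reading position unchanged (Step 3, first branch).
    ep_second_reading_outcome: "approve", "no_vote_in_time", "reject"
        or "amend" (Step 4); ignored if the Council accepted at Step 3.
    """
    # Steps 1-2: the Commission proposes; the Parliament adopts its
    # 1st reading position and forwards it to the Council.
    if council_accepts_ep_position:
        return "act adopted at Council 1st reading"

    # Step 3 (second branch): the Council adopts its own 1st reading
    # position, triggering the Parliament's 2nd reading (Step 4).
    outcomes = {
        "approve": "act adopted as per Council 1st reading position",
        "no_vote_in_time": "act deemed adopted as per Council position",
        "reject": "procedure ends; only a new Commission proposal can revive it",
        "amend": "amended text goes back to the Council for its 2nd reading",
    }
    return outcomes[ep_second_reading_outcome]

# In April 2015 the DP Regulation sat before either branch: the Council
# had not yet adopted its 1st reading position.
print(ordinary_legislative_procedure(False, "amend"))
```

If the Parliament amends at 2nd reading, the text loops back to the Council; failing agreement there, the Conciliation Committee described in the next section comes into play.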

Law-making behind the scenes

The EU treaties provide for a 2nd reading in the Parliament and Council and, where both institutions fail to agree on a common position, a Conciliation Committee (composed of an equal number of MEPs and Council representatives) is convened with a view to reaching an agreement on a joint text that is finally adopted in a 3rd reading in Parliament.

In recent years, however, the number of laws that have made it all the way to the Conciliation Committee has dropped significantly; instead, approximately 80% of laws are now agreed after the first reading. In fact, most of the law-making now takes place behind the scenes. The so-called “trilogues” are not mentioned anywhere in the EU treaties, but are specifically designed to speed up the legislative procedure.

In practice, when the co-legislators are aiming for a 1st reading agreement, they organise informal meetings held behind closed doors, attended by representatives of the Parliament (the rapporteur and, where appropriate, shadow rapporteurs), the Council (the chair of the working party and/or Coreper) and the Commission (the department responsible for the dossier and the Commission’s Secretariat-General). The aim is to ensure that the Parliament’s amendments adopted in plenary are acceptable to the Council. The Commission typically plays the role of mediator or facilitator in respect of these compromise texts. However, thanks to its permanent staff of highly qualified officials, the Commission is better equipped in terms of resources and expertise than the other two institutions to impose its view during these negotiations.

In conclusion, things will certainly accelerate once the Council adopts its 1st reading position, which is expected at some point this year following the Justice and Home Affairs Council on 15-16 June. The question remains whether the Council and Parliament will succeed in reaching a common position during the trilogues, in which case a swift adoption of the DP Regulation in 2016 (or possibly even at the end of 2015) seems possible. Otherwise, adoption of this text could be pushed back to the end of 2016 or even 2017 if the legislative procedure continues all the way to the Conciliation Committee, which doesn’t seem completely far-fetched given the strong divergences around this text. Only time will tell…

This article was first published in the IAPP’s Privacy Tracker.

US and European moves to foster pro-active cybersecurity threat collaboration

Posted on March 12th, 2015



In this blog we report a little further on the proposals to share cybersecurity threat information within the United States. We also draw analogies with a similar initiative under the EU Cybersecurity Directive aimed at boosting security protections for critical infrastructure and enhancing information sharing around incidents that may impact that infrastructure within the EU.

Both of these mechanisms reflect a fully-formed ambition to see greater cybersecurity across the private sector. Whilst the approaches taken vary, both the EU and US wish to drive similar outcomes. Actors in the market are being asked to “up” their game. Cyber-crimes and cyber-threats are impacting companies financially and operationally and, at times, having a detrimental impact on individuals and their privacy.

Sharing of cyber-threat information in the US

Last month we reported on Obama’s privacy proposals which included plans to enhance cybersecurity protection. These plans included requests to increase the budget available for detection and prevention mechanisms as well as for cybersecurity funding for the Pentagon. They also outlined plans for the creation of a single, central cybersecurity agency: the US government is establishing a new central agency, modelled on the National Counterterrorism Centre, to combat the threat from cyber attacks.

On February 12th 2015, President Obama signed a new Executive Order to encourage and promote sharing of cybersecurity threat information within the private sector and between the private sector and government. A White House statement emphasised that “[r]apid information sharing is an essential element of effective cybersecurity, because it enables U.S. companies to work together to respond to threats, rather than operating alone”. The rhetoric is that, in sharing information about “risks”, all actors in the United States will be better protected and prepared to react.

This Executive Order therefore encourages more private sector, and more private sector and government, cybersecurity collaboration. The Executive Order:

  • Encourages the development of Information Sharing Organizations: with the development of information sharing and analysis organizations (ISAOs) to serve as focal points for sharing;
  • Proposes the development of a common set of voluntary standards for information sharing organizations: with Department of Homeland Security being asked to fund the creation of a non-profit organization to develop a common set of voluntary standards for ISAOs;
  • Clarifies the Department of Homeland Security’s authority to enter into agreements with information sharing organizations: the Executive Order also increases collaboration between ISAOs and the federal government by streamlining the mechanism for the National Cybersecurity and Communications Integration Center (NCCIC) to enter into information sharing agreements with ISAOs. It goes on to propose streamlining private sector companies’ ability to access classified cybersecurity threat information.

All in, Obama’s plan is to streamline private sector companies’ ability to access cybersecurity threat information. These plans were generally well received as a step towards collective responsibility and security, though some have voiced concern that there is scant mention of liability protection for businesses that share threat information with an ISAO. Commentators have pointed out that it is this fear of liability which is a major barrier to effective threat sharing.

Past US initiatives around improving cybersecurity infrastructure

This latest Executive Order promoting private sector information sharing came one year after the launch of another US-centric development. In February 2014, the National Institute of Standards and Technology (NIST) released a Framework for Improving Critical Infrastructure Cybersecurity pursuant to another Executive Order of President Obama’s issued back in February 2013.

This Cybersecurity Framework contains a list of recommended practices for those with “critical infrastructures”.   The Cybersecurity Framework’s executive summary explains that “[t]he national and economic security of the United States depends on the reliable functioning of critical infrastructure. Cybersecurity threats exploit the increased complexity and connectivity of critical infrastructure systems, placing the Nation’s security, economy, and public safety and health at risk.”

Obama’s 2013 Executive Order had called for the “development of a voluntary risk-based Cybersecurity Framework”, i.e. a set of industry standards and best practices to help organisations manage cybersecurity risks. The resulting technology-neutral Cybersecurity Framework emerged from interaction between the private sector and government institutions. For now the use of the Cybersecurity Framework is voluntary and it relies on a variety of existing standards, guidelines, and practices to enable critical infrastructure providers to achieve resilience. “Building from those standards, guidelines, and practices, the [Cybersecurity] Framework provides a common taxonomy and mechanism for organizations to:

  • Describe their current cybersecurity posture;
  • Describe their target state for cybersecurity;
  • Identify and prioritize opportunities for improvement within the context of a continuous and repeatable process;
  • Assess progress toward the target state;
  • Communicate among internal and external stakeholders about cybersecurity risk.”
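The taxonomy quoted above boils down to a “current profile versus target profile” gap analysis, which is how the Framework suggests organisations identify and prioritise improvements. The sketch below is hypothetical: the five Function names (Identify, Protect, Detect, Respond, Recover) are the Framework’s own, but the scoring scale and the example organisation are invented purely for illustration:

```python
# Hypothetical sketch of the current-vs-target profile gap analysis
# described by the NIST Cybersecurity Framework. The Function names are
# real; the numeric scores below are invented for illustration.

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

def prioritise_gaps(current_profile, target_profile):
    """Return Functions with a shortfall, largest gap first."""
    gaps = {fn: target_profile[fn] - current_profile[fn]
            for fn in CSF_FUNCTIONS}
    # Only positive gaps are improvement opportunities; sort so the
    # biggest shortfall comes first.
    return sorted((fn for fn in CSF_FUNCTIONS if gaps[fn] > 0),
                  key=lambda fn: gaps[fn], reverse=True)

# Invented maturity scores (0-4, loosely echoing the Framework's
# Implementation Tiers) for a fictional organisation.
current = {"Identify": 3, "Protect": 2, "Detect": 1, "Respond": 1, "Recover": 2}
target  = {"Identify": 3, "Protect": 3, "Detect": 3, "Respond": 2, "Recover": 2}

print(prioritise_gaps(current, target))  # → ['Detect', 'Protect', 'Respond']
```

The output is a prioritised improvement list of the kind the Framework’s “identify and prioritize opportunities for improvement” step contemplates, feeding a continuous and repeatable review process.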

The Cybersecurity Framework was designed to complement, and not to replace, an organisation’s existing risk management process and cybersecurity program. There is recognition that it cannot be a one-size-fits-all solution and different organisations will have their own unique risks which may require additional considerations.

The Cybersecurity Framework states that it could be used as a model for organisations outside of the United States. Yet even in the US there are open questions about how many organisations are actually adopting and following it.

Similarities between US and European cybersecurity proposals

It is natural to draw analogies between these US cybersecurity and information sharing initiatives and the draft EU Cybersecurity Directive, which the team reported on in more detail in a recent blog. Both initiatives intend to drive behavioural change. But, as you may expect, the EU wants to introduce formal rules and consequences while the US remains focussed on building good cyber-citizens through awareness and information sharing.

The proposed Cybersecurity Directive would impose minimum obligations on “market operators” and “public administrations” to harmonise and strengthen cybersecurity across the European Union. Market operators would include energy suppliers, e-commerce platforms and application stores. The headline provision for businesses and organisations is the mandatory obligation to report security incidents to a national competent authority (“NCA”), a body analogous to the ISAO information sharing concept being developed in the US. In contrast to the US Framework, the EU’s own cybersecurity initiatives are now delayed (with agreement on the rules unlikely before summer 2015 and implementation not likely until 2018) and somewhat diluted compared to the originally announced plans.

Both the US and EU cybersecurity initiatives aim to ensure that governments and private sector bodies involved in the provision of certain critical infrastructure take appropriate steps to deal with cybersecurity threats. Both encourage these actors to share information about cyber threats. Both facilitate a pro-active approach to cyber-risk. Whilst the US approach is more about self-regulation within defined frameworks, the EU is going further and mandating compliance – that’s a seismic shift.

In the EU, we have yet to see the final extent of the “critical infrastructure providers” definition and whether “key internet enablers” will be caught by the rules or whether the more recent, narrower definition will prevail. The interplay with the data breach notification rules in the upcoming General Data Protection Regulation is also of interest.

Impact

Undoubtedly, cyber-risk can hit a corporate’s bottom line. Keeping up with the pace of change and the multitude of risks can be a real challenge for even the most agile of businesses. Taking adequate steps in this area is a continuous and often fast-moving process. Only time will tell whether the information sharing and interactions that these US and EU proposals are predicated on will be frequent enough and fast enough to make any real difference. Cyber-readiness remains at the fore because the first to be hit still wants to preserve an adequate line of defence. The end game remains the same: take appropriate technical and organisational measures to secure your networks and data.

Of course cyber-space does not respect or recognise borders. How nation states co-operate and share cybersecurity threat information beyond the borders of the EU is a whole other story. What is certain is that as the cyber-threat response steps up, undoubtedly so too will the hackers and cyber-criminals. The EU’s challenge is to foster a uniform approach for more effective cybersecurity across all 28 Member States. The US also wants to improve its ability to identify and respond to cyber incidents. The US and EU understand that economic prosperity and national security depend on a collective responsibility to secure.

Those acting within the EU and beyond will, in future, have to adjust to operating (and, where required, complying) effectively across each of the emerging cybersecurity regimes.

Mark Webber, Partner, Palo Alto, CA (mark.webber@fieldfisher.com)


Progress update on the draft EU Cybersecurity Directive

Posted on February 27th, 2015



In a blog earlier this year we commented on the status of the European Union (“EU”) Cybersecurity Strategy. Given that the Strategy’s flagship piece of legislation, the draft EU Cybersecurity Directive, was not adopted within the proposed institutional timeline of December 2014 and the growing concerns held by EU citizens about cybercrime, it seems that an update on EU legislative cybersecurity developments is somewhat overdue.

Background

As more of our lives are lived in a connected, digital world, the need for enhanced cybersecurity is evident. The cost of recent high-profile data breaches in the US involving Sony Pictures, JPMorgan Chase and Home Depot ran into hundreds of millions of dollars. A terrorist attack on critical infrastructure such as telecommunications or power supplies would be devastating. Some EU Member States have taken measures to improve cybersecurity but there is wide variation in the 28 country bloc and little sharing of expertise.

These factors gave rise to the European Commission’s (the “Commission”) publication in February 2013 of a proposed Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (the “proposed Directive”). The proposed Directive would impose minimum obligations on “market operators” and “public administrations” to harmonise and strengthen cybersecurity across the EU. Market operators would include energy suppliers, e-commerce platforms and application stores. The headline provision for business and organisations is the mandatory obligation to report security incidents to a national competent authority (“NCA”).

Where do things stand in the EU institutions on the proposed Directive?

On 13 March 2014 the European Parliament (the “Parliament”) adopted its report on the proposed Directive. It made a number of amendments to the Commission’s original text including:

  • the removal of “public administrations” and “internet enablers” (e.g. e-commerce platforms or application stores) from the scope of key compliance obligations;
  • the exclusion of software developers and hardware manufacturers;
  • the inclusion of a number of parameters to be considered by market operators to determine the significance of incidents and thus whether they must be reported to the NCA;
  • the enabling of Member States to designate more than one NCA;
  • the expansion of the concept of “damage” to include non-intentional force majeure damage;
  • the expansion of the list of critical infrastructure to include, for example, freight auxiliary services; and
  • the reduction of the burden on market operators including that they would be given the right to be heard or anonymised before any public disclosure and sanctions would only apply if they intentionally failed to comply or were grossly negligent.

In May-October 2014 the Council of the European Union (the “Council”) debated the proposed Directive at a series of meetings. It was broadly in favour of the Parliament’s amendments but disagreed over some high-level principles. Specifically, in the interests of speed and efficiency, the Council preferred to use existing bodies and arrangements rather than setting up a new cooperation mechanism between Member States.

In keeping with the Council’s general approach to draft EU legislation intended to harmonise practices between Member States, the institution also advocated the adoption of future-proofed flexible principles as opposed to concrete prescriptive requirements. Further, it contended that Member States should retain discretion over what information to share, if any, in the case of an incident, rather than imposing mandatory requirements.

In October-November 2014 the Commission, Parliament and Council commenced trilogue negotiations on an agreed joint text. The institutions were unable to come to an agreement during the negotiations due to the following sticking points:

  1. Scope. Member States are seeking the ability to assess (to agreed criteria) whether specific market operators come within the scope, whereas the Parliament wants all market operators within defined sectors to be captured.
  2. Internet enablers. The Parliament wants all internet enablers apart from internet exchanges to be excluded, whereas some Member States on the Council (France and Germany particularly) want to include cloud providers, social networks and search engines.
  3. There was also disagreement on the extent of strategic and operational cooperation and the criteria for incident notification.

What is the timetable for adoption of the proposed Directive?

There is political desire on behalf of the Commission to see the proposed Directive adopted as soon as possible. The Council has also stated that “the timely adoption of … the Cybersecurity Directive is essential for the completion of the Digital Single Market by 2015”.

Responsibility for enacting the reform now lies with the Latvian Presidency of the Council. On 30 January 2015, Latvian Transport Minister Anrijs Matiss stated that further trilogue negotiations would be held in March 2015, with the aim of adopting the proposed Directive by July 2015.

Once the Directive is adopted, Member States will have 18 months to enact national implementing legislation, so we could expect to see the proposed Directive take effect across the EU by early 2017.

How does the proposed Directive interact with other EU data privacy reforms?

In our previous blog we highlighted the difficulties facing market operators of complying with the proposed Directive in view of the potentially conflicting notification requirements in the existing e-Privacy Directive and the proposed General Data Protection Regulation (the “proposed GDPR”).

Although the text of the proposed Directive does anticipate the proposed GDPR, obliging market operators to protect personal data and implement security policies “in line with applicable data protection rules”, there has still been no EU guidance issued on how these overlapping or conflicting notification requirements would operate in practice.

Furthermore, any debate over which market operators fall within the scope of the breach notification requirements of the proposed Directive would seem to become superfluous once the proposed GDPR, with mandatory breach notifications for all data controllers, comes into force.
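To make the overlap concrete, the potentially parallel notification duties can be caricatured as a toy rule set. The regimes named below are real, but the simplified trigger conditions are our own illustration only, and certainly not a statement of the actual legal tests:

```python
# Toy illustration of the overlapping EU breach/incident notification
# regimes discussed above. The trigger conditions are deliberately
# over-simplified and invented for illustration.

def notification_duties(is_market_operator, is_telecoms_provider,
                        processes_personal_data):
    """Return which regimes might require a notification (illustrative)."""
    duties = []
    if is_market_operator:
        duties.append("proposed Cybersecurity Directive: notify the NCA")
    if is_telecoms_provider:
        duties.append("e-Privacy Directive: notify the regulator")
    if processes_personal_data:
        duties.append("proposed GDPR: notify the DPA (all controllers)")
    return duties

# An e-commerce platform holding customer data could face two regimes
# in parallel for a single incident.
print(notification_duties(True, False, True))
```

The sketch makes the article’s point visible: once the proposed GDPR’s last rule applies to all data controllers, the first rule’s scope debate matters far less for breach notification.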

Comment

Rather unsurprisingly, the Commission’s broad reform has been somewhat diluted in Parliament and Council. This is a logical result of Member States seeking to impose their own standards, protect their own industries or harbouring doubts regarding the potential to harmonise practices where cybersecurity/infrastructure measures diverge markedly in sophistication and scope.

Nonetheless, the proposed Directive does still impose serious compliance obligations on market operators in relation to cybersecurity incident handling and notification.

At the risk of sounding somewhat hackneyed, cyber data breaches are no longer a question of “if” but “when” for private and public sector bodies alike. Indeed, there is an increasing awareness that a high level of security in one link is no use if it is not replicated across the chain. Whether the proposed Directive meets its aim of reducing weak links across the EU remains to be seen.

EU privacy reform: are we nearly there yet?

Posted on February 7th, 2015



One thing everyone agrees on is that the EU needs new data protection rules. The current rules, now some 20 years old, are getting long in the tooth. Adopted at a time when having household Internet access was still a rare thing (remember those 56kbps dial-up modems, anyone?), there’s a collective view across all quarters that they need updating for the 24/7 connected world in which we now live.

The only problem is this: we can’t agree what those new rules should look like. That shouldn’t really be a surprise – Europe is politically, culturally, economically and linguistically diverse, so it would be naive to think that reaching consensus on such an important and sensitive topic would be quick or easy.

Nevertheless, whether through optimism, politicization, or plain naivety, there have been repeated pronouncements over the years that adoption of the new rules is imminent. Since the initial publication of the EU’s draft General Data Protection Regulation in January 2012, data protection pundits have repeatedly predicted it would all be done and dusted in 2012, 2013, 2014 and now – no surprises – in 2015.

The truth is we’re a way off yet, as this excellent blog from the UK Deputy Information Commissioner highlights. Adoption of the new General Data Protection Regulation ultimately requires agreement to be reached, first, individually by each of the European Parliament and the Council of the EU on their respective preferred amendments to the original draft proposals; and then, second, collectively between the Parliament, the Council and the Commission via three-way negotiations (so-called “trilogue” negotiations).

As at the date of this post, the Parliament has reached consensus on its preferred amendments to the draft, but the Council’s deliberations in this respect are still ongoing. That means the individual positions of both institutions have not yet been finalised, the trilogue negotiations have not yet begun, and so an overall agreed upon text is not yet even close. There’s still a mountain to climb.

Not that progress hasn’t been made – it has, but there’s still a long way to go and it’s very unlikely the new law will pass in 2015. Even when it does, the expectation is that it will be a further two years until it takes effect. In other words, don’t expect the new rules to bite any time before 2018 – six years after they were originally proposed.

Why so long? Designing privacy rules fit for the 21st century is a difficult task, and the difficulty stems from the inherent subjectivity of privacy as a right. When thinking about what protections should exist, a natural consideration is what “expectation” of privacy individuals have. And therein lies the problem: no two people have the same expectations: what you expect and I expect are likely very different. Amplify those differences onto a national stage, and it becomes quickly apparent why discussions over new pan-European rules have become so protracted.

How, then, to progress the debate through to conclusion?

First, European lawmakers need to listen to the views of all stakeholders in the legislative process without prejudice or pre-judging their value. It’s far too simplistic to dismiss consumer advocates’ proposals as ‘impractical’, and equally disingenuous to label all industry concerns as just ‘lobbying’. Every side to the debate raises important points that deserve careful consideration. Insufficiently strong privacy protections will come at an expense to society, our human rights and our dignity; but, conversely, excessively strict regulation will impede innovation, hamper technological progress and restrict economic growth. A balance needs to be found, and ignoring salient points made by any side to the debate comes at a cost to us all.

Once lawmakers accept this, then they must also accept compromise and not simply ‘dig in’ to already fortified positions. Any agreement requires compromise – whether a verbal agreement between friends, a written contract between counterparties, or even legislative agreement over new laws like the General Data Protection Regulation. At present, however, there is too much bluster, quarreling and entrenchment, where reason, level-headedness and compromise should prevail.

When it comes to new data protection rules, a compromise – one that benefits all stakeholders of the information economy – is there to be struck: we just have to find it.

US and UK Regulators position themselves to meet the needs of the IoT market

Posted on January 30th, 2015



The Internet of Things (“IoT“) is set to enable large numbers of previously unconnected devices to communicate and share data with one another.

In an earlier posting I examined the potential future regulatory landscape for the IoT market and introduced Ofcom’s (the UK’s communications regulator) 2014 consultation on the Internet of Things. This stakeholder consultation was issued in order to examine the emerging debate around the increasing interconnectivity between multiple devices and to guide Ofcom’s regulatory priorities. Since the consultation was issued, the potential privacy issues associated with the IoT have continued to attract the most attention but, as yet, no IoT issues have led to any specific laws or legal change.

In two separate developments in January 2015, the UK and US Internet of Things markets were exposed to more advanced thinking and guidance around the legal challenges of the IoT.

UK IoT developments

Ofcom published its report, “Promoting investment and innovation in the Internet of Things: Summary of responses and next steps” (27 January 2015), responding to the views gathered during the consultation, which closed in the autumn of 2014. In this report Ofcom has identified several priority areas to focus on in order to support the growth of the IoT. These “next step” Ofcom priorities are summarised across four core areas:

Spectrum availability: where Ofcom concludes that “existing initiatives will help to meet much of the short to medium term spectrum demand for IoT services. These initiatives include making spectrum available in the 870/915MHz bands and liberalising licence conditions for existing mobile bands. We also note that some IoT devices could make use of the spectrum at 2.4 and 5GHz, which is used by a range of services and technologies including Wi-Fi.” Ofcom goes on to recognise that, as IoT grows and the sector develops, there may be a renewed need to release more spectrum in the longer term.

Network security and resilience: where Ofcom holds the view that “as IoT services become an increasingly important part of our daily lives, there will be growing demands both in terms of the resilience of the networks used to transmit IoT data and the approaches used to securely store and process the data collected by IoT devices“. Working with other sector regulators where appropriate, Ofcom plans to continue existing security and resilience investigations and to extend its thoughts to the world of IoT.

Network addressing: where Ofcom, previously fearing numbering scarcity, now recognises that “telephone numbers are unlikely to be required for most IoT services. Instead IoT services will likely either use bespoke addressing systems or the IPv6 standard. Given this we intend to continue to monitor the progress being made by internet service providers (ISPs) in migrating to IPv6 connectivity and the demand for telephone numbers to verify this conclusion“; and

Privacy: In the particularly hot privacy arena there is little new within Ofcom’s preliminary conclusions. Ofcom concludes that “a common framework that allows consumers easily and transparently to authorise the conditions under which data collected by their devices is used and shared by others will be critical to future development of the IoT sector.” In a world where the UK’s Data Protection Act already applies, it was inevitable that Ofcom (without a direct regulatory remit over privacy) would offer little further insight in this regard.
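Ofcom's network addressing conclusion above can be illustrated with a back-of-the-envelope comparison. A minimal sketch, in which the numbering capacity and device forecast figures are illustrative assumptions rather than figures from the Ofcom report:

```python
# Rough illustration of why telephone-number scarcity is not a concern
# once IoT devices move to IPv6: the IPv6 address space is 2**128.
ipv6_addresses = 2 ** 128
numbering_capacity = 10 ** 10          # assumed: ~10-digit national telephone numbering space
projected_iot_devices = 50 * 10 ** 9   # assumed: a commonly cited ~50 billion device forecast

print(f"IPv6 addresses:              {ipv6_addresses:.3e}")
print(f"Telephone numbering space:   {numbering_capacity:.3e}")
print(f"IPv6 addresses per device:   {ipv6_addresses // projected_iot_devices:.3e}")
```

Even under a generous device forecast, each device could be allocated an astronomically large slice of the IPv6 space, which is why Ofcom's monitoring of ISP migration to IPv6, rather than number conservation, is the sensible focus.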

It’s not surprising to read from the Report that commentary within the responses highlighted data protection and privacy to potentially be the “greatest single barrier to the development of the IoT“. The findings from its consultation do foresee potential inhibitors to the IoT adoption resulting from these privacy challenges, and Ofcom acknowledges that the activities and guidance of the UK Information Commissioner (ICO) and other regulators will be pertinent to achieving clarity. Ofcom will be co-ordinating further cooperation and discussion with such bodies both nationally and internationally.

A measured approach to an emerging sector

Ofcom appears to be striking the right balance here for the UK. Ofcom suggests that future work with ICO and others could include examining some of the following privacy issues:

  • assessing the extent to which existing data protection regulations fully encompass the IoT;
  • considering a set of principles for the sharing of data within the IoT looking to principles of minimisation and restricting the overall time any data is stored for;
  • forming a better understanding of consumer attitudes to sharing data and considering techniques to provide consumers “with the necessary information to enable them to make an informed decision on whether to share their data“; and
  • in the longer term, exploring the merit of a consumer education campaign exposing the potential benefits of the IoT to consumers.

The perceived need for more clarity around privacy and the IoT

International progress around self-regulation, standards and operational best practice will inevitably be slow. On the international stage, Ofcom suggests it will work with existing research groups (such as the ones hosted by BEREC amongst other EU regulators).

We of course already have insight from Working Party 29 in its September 2014 Opinion on the Internet of Things. The Fieldfisher privacy team expounded the Working Party’s regulatory mind-set in another of our Blogs. The Working Party has warned that the IoT can reveal ‘intimate details’; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics.

As with previous WP29 Opinions (think cloud, for example), the regulators in that Opinion have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. This stands in contrast to the more pragmatic FTC musings explained further below: though it follows a similar approach to protecting privacy, the EU approach is far more alarmist and potentially restrictive.

Hopefully, as practical and innovative assessments are made of technologies within the IoT, pragmatic new solutions to some of these privacy challenges will emerge. These might include standard “labels” for transparency notifications to consumers, industry protocols for data sharing coupled with associated controls, and possibly more recognition from the regulators that swamping consumers with choices and information can sometimes amount to no choice at all (as citizens start to ignore a myriad of options and simply proceed with their connected lives, ignoring the interference of yet another pop-up or check-box). Certainly, as device volumes and data uses in the IoT increase, consumers will continue to value their privacy. But if this myriad of devices lacks effective security, they will soon learn that both privacy and security issues count.

And in other news….US developments

Just as the UK’s regulators are turning their attention to the IoT, the Federal Trade Commission (FTC) also published a new Report on the IoT in January 2015: As Ofcom’s foray into the world of the IoT, the FTC’s steps in “Privacy & Security in a Connected World” are also exploratory. To a degree, there is now more pragmatic and realistic guidance around best practices in making IoT services available in the US than we have today in Europe.

In this report the FTC recommends “a series of concrete steps that businesses can take to enhance and protect consumers’ privacy and security, as Americans start to reap the benefits from a growing world of Internet-connected devices.” As with Ofcom, it recognises that best practice steps need to emerge to ensure the potential of the IoT can be realised.  This reads as an active invitation to those playing in the IoT to self-regulate and act as good data citizens.  Given the surge in active enforcement by the FTC during 2014, this is something worthy of attention for those engaged in the consumer-facing world of the IoT.

The Federal Trade Commission works for consumers to prevent fraudulent, deceptive, and unfair business practices and to provide information to help spot, stop, and avoid them. Accordingly, the FTC’s approach focusses more on the risks that arise from a lack of transparency and excessive data collection than on the practical challenges the US IoT industry may encounter as the IoT and its devices place increasing demands on infrastructure and spectrum.

The report focuses on three core topics: (1) Security, (2) Data Minimisation and (3) Notice and Choice. Of particular note, the FTC report makes a number of recommendations for anyone building solutions or deploying devices in the IoT space:

  • build security into devices at the outset, rather than as an afterthought in the design process;
  • train employees about the importance of security, and ensure that security is managed at an appropriate level in the organization;
  • ensure that when outside service providers are hired, that those providers are capable of maintaining reasonable security, and provide reasonable oversight of the providers;
  • when a security risk is identified, consider a “defense-in-depth” strategy whereby multiple layers of security may be used to defend against a particular risk;
  • consider measures to keep unauthorized users from accessing a consumer’s device, data, or personal information stored on the network;
  • monitor connected devices throughout their expected life cycle and, where feasible, provide security patches to cover known risks.

With echoes of privacy by design and data minimisation, recommendations to limit the collection and retention of information, suggestions to impose security obligations on outside contractors, and recommendations to consider notice and choice, it could transpire that the IoT will be a space where we see fewer and fewer differences between US and EU best practice.

In addition to its report, the FTC also released a new publication designed to provide practical advice about how to build security into products connected to the Internet of Things. This publication, “Careful Connections: Building Security in the Internet of Things”, encourages “a risk-based approach” and suggests that businesses active in the IoT “take advantage of best practices developed by security experts, such as using strong encryption and proper authentication“.
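The “proper authentication” the FTC refers to can be sketched in a few lines. The following is an illustrative example only, not taken from the FTC publication: the device name, key handling and payload are all assumptions. It shows a device signing each telemetry reading with HMAC-SHA256 so a receiving service can reject forged or tampered messages.

```python
import hashlib
import hmac
import secrets

# Assumed: a per-device secret provisioned at manufacture (name is illustrative)
DEVICE_KEY = secrets.token_bytes(32)

def sign_reading(payload: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Produce an HMAC-SHA256 tag authenticating a telemetry payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_reading(payload: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b'{"sensor": "thermostat-01", "temp_c": 21.5}'
tag = sign_reading(reading)
assert verify_reading(reading, tag)            # genuine reading accepted
assert not verify_reading(reading + b"x", tag) # tampered reading rejected
```

Authentication alone does not provide confidentiality; in practice a deployment would pair a scheme like this with transport encryption (e.g. TLS), in line with the FTC's reference to strong encryption.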

Where next?

Both reports indicate a consolidation in regulatory thinking around the much hyped world of the IoT. Neither report proposes concrete laws for the IoT and, if they are to come, such laws are some time off. The FTC even goes as far as saying “IoT-specific legislation at this stage would be premature“. However, it does actively “urge further self-regulatory efforts on IoT, along with enactment of data security and broad-based privacy legislation”. Obama’s new data privacy proposals are evidently seen as a complementary step toward US consumer protection. What is clear is that there are now emerging good practices and a deeper understanding among the regulators of the IoT, its potential and its risks.

On both sides of the Atlantic the US and UK regulators are operating a “wait and see” policy. In the absence of legislation, and as with other potentially privacy-sensitive emerging technologies, we have seen self-regulatory programs emerge within particular sectors or practices to help guide and standardise practice around norms. This can protect consumers while introducing an element of certainty around which business is able to innovate.

Mark Webber – Partner, Palo Alto California mark.webber@fieldfisher.com


Guns and privacy have more in common than you think

Posted on January 13th, 2015 by



When speaking with US companies, how do you explain the importance that EU consumers place on their data protection rights?  Oftentimes, I do this by referring to the US right to bear arms.

Whether for or against guns, pretty much every American has a strong view on this issue.  And why wouldn’t they?  The right to bear arms is a constitutional right for US citizens.  Over in the EU, we have the Charter of Fundamental Rights – not quite a constitution, but pretty close to it.  This doesn’t enshrine a right to bear arms, but it does enshrine both a right to privacy (Art 7) and a right to data protection (Art 8) for all EU citizens.

So I start by explaining that Europeans have constitutional-like rights to privacy and data protection, and that they feel as strongly about these rights as Americans do about their second amendment rights.  Once I’ve drawn this analogy, US companies quickly grasp the ‘EU privacy issue’ and understand the need for comprehensive measures to address EU data protection compliance.

In fact, the analogy between guns and privacy doesn’t end there.  At the risk of extending the analogy to breaking point, it can also be applied to debates about government surveillance and gun control.

Consider this: in the EU, there’s widespread ongoing concern over excessive government surveillance of telephone and internet communications.  These concerns are fuelled largely by fears that the data collected might be used by governments to exert Orwellian control over their citizens.  As it happens, fear of an abusive government is also part of what drives many of the heated debates over US gun control: a fear that, by restricting citizens’ right to bear arms, a dystopian future government might in some way turn against a citizenry that has no ability to defend itself.

Not everyone feels this way though.  Some argue that allowing some level of government incursion into citizens’ civil liberties affords us greater protection, either by disrupting potential terrorist threats or by preventing accidental or deliberate gun deaths, and that these incursions are necessary in light of the present-day threats we face.  The issues are complex and, whether it comes to guns or privacy, the emotive arguments presented by both sides of the discourse often seem to present an insurmountable barrier to consensus.

Perhaps this is the way it should be, though.  When fundamental human or constitutional rights are at stake, they should attract impassioned debate – that’s the imperative of a democratic society.  Because debating these issues calls into question the very type of society we want to be:  are we a society that accepts a level of surveillance in return for greater assurance of physical safety?  Or should we be a society that protects freedom of communication at all cost?

There are no easy answers, and the debate will often be determined by cultural sensitivities and topical news events.  But, as difficult as consensus can sometimes seem, we witnessed one wonderfully positive example of it today.  Speaking at the Federal Trade Commission, President Obama announced four major new privacy initiatives in the US.  These included a federal data breach notification standard, easier access to credit scores, and new protections for student data.

Most critically, though, President Obama announced that federal consumer privacy legislation would be introduced by the end of February and called on Congress to make this new legislation “the law of the land”.  The new legislation will address data processing transparency, control, purpose limitation, security and accountability, across all sectors.  In other words, the White House acknowledges the need for federal data protection standards across the entirety of the US that will to a large degree mirror those that EU citizens enjoy today.  A form of transatlantic consensus, if you will.

So maybe there’ll come a time in the very near future where I won’t have to explain how passionately Europeans feel about their privacy because American consumers will also enjoy, and feel as strongly about, these rights.  Maybe consensus building on privacy issues, across continents if not across different schools of thought, is possible.  And maybe – no, certainly – continuing the dialogue to enshrine and protect our data protection rights worldwide is now more important and more achievable than ever.

WP29 Guidance on the right to be forgotten

Posted on December 18th, 2014 by



On 26 November the Article 29 Working Party (“WP29“) issued WP225 (the “Opinion“). Part I of the Opinion provides guidance on the interpretation of the Court of Justice of the European Union ruling in Google Spain SL and Google Inc. v the Spanish Data Protection Authority (AEPD) and Mario Costeja González (the “Ruling“) and in Part II the WP29 provides a list of common criteria that the European Regulators would take into account when considering right to be forgotten (“RTBF“) related complaints from individuals.

The Opinion is in line with the Ruling but it further elaborates on certain legal and practical aspects of it and it offers, as a result, an invaluable insight into European Regulators’ vision of the future of the RTBF.

Some of the main ‘take-aways’ are highlighted below:

Territorial scope

One of the most controversial conclusions in the Opinion is that limiting the de-listing to the EU domains of the search engines cannot be considered sufficient to satisfactorily guarantee the rights of the data subjects and that therefore de-listing decisions should be implemented in all relevant domains, including “.com”.

The above confirms the trend of extending the application of EU privacy laws (and regulatory powers) beyond the traditional interpretation of the current territorial scope rules under the Data Protection Directive and will present search engines with legal uncertainty and operational challenges.

Material scope

The Opinion argues that the precedent set out by the judgment only applies to generalist search engines and not to search engines with a limited scope of action (for instance, search engines within a website).

Even though such clarification is to be welcomed, where does this leave non-search engine controllers that receive right to be forgotten requests?

What will happen in practice?

In the Opinion, the WP29 advises that:

  • Individuals should be able to exercise their rights using “any adequate means” and cannot be forced by search engines to use specific electronic forms or procedures.
  • Search engines must follow national data protection laws when dealing with requests.
  • Both search engines and individuals must provide “sufficient” explanations in their requests/decisions.
  • Search engines must inform individuals that they can turn to the Regulators if they decide not to de-list the relevant materials.
  • Search engines are encouraged to publish their de-listing criteria.
  • Search engines should not inform users that some results to their queries have been de-listed. WP29’s preference is that this information is provided generically.
  • The WP29 also advises that search engines should not inform the original publishers of the information that has been de-listed about the fact that some pages have been de-listed in response to a RTBF request.


Spam texts: “substantially distressing” or just annoying?

Posted on November 11th, 2014 by



The Department for Culture, Media and Sport (“DCMS”) recently launched a consultation to reduce or even remove the threshold of harm the Information Commissioner’s Office (“ICO”) needs to establish in order to fine nuisance callers, texters or emailers.

Background

In 2010 ICO was given powers to issue Monetary Penalty Notices (“MPNs”, or fines to you and me) of up to £500,000 for those companies who breach the Data Protection Act 1998 (“DPA”).  In 2011 these were extended to cover breaches of the Privacy and Electronic Communications Regulations 2003 (“PECR”), which sought to control the scourge of nuisance calls, texts and emails.

At present the standard ICO has to establish before issuing an MPN is a high one: that there was a serious, deliberate (or reckless) contravention of the DPA or PECR which was of a kind likely to cause substantial damage or substantial distress.  Whilst unsolicited marketing calls are certainly irritating, can they really be said to cause “substantial distress”?  Getting a text from a number you didn’t know about a PPI claim is certainly annoying, but could it seriously be considered “substantial damage”?  Not exactly; and therein lies the problem.

Overturned

In the first big case where ICO used this power, it issued an MPN of £300,000 to an individual who’d allegedly sent millions of spam texts for PPI claims to users who had not consented to receive them.  Upon appeal the Information Rights Tribunal overturned the fine.  The First Tier Tribunal found that whilst there was a breach of PECR (the messages were unsolicited, deliberate, with no opt-out link and sent for financial gain), the damage or distress caused could not be described as substantial.  Every mobile user knew what a PPI spam text meant and was unlikely to be concerned for their safety or to have false expectations of compensation.  A short tut of irritation and deleting the message solved the problem.  The Upper Tribunal agreed: a few spam texts did not cause substantial damage or distress.  Interestingly, the judge pointed out that the “substantial” requirement had come from the UK government, was stricter than that required by the relevant EU Directive, and suggested the statutory test be revisited.

This does not, however, mean that ICO has been unable to use the power.  Since 2012 it has issued nine MPNs totalling £1.1m to direct marketers who have breached PECR.  More emphasis is placed on the overall level of distress suffered by hundreds or thousands of victims, which can be considered substantial.  ICO concentrates on the worst offenders: cold callers who deliberately and repeatedly call numbers registered with the Telephone Preference Service (“TPS” – Ofcom’s “do not call” list) even when asked to stop, and those that attract hundreds of complaints.

In fact, in this particular case there were specific problems with the MPN document (this will not necessarily come as a surprise to those familiar with ICO MPNs).  The Tribunal criticised ICO for a number of reasons: not being specific about the Regulation contravened, omitting important factual information, including in the period of contravention time when ICO did not yet have the power to fine, and changing the claim from the initial few hundred complaints to the much wider body of texts that may have been sent.  Once all this was taken into consideration, only 270 unsolicited texts were sent to 160 people.

Proposal

ICO has been very vocal about having its hands tied in this matter and has long pushed for a change in the law (which is consistent with ICO’s broader campaigning for new powers).  Nuisance calls are a cause of great irritation for the public and currently only the worst offenders can be targeted.  Statistics compiled by ICO and TPS showed that most nuisance is caused by a large number of companies each making a smaller number of calls.  Of the 982 companies that TPS received complaints about, 80% received fewer than 5 complaints and only 20 received more than 25 complaints.

Following a select committee enquiry, an All Party Parliamentary Group and a backbench debate, DCMS has launched the consultation, which invites responses on whether the threshold should be lowered to “annoyance, inconvenience or anxiety“.  This would bring it in line with the threshold Ofcom must consider when fining telecoms operators for persistent misuse for silent/abandoned calls. ICO estimates that had this threshold been in place since 2012, a further 50 companies would have been investigated/fined.

The three options being considered are: to do nothing, to lower the threshold or to remove it altogether.  Both ICO and DCMS favour complete removal.  ICO would thus only need to prove a breach was serious and deliberate/reckless.

Comment

I was at a seminar last week with the Information Commissioner himself, Chris Graham, at which he announced the consultation.  It was pretty clear he is itching to get his hands on these new powers to tackle rogue callers/emailers/texters, but emphasised any new powers would still be used proportionally and in conjunction with other enforcement actions such as compliance meetings and enforcement notices.  Even the announcement of any new law should act as a deterrent: typically whenever a large MPN is announced, the number of complaints about direct marketers reduces the following month.

The consultation document is squarely aimed at unsolicited calls, texts and emails and is consistently stated to only apply to certain regulations of PECR.  There is no suggestion that the threshold be reduced for other breaches of the PECR or the DPA.  It will be interesting to see how any reform will work in practice as the actual threshold is contained within the DPA and so will require its amendment.

The consultation will run until 7 December 2014, the document can be found here.  Organisations that are concerned about these proposals now have an opportunity to make their voices heard.

Update 27 February 2015

Following the consultation, DCMS announced that the majority of responses favoured the complete removal of the threshold.  As a result, from 6 April 2015 section 55A(1) of the DPA will be amended to remove the need to prove “substantial damage or substantial distress” in respect of regulations 19 to 24 of PECR.  ICO will still need to establish that the breach was serious and intentional or reckless; however, this reform removes a huge hurdle in the fight against spammers.

DPA update: finally the end of enforced Subject Access Requests?

Posted on November 10th, 2014 by



Employers who force prospective employees to obtain a Subject Access Request report from the police detailing any criminal history or investigation will soon themselves be committing a criminal offence.

Background

The Ministry of Justice recently announced that on 1 December 2014, section 56 of the Data Protection Act 1998 (“DPA”) will come into force across the UK.  It will make it a criminal offence for employers to demand that prospective employees obtain Subject Access Request (“SAR”) reports.

Some employers are concerned that s56 will make it an offence to undertake Disclosure & Barring Service (“DBS”, the new name for the Criminal Records Bureau) checks on prospective employees.  This is not the case.  In fact it is designed to encourage the use of these and to prevent enforced SARs.

Purpose

The correct procedure to obtain criminal records of prospective employees is via the disclosure service provided by DBS or the Scottish equivalent, Disclosure Scotland (“DS”).  Whilst these services were in the process of being developed, employers could demand that applicants made SARs directly to the police and pass on the report.

The purpose of s56 was to close this loophole once the DBS/DS system had become fully operational.  For that reason s56 was inserted into the DPA but not brought into force with the rest of its provisions.  It applies only to records obtained by the individual from the police using their s7 SAR rights.  SAR reports contain far more information than would be revealed under a DBS/DS check, such as police intelligence and spent convictions.  As a result the practice is frowned upon by the authorities: the police SAR form states that enforced SARs are exploitative and contrary to the spirit of the DPA, and the Information Commissioner’s Office (“ICO”) guidance on employment has long advised against the practice in stern wording (“Do not force applicants…”!).

Exemptions

The only exemptions to s56 are situations when the report is justified as being in the public interest or when required by law; the s28 national security exemption does not apply.

Opinion

There has been no specific guidance released on s56.  However, it is clear from the Written Ministerial Statement which announced the change in March 2014, and from the ICO release which followed it, that the section is being brought into force to close the loophole.  ICO has publicly stated that it intends to prosecute infringers under the offence so as to encourage the correct use of the DBS/DS procedure and prevent enforced SARs.  s56 does nothing to prevent employers requesting DBS/DS checks on prospective employees in the usual way.

What this means in practice is that any employer who demands that a potential employee file an SAR with the police and provide the results will be committing a criminal offence, and infringement carries a potentially unlimited fine.  Instead, employers should use the DBS procedure (DS if in Scotland) for background criminal checks.  This sneaky backdoor route to obtaining far more sensitive personal data than employers are entitled to – often harming the individual’s job prospects in the process – will be shut for good.  Non-compliant employers should take note.

Update 19 November 2014

In an informative webinar on this subject yesterday, ICO mentioned a delay in the commencement date.  When I queried this the official response was: “a technical issue encountered when finalising arrangements for introduction means there will be a delay to the date for commencing Section 56 of the Data Protection Act. The Government is working to urgently resolve this issue. There is no exact date as yet.”

Update 27 February 2015

The Government has since passed the necessary commencement order and so s56 will come into force from 10 March 2015.

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014 by



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk that their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was somewhat more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people will suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain to the required standard) leaves companies processing IoT data with nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, e.g., device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him/her? Where is the risk of harm in this scenario and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact remains that the regulators have nailed their colours to the mast, and there is a risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting edge technologies, especially when this is done on a global scale.
  6. Ultimately, it's all about trust: it is the loss of trust that a company will respect our privacy and do its best to protect our information that results in serious enforcement action, pushes companies out of business or forces the resignation of the CEO.

 

This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. 'Privacy is dead' or 'people don't care' rhetoric will get you nowhere and is likely to be met with significant pushback from regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by letting them allow or block data collection at any time.
  9. Obtain consents where the law requires you to do so, for instance if, as part of the service, you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on 'implied' consent so as not to unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.
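To make step 4 more concrete, here is a minimal sketch of one common pseudonymisation technique for device identifiers: replacing the identifier with a keyed hash. The identifier, key value and function names are purely illustrative assumptions, not taken from any specific product or standard:

```python
import hmac
import hashlib

# Illustrative secret key; in practice, store it separately from the
# pseudonymised data set, restrict access to it, and rotate it periodically.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(device_id: str) -> str:
    """Return a keyed hash of a device identifier (e.g. a MAC address).

    The mapping is stable, so records about the same device can still be
    linked for analytics, but the original identifier cannot be recovered
    without access to the key.
    """
    return hmac.new(SECRET_KEY, device_id.encode("utf-8"), hashlib.sha256).hexdigest()

mac = "00:1A:2B:3C:4D:5E"  # illustrative device identifier
token = pseudonymise(mac)
print(token)  # 64-character hex digest; same input always yields the same token
```

Note that pseudonymised data of this kind will generally still be personal data under EU law, because whoever holds the key can re-identify the device; it reduces risk, but it is not anonymisation.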