Archive for the ‘DPAs competence’ Category

What does EU regulatory guidance on the Internet of Things mean in practice? Part 2

Posted on November 1st, 2014



In Part 1 of this piece I summarised the key points from the recent Article 29 Working Party (WP29) Opinion on the Internet of Things (IoT), which are largely reflected in the more recent Mauritius Declaration adopted by the Data Protection and Privacy Commissioners from Europe and elsewhere in the world. I expressed my doubts that the approach of the regulators will encourage the right behaviours while enabling us to reap the benefits that the IoT promises to deliver. Here is why I have these concerns.

Thoughts about what the regulators say

As with previous WP29 Opinions (think cloud, for example), the regulators have taken a very broad brush approach and have set the bar so high that there is a risk their guidance will be impossible to meet in practice and, therefore, may be largely ignored. What we needed at this stage was more balanced and nuanced guidance that aimed for good privacy protections while taking into account the technological and operational realities and the public interest in allowing the IoT to flourish.

I am also unsure whether certain statements in the Opinion can withstand rigorous legal analysis. For instance, isn’t it a massive generalisation to suggest that all data collected by things should be treated as personal, even if it is anonymised or it relates to the ‘environment’ of individuals as opposed to ‘an identifiable individual’? How does this square with the pretty clear definition of personal data in the Data Protection Directive? Also, is the principle of ‘self-determination of data’ (which, I assume, is a reference to the German principle of ‘informational self-determination’) a principle of EU data protection law that applies across the EU? And how is a presumption in favour of consent justified when EU data protection law makes it very clear that consent is one among several grounds on which controllers can rely?

Few people would suggest that the IoT does not raise privacy issues. It does, and some of them are significant. But to say (and I am paraphrasing the WP29 Opinion) that pretty much all IoT data should be treated as personal data and can only be processed with the consent of the individual (which, by the way, is very difficult to obtain at the required standard) leaves companies processing IoT data nowhere to go, is likely to stifle innovation unnecessarily, and will slow down the development of the IoT, at least in Europe. We should not forget that the EU Data Protection Directive has a dual purpose: to protect the privacy of individuals and to enable the free movement of personal data.

Distinguishing between personal and non-personal data is essential to the future growth of the IoT. For instance, exploratory analysis to find random or non-obvious correlations and trends can lead to significant new opportunities that we cannot even imagine yet. If this type of analysis is performed on data sets that include personal data, it is unlikely to be lawful without obtaining informed consent (and even then, some regulators may have concerns about such processing). But if the data is not personal, because it has been effectively anonymised or does not relate to identifiable individuals in the first place, there should be no meaningful restrictions around consent for this use.

Consent will be necessary on several occasions, such as for storing or accessing information stored on terminal equipment, for processing health data and other sensitive personal data, or for processing location data created in the context of public telecommunications services. But is consent really necessary for the processing of, say, device identifiers, MAC addresses or IP addresses? If the individual is sufficiently informed and makes a conscious decision to sign up for a service that entails the processing of such information (or, for that matter, any non-sensitive personal data), why isn’t it possible to rely on the legitimate interests ground, especially if the individual can subsequently choose to stop the further collection and processing of data relating to him or her? Where is the risk of harm in this scenario, and why is it impossible to satisfy the balance of interests test?

Notwithstanding my reservations, the fact of the matter remains that the regulators have nailed their colours to the mast, and there is risk if their expectations are not met. So where does that leave us then?

Our approach

Sophisticated companies are likely to want to take the WP29 Opinion into account and also conduct a thorough analysis of the issues in order to identify more nuanced legal solutions and practical steps to achieve good privacy protections without unnecessarily restricting their ability to process data. Their approach should be guided by the following considerations:

  1. The IoT is global. The law is not.
  2. The law is changing, in Europe and around the world.
  3. The law is actively enforced, with increasing international cooperation.
  4. The law will never keep up with technology. This pushes regulators to try to bridge the gap through their guidance, which may not be practical or helpful.
  5. So, although regulatory guidance is not law, there is risk in implementing privacy solutions in cutting-edge technologies, especially when this is done on a global scale.
  6. Ultimately, it’s all about trust: losing the trust that a company will respect our privacy and do its best to protect our information is what results in serious enforcement action, pushes companies out of business or forces the resignation of the CEO.


This is a combustible environment. However, there are massive business opportunities for those who get privacy right in the IoT, and good intentions, careful thinking and efficient implementation can take us a long way. Here are the key steps that we recommend organisations should take when designing a privacy compliance programme for their activities in the IoT:

  1. Acknowledge the privacy issue. ‘Privacy is dead’ or ‘people don’t care’ type of rhetoric will get you nowhere and is likely to be met with significant pushback by regulators.
  2. Start early and aim to bake privacy in. It’s easier and less expensive than leaving it for later. In practice this means running privacy impact assessments and security risk assessments early in the development cycle and as material changes are introduced.
  3. Understand the technology, the data, the data flows, the actors and the processing purposes. In practice, this may be more difficult than it sounds.
  4. Understand what IoT data is personal data taking into account if, when and how it is aggregated, pseudonymised or anonymised and how likely it is to be linked back to identifiable individuals.
  5. Define your compliance framework and strategy: which laws apply, what they require, how the regulators interpret the requirements and how you will approach compliance and risk mitigation.
  6. When receiving data from or sharing data with third parties, allocate roles and responsibilities, clearly defining who is responsible for what, who protects what, and who can use what and for what purposes.
  7. Transparency is absolutely essential. You should clearly explain to individuals what information you collect, what you do with it and the benefit that they receive by entrusting you with their data. Then do what you said you would do – there should be no surprises.
  8. Enable users to exercise choice by enabling them to allow or block data collection at any time.
  9. Obtain consents when the law requires you to do so, for instance if, as part of the service, you need to store information on a terminal device, or if you are processing sensitive personal data, such as health data. In most cases, it will be possible to rely on ‘implied’ consent so as not to unduly interrupt the user journey (except when processing sensitive personal data).
  10. Be prepared to justify your approach and evidence compliance. Contractual and policy hygiene can help a lot.
  11. Have a plan for failure: as with any other technology, in the IoT things will go wrong, complaints will be filed and data security breaches will happen. How you react is what makes the difference.
  12. Things will change fast: after you have implemented and operationalised your programme, do not forget to monitor, review, adapt and improve it.


What does EU regulatory guidance on the Internet of Things mean in practice? Part 1

Posted on October 31st, 2014



The Internet of Things (IoT) is likely to be the next big thing, a disruptive technological step that will change the way in which we live and work, perhaps as fundamentally as the ‘traditional’ Internet did. No surprise then that everyone wants a slice of that pie and that there is a lot of ‘noise’ out there. This is so despite the fact that to a large extent we’re not really sure about what the term ‘Internet of Things’ means – my colleague Mark Webber explores this question in his recent blog. Whatever the IoT is or is going to become, one thing is certain: it is all about the data.

There is also no doubt that the IoT triggers challenging legal issues that businesses, lawyers, legislators and regulators need to get their heads around in the months and years to come. Mark discusses these challenges in the second part of his blog (here), where he considers the regulatory outlook and briefly discusses the recent Article 29 Working Party Opinion on the Internet of Things.

Shortly after the WP29 Opinion was published, Data Protection and Privacy Commissioners from Europe and elsewhere in the world adopted the Mauritius Declaration on the Internet of Things. It is aligned with the WP29 Opinion, so it seems that privacy regulators are forming a united front on privacy in the IoT. This is consistent with their drive towards closer international cooperation – see for instance the latest Resolution on Enforcement Cooperation and the Global Cross Border Enforcement Cooperation Agreement (here).

The regulatory mind-set

You only need to read the first few lines of the Opinion and the Declaration to get a sense of the regulatory mind-set: the IoT can reveal ‘intimate details'; ‘sensor data is high in quantity, quality and sensitivity’ and the inferences that can be drawn from this data are ‘much bigger and sensitive’, especially when the IoT is seen alongside other technological trends such as cloud computing and big data analytics. The challenges are ‘huge’, ‘some new, some more traditional, but then amplified with regard to the exponential increase of data processing’, and include ‘data losses, infection by malware, but also unauthorized access to personal data, intrusive use of wearable devices or unlawful surveillance’.

In other words, in the minds of privacy regulators, it does not get much more intrusive (and potentially unlawful) than this, and if the IoT is left unchecked, it is the quickest way to an Orwellian dystopia. Not a surprise then that the WP29 supports the incorporation of the highest possible guarantees, with users remaining in complete control of their personal data, which is best achieved by obtaining fully informed consent. The Mauritius Declaration echoes these expectations.

What the regulators say

Here are the main highlights from the WP29 Opinion:

  1. Anyone who uses an IoT object, device, phone or computer situated in the EU to collect personal data is captured by EU data protection law. No surprises here.
  2. Data that originates from networked ‘things’ is personal data, potentially even if it is pseudonymised or anonymised (!), and even if it does not relate to individuals but rather relates to their environment. In other words, pretty much all IoT data should be treated as personal data.
  3. All actors who are involved in the IoT or process IoT data (including device manufacturers, social platforms, third party app developers, other third parties and IoT data platforms) are, or at least are likely to be, data controllers, i.e. responsible for compliance with EU data protection law.
  4. Device manufacturers are singled out as having to take more practical steps than other actors to ensure data protection compliance (see below). Presumably, this is because they have a direct relationship with the end user and are able to collect ‘more’ data than other actors.
  5. Consent is the first legal basis that should be principally relied on in the IoT. In addition to the usual requirements (specific, informed, freely given and freely revocable), end users should be enabled to provide (or withdraw) granular consent: for all data collected by a specific thing; for specific data collected by anything; and for a specific data processing. However, in practice it is difficult to obtain informed consent, because it is difficult to provide sufficient notice in the IoT.
  6. Controllers are unlikely to be able to process IoT data on the basis that it is in their legitimate interests to do so, because, in the WP29’s view, this processing significantly affects the privacy rights of individuals. In other words, in the IoT there is a strong regulatory presumption against the legitimate interests ground and in favour of consent as the legal basis for processing.
  7. IoT devices constitute ‘terminal devices’ for EU law purposes, which means that any storage of information, or access to information stored, on an IoT device requires the end user’s consent (note: the requirement applies to any information, not just personal data).
  8. Transparency is absolutely essential to ensure that the processing is fair and that consent is valid. There are specific concerns around transparency in the IoT, for instance in relation to providing notice to individuals who are not the end users of a device (e.g. providing notice to a passer-by whose photo is taken by a smart watch).
  9. The right of individuals to access their data extends not only to data that is displayed to them (e.g. data about calories burnt that is displayed on a mobile app), but also to the raw data processed in the background to provide the service (e.g. the biometric data collected by a wristband to calculate the calories burnt).
  10. There are additional specific concerns and corresponding expectations around purpose limitation, data minimisation, data retention, security and enabling data subjects to exercise their rights.


It is also worth noting that some of the expectations set out in the Opinion do not currently have an express statutory footing, but rather reflect provisions of the draft EU Data Protection Regulation (which may or may not become law): privacy impact assessments, privacy by design, privacy by default, security by design and the right to data portability feature prominently in the WP29 Opinion.

The regulators’ recommendations

The WP29 makes recommendations regarding what IoT stakeholders should do in practice to comply with EU data protection law. The highlights include:

  1. All actors who are involved in the IoT or process IoT data as controllers should carry out Privacy Impact Assessments and implement Privacy by Design and Privacy by Default solutions; should delete raw data as soon as they have extracted the data they require; and should empower users to remain in control in accordance with the ‘principle of self-determination of data’.
  2. In addition, device manufacturers should:
    1. follow a security by design principle;
    2. obtain consents that are granular (see above), and the granularity should extend to enabling users to determine the time and frequency of data collection;
    3. notify other actors in the IoT supply chain as soon as a data subject withdraws their consent or opposes a data processing activity;
    4. limit device fingerprinting to prevent location tracking;
    5. aggregate data locally on the devices to limit the amount of data leaving the device;
    6. provide users with tools to locally read, edit and modify data before it is shared with other parties;
    7. provide interfaces to allow users to extract aggregated and raw data in a structured and commonly used format; and
    8. enable privacy proxies that inform users about what data is collected, and facilitate local storage and processing without transmitting data to the manufacturer.
  3. The Opinion sets out additional specific expectations for app developers, social platforms, data platforms, IoT device owners and additional data recipients.


Comment

I have no doubt that there are genuinely good intentions behind the WP29 Opinion and the Mauritius Declaration. What I am not sure about is whether the approach of the regulators will encourage behaviours that protect privacy without stifling innovation and impeding the development of the IoT. I am not even sure if, despite the good intentions, in the end the Opinion will encourage ‘better’ privacy protections in the IoT. I explain why I have these concerns and how I think organisations should be approaching privacy compliance in the IoT in Part 2 of this piece.

Global apps sweep: should developers be worried?

Posted on October 24th, 2014



A recent sweep, with participation from 26 data protection authorities (“DPAs”) across the world, revealed that a high proportion of mobile apps access large amounts of personal data without meeting their data privacy obligations.

How did they do?

Not well. Out of the 1,200 apps surveyed, 85% failed to clearly explain how they were collecting and using personal information, 59% did not display basic privacy information and one in three requested excessive personal information. Another common finding was that many apps failed to tailor privacy information to the small screen.

The Information Commissioner’s Office (“ICO”), the UK’s DPA, surveyed 50 apps, including many household names, and its results were in line with the global figures.

Rare examples of good practice included pop-up notifications asking permission prior to additional data being collected and basic privacy information with links to more detailed information for users who wish to know more.

What are they told to do?

As a result, Ofcom and ICO have produced some guidance for users on how to use apps safely. It is written in consumer-friendly language and contains straightforward advice on some common pitfalls such as checking content ratings or always logging out of banking apps.

Contrast this with the 25-page guidance from ICO aimed at app developers, which has drawn some criticism for being overly lengthy and complex. Rather than participate in research and point to long guidance documents, it would be more effective to promote simple rules (e.g. requiring pop-up notifications) and to hold app stores accountable for non-compliance. However, site tests demonstrate that users are irritated enough by constant pop-ups to stop using a site, so developers are reluctant to implement them.

Why is it important?

The lack of compliance is all the more alarming when read in conjunction with Ofcom research that surveyed a range of UK app users. Users view the apps environment as a safer, more contained space than browser-based internet access. Many believed apps to be discrete pieces of software with little interconnectivity, and were unaware of the virus threat or that apps can continue to run in the background. There is implicit trust in established brands and recognised app stores, which users felt must monitor and vet all apps before selling them. Peer recommendation also played a significant role in deciding whether to download an app.

This means little, if any, attention is paid to privacy policies and permission requests. Users interviewed generally felt full permission had to be granted prior to using the app and were frustrated by the lack of ability to accept some and refuse other permissions.

So what’s the risk for developers?

ICO has the power to fine those companies who breach the relevant laws up to £500,000. The threshold for issuing a fine is high, however, and this power has not yet been used in the context of mobile apps. Having said this, we know that ‘internet and mobile’ is one of ICO’s priority areas for enforcement action.

Perhaps a more realistic and potentially more damaging risk is the reputational and brand damage associated with being named and shamed publicly. Where a lower level of harm has been caused, ICO is more likely to seek undertakings that the offending company will change its practices. As we know, ICO publishes its enforcement actions on its website. For a company whose business model relies on processing data and on peer recommendations as the main way to grow its user base and its brand, the trust of its users is paramount and hard to rebuild once lost.

ICO has said it will be contacting app developers who need to improve their data collection and processing practices. The next stage for persistent offenders would be enforcement action.

Developers would be wise to pay attention. If enforcement action is not in itself a concern, ICO’s research showed that almost half of app users have decided against downloading an app due to privacy concerns. If that’s correct, privacy is important to mobile app users and could ‘make’ or ‘break’ a new app.

CNIL: a regulator to watch in 2014

Posted on March 18th, 2014



Over the years, the number of on-site inspections by the French DPA (CNIL) has risen constantly. Based on the CNIL’s latest statistics (see CNIL’s 2013 Annual Activity Report), 458 on-site inspections were carried out in 2012, a 19 percent increase compared with 2011. The number of complaints also rose, to 6,000 in 2012, the largest share of which (31 percent) related to telecom/Internet services. In 2012, the CNIL served 43 formal notices asking data controllers to comply. In total, the CNIL pronounced 13 sanctions, eight of which were made public. In the majority of cases (56 percent) the sanction was a simple warning, while fines were imposed in only 25 percent of cases.

The beginning of 2014 was marked by a landmark decision of the CNIL. On January 3, 2014, the CNIL pronounced a record fine against Google of €150,000 ($204,000) on the grounds that the terms of use available on its website since March 1, 2012, allegedly did not comply with the French Data Protection Act. Google was also required to publish this sanction on the homepage of Google.fr within eight days of it being pronounced. Google appealed this decision; however, on February 7, 2014, the State Council (“Conseil d’Etat”) rejected Google’s request to suspend the publication order.

Several lessons can be learnt from the CNIL’s decision. First, that the CNIL is politically motivated to come down hard on the Internet giants, especially those who claim that their activities do not fall within the remit of the French law. No, says the CNIL. Your activities target French consumers, and thus, you must comply with the French Data Protection Act even if you are based outside the EU. This debate has been going on for years and was recently discussed in Brussels within the EU Council of Ministers’ meeting in the context of the proposal for a Data Protection Regulation. As a result, Article 4 of Directive 95/46/EC could soon be amended to allow for a broader application of European data protection laws to data controllers located outside the EU.

Second, despite being the highest sanction ever pronounced by the CNIL, this is hardly a dissuasive financial sanction for a global business with large revenues. Currently, the CNIL cannot impose sanctions above €150,000, or €300,000 ($410,000) in the case of a second breach within five years of the first sanction, whereas some of its counterparts in other EU countries can impose much heavier sanctions; e.g., last December, the Spanish DPA pronounced a €900,000 ($1,230,000) fine against Google. This could soon change, however, in light of an announcement made by the French government that it intends to introduce this year a bill on “the protection of digital rights and freedoms,” which could significantly increase the CNIL’s enforcement powers.

Furthermore, it seems that the CNIL’s lobbying efforts within the French Parliament are finally beginning to pay off. A new law on consumer rights came into force on 17 March 2014, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections in addition to the existing on-site inspections. This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automatic data protection systems for as long as necessary to conduct its observations.” This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the activities of major Internet companies. The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.

Finally, the Google case is a good example of the EU DPAs’ recent efforts to conduct coordinated cross-border enforcement actions against multinational organizations. At the beginning of 2013, a working group was set up in Paris, led by the CNIL, for a simultaneous and coordinated enforcement action against Google in several EU countries. As a result, Google was inspected and sanctioned in multiple jurisdictions, including Spain and the Netherlands. Google is appealing these sanctions.

As the years pass, the CNIL continues to grow and to become better resourced. It is also more experienced and better organized. The CNIL is already very influential within the Article 29 Working Party, as recently illustrated by the Google case, and Isabelle Falque-Pierrotin, the chairwoman of the CNIL, was recently elected chair of the Article 29 Working Party. Thus, companies should pay close attention to the actions of the CNIL as it becomes a more powerful authority in France and within the European Union.

This article was first published in the IAPP’s Privacy Tracker on 27 February 2014 and was updated on 18th March 2014.

Belgian DPA overhauls enforcement strategy

Posted on October 21st, 2013



Belgium has long been one of the low-risk EU Member States in terms of data protection enforcement. Aside from the fact that pragmatism can be considered part of the Belgian nature, this view was also due to the fact that the Belgian DPA, the Privacy Commission, could be termed one of those so-called ‘toothless tigers’.

As De Standaard reports, it seems this is now about to change, with the Privacy Commission set to follow the example of the Dutch DPA by adopting a more severe enforcement strategy.

Until now, the Privacy Commission did not proactively investigate companies or sectors, despite the fact that the Belgian Privacy Act grants it such powers. However, the Privacy Commission has recently decided to establish a team of inspectors who will actively search for companies that process personal data in a non-compliant manner. It seems the Privacy Commission is finally adopting an approach which the CNIL has applied for a number of years, the idea being that each year a specific sector is subject to increased scrutiny.

In addition, anticipating the adoption of the Regulation, the Privacy Commission has called upon the Belgian legislator to grant it more robust enforcement powers. Currently, if a company is found to be in breach of the Belgian data protection laws, the Privacy Commission has a duty to inform the public prosecutor. However, in practice criminal prosecution for data protection non-compliance is virtually non-existent, which leads to de facto impunity. This could change drastically if greater enforcement powers are granted to the Privacy Commission.

In the wake of the coming Regulation, this new enforcement strategy does not come as a surprise. In addition, earlier this year Belgium faced its first high-profile, widely publicised data breach cases. The Ministry of Defence, the Belgian railroad company and the recruiting agency Jobat all suffered massive data leaks. More recently, the massive hacking of Belgacom’s affiliate BICS gave rise to a lot of controversy. It would appear that these cases highlighted to the Privacy Commission the limits of its current powers.

If even a pragmatic DPA such as the Privacy Commission starts adopting a more repressive enforcement strategy, it is clear that the days of complacency are fading. Organisations processing personal data really cannot afford to wait until the Regulation becomes effective in the next few years. They will have to make sure they have done their homework immediately, as it seems the DPAs won’t wait for the Regulation to show their teeth.

One-stop-shop – In search of legal and political effectiveness

Posted on October 7th, 2013



The proposed EU Data Protection Regulation is an ambitious piece of legislation by any measure. Perhaps the most ambitious element of all is the introduction of the one-stop-shop principle: one single data protection authority being exclusively competent over an organisation collecting and using data throughout the EU. The reason why this is such a big deal is that even if the law ends up being exactly the same across all Member States (in itself a massive achievement), regulators are human and often show different interpretations of the same issues and rules. So if one-stop-shop becomes a reality, all EU data protection regulators will simply have to accept the position adopted by the one deemed to be competent and keep their own interpretation to themselves. But will they?

Today the Council of the EU is debating how to structure and shape this principle in a way that provides the benefits that the European Commission and global organisations are seeking, whilst meeting the national expectations of each Member State at the same time. It is a matter of legal and political effectiveness. So far, and not surprisingly, the Council’s scale seems to be tilting towards greater national intervention than the Commission originally aimed for. Whilst most Member States appear to be in favour of the philosophy underlying the one-stop-shop mechanism, only a few accept that one single authority should have exclusive jurisdiction to supervise all of the processing activities of a pan-European data user and decide exclusively upon all measures (including penalties). They cite the likely detriment to the data protection rights of individuals as their main stumbling block.

Therefore, there are a number of possible changes to this principle that will be discussed today, including:

* Limiting the powers of the ‘competent’ authority to authorisation and consultation functions only. So basically, leaving the paperwork to one regulator whilst any other EU authorities would continue to have enforcement powers.

* Replacing the one-stop-shop with a co-decision model (at least for the most important cases) where all relevant regulators need to agree.

* Adopting a consultation model where the competent authority is legally required to consult the other supervisory authorities concerned with a view to reaching consensus.

* Allowing appeals by unhappy authorities to the European Data Protection Board, which would then collectively be empowered to make the final decision.

How realistic these potential changes are is no doubt something that will come up in the discussions. What is clear is that any weakening of the one-stop-shop principle will affect the effectiveness of the Commission’s core ‘one law/one regulator’ thinking.

What will be the impact of the revised OECD Guidelines?

Posted on September 24th, 2013 by



This month, the Organisation for Economic Cooperation and Development (OECD) published its first ever revision to the original 1980 guidelines on the protection of privacy and transborder flows of personal data. It has been over 30 years since the OECD published the first internationally agreed set of privacy principles, and now it seems armed and ready to tackle the modern challenges of the international privacy world. But what is the real impact of these provisions?

The primary aim of the Revised Guidelines is to increase organisations’ accountability for data security practices through a number of new mechanisms including an obligation on data controllers to implement a robust privacy management programme. There is also a shift to a more risk-based approach, with the guidelines focusing on ‘risk’ and ‘proportionality’.

The guidelines also introduce a number of other new provisions including: the implementation of national privacy strategies that are effectively coordinated at the highest levels of government; an obligation for member countries to support international arrangements promoting global interoperability and an obligation to notify authorities and individuals of data security breaches. 

The revisions are a clear indication of the OECD’s attempt to modernise their approach to international data flows and to strengthen privacy enforcement. They have also attempted to tighten their link with the EU regime by including ‘good practice’ references to different collaborative approaches taken by EU data protection authorities as a way of emphasising to its members the need for increased interoperability. 

But perhaps the most significant revision is the obligation to implement a robust privacy management programme. This is the first time that OECD members around the world will be uniformly required to ensure data controllers implement a comprehensive programme. In addition, controllers will be required to ensure the privacy programme:

  • gives effect to the Revised Guidelines for all personal data under its control;
  • is tailored to the structure, scale, volume and sensitivity of its operations;
  • provides for appropriate safeguards based on privacy risk assessment;
  • is integrated into its governance structure and establishes internal oversight mechanisms;
  • includes plans for responding to inquiries and incidents; and
  • is updated in light of ongoing monitoring and periodic assessment. 

The OECD's proposal attempts to align with the EU's approach of ensuring that privacy mechanisms are properly documented and supported by effective procedures. Will this be the catalyst that motivates organisations and data protection authorities around the world to adopt a uniform approach to data privacy? We shall see.

What will happen if there is no new EU privacy law next year

Posted on June 20th, 2013 by



The European Parliament has just announced another delay affecting the vote on its version of the EU Data Protection Regulation. That means that we will now not know where the Parliament truly stands on this issue until September or October at the earliest. Although this was sort of expected, optimistic people like me were still hoping that the LIBE Committee would reach enough consensus to issue a draft this side of the Summer, but clearly the political will is not quite there. This is disappointing for a number of reasons, so in case the MEPs need a bit of motivation to get their act together, here are a few things that are likely to happen if the new Regulation is not adopted before next year's deadline:

* Inconsistent legal regimes throughout the EU – The current differences in both the letter of the law and the way it is interpreted are confusing at best, and one of the biggest weaknesses in achieving the right level of compliance.

* Non-application of EU law to global Internet players – Thanks to its 1990s references to the 'use of equipment', the Directive's framework is arguably not applicable to Internet businesses based outside the EU, even if they collect data from millions of EU residents. Is that a good idea?

* Death by paperwork – One of the most positive outcomes of the proposed Regulation will be the replacement of the paper-based compliance approach of the Directive with a more practical focus. Do we really want to carry on spending compliance resources filling in forms?

* Uncertainty about the meaning of personal data – Constantly evolving technology and the increasing value of data generated by our interaction with that technology have shaken the current concept of personal data. We badly need a 21st century definition of personal data and its different levels of complexity.

* Massive security exposures – The data security obligations under the existing Directive are rather modest compared to the well publicised wish list of regulators and, frankly, even some of those legal frameworks regarded as ‘inadequate’ by comparison to European data protection are considerably ahead of Europe in areas like data breach notification.

* Toothless regulators – Most EU data protection authorities still have very weak enforcement powers. Without going overboard, the Regulation is their chance to make their supervisory role truly effective.

The need to modernise EU data protection law is real and, above all, overdue. A bit of compromise has to be better than not doing anything at all.

UK e-privacy enforcement ramps up

Posted on April 29th, 2013 by



The times when one could say that the UK ICO was a fluffy, toothless regulator are over. The ICO has recently been going through its most prolific period of enforcement activity: by the end of 2012 it had imposed 25 fines, issued 3 enforcement notices, secured 6 prosecutions and obtained 31 undertakings, and 2013 looks set to bring similar activity. In March, for example, the ICO issued its first monetary penalty for a serious breach of the Privacy and Electronic Communications Regulations 2003 ('PECR') relating to live marketing calls: a £90,000 fine for Glasgow-based DM Design for unwanted marketing calls.

To coincide with such activities, the ICO has recently updated the enforcement section of its website. What this tells us is that whilst data security breaches will continue to be a significant area of focus for the ICO, PECR breaches will also figure highly in the ICO's enforcement agenda. In this regard, the ICO tells us that it has already been active in the areas of 'spam texts', sales calls and cookies.

Spam texts are identified as 'one of the biggest concerns to consumers' (the ICO refers to texts about accident and 'PPI' claims in particular), and the ICO refers to the work it has carried out with members of the mobile phone industry to identify an organisation which is now the subject of enforcement action. The ICO also identifies 'live' sales calls and 'automated calls' as other areas of priority, and has explicitly identified (and published) the names of a number of companies that it has either met to discuss compliance issues or is actively monitoring over 'concerns' about compliance, with a view to considering enforcement action. This relates not only to UK-based companies but also to those based overseas that target UK-based consumers. The ICO tells us that it is actively working with the FTC in the US and with other regulators based in Ireland, Belgium and Spain through Consumer Protection Co-operation arrangements.

Finally, the ICO tells us that between January and March 2013 it received a further 87 reported concerns via its website from individuals about cookies (far fewer than the number of concerns about unwanted marketing communications, it has to be said). The ICO will continue to focus on those websites that are doing nothing to raise awareness of cookies or obtain users' consent, and also on those sites it receives complaints about or that are 'visited most by consumers'. However, the ICO also says that it has 'maintained a consumer threat level of "low"' in this area due to the low level of concerns reported.

It is obvious that as consumer technologies such as tablets and smartphones continue to develop, so too will the ICO's enforcement strategy in this area. Compliance with PECR should therefore also figure highly in any business's data protection compliance strategy.

The Leveson Report and UK Data Protection

Posted on November 29th, 2012 by



So, the Leveson Report has been published. Whilst I have not yet read all 2,000+ pages, the key recommendations that Lord Justice Leveson has made to the Ministry of Justice about the Data Protection Act are:

* Amend s. 32 (the journalism, literature and art exemption), including making it narrower

* Amend the right to compensation under s. 13 so that it includes compensation for pure distress

* Repeal certain procedural provisions around journalism in the DPA

* Consider requiring the ICO to give special regard to the balance of the public interest in freedom of expression alongside the public interest in upholding the DPA

* Bring into force the amendments made to s. 55 providing for increased sentencing and an enhanced public interest defence with respect to journalism

* Extend the prosecuting powers of the ICO to include any offence which also constitutes a breach of the Data Protection Principles

* Impose a new duty on the ICO to consult with the Crown Prosecution Service regarding the exercise of any power to undertake criminal proceedings

* Amend the DPA to reconstitute the ICO as an Information Commission led by a Board of Commissioners

The Report also has a whole part examining the relationship between the press and data protection, including comments on the structure and workings of the ICO.