Archive for the ‘Article 29 Working Party’ Category

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014 by



The Article 29 Working Party (“WP 29”) has recently issued an Opinion on Personal Data Breach Notification (the “Opinion”). The Opinion focuses on the interpretation of the criteria under which individuals should be notified of breaches that affect their personal data.

Before we analyse the takeaways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the “e-Privacy Directive”), and applies to providers of publicly available electronic communications services. In some EU member states (for example, Germany), this requirement has been extended to controllers in other sectors or to all controllers. Similarly, some data protection regulators have issued guidance advising controllers to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation”) (see our blog regarding the Regulation here), which sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals without undue delay of breaches that are likely to adversely affect their personal data or privacy, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (but not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures rendering the data unintelligible to any person not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing seven practical scenarios of breaches that would meet the ‘adverse effect’ test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying individuals altogether.

From the Opinion, it is worth highlighting:

The test. The ‘adverse effect’ test is interpreted broadly to include ‘secondary effects’. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, since not all ‘potential’ consequences are ‘likely’ to happen, and it will probably lead to a conservative reading of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible. Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exemption from notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key), hashing (with a strong key), back-ups, physical and logical access controls, and regular monitoring of vulnerabilities.
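To make the ‘unintelligible data’ idea concrete, here is a minimal sketch, in Python, of keyed hashing of the kind the Opinion mentions. It is purely illustrative, not a compliance recipe; the data value and the key handling are hypothetical, and in practice the key would live in a key management system, never alongside the data it protects. The point is simply that, without the secret key, a stored digest reveals nothing useful about the underlying personal data.

```python
import hashlib
import hmac
import secrets

# Hypothetical: a strong random 256-bit key held separately from the data.
key = secrets.token_bytes(32)

def keyed_hash(value: str, key: bytes) -> str:
    """Return a keyed hash (HMAC-SHA256) of a personal data value.

    Unlike a plain hash, a keyed hash cannot even be checked against
    guessed inputs by someone who lacks the key - which is why the
    Opinion singles out hashing 'with a strong key'.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

digest = keyed_hash("jane.doe@example.com", key)
print(digest)       # 64 hex characters, unintelligible without the key
print(len(digest))  # 64
```

An attacker who obtains only the digests (and not the key) cannot reverse them or confirm guesses, which is the property that could, in principle, support the notification exemption discussed above.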

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall into three categories: availability, integrity and confidentiality breaches. Applying this model may help controllers analyse a breach too.

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement applying to all controllers (regardless of the sector they are in) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved. Under it, controllers would be subject to strict notification requirements towards both data protection regulators and individuals. This Opinion provides some insight into how the European regulators may interpret those requirements under the General Data Protection Regulation.

Therefore, controllers would be well advised to prepare for what is coming their way (see previous blog here). Focus should be on the application of security measures (to prevent a breach and to limit the adverse effects on individuals once a breach has occurred) and on putting procedures in place to manage breaches effectively. Start today: burying your head in the sand is no longer an option.

Article 29 Working Party issues draft model clauses for processor-to-subprocessor data transfers

Posted on April 9th, 2014 by



On 21 March 2014, the Article 29 Working Party (“WP 29”) issued a working document (WP 214) proposing new contractual clauses for cross-border transfers between an EU-based processor and a non-EU-based sub-processor (the “draft model clauses”). This document addresses the situation where personal data are initially transferred by a controller to a processor within the European Union (“EU”) and are subsequently transferred by the processor to a sub-processor located outside the EU.

Back in 2010, the EU Commission adopted a revised version of its model clauses for transfers between a controller in the EU and a processor outside the EU, partly to integrate new provisions on sub-processing. However, it deliberately chose not to apply these new model clauses to situations where a processor established in the EU, processing personal data on behalf of a controller established in the EU, subcontracts its processing operations to a sub-processor established in a third country (see recital 23 of the EU Commission’s Decision 2010/87/EU).

Absent Binding Corporate Rules, many EU data processors were left with few options for transferring data outside the EU. This issue is particularly relevant in the context of a growing digital economy where more and more companies are transferring their data to cloud computing service providers, who are often based outside the EU. Negotiating ad hoc model clauses on a case-by-case basis with the DPAs seemed to be the only solution available. This is precisely what the Spanish DPA did in 2012 when it adopted a specific set of standard contractual clauses for processor-to-sub-processor transfers and put in place a new procedure allowing data processors based in Spain to obtain authorizations for transferring data processed on behalf of their customers (the data controllers) to sub-processors based outside the EU.

This inspired the WP 29 to use the Spanish model as a basis for preparing draft ad hoc model clauses for transfers from an EU data processor to a non-EU sub-processor that could be used by any processor established in the EU. However, these draft model clauses have yet to be formally adopted by the European Commission before they can be used by companies, and it may take a while before the EU Commission adopts a new official set of model clauses for data processors. Meanwhile, companies cannot rely on the draft model clauses to obtain approval from their DPAs to transfer data outside the EU. While the WP 29’s document is certainly a step in the right direction, it remains to be seen how these draft model clauses will be received by the business sector and whether they can work in practice.

Below is a list of the key provisions under the draft model clauses for data processors:

  • Structure: the overall structure and content of these draft clauses are similar to those that already exist under the controller-to-processor model clauses, but have been adapted to the context of transfers between a processor and sub-processor.
  • Framework Contract: the EU data processor must sign a Framework Contract with its controller, which contains a detailed list of obligations (16 in total) specified in the draft model clauses – including restrictions on onward sub-processing.  The practical effect of this could be to see the service terms between controllers and their EU processors expand to include a substantially greater number of data protection commitments, all with a view to facilitating future extra-EU transfers by the processor to international sub-processors under these model clauses.
  • Sub-processing: the EU processor must obtain its controller’s prior written approval in order to subcontract data processing activities to non-EU processors. It is up to the controller to decide, under the Framework Contract, whether it grants a general consent up front for all sub-processing activities, or whether a specific case-by-case approval is required each time the EU processor intends to subcontract its activities. The same applies to the sub-processing by the importing non-EU sub-processors. Any non-EU sub-processor must be contractually bound by the same obligations (including the technical and organisational security measures) as those that are imposed on the EU processor under the Framework Agreement.
  • List of sub-processing agreements: the EU processor must keep an updated list of all sub-processing agreements concluded by its non-EU sub-processor and notified to it (at least once per year) and must make this list available to the controller.
  • Third party beneficiary clause: depending on the situation, the data subject has three options to enforce breaches of the model clauses against the data processing parties – initially against the exporting EU data processor (where the controller has factually disappeared or has ceased to exist in law), then against the importing non-EU data processor (where both the controller and the EU data processor have factually disappeared or have ceased to exist in law), or against any subsequent sub-processor (where the controller, the exporting EU data processor and the importing non-EU data processor have all factually disappeared or have ceased to exist in law).
  • Audits: the exporting EU data processor must agree, at the request of its controller, to submit its data processing facilities for an audit of the processing activities covered by the Framework Contract, to be carried out by the controller itself or, alternatively, by an independent inspection body selected by the controller. The DPA competent for the controller has the right to conduct an audit of the exporting EU data processor, the importing non-EU data processor, and any subsequent sub-processor under the same conditions as those that would apply to an audit of the controller. The recognition of third party independent audits is especially important for cloud industry businesses which – for security and operational reasons – will often be reluctant to have clients conduct on-site audits but will typically be more comfortable holding themselves to independent third party audits.
  • Disclosure of the Framework Contract: the controller must make available to the data subjects and the competent DPA upon request a copy of the Framework Contract and any sub-processing agreement with the exception of commercially sensitive information which may be removed. In practice, it is questionable how many non-EU suppliers will be willing to sign sub-processing agreements with EU data processors on the understanding that provisions within those agreements could end up being disclosed to regulators and other third parties.
  • Termination of the Framework Contract: where the exporting EU processor, the importing non-EU data processor or any subsequent sub-processor fails to fulfil their model clauses obligations, the controller may suspend the transfer of data and/or terminate the Framework Contract.

Click here to access the WP 29’s working document WP 214 on draft ad hoc contractual clauses “EU data processor to non-EU sub-processor”.

Click here to view the article published in the World Data Protection Report.

CNIL: a regulator to watch in 2014

Posted on March 18th, 2014 by



Over the years, the number of on-site inspections by the French DPA (CNIL) has been on a constant rise. Based on the CNIL’s latest statistics (see CNIL’s 2013 Annual Activity Report), 458 on-site inspections were carried out in 2012, which represents a 19 percent increase compared with 2011. The number of complaints also rose, to 6,000 in 2012, most of which related to telecom/Internet services (31 percent). In 2012, the CNIL served 43 formal notices asking data controllers to comply. In total, the CNIL pronounced 13 sanctions, eight of which were made public. In the majority of cases, the sanction pronounced was a simple warning (56 percent), while fines were imposed in only 25 percent of cases.

The beginning of 2014 was marked by a landmark decision of the CNIL. On January 3, 2014, the CNIL pronounced a record fine against Google of €150,000 ($204,000) on the grounds that the terms of use available on its website since March 1, 2012, allegedly did not comply with the French Data Protection Act. Google was also required to publish this sanction on the homepage of Google.fr within eight days of it being pronounced. Google appealed this decision, however, on February 7th, 2014, the State Council (“Conseil d’Etat”) rejected Google’s claim to suspend the publication order.

Several lessons can be learnt from the CNIL’s decision. First, that the CNIL is politically motivated to hit the Internet giants hard, especially those who claim that their activities do not fall within the remit of French law. No, says the CNIL. Your activities target French consumers, and thus you must comply with the French Data Protection Act even if you are based outside the EU. This debate has been going on for years and was recently discussed in Brussels at the EU Council of Ministers’ meeting in the context of the proposal for a Data Protection Regulation. As a result, Article 4 of Directive 95/46/EC could soon be amended to allow for a broader application of European data protection laws to data controllers located outside the EU.

Second, despite being the highest sanction ever pronounced by the CNIL, this is hardly a dissuasive financial sanction against a global business with large revenues. Currently, the CNIL cannot pronounce sanctions above €150,000, or €300,000 ($410,000) in the case of a second breach within five years of the first sanction, whereas some of its counterparts in other EU countries can pronounce much heavier sanctions; e.g., last December, the Spanish DPA pronounced a €900,000 ($1,230,000) fine against Google. This could soon change, however, in light of an announcement made by the French government that it intends to introduce this year a bill on “the protection of digital rights and freedoms,” which could significantly increase the CNIL’s enforcement powers.

Furthermore, it seems that the CNIL’s lobbying efforts within the French Parliament are finally beginning to pay off. A new law on consumer rights came into force on 17 March 2014, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections in addition to the existing on-site inspections. This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automatic data protection systems for as long as necessary to conduct its observations.” This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the activities of major Internet companies. The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.

Finally, the Google case is a good example of the EU DPAs’ recent efforts to conduct coordinated cross-border enforcement actions against multinational organizations. At the beginning of 2013, a working group was set up in Paris, led by the CNIL, for a simultaneous and coordinated enforcement action against Google in several EU countries. As a result, Google was inspected and sanctioned in multiple jurisdictions, including Spain and the Netherlands. Google is appealing these sanctions.

As the years pass by, the CNIL continues to grow and to become more resourceful. It is also more experienced and better organized. The CNIL is already very influential within the Article 29 Working Party, as recently illustrated by the Google case, and Isabelle Falque-Pierrotin, the chairwoman of the CNIL, was recently elected chair of the Article 29 Working Party. Thus, companies should pay close attention to the actions of the CNIL as it becomes a more powerful authority in France and within the European Union.

This article was first published in the IAPP’s Privacy Tracker on 27 February 2014 and was updated on 18th March 2014.

BCR for processors get EU regulators’ vital endorsement

Posted on May 1st, 2013 by



The fact that, with everything that is going on in the world of data protection right now, the Article 29 Working Party has devoted a thorough 19-page explanatory document to clarifying and endorsing the role of BCR for Processors, or “Binding Safe Processor Rules”, is very telling. It is nearly 10 years since BCR were conceived and, whilst the approval process is not precisely a walk in the park, much has been achieved in terms of their status, simplification and even international recognition. However, the idea of applying the same approach to an international group of vendors or to cloud service providers is still quite novel.

The prospect of the forthcoming EU data protection framework specifically recognising both flavours of BCR is obviously encouraging but, right now, the support provided by the Working Party is invaluable. The benefits of BSPR are well documented – easier contractual arrangements for customers and suppliers, a one-stop shop for data transfers compliance for cloud customers, no need for cumbersome model clauses… It sounds like a much-needed panacea to overcome the tough EU restrictions on international data transfers affecting global outsourcing and data processing operations. But, as in the early days of the traditional BCR, potential suitors need to know that the idea is workable and that regulators will value the efforts made to achieve safe processor status.

Those who were already familiar with the previous opinions by the Working Party on BSPR – in particular WP195 – will not find the content of the new opinion particularly surprising. However, there are very useful and reassuring pointers in there, as highlighted by the following key statements and clarifications:

*    The outsourcing industry has been constant in its request for a new legal instrument that would allow for a global approach to data protection in the outsourcing business and officially recognise internal rules organisations may have implemented.

*    That kind of legal instrument would provide an efficient way to frame massive transfers made by a processor to sub-processors which are part of the same organisation and act on behalf of, and under the instructions of, a controller.

*    BCR for processors should be understood as adequate safeguards provided by the processor to the controller allowing the latter to comply with applicable EU data protection law.

*    However, BCR for processors do not aim to shift controllers’ duties to processors.

*    A processor’s organisation that has implemented BCR for processors will not need to sign contracts to frame transfers with each of the sub-processors that are part of its organisation, as BCR for processors adduce safeguards to data transferred and processed on behalf of, and under the instructions of, a controller.

*    BCR for processors already “approved” at EU level will be referred to by the controller as the appropriate safeguards proposed for the international transfers.

*    Updates to the BCR for processors or to the list of the members of the BCR are possible without having to re-apply before the data protection authorities.

So in summary, and despite the detailed requirements that must be met, the overall approach of the Working Party is very “can do” and pragmatic. To finish things off in a collaborative manner, the Working Party points out at the end of the document that further input from interested circles and experts on the basis of the experience obtained will be welcomed. Keep it up!

 

What will happen to Safe Harbor?

Posted on April 27th, 2013 by



As data protection-related political dramas go, the debate about the suitability and future viability of Safe Harbor is right at the top. The truth is that even when the concept was first floated by the US Department of Commerce as a self-regulatory mechanism to enable personal data transfers between the EU and the USA, and avert the threat of a trade war, it was clear that the idea would prove controversial. The fact that an agreement was finally reached between the US Government and the European Commission after several years of negotiations did not settle the matter, and European data protection authorities have traditionally been more or less publicly critical of the arrangement. The level of discomfort with Safe Harbor as an adequate mechanism in accordance with European standards was made patently obvious in the Article 29 Working Party Opinion on cloud computing of 2012, which argued that sole self-certification with Safe Harbor would not be sufficient to protect personal data in a cloud environment.

The Department of Commerce has now issued its own clarifications in response to the concerns raised by the Working Party Opinion. Understandably, the Department of Commerce makes a fierce defence of Safe Harbor as an officially recognised mechanism, which was approved by the European Commission and cannot be dismissed by the EU regulators. That is and will always be correct. Whilst the clarifications do not go into the detail of the Working Party Opinion, they certainly confirm that as far as data transfers are concerned, a Safe Harbor certification provides a public guarantee of adequate protection under the scrutiny of the Federal Trade Commission.

Such robust remarks will be music to the ears of those US cloud computing service providers that have chosen to rely on Safe Harbor to show their European compliance credentials. But the debate is far from over. The European regulators are unlikely to change their minds any time soon and, if their enforcement powers increase and allow them to go after cloud service providers directly (rather than their customers), as intended by the draft Data Protection Regulation, they will be keen to put those powers into practice. In addition, we are at least a year away from the new EU data protection legal framework being agreed, but some of the stakeholders are using the opportunity of a new law to reopen the validity of Safe Harbor, adding to the sense of uncertainty about its future.

If I were to make a prediction about what will happen to Safe Harbor, I would say that the chances of Safe Harbor disappearing altogether are nil. However, it is very likely that the European Commission will be forced to reopen the discussions about the content of the Safe Harbor Principles in an attempt to bring them closer to the requirements of the new EU framework and, indeed, Binding Corporate Rules. That may actually be a good outcome for everyone because it will help the US Government assert its position that Safe Harbor matches the desired privacy standards – particularly if some tweaks are eventually introduced to incorporate new elements of the EU framework – and it may address once and for all the perennial concerns of the EU regulators.

 

Designing privacy for mobile apps

Posted on March 16th, 2013 by



My phone is my best friend.  I carry it everywhere with me, and entrust it with vast amounts of my personal information, for the most part with little idea about who has access to that information, what they use it for, or where it goes.  And what’s more, I’m not alone.  There are some 6 billion mobile phone subscribers out there, and I’m willing to bet that most – if not all of them – are every bit as unaware of their mobile data uses as I am.

So it’s hardly surprising that the Article 29 Working Party has weighed in on the issue with an “opinion on apps on smart devices” (available here).  The Working Party splits its recommendations across the four key players in the mobile ecosystem (app developers, OS and device manufacturers, app stores and third parties such as ad networks and analytics providers), with app developers receiving the bulk of the attention.

Working Party recommendations

Most of the Working Party’s recommendations don’t come as a great surprise: provide mobile users with meaningful transparency, avoid data usage creep (data collected for one purpose shouldn’t be used for other purposes), minimise the data collected, and provide robust security.  But other recommendations will raise eyebrows, including that:

(*)  the Working Party doesn’t meaningfully distinguish between the roles of an app publisher and an app developer – mostly treating them as one and the same.  So, the ten-man design agency engaged by Global Brand plc to build it a whizzy new mobile app is effectively treated as having the same compliance responsibilities as Global Brand, even though it will ultimately be Global Brand who publicly releases the app and exploits the data collected through it;

(*)  the Working Party considers EU data protection law to apply whenever a data collecting app is released into the European market, regardless of where the app developer itself is located globally.  So developers who are based outside of Europe but who enjoy global release of their app on Apple’s App Store or Google Play may unwittingly find themselves subjected to EU data protection requirements;

(*)  the Working Party takes the view that device identifiers like UDID, IMEI and IMSI numbers all qualify as personal data, and so should be afforded the full protection of European data protection law.  This has a particular impact on the mobile ad industry, which typically collects these numbers for ad serving and ad tracking purposes but aims to mitigate regulatory exposure by carefully avoiding collection of “real world” identifiers;

(*)  the Working Party places a heavy emphasis on the need for user opt-in consent, and does not address situations where the very nature of the app may make it so obvious to the user what information the app will collect as to make consent unnecessary (or implied through user download); and

(*)  the Working Party does not address the issue of data exports.  Most apps are powered by cloud-based functionality and supported by global service providers meaning that, perhaps more than in any other context, the shortfalls of common data export solutions like model clauses and safe harbor become very apparent.

Designing for privacy
Mobile privacy is hard.  In her guidance on mobile apps, the California Attorney-General rightly acknowledged that: “Protecting consumer privacy is a team sport. The decisions and actions of many players, operating individually and jointly, determine privacy outcomes for users. Hardware manufacturers, operating system developers, mobile telecommunications carriers, advertising networks, and mobile app developers all play a part, and their collaboration is crucial to enabling consumers to enjoy mobile apps without having to sacrifice their privacy.”
Building mobile apps that are truly privacy compliant requires a privacy by design approach from the outset.  But, for any mobile app build, there are some top tips that developers should be aware of:
  1. Always, always have a privacy policy.  The poor privacy policy has been much maligned in recent years but, whether or not it’s the best way to tell people what you do with their information (it’s not), it still remains an expected standard.  App developers need to make sure they have a privacy policy that accurately reflects how they will use and protect individuals’ personal information and make this available both prior to download (e.g. published on the app store download page) and in-app.  Not having this is a sure-fire way to fall foul of privacy authorities – as evidenced in the ongoing Delta Air Lines case.
  2. Surprise minimisation.  The Working Party emphasises the need for user consents and, in certain contexts, consent will of course be appropriate (e.g. when accessing real-time GPS data).  But, to my mind, the better standard is that proposed by the California Attorney-General of “surprise minimisation”, which she explains as the use of “enhanced measures to alert users and give them control over data practices that are not related to an app’s basic functionality or that involve sensitive information.” Just-in-time privacy notices combined with meaningful user controls are the way forward.
  3. Release “free” and “premium” versions.  The Working Party says that individuals must have real choice over whether or not apps collect personal information about them.  However, developers will commonly complain that real choice simply isn’t an option – if they’re going to provide an app for free, then they need to collect and monetise data through it (e.g. through in-app targeted advertising).  An obvious solution is to release two versions of the app – one for “free” that is funded by exploiting user data and one that is paid for, but which only collects user data necessary to operate the app.  That way, users who don’t want to have their data monetised can choose to download the paid-for “premium” version instead – in other words, they have choice.
  4. Provide privacy menu settings.  It’s surprising how relatively few apps offer this, but privacy settings should be built into app menus as a matter of course – for example, offering users the ability to delete app usage histories, turn off social networking integration, restrict location data use etc.  Empowered users are happy users, and happy users mean happy regulators; and
  5. Know Your Service Providers.  Apps serve as a gateway to user data for a wide variety of mobile ecosystem operators – and any one of those operators might, potentially, misuse the data it accesses.  Developers need to be particularly careful when integrating third party APIs into their apps, making sure that they properly understand their service providers’ data practices.  Failure to do proper due diligence will leave the developer exposed.
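Tip 4 above is easy to act on from day one of a build. As a rough sketch, here is what a minimal in-app privacy settings model might look like, written in Python for brevity (a real app would implement this in its platform language, and every name here is hypothetical): privacy-protective defaults, user-flippable toggles, and a way to wipe locally held usage data on demand.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    """Illustrative in-app privacy controls; all names are hypothetical."""
    share_location: bool = False       # off by default: privacy by design
    social_integration: bool = False   # user must opt in to sharing
    usage_history: list = field(default_factory=list)

    def record_usage(self, event: str) -> None:
        """Log an in-app event locally (only if the user hasn't cleared it)."""
        self.usage_history.append(event)

    def delete_usage_history(self) -> None:
        """Let the user wipe their locally held usage data on demand."""
        self.usage_history.clear()

settings = PrivacySettings()
settings.record_usage("opened_map_screen")
settings.delete_usage_history()
print(settings.usage_history)   # []
print(settings.share_location)  # False
```

The design choice worth noting is that everything defaults to the most private state, so a user who never opens the settings menu is never worse off for it.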

Any developer will tell you that you don’t build great products by designing to achieve compliance; instead, you build great products by designing a great user experience.  Fortunately, in privacy, both goals are aligned.  A great privacy experience is necessarily part and parcel of a great user experience, and developers need to address users’ privacy needs at the earliest stages of development, through to release and beyond.

2013 to be the year of mobile regulation?

Posted on January 4th, 2013 by



After a jolly festive period (considerably warmer, I’m led to understand, for me in Palo Alto than for my colleagues in the UK), the New Year is upon us and privacy professionals everywhere will no doubt be turning their minds to what 2013 has in store for them.  Certainly, there are plenty of developments to keep abreast of, ranging from the ongoing EU regulatory reform process through to the recent formal recognition of Binding Corporate Rules for processors.  My partner, Eduardo Ustaran, has posted an excellent blog outlining his predictions here.

But one safe bet for greater regulatory attention this year is mobile apps and platforms.  Indeed, with all the excitement surrounding cookie consent and EU regulatory reform, mobile has remained largely overlooked by EU data protection authorities to date.  Sure, we’ve had the Article 29 Working Party opine on geolocation services and on facial recognition in mobile services.  The Norwegian Data Protection Inspectorate even published a report on mobile apps in 2011 (“What does your app know about you?“).  But really, that’s been about it.  Pretty uninspiring, not to mention surprising, when consumers are fast abandoning their creaky old desktop machines and accessing online services through shiny new smartphones and tablets: Forbes even reports that mobile access now accounts for 43% of total minutes spent on Facebook by its users.

Migration from traditional computing platforms to mobile computing is not, in and of itself, enough to guarantee regulator interest.  But there are plenty of other reasons to believe that mobile apps and platforms will come under increased scrutiny this year:

1.  First, meaningful regulatory guidance is long overdue.  Mobiles are inherently more privacy invasive than any other computing platform.  We entrust more data to our mobile devices (in my case, my photos, address books, social networking, banking and shopping account details, geolocation patterns, and private correspondence) than any other platform, and generally with far less security – that four-digit PIN really doesn’t pass muster.  We download apps from third parties we’ve often scarcely heard of, with no idea as to what information they’re going to collect or how they’re going to use it, and grant them all manner of permissions without even thinking – why, exactly, does that flashlight app need to know details of my real-time location?  Yet despite the huge potential for privacy invasion, there persists a broad lack of understanding as to who is accountable for compliance failures (the app store, the platform provider, the network provider or the app developer) and what measures they should be implementing to avoid privacy breaches in the first place.  This uncertainty and confusion makes regulatory involvement inevitable.

2.  Second, regulators are already beginning to get active in the mobile space – if this were not the case, the point above would otherwise be pure speculation.  It’s not, though.  On my side of the Pond, we’ve recently seen the California Attorney General file suit against Delta Air Lines for its failure to include a privacy policy within its mobile app (this action itself following letters sent by the AG to multiple app providers warning them to get their acts together).  Then, a few days later, the FTC released a report on children’s data collection through mobile apps, in which it indicated that it was launching multiple investigations into potential violations of the Children’s Online Privacy Protection Act (COPPA) and the FTC Act’s unfair and deceptive practices regime.  The writing is on the wall, and it’s likely EU regulators will begin following the FTC’s lead.

3.  Third, the Article 29 Working Party intends to do just that.  In a press release in October, the Working Party announced that “Considering the rapid increase in the use of smartphones, the amount of downloaded apps worldwide and the existence of many small-sized app-developers, the Working Party… [will] publish guidance on mobile apps… early next year.” So guidance is coming and, bearing in mind that the Article 29 Working Party is made up of representatives from national EU data protection authorities, it’s safe to say that mobile privacy is riding high on the EU regulatory agenda.

In 2010, the Wall Street Journal reported: “An examination of 101 popular smartphone “apps”—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone’s unique device ID to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone’s location in some way. Five sent age, gender and other personal details to outsiders… Many apps don’t offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn’t provide privacy policies on their websites or inside the apps at the time of testing.”  Since then, there hasn’t been a great deal of improvement.  My money’s on 2013 being the year that this will change.

Article 29 Working Party pushes for Binding Safe Processor Rules

Posted on December 9th, 2012 by




The Article 29 Working Party has taken another crucial step towards the full recognition of BCR for processors or ‘Binding Safe Processor Rules’. Following the unqualified backing by the European Commission in the proposal for a Data Protection Regulation early in 2012 and the publication of the criteria for approval by the Working Party itself last summer, an agreement has now been reached by the European data protection authorities on the application and approval process.

The official announcement of a mutual recognition and cooperation procedure-type approach will take place in January 2013 and shortly after, the Working Party will issue the appropriate application form. This is the strongest indication to date that applications for BCR for processors will be dealt with in the same way as the traditional BCR, opening the door for hybrid BCRs for those organisations with global data protection programmes that apply to their dual role as controllers (in respect of their own data) and processors (in respect of their clients’ data, as in the case of cloud service providers).


A week in Brussels

Posted on November 16th, 2012 by



Life is always busy in Brussels.  Policy making and legislative activities never stop but this particular week has been rather eventful for the current European data protection reform process.  The Data Protection Congress organised by the IAPP has served as an open and constructive forum for some of the key players to get together and debate their views in front of a very sophisticated audience.  The most visible message of the week has been that all parties involved – European Parliament, Commission, Council of the EU, EDPS and of course the data protection authorities – are now working at full pace to consider the issues, listen to other stakeholders and inject their thinking into the end result.

Here are some of the key takeaways about the data protection legislative reform we heard at the IAPP Data Protection Congress:

*    Francoise Le Bail, Director General for Justice at the European Commission, kicked off a prestigious roster of keynote speakers by acknowledging the need to simplify the current proposal, particularly for the benefit of SMEs.  However, she fiercely defended two commonly criticised aspects of the draft Regulation: the Commission’s delegated acts, which she believes are needed to maintain the Regulation’s flexibility; and monetary fines, which are meant to give the new framework much needed teeth.

*    For Jan Philipp Albrecht, Rapporteur of the LIBE Committee with primary responsibility for leading the European Parliament’s position, the main challenge is to convince everyone (individuals and businesses) that a harmonised approach is needed.  Reiterating his aim to approve the final text before the next European Parliament elections in June 2014, he emphasised the need for a regulation (rather than a directive) for the sake of certainty going forward, making clear LIBE’s stance on this issue.  Mr Albrecht also said that whilst we are on the right track in terms of principles, we also need to achieve foreseeability, which suggests that some of the more technology-specific provisions will be revised.

*    Jacob Kohnstamm, Chairman of the Article 29 Working Party, showed his concern about some essential elements being under attack, namely: personal data, consent and purpose limitation.  With regard to personal data, he would favour a slight extension of the definition to cover any data that may be used to single out individuals.  He believes that it is crucial to leave the concept of consent untouched because if data protection is a fundamental right, the individual’s consent must override everything else.  With regard to purpose limitation, as well as profiling, Mr Kohnstamm announced that the Article 29 Working Party is working on alternative proposals.  Not surprisingly, Mr Kohnstamm is wary of the ‘one stop shop’ principle and emphasised the role of the proposed European Data Protection Board to get the balance right.

*    The ‘one stop shop’ principle became one of the most heatedly debated topics.  Isabelle Falque-Pierrotin, President of the CNIL, indicated that the current proposal was simply not realistic and that local data protection authorities should not be prevented from enforcing the law.  Jan Philipp Albrecht responded by saying that it is very important to have one competent regulator to ensure consistency of interpretation and actions.  The debate on this issue is clearly wide open, with Peter Hustinx, the European Data Protection Supervisor, taking a position somewhere in between, under which one regulator acts as a single point of contact for an organisation across the EU but all regulators remain competent.

Clearly, the pressure to get the balance right is on and whilst there is no sense of urgency yet, Sophie in ‘t Veld, MEP, summarised the situation perfectly when she referred to the fact that after months of familiarisation with the Commission’s proposals, it was now time to put our heads down and get on with the business of building the future data protection framework for Europe.


Weather forecast for cloud computing in Europe is “overall good”

Posted on October 8th, 2012 by



The end of September has seen the UK Information Commissioner’s Office release its guidance on cloud computing, shortly followed by the European Commission’s announcement on a new strategy for “Unleashing the potential of cloud computing in Europe”.

ICO

The ICO’s new guidance starts with a helpful ‘setting the scene’ introduction for those new to the topic of cloud computing, going through definitions and the different deployment and service models before moving on to an analysis of the data protection obligations.

According to the ICO, because the cloud customer determines the purposes for which and the manner in which any personal data are processed, it is most likely to be the data controller. The guidance does contain a caveat that each case of outsourcing to the cloud, and the controller/processor roles of each party, will need to be assessed separately. The end of the document has a useful checklist of considerations.

The guidance sets out a logical approach that should be followed by potential customers of cloud computing services and which comprises the following steps:

  1. Data selection – selecting which data to move to the cloud and creating a record of which categories of data you are planning to move.
  2. Risk assessment – carrying out privacy impact assessments is recommended for large and complex personal data processing operations in the cloud.
  3. The type of service and provider selection – taking into account the maturity of the service offered and whether it targets a specific market.
  4. Monitoring performance – an ongoing obligation throughout the period during which the outsourcing to the cloud takes place.
  5. Informing cloud users – this reflects the transparency principle; cloud customers who are data controllers (who make services that run on the cloud available to individuals) will need to consider informing the individuals/cloud end users of the service about the processing in the cloud.
  6. Written contract – it is a legal requirement under the Data Protection Act to have a written contract in place between a data controller and a data processor.


With regard to selecting a cloud provider, the ICO points potential cloud users to the need to look at the security offered, how the data will be protected and the access controls that have been put in place. Helpfully for data controllers, the ICO recognises that it is not always possible to carry out physical audits of the cloud provider, but highlights the importance of ensuring that appropriate technical and organisational security measures are maintained at all times.

On the data transfers front, the ICO states that cloud customers should ask potential cloud providers for a list of countries where data is likely to be processed and for information on the safeguards in place there. It is unfortunate that, in this respect, the ICO follows the recent Article 29 Working Party Opinion on Cloud Computing.

EU

Turning to the European Commission’s announcement of a new strategy for “Unleashing the potential of cloud computing in Europe”, the main aim of the strategy is to support the take-up of cloud computing services by creating new homogenised technical standards on interoperability, data portability and reversibility by 2013, as well as certification schemes for cloud providers. A key area on which, according to the strategy document, the Commission will concentrate its work is safe and fair contract terms and conditions for cloud computing services. This will involve developing model terms for service level agreements. The strategy stresses the importance of the ongoing work on the proposed Data Protection Regulation and the expectation that this work should be completed in 2013.

The new strategy, when coupled with the recent Article 29 Working Party Opinion, shows clear signs that cloud computing is fast gaining prominence on the European Commission’s Digital Agenda. At this stage it is important to track the developments in this area and for industry members to continue providing their feedback on the proposals. The ICO’s guidance proves that a pragmatic approach to cloud computing is achievable without minimising the protection afforded to individuals’ personal data.

In short, the key takeaways from these developments are that in addition to contributing to the development of model contract terms, customers of cloud computing services must look at the selection process and the contractual documentation as their top priorities when approaching a cloud service relationship.