Archive for June, 2012

Belgium finally adopts cookie consent rules

Posted on June 29th, 2012



Following the European Commission’s initiation of infringement proceedings against Belgium, the Belgian Chamber of Representatives and the Senate have finally voted on the Act that will bring Belgian law into line with Europe’s ‘cookie consent’ requirement.

So what does this Act say about cookies?

Belgium eventually opted for a pragmatic approach: cookies may be served if individuals give their consent, having been provided with clear and comprehensive information about why their personal data will be collected and processed.

Website operators in Belgium can breathe a sigh of relief, as contrary to the initial draft prepared by the Telco regulator, the Act does not require ‘prior written consent’. In the absence of any further guidance about how consent must be obtained, it seems that it will be possible for website operators to rely on both ‘express’ and ‘implied’ consent, provided it is ‘freely given, unambiguous, specific and informed’.

Practical tips for website operators

In the absence of any explanatory notes from the Chamber of Representatives or more practical guidance from Belgian regulators, website operators may find it useful to have a look at the guidance published by other regulators such as the ICO or the CNIL.

In any event, it is clear that the adoption of the cookie consent rules in Belgium is now a reality and website operators in Belgium no longer have an excuse not to take appropriate action.

Why the Big Buzz about Big Data?

Posted on June 29th, 2012



Another year, another buzz word, and this time around it’s “Big Data” that’s getting everyone’s attention. But what exactly is Big Data, and why is everyone – commercial organisations, regulators and lawyers – so excited about it?

Put simply, the term Big Data refers to datasets that are very, very large – so large that, traditionally, supercomputers would ordinarily have been required to process them. But, with the irrepressible evolution of technology, falling computing costs, and scalable, distributed data processing models (think cloud computing) Big Data processing is increasingly within the capability of most commercial and research organisations.

In its oft-quoted article “The Data Deluge”, the Economist reports that “Everywhere you look, the quantity of information in the world is soaring. According to one estimate, mankind created 150 exabytes (billion gigabytes) of data in 2005. [In 2010], it will create 1,200 exabytes.”  Let’s put that in perspective – 1,200 exabytes is 1,200,000,000,000 gigabytes of data. A typical Blu-Ray disc can hold 25 gigabytes – so 1,200 exabytes is the equivalent of about 48 billion Blu-Ray discs. Estimating your typical Blu-Ray movie at about 2 hours long (excluding special features and the like), then there’s at least 96 billion hours of viewing time there, or about 146,000 human lifetimes.  OK, this is a slightly fatuous example, but you get my point – and bear in mind that global data is growing year-on-year at an exponential rate, so these figures are already well out of date.
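For readers who like to check the arithmetic, the figures above can be reproduced in a few lines. This is just a back-of-the-envelope sketch: the 25 GB disc capacity and 2-hour film length come from the text, while the ~75-year human lifetime is my own assumption.

```python
# Reproduce the Blu-Ray comparison from the text.
exabytes = 1_200
gigabytes = exabytes * 1_000_000_000       # 1 exabyte = a billion gigabytes, per the quote
discs = gigabytes / 25                     # 25 GB per Blu-Ray disc
viewing_hours = discs * 2                  # ~2 hours per film

hours_per_lifetime = 75 * 365 * 24         # assumption: ~75-year lifetime
lifetimes = viewing_hours / hours_per_lifetime

print(f"{discs:,.0f} discs")               # 48,000,000,000 discs
print(f"{viewing_hours:,.0f} hours")       # 96,000,000,000 hours
print(f"{lifetimes:,.0f} lifetimes")       # roughly 146,000 lifetimes
```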

Much of this Big Data will be highly personal to us: think about the value of the data we all put “out there” when we shop online or post status updates, photos and other content through our various social networking accounts (I have at least 5). And don’t forget the search terms we post when we use our favourite search engines, or the data we generate when using mobile – particularly location-enabled – services. Imagine how organisations, if they had access to all this information, could use it to better advertise their products and services, roadmap product development to take account of shifting consumer patterns, spot and respond to potentially brand-damaging viral complaints – ultimately, keep their customers happier and improve their revenues.

The potential benefits of Big Data are vast and, as yet, still largely unrealised. It goes against the grain of any privacy professional to admit that there are societal advantages to data maximisation, but it would be disingenuous to deny this. Peter Fleischer, Google’s Privacy Counsel, expressed it very eloquently on his blog when he wrote “I’m sure that more and more data will be shared and published, sometimes openly to the Web, and sometimes privately to a community of friends or family. But the trend is clear. Most of the sharing will be utterly boring: nope, I don’t care what you had for breakfast today. But what is boring individually can be fascinating in crowd-sourcing terms, as big data analysis discovers ever more insights into human nature, health, and economics from mountains of seemingly banal data bits. We already know that some data sets hold vast information, but we’ve barely begun to know how to read them yet, like genomes. Data holds massive knowledge and value, even, perhaps especially, when we do not yet know how to read it. Maybe it’s a mistake to try to minimize data generation and retention. Maybe the privacy community’s shibboleth of data deletion is a crime against science, in ways that we don’t even understand yet.” (You can access Peter’s blog “Privacy…?” here.)

This quote raises the interesting question of whether the compilation and analysis of Big Data sets should really be considered personal data processing. Of course, many of the individual records within commercial Big Data sets will be personal – but the true value of Big Data processing is often (though not always) in the aggregate trends and patterns they reveal – less about predicting any one individual’s behaviours, reactions and preferences, and more about understanding the global picture. Perhaps it’s time that we stop thinking of privacy in terms of merely collecting data, and look more to the intrusiveness (or otherwise) of the purposes to which our data are put?

This is perhaps something for a wider, philosophical debate about the pros and cons of Big Data, and I wouldn’t claim to have the answers. What I can say, though, is that Big Data faces some big issues under data protection law as it stands today, not least in terms of data protection principles that mandate user notice and choice, purpose limitation, data minimisation, data retention and – of course – data exports. These are not issues that will go away under the new General Data Protection Regulation which, as if to gear itself up for a fight with Big Data proponents, further bolsters transparency, consent and data minimisation principles, while also proposing a new, highly controversial ‘right to be forgotten’.

So what can and should Big Data collectors do for now? Fundamentally, accountability for the data you collect and process will be key. Your data subjects need to understand how their data will be used, both at the individual and the Big Data level, to feel in control of this and to be comforted that their data won’t be used in ways that sit outside their reasonable expectations of privacy. This is not just a matter of external-facing privacy policies, but also a matter of carefully-constructed internal policies that impose sensible checks and balances on the organisation’s use of data. It’s also about adopting Privacy Impact Assessments as a matter of organisational culture, to identify and address risks whenever putting Big Data analysis to new purposes.

Big Data is, and should be, the future of data processing, and our laws should not prevent this. But, equally, organisations need to be careful that they do not treat the Big Data age as a free-for-all hunting season on user data that invades personal privacy and control. Big issues for Big Data indeed.

Binding Safe Processor Rules a reality

Posted on June 20th, 2012



Following the European Commission’s endorsement of BCR for processors or ‘Binding Safe Processor Rules’ (BSPR) in the proposed EU Data Protection Regulation, the EU data protection authorities have now given their definitive and public backing to a concept that is set to make a massive contribution to the protection of personal data throughout the world.  In their new WP195 document, the Article 29 Working Party provides a toolbox describing the conditions to be met for the adoption and approval of BSPR (or, as the Working Party puts it, “BCR for third party data”).

With the publication by the Article 29 Working Party of their expectations for BSPR programmes, suppliers of data processing services all around the world have been clearly told what it takes to be a safe recipient of data in their role as service providers.  Whilst pure contractual solutions will remain a mechanism to legitimise the engagement of global data service providers, the prospect of getting upfront approval from the EU regulators is likely to become a much more appealing way forward.

The benefits of BSPR are obvious:

•   The official approval of a set of BSPR will automatically grant the service provider the status of “safe processor”, which will, in turn, allow its clients to overcome the data transfer limitations under EU data protection law.

•   BSPR replace the need for inflexible and onerous data transfer agreements.

•   BSPR can be tailored to the data protection practices of the service provider – they are a form of self-regulation.

As with the current proposal for a new EU data protection framework, the success of BSPR in realising their potential depends on how realistic the relevant obligations and compliance expectations are.  Fortunately, if the criteria for BSPR approval set out by the Article 29 Working Party are anything to go by, the success of BSPR is well within reach of any responsible data processing services provider.

ICO’s Draft Anonymisation Code of Practice – How to effectively anonymise…

Posted on June 8th, 2012



The Information Commissioner’s Office has published a draft Anonymisation Code of Practice for consultation. The consultation period runs until 23 August 2012 and the aim is to publish the final Code in September 2012.  The Consultation document sets out the questions that organisations and members of the public are invited to respond to.

The Code contains the ICO’s good practice recommendations for achieving effective anonymisation and is relevant for organisations considering obligations under both data protection law and freedom of information laws. The Code explains the benefits of anonymisation, the types of issues to consider when anonymising personal data effectively, as well as whether consent to produce or disclose anonymised data is required (generally it’s not). It also examines mechanisms that organisations can use to demonstrate effective anonymisation, i.e. the ‘motivated intruder’ and ‘motivated defender’ tests.

There is a specific section on spatial information, which is drawn from the ICO’s previous guidance on crime mapping, and the Code also sets out what the ICO expects an organisation to have in place to demonstrate effective governance when deploying anonymisation, e.g. Privacy Impact Assessments and procedures for dealing with cases where anonymisation is difficult to achieve. In particular, the Code underlines the importance of re-identification testing, meaning that an organisation should regularly assess the likelihood of anonymised data being linked to individuals.

Practical examples of anonymisation techniques including variations of data reduction and data perturbation methods (some of which are easier to follow than others) are set out in Appendix 1 and specific techniques (de-identification, pseudonymisation, aggregation, derived data items and banding) identified in Appendix 3.
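To make two of the techniques named in the appendices more concrete, the following toy sketch illustrates pseudonymisation (replacing a direct identifier with a keyed hash) and banding (replacing an exact value with a range). This is not taken from the Code itself: the record fields, the HMAC-SHA256 choice and the 10-year age bands are all illustrative assumptions.

```python
import hashlib
import hmac

# Illustrative secret key; in practice it must be stored separately from the
# anonymised dataset, otherwise the pseudonyms can be reversed by lookup.
SECRET_KEY = b"keep-this-out-of-the-dataset"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonymisation)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def band_age(age: int) -> str:
    """Replace an exact age with a 10-year band (banding), e.g. 34 -> '30-39'."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

record = {"name": "Jane Doe", "age": 34, "condition": "asthma"}
anonymised = {
    "id": pseudonymise(record["name"]),   # consistent pseudonym, no direct identifier
    "age_band": band_age(record["age"]),  # '30-39'
    "condition": record["condition"],
}
print(anonymised)
```

Note that pseudonymised data may still be personal data where re-identification remains possible, which is precisely why the Code pairs techniques like these with re-identification testing.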

It is clear that the ICO wants to encourage organisations to anonymise personal data where appropriate and, through the Code, to remove some of the nervousness around anonymisation. However, an organisation that adopts anonymisation will need to consider implementing a proper process both before anonymisation and throughout the life of the anonymised data (in proportion to the risks involved) to demonstrate that an appropriate anonymisation technique is adopted and that the risk of re-identification is kept under scrutiny. On the latter point, the Code concedes that the risk of re-identification through data-linkage is essentially unpredictable and therefore urges organisations to carry out a thorough risk analysis before anonymising personal data in the first place.