Archive for the ‘Financial penalties’ Category

Spam texts: “substantially distressing” or just annoying?

Posted on November 11th, 2014

The Department for Culture, Media and Sport (“DCMS”) recently launched a consultation to reduce or even remove the threshold of harm the Information Commissioner’s Office (“ICO”) needs to establish in order to fine nuisance callers, texters or emailers.

Background

In 2010 ICO was given powers to issue Monetary Penalty Notices (“MPNs”, or fines to you and me) of up to £500,000 against companies that breach the Data Protection Act 1998 (“DPA”).  In 2011 these powers were extended to cover breaches of the Privacy and Electronic Communications Regulations 2003 (“PECR”), which sought to control the scourge of nuisance calls, texts and emails.

At present the standard ICO has to establish before issuing an MPN is a high one: that there was a serious, deliberate (or reckless) contravention of the DPA or PECR which was of a kind likely to cause substantial damage or substantial distress.  Whilst unsolicited marketing calls are certainly irritating, can they really be said to cause “substantial distress”?  Getting a text from a number you didn’t know about a PPI claim is certainly annoying, but could it seriously be considered “substantial damage”?  Not exactly; and therein lies the problem.

Overturned

In the first big case where ICO used this power, it issued an MPN of £300,000 to an individual who had allegedly sent millions of spam texts about PPI claims to users who had not consented to receive them.  Upon appeal the Information Rights Tribunal overturned the fine.  The First-tier Tribunal found that whilst there was a breach of PECR (the messages were unsolicited, deliberate, sent with no opt-out link and for financial gain), the damage or distress caused could not be described as substantial.  Every mobile user knew what a PPI spam text meant and was unlikely to be concerned for their safety or have false expectations of compensation.  A short tut of irritation and then deleting the message solved the problem.  The Upper Tribunal agreed: a few spam texts did not cause substantial damage or distress.  Interestingly, the judge pointed out that the “substantial” requirement had come from the UK government and was stricter than that required by the relevant EU Directive, and suggested the statutory test be revisited.

This does not, however, mean that ICO has been unable to use the power.  Since 2012 it has issued nine MPNs totalling £1.1m to direct marketers who breached PECR.  More emphasis is placed on the overall level of distress suffered by hundreds or thousands of victims, which, taken together, can be considered substantial.  ICO concentrates on the worst offenders: cold callers who deliberately and repeatedly call numbers registered with the Telephone Preference Service (“TPS” – Ofcom’s “do not call” list) even when asked to stop, and those that attract hundreds of complaints.

In fact, in this particular case there were specific problems with the MPN document (this will not necessarily come as a surprise to those familiar with ICO MPNs).  The Tribunal criticised ICO on a number of grounds: not being specific about the Regulation contravened, omitting important factual information, including within the period of contravention a time when ICO did not yet have fining powers, and changing the claim from the initial few hundred complaints to the much wider body of texts that may have been sent.  Once all this was taken into consideration, only 270 unsolicited texts were sent to 160 people.

Proposal

ICO has been very vocal about having its hands tied in this matter and has long pushed for a change in the law (which is consistent with ICO’s broader campaigning for new powers).  Nuisance calls are a cause of great irritation for the public, and currently only the worst offenders can be targeted.  Statistics compiled by ICO and TPS showed that most of the nuisance is caused by a large number of companies each making a smaller number of calls.  Of 982 companies that TPS received complaints about, 80% received fewer than 5 complaints and only 20 received more than 25 complaints.

Following a select committee enquiry, an All Party Parliamentary Group and a backbench debate, DCMS has launched the consultation, which invites responses on whether the threshold should be lowered to “annoyance, inconvenience or anxiety”.  This would bring it into line with the threshold Ofcom must consider when fining telecoms operators for persistent misuse through silent/abandoned calls.  ICO estimates that had this threshold been in place since 2012, a further 50 companies would have been investigated or fined.

The three options being considered are: to do nothing, to lower the threshold or to remove it altogether.  Both ICO and DCMS favour complete removal.  ICO would thus only need to prove a breach was serious and deliberate/reckless.

Comment

I was at a seminar last week with the Information Commissioner himself, Chris Graham, at which he announced the consultation.  It was pretty clear he is itching to get his hands on these new powers to tackle rogue callers/emailers/texters, but he emphasised that any new powers would still be used proportionately and in conjunction with other enforcement actions such as compliance meetings and enforcement notices.  Even the announcement of a new law should act as a deterrent: typically, whenever a large MPN is announced, the number of complaints about direct marketers falls the following month.

The consultation document is squarely aimed at unsolicited calls, texts and emails and is consistently stated to apply only to certain regulations of PECR.  There is no suggestion that the threshold be reduced for other breaches of PECR or the DPA.  It will be interesting to see how any reform will work in practice, as the actual threshold is contained within the DPA and so reform will require amending that Act.

The consultation will run until 7 December 2014; the document can be found here.  Organisations that are concerned about these proposals now have an opportunity to make their voices heard.

DPA update: finally the end of enforced Subject Access Requests?

Posted on November 10th, 2014

Employers who force prospective employees to obtain a Subject Access Request report from the police detailing any criminal history or investigation will soon themselves be committing a criminal offence.

Background

The Ministry of Justice recently announced that on 1 December 2014, section 56 of the Data Protection Act 1998 (“DPA”) will come into force across the UK.  It will make it a criminal offence for employers to demand prospective employees obtain Subject Access Request (“SAR”) reports.

Some employers are concerned that s56 will make it an offence to undertake Disclosure & Barring Service (“DBS”, the new name for the Criminal Records Bureau) checks on prospective employees.  This is not the case.  In fact, s56 is designed to encourage the use of such checks and to prevent enforced SARs.

Purpose

The correct procedure for obtaining the criminal records of prospective employees is via the disclosure service provided by DBS or its Scottish equivalent, Disclosure Scotland (“DS”).  Whilst these services were being developed, employers could demand that applicants make SARs directly to the police and pass on the reports.

The purpose of s56 was to close this loophole once the DBS/DS system had become fully operational.  For that reason s56 was inserted into the DPA but not brought into force with the rest of its provisions.  It applies only to records obtained by the individual from the police using their s7 SAR rights.  SAR reports contain far more information than would be revealed under a DBS/DS check, such as police intelligence and spent convictions.  As a result the practice is frowned upon by the authorities: the police SAR form states that enforced SARs are exploitative and contrary to the spirit of the DPA, and the Information Commissioner’s Office (“ICO”) guidance on employment has long advised against the practice in stern wording (“Do not force applicants…”).

Exemptions

The only exemptions to s56 are situations when the report is justified as being in the public interest or when required by law; the s28 national security exemption does not apply.

Opinion

There has been no specific guidance released on s56.  However, it is clear from the Written Ministerial Statement which announced the change in March 2014, and the ICO release which followed it, that the section is being brought into force to close the loophole.  ICO has publicly stated that it intends to prosecute infringers under the offence so as to encourage the correct use of the DBS/DS procedure and prevent enforced SARs.  s56 does nothing to prevent employers requesting DBS/DS checks on prospective employees in the usual way.

What this means in practice is that any employer who demands that a potential employee file an SAR with the police and provide the results will be committing a criminal offence, punishable by a potentially unlimited fine.  Instead, employers should use the DBS procedure (DS in Scotland) for background criminal checks.  This sneaky backdoor route to obtaining far more sensitive personal data than employers are entitled to – often harming the individual’s job prospects in the process – will be shut for good.  Non-compliant employers should take note.

Update 19 November 2014

In an informative webinar on this subject yesterday, ICO mentioned a delay in the commencement date.  When I queried this the official response was: “a technical issue encountered when finalising arrangements for introduction means there will be a delay to the date for commencing Section 56 of the Data Protection Act. The Government is working to urgently resolve this issue. There is no exact date as yet.”

EU cookie issues alive and well

Posted on June 16th, 2014

It’s hard to believe that it has been a few years since the updated cookie “consent” rules came into effect across Europe.  At the time, it was pretty much the hot topic in the data privacy world as we all grappled with the rules’ implications and how to implement appropriate compliance mechanisms.  In recent times, however, one would be forgiven for almost forgetting those days.  The early forecasts of intense DPA cookie enforcement activity didn’t quite materialise, and we’ve also had the minor issue of the new draft Regulation and the Snowden affair (not to mention the ongoing daily challenges presented by data security, data processing contracts, BYOD, cloud computing issues etc.) to keep us all occupied.

Therefore, it’s nice to hear that there have been enough recent cookie developments in various EU member states to remind us that it is still an important compliance issue for any organisation that uses cookies and related tracking technologies. Here’s a run-down of what’s been happening in Europe:

Italy

The Italian Data Protection Authority (Garante) has published guidance on complying with the cookie requirements in Italy in order to obtain the express consent of the user. The main points are as follows:

  • Website operators are required to implement a web banner on the landing page outlining cookies used, the right to refuse cookies and a link to a separate notice setting out full details of the cookies used and the means by which a user can turn them on or off.
  • Website operators must notify the Garante where profiling cookies and related technologies are used.
  • Penalties under Italian data protection law can range from €6,000 to €120,000 (for example for serving cookies without obtaining the appropriate consent and failing to notify the Garante of such processing activities).
  • Operators shall benefit from a one-year grace period (expiring on 3rd June 2015) to implement the relevant measures.

Spain

After being the first EU member state to issue fines for infringement of its cookie rules (see here), Spain has amended the law regulating the use of cookies.  We highlight the following changes:

  • It has been clarified that it is an infringement to serve cookies without the individual’s consent.  Due to a legislative error this was previously not the case, and the Spanish DPA could not take enforcement action on this issue.
  • Infringements may be ‘low’ or ‘serious’.  The latter category applies if an organisation infringes the cookie rules on several occasions within a period of three years.
  • The Spanish DPA’s enforcement powers have also changed: it may now issue warnings for failure to comply with the cookie rules, or apply the lowest category of fines for serious infringements in certain circumstances.
  • Advertising networks will now also be liable for failure to comply with the cookie rules.

Netherlands

Following the Dutch DPA’s first investigation into an organisation’s use of cookies, the online advertising agency ‘YD Display Advertising Benelux’ (YD) was found to have infringed the Dutch cookie rules by placing tracking cookies on users’ web browsers, in order to provide personalised advertising, without the users’ consent.  The cookies enabled YD and its network of advertisers to track the behaviour of visitors across multiple websites.  The DPA found that the ability of users to opt out of receiving personalised advertising was not sufficient to constitute unambiguous consent, and the information provided by YD to its users on the use of such cookies did not satisfy the notice requirements.

The Dutch DPA noted that such violations would still exist even if the proposed amendments to the current Dutch cookie rules (currently going through the Dutch Parliament) were applied because such tracking cookies would still require user consent. This investigation follows the Dutch DPA’s earlier announcement that one of its priorities for 2014 is to focus on the profiling, tracking and tracing of internet users.

France

This year has been, and will continue to be, a busy one for the French Data Protection Authority (CNIL) (see here).  A new consumer rights law came into force on 17 March, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections (in addition to the existing on-site inspections).  This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automatic data protection systems for as long as necessary to conduct its observations.”  This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the online activities of companies.  The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.  One can expect the use of cookies to also fall under this remit.

Belgium

Finally, the Belgian DPA has recently launched a public consultation on its draft cookie guidance (see our previous blog), stating that implied user consent may be an acceptable model for the use of cookies.

What this means now

Whilst the adoption of the draft Regulation may currently be grabbing all the headlines, regulating the use of cookies has not been completely forgotten by Europe’s national regulators. This presents challenges to organisations operating on an EU-wide basis as they attempt to understand and comply with the various developments and requirements in specific EU member states. Therefore the message is clear for businesses operating in Europe:

  • Audit your cookie use and find out what you’ve got
  • Assess the intrusiveness of those cookies
  • Adopt a notice and consent strategy
  • Implement forward-facing cookie management mechanisms

CNIL: a regulator to watch in 2014

Posted on March 18th, 2014

Over the years, the number of on-site inspections by the French DPA (CNIL) has risen steadily.  Based on the CNIL’s latest statistics (see CNIL’s 2013 Annual Activity Report), 458 on-site inspections were carried out in 2012, a 19 percent increase compared with 2011.  The number of complaints also rose, to 6,000 in 2012, most of which (31 percent) related to telecom/Internet services.  In 2012, the CNIL served 43 formal notices asking data controllers to comply.  In total, the CNIL pronounced 13 sanctions, eight of which were made public.  In the majority of cases (56 percent) the sanction was a simple warning, while fines were imposed in only 25 percent of cases.

The beginning of 2014 was marked by a landmark decision of the CNIL.  On January 3, 2014, the CNIL pronounced a record fine of €150,000 ($204,000) against Google on the grounds that the terms of use available on its website since March 1, 2012, allegedly did not comply with the French Data Protection Act.  Google was also required to publish this sanction on the homepage of Google.fr within eight days of it being pronounced.  Google appealed this decision; however, on February 7, 2014, the State Council (“Conseil d’Etat”) rejected Google’s claim to suspend the publication order.

Several lessons can be learnt from the CNIL’s decision.  First, the CNIL is politically motivated to hit the Internet giants hard, especially those who claim that their activities do not fall within the remit of French law.  No, says the CNIL: your activities target French consumers, and thus you must comply with the French Data Protection Act even if you are based outside the EU.  This debate has been going on for years and was recently discussed in Brussels at the EU Council of Ministers’ meeting in the context of the proposal for a Data Protection Regulation.  As a result, Article 4 of Directive 95/46/EC could soon be amended to allow for a broader application of European data protection laws to data controllers located outside the EU.

Second, despite being the highest sanction ever pronounced by the CNIL, this is hardly a dissuasive financial sanction against a global business with large revenues.  Currently, the CNIL cannot pronounce sanctions above €150,000, or €300,000 ($410,000) in the case of a second breach within five years of the first sanction, whereas some of its counterparts in other EU countries can pronounce much heavier sanctions; e.g., last December, the Spanish DPA pronounced a €900,000 ($1,230,000) fine against Google.  This could soon change, however, in light of an announcement by the French government that it intends to introduce this year a bill on “the protection of digital rights and freedoms,” which could significantly increase the CNIL’s enforcement powers.

Furthermore, it seems that the CNIL’s lobbying efforts within the French Parliament are finally beginning to pay off. A new law on consumer rights came into force on 17 March 2014, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections in addition to the existing on-site inspections. This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automatic data protection systems for as long as necessary to conduct its observations.” This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the activities of major Internet companies. The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.

Finally, the Google case is a good example of the EU DPAs’ recent efforts to conduct coordinated cross-border enforcement actions against multinational organizations.  At the beginning of 2013, a working group was set up in Paris, led by the CNIL, for a simultaneous and coordinated enforcement action against Google in several EU countries.  As a result, Google was inspected and sanctioned in multiple jurisdictions, including Spain and the Netherlands.  Google is appealing these sanctions.

As the years pass by, the CNIL continues to grow and to become more resourceful. It is also more experienced and better organized. The CNIL is already very influential within the Article 29 Working Party, as recently illustrated by the Google case, and Isabelle Falque-Pierrotin, the chairwoman of the CNIL, was recently elected chair of the Article 29 Working Party. Thus, companies should pay close attention to the actions of the CNIL as it becomes a more powerful authority in France and within the European Union.

This article was first published in the IAPP’s Privacy Tracker on 27 February 2014 and was updated on 18th March 2014.

How do EU and US privacy regimes compare?

Posted on March 5th, 2014

As an EU privacy professional working in the US, one of the things that regularly fascinates me is each continent’s misperception of the other’s privacy rules.  Far too often have I heard EU privacy professionals (who really should know better) mutter something like “The US doesn’t have a privacy law” in conversation; equally, I’ve heard US colleagues talk about the EU’s rules as being “nuts” without understanding the cultural sensitivities that drive European laws.

So I thought it would be worth dedicating a few lines to compare and contrast the different regimes, principally to highlight that, yes, they are indeed different, but, no, you cannot draw a conclusion from these differences that one regime is “better” (whatever that means) than the other.  You can think of what follows as a kind of brief 101 in EU/US privacy differences.

1.  Culturally, there is a stronger expectation of privacy in the EU.  It’s often said that there is a stronger cultural expectation of privacy in the EU than the US.  Indeed, that’s probably true.   Privacy in the EU is protected as a “fundamental right” under the European Union’s Charter of Fundamental Rights – essentially, it’s akin to a constitutional right for EU citizens.  Debates about privacy and data protection evoke as much emotion in the EU as do debates about gun control legislation in the US.

2.  Forget the myth: the US DOES have data protection laws.  It’s simply not true that the US doesn’t have data protection laws.  The difference is that, while the EU has an all-encompassing data protection framework (the Data Protection Directive) that applies across every Member State, across all sectors and across all types of data, the US has no directly analogous equivalent.  That’s not the same thing as saying the US has no privacy laws – it has an abundance of them!  From federal rules designed to deal with specific risk scenarios (for example, collection of child data online is regulated under the Children’s Online Privacy Protection Act), to sector-specific rules (Health Insurance Portability and Accountability Act for health-related information and the Gramm-Leach-Bliley Act for financial information), to state-driven rules (the California Online Privacy Protection Act in California, for example – California, incidentally, also protects individuals’ right to privacy under its constitution).  So the next time someone tells you that the US has no privacy law, don’t fall for it – comparing EU and US privacy rules is like comparing apples to a whole bunch of oranges.

3.  Class actions.  US businesses spend a lot of time worrying about class actions and, in the privacy realm, there have been many.  Countless times I’ve sat with US clients who agonise over their privacy policy drafting to ensure that the disclosures they make are sufficiently clear and transparent in order to avoid any accusation that they may have misled consumers.  Successful class actions can run into the millions of dollars and, with that much potential liability at stake, US businesses take this privacy compliance risk very seriously.  But when was the last time you heard of a successful class action in the EU?  For that matter, when was the last time you heard of ANY kind of award of meaningful damages to individuals for breaches of data protection law?

4.  Regulatory bark vs. bite.  So, in the absence of meaningful legal redress through the courts, what can EU citizens do to ensure their privacy rights are respected?  The short answer is complain to their national data protection authorities, and EU data protection authorities tend to be very interested and very vocal.  Bodies like the Article 29 Working Party, for example, pump out an enormous volume of regulatory guidance, as do certain national data protection authorities, like the UK Information Commissioner’s Office or the French CNIL.  Over in the US, American consumers have their own heavyweight regulatory champion in the form of the Federal Trade Commission which, by using its powers to take enforcement action against “unfair and deceptive practices” under the FTC Act, is becoming ever more active in the realm of data protection enforcement.  And look at some of the settlements it has reached with high-profile companies – settlements that, in some cases, have run in excess of US$20m and resulted in businesses having to subject themselves to 20-year compliance audits.  By contrast, however vocal EU DPAs are, their powers of enforcement are typically much more limited, with some even lacking the ability to fine.

So those are just some of the big picture differences, but there are so many more points of detail a well-informed privacy professional ought to know – like how the US notion of “personally identifiable information” contrasts with EU “personal data”, why the US model of relying on consent to legitimise data processing is less favoured in the EU, and what the similarities and differences are between US “fair information practice principles” and EU “data protection principles”.

That’s all for another time, but for now take away this:  while they may go about it in different ways, the EU and US each share a common goal of protecting individuals’ privacy rights.  Is either regime perfect?  No, but each could sure learn a lot from the other.


FTC in largest-ever Safe Harbor enforcement action

Posted on January 22nd, 2014

Yesterday, the Federal Trade Commission (“FTC”) announced that it had agreed to settle with 12 US businesses over alleged breaches of the US Safe Harbor framework.  The companies involved came from a variety of industries and each handled a large amount of consumer data.  But aside from the surprisingly large number of companies involved, what does this announcement really tell us about the state of Safe Harbor?

This latest action suggests that the FTC is ramping up its Safe Harbor enforcement in response to recent criticisms from the European Commission and European Parliament about the integrity of Safe Harbor (see here and here) – particularly given that one of the main criticisms about the framework was its historic lack of rigorous enforcement.

Background to the current enforcement

So what did the companies in question do?  The FTC’s complaints allege that the companies involved ‘deceptively claimed they held current certifications under the U.S.-EU Safe Harbor framework’.  Although participation in the framework is voluntary, if you publicise that you are Safe Harbor certified then you must, of course, maintain an up-to-date Safe Harbor registration with the US Department of Commerce and comply with your Safe Harbor commitments.

Key compliance takeaways

In this instance, the FTC alleges that the businesses involved had claimed to be Safe Harbor certified when, in fact, they weren’t. The obvious message here is don’t claim to be Safe Harbor certified if you’re not!  

The slightly more subtle compliance takeaway for businesses who are correctly Safe Harbor certified is that they should have in place processes to ensure:

  • that they keep their self-certifications up-to-date by filing timely annual re-certifications;
  • that their privacy policies accurately reflect the status of their self-certification – and if their certifications lapse, that there are processes to adjust those policies accordingly; and
  • that the business is fully meeting all of its Safe Harbor commitments in practice – there must be actual compliance, not just paper compliance.

The “Bigger Picture” for European data exports

Despite this decisive action by the FTC, European concerns about the integrity of Safe Harbor are likely to persist.  If anything, this latest action may serve only to reinforce concerns that some US businesses are either falsely claiming to be Safe Harbor certified when they are not or are not fully living up to their Safe Harbor commitments. 

The service provider community, and especially cloud businesses, will likely feel this pressure most acutely.  Many customers already perceive Safe Harbor to be “unsafe” for data exports and are insisting that their service providers adopt other EU data export compliance solutions.  So what other solutions are available?

While model contracts have the benefit of being a ‘tried and tested’ solution, the suite of contracts required for global data exports is simply unpalatable to many businesses.  The better solution is, of course, Binding Corporate Rules (BCR) – a voluntary set of self-regulatory policies, adopted by businesses, that satisfies EU data protection standards and is submitted to, and authorised by, European DPAs.  Since 2012, service providers have been able to adopt processor BCR, and those that do find that this provides them with greater flexibility to manage their internal data processing arrangements while continuing to afford a high degree of protection for the data they process.

It’s unlikely that Safe Harbor will be suspended or disappear – far too many US businesses are dependent upon it for their EU/CH to US data flows.  However, the Safe Harbor regime will likely change in response to EU concerns and, over time, will come under increasing amounts of regulatory and customer pressure.  So better to consider alternative data export solutions now and start planning accordingly rather than find yourself caught short!


EU Parliament delivers – The world awaits

Posted on October 21st, 2013

They said it couldn’t be done.  A draconian initial text and 4,000 suggested amendments to digest made the task so difficult that many experts had already given up hope.  However, today the European Parliament has silenced many sceptical voices by approving a draft Data Protection Regulation which aims to replace the ageing 1995 EU Data Protection Directive.

The job is by no means completed. Now the Council of the EU (which shares the EU legislative power with the Parliament) has to deliver its own draft and provide the Member States’ contribution to this crucial process.

In the meantime, here are what I see as key highlights of the text approved by Parliament:

* The EU Parliament has considerably softened its original uber-strict approach and that should be welcomed because it makes the law more realistically applicable in practice.

* However, the complexity of the Commission’s proposal is retained and even expanded in some cases. For example, the one stop shop concept is now less clear cut and therefore, less likely to work.

* The EU Parliament wants to introduce a standardised format for privacy notices using icons. This is a brave move. The approach suggested is slightly dogmatic but the idea is a good one.

* The provisions on profiling remain but in a more reasonable format. This will continue to be a key area of debate over the coming months.

* There is a new emphasis on compliance reviews every two years, which, together with the compulsory appointment of data protection officers, will make legal compliance significantly more onerous.

* Disappointingly, there are still very unrealistic limitations on international data transfers, which are particularly onerous where transfers are made to non-EU public authorities. As predicted, the NSA revelations have distorted this issue, and it will take a lot of work to untangle it.

* Finally, the massive fines of up to EUR 100,000,000 or 5% of annual turnover seem to be designed to send a clear signal out there about how serious this stuff is.

In summary, I don’t think the Parliament’s draft is entirely workable as it stands, but with the adoption of this text we are closer to having a modern EU data protection framework than ever before.

Belgian DPA overhauls enforcement strategy

Posted on October 21st, 2013 by



Belgium has long been regarded as one of the low-risk EU Member States in terms of data protection enforcement. Aside from the fact that pragmatism can be considered part of the Belgian character, this view was also due to the fact that the Belgian DPA, the Privacy Commission, could fairly be described as one of those so-called ‘toothless tigers’.

As De Standaard reports, it seems this is now about to change, with the Privacy Commission set to follow the example of the Dutch DPA by adopting a more severe enforcement strategy.

Until now, the Privacy Commission did not proactively investigate companies or sectors, despite the fact that the Belgian Privacy Act grants it such powers. However, the Privacy Commission has recently decided to establish a team of inspectors who will actively search for companies that process personal data in a non-compliant manner. It seems the Privacy Commission is finally adopting the approach which the CNIL has been applying for a number of years, whereby each year a specific sector is made the subject of increased scrutiny.

In addition, anticipating the adoption of the Regulation, the Privacy Commission has called upon the Belgian legislator to grant it more robust enforcement powers. Currently, if a company is found to be in breach of Belgian data protection laws, the Privacy Commission has a duty to inform the public prosecutor. In practice, however, criminal prosecution for data protection non-compliance is virtually non-existent, which leads to de facto impunity. This could drastically change if greater enforcement powers are granted to the Privacy Commission.

With the Regulation on the horizon, this new enforcement strategy does not come as a surprise. In addition, earlier this year Belgium faced a number of high-profile data breach cases for the first time: the Ministry of Defence, the Belgian railroad company and recruiting agency Jobat all suffered massive data leaks. More recently, the hacking of Belgacom’s affiliate BICS gave rise to a lot of controversy. It would appear that these cases highlighted to the Privacy Commission the limits of its current powers.

If even a pragmatic DPA such as the Privacy Commission starts adopting a more repressive enforcement strategy, it is clear that the days of complacency are fading. Organisations processing personal data really cannot afford to wait until the Regulation becomes effective in a few years’ time. They will have to make sure they have done their homework now, as it seems the DPAs won’t wait for the Regulation before showing their teeth.

ICO’s draft code on Privacy Impact Assessments

Posted on August 8th, 2013 by



This week the Information Commissioner’s Office (‘ICO’) announced a consultation on its draft Conducting Privacy Impact Assessments Code of Practice (the ‘draft code’). The draft code and the consultation document are available at http://www.ico.org.uk/about_us/consultations/our_consultations and the deadline for responding is 5 November 2013.

When it comes into force, the new code of practice will set out ICO’s expectations on the conduct of Privacy Impact Assessments (‘PIAs’) and will replace ICO’s current PIA Handbook. So why is the draft code important and how does it differ from the PIA Handbook?

  • PIAs are a valuable risk management instrument that can function as an early warning system while, at the same time, promoting better privacy and substantive accountability. Although there is at present no statutory requirement to carry out PIAs, ICO expects organisations to do so.
  • For instance, in the context of carrying out audits, ICO has criticised controllers who had not rolled out a framework for carrying out PIAs. More importantly, the absence or presence of a risk assessment is a determinative factor in ICO’s decision whether or not to take enforcement action. When ICO talks about a risk assessment in this context, it means the conduct of some form of PIA.
  • Impact assessments are likely soon to become a mandatory statutory requirement across the EU, as the current version of the draft EU Data Protection Regulation requires ‘Data Protection Impact Assessments’. Note, however, that the DPIAs mandated by article 33 of the draft Regulation have a narrower scope than PIAs: the former focus on ‘data protection risks’, whereas ‘privacy risks’ is a broader concept that, in addition to data protection, encompasses notions such as privacy of personal behaviour and privacy of personal communications.
  • The fact that ICO’s guidance on PIAs will now take the form of a statutory Code of Practice (as opposed to a ‘Handbook’) means that it will have increased evidentiary significance in legal proceedings before courts and tribunals on questions relevant to the conduct of PIAs.

The PIA Handbook is generally too cumbersome and convoluted. The aim of the draft code is to simplify the current guidance and promote practical PIAs that are less time consuming and complex, and as flexible as possible in order to be adapted to an organisation’s existing project and risk management processes.  However, on an initial review of the draft code I am not convinced that it achieves the optimum results in this regard.  Consider for example the following expectations set out in the draft code which did not appear in the PIA Handbook:

  • In addition to internal stakeholders, organisations should work with partner organisations and with the public. In other words, ICO encourages controllers to test their PIA analysis with the individuals who will be affected by the project that is being assessed.
  • Conducting and publicising the PIA will help build trust with the individuals using the organisation’s services. In other words, ICO expects that PIAs will be published in certain circumstances.
  • PIAs should incorporate 7 distinct steps and the draft code provides templates for questionnaires and reports, as well as guidance on how to integrate the PIA with project and risk management processes.

Overall, although the draft code is certainly an improvement compared to the PIA Handbook, it remains cumbersome and prescriptive.  It also places a lot of emphasis on documentation, recording decisions and record keeping.  In addition, the guidance and some of the templates include privacy jargon that is unlikely to be understood by staff who are not privacy experts, such as project managers or work-stream leads who are most likely to be asked to populate the PIA documentation in practice.

Many organisations are likely to want a simpler, more streamlined and more efficient PIA process with fewer steps, simpler tools and documents and clearer guidance, and which incorporates legal requirements and ICO’s essential expectations without unduly delaying the launch of new processing operations. Such organisations are also likely to want to make their voice heard in the context of ICO’s consultation on the draft code.

UK Government to consult on introducing custodial penalties for breaches of the DPA (again!)

Posted on July 12th, 2013 by



One of the issues that the Information Commissioner (ICO) (along with other voices) has been persistent about in recent years is the need for stiffer penalties for breaches of the Data Protection Act 1998. It is understandably frustrating for the regulator that those individuals who flagrantly disregard data protection responsibilities (e.g. through offences such as blagging) typically only face a penalty of up to £5,000. There has been a campaign from various quarters to increase the maximum sentence that can be imposed for a breach of s. 55 of the DPA (the unlawful obtaining and use of personal data). The previous Government provided for a tougher regime when it amended the DPA through the Criminal Justice and Immigration Act 2008 to increase the penalty to a maximum of two years’ imprisonment; however, that provision has yet to be brought into force. The campaign to increase the penalty gained greater impetus when Lord Justice Leveson, in his 2012 report, also recommended that the maximum sentence be increased. It was examined again recently by the UK Parliament’s Justice Committee in its report on the role of the ICO.

Yesterday Lord McNally’s response on behalf of the Government to the Justice Committee’s report was published. In his short letter, Lord McNally commented on the ICO’s status and funding, accountability to Parliament and powers to compel audits of the public sector. He also announced that the Government will be holding a public consultation on the full range of data protection proposals that Lord Justice Leveson recommended, including, of course, the proposal to introduce custodial penalties for breaches of s. 55. In reality, this announcement is not a surprise given the Government’s response to other related work in this area, such as the Shakespeare Review of Public Sector Information in June this year.

The previous Government consulted twice on the proposal to introduce custodial penalties – in 2006 and 2009 – but in each case decided not to do so, even though there was considerable public support for the change. Since then, the Government’s usual response to select committees’ recommendations has been to hold the line rather than take the plunge of introducing stricter penalties. So in 2011 the Government responded to the Justice Committee’s report on referral fees and the theft of personal data by stating that it was not yet convinced it was the right time to introduce custodial sentences for s. 55 offences (partly because the Government wanted to wait until the Leveson Inquiry, then in full swing, had reported). The Government has already been extensively criticised for not responding more fully to the Leveson proposals. Now that the s. 55 proposal is being put to a third round of public consultation with the weight of the Leveson Report behind it, it will become more difficult for the Government to side-step this thorny issue again.