Archive for the ‘Uncategorized’ Category

German Federal Court further strengthens review platforms

Posted on September 24th, 2014

With the ever-increasing relevance of online review platforms, the discussion about these platforms’ red lines is becoming more and more heated in Germany. The Federal Court of Justice has now issued its second decision in this area within only a couple of months. This time, a medical practitioner demanded that his profile be deleted entirely from a review platform focusing on health care professionals, arguing that the platform processed his personal data unlawfully.

The case concerned a typical review platform on which users may search for information about health care professionals. Aside from the review content, information such as the practitioner’s name, address, area of expertise, contact details and opening hours is accessible on the platform. Users have to register with their email address before posting a review.

The Federal Court dismissed the claim. The court held that the platform’s freedom of communication outweighs the claimant’s right to informational self-determination, which forms the constitutional basis for privacy rights under German law. According to the court, it is legitimate for the platform provider to publish the practitioner’s profile and the review content on the basis of Sec. 29 German Data Protection Act. This result does not come as a surprise: in a similar case back in 2008, the Federal Court had already decided that a teacher cannot request to be deleted from a review platform dedicated to teachers.

What is slightly more surprising is that the court made some remarks emphasizing that the practitioner would be “not insignificantly” burdened by the publication of reviews on the portal, as he may face adverse economic effects caused by negative reviews. However, the court attached even greater weight to the public’s interest in information about medical services, in particular as the publication concerns only the “social sphere” of the claimant, rather than his private or intimate sphere.

In July 2014, the Federal Court also dismissed a claim for disclosure of contact details of a reviewer who repeatedly posted defamatory statements on a review platform.



Germany: Federal Court stops disclosure claims against review platforms

Posted on August 1st, 2014

In Germany, the Federal Court of Justice has pulled the rug from under claims against providers of online services for the disclosure of user data. The court ruled that statutory law does not permit a service provider to disclose user data to persons and businesses concerned by a negative and potentially unlawful review posted on a review platform (judgement of 1 July 2014, court ref. VI ZR 345/13). Only if the review constitutes a criminal act in itself, such as defamation or slander, may the prosecution request disclosure in the course of a criminal investigation. The judgement finally ended a debate that had been simmering for a long time.


The case concerned a medical practitioner who sued a review platform dedicated to medical services. A user had posted a review on the platform alleging that patients’ files were kept in laundry baskets, that average waiting times were extraordinarily long, that follow-up appointments were not offered in due time, and that a thyroid hyperfunction had not been identified and had been treated contraindicatively. Shortly afterwards, further reviews were posted which were in places identical to the first review. The claimant repeatedly notified the platform provider of these reviews, and the platform provider took the reviews down. In July 2012, another review was posted with the same allegations. The claimant then sued the platform provider for an injunction to cease and desist and for disclosure of the name and address of the user who had posted the reviews. The defendant never disputed that the facts stated in the reviews were untrue.

The Judgement

The claim was dismissed. The court’s decision is based on Sec. 12 (2) German Telemedia Act (“TMG”), which stipulates that a service provider may only disclose user data where a specific statute exists that permits such disclosure and expressly references “Telemedia” services, i.e. online services. The court argued that the general civil law claim for disclosure of third-party data, which is based on the principle of good faith (Sec. 242 German Civil Code), does not fulfil the requirements of Sec. 12 (2) TMG. Nor are the requirements of Sec. 14 (2) TMG met, which allows for a disclosure of user data where this is necessary for the purposes of criminal prosecution, protection of the constitution, averting public dangers and national security, or for the enforcement of copyright. According to the court, there is also no room for an analogous application of Sec. 14 (2) TMG, because there is no unintentional gap in the statutes as required for an analogy. In this regard, the court highlighted that the question of whether an individual whose personality rights have been unlawfully affected by a user posting should have a claim for disclosure of that user’s data was debated during the legislative process without further consequences.

The court noted, however, that this result may be regarded as unbalanced when compared with the statutory right to disclosure of user data in the event of a copyright infringement, and that it deems an extension of this statutory right desirable. The court emphasized, though, that this decision is up to the legislator, not the courts.


The question of whether a claim for disclosure of user data is supported by German civil law had long been debated in legal literature, and lower courts had issued conflicting decisions in similar cases. The appellate court (Higher Regional Court of Stuttgart) had likewise decided in favour of the claimant. This debate has now been ended by the Federal Court, at least for the time being. The judgement is clear and leaves no room for interpretation or loopholes. This is good news both for providers of online platforms, who can safely assure their users that their identity is protected, and for users, who need not fear de-anonymization, which could otherwise result in pre-emptive self-limitation when posting comments.

However, the question remains whether the court duly considered constitutional law aspects, as the German-law concept of personality rights is rooted in the German constitution (right to human dignity, right to personal freedoms). This was the main reason why some lower courts had obvious concerns about the result of their legal assessment and tried to find a way out of the dilemma by applying analogies, or a balancing of interests, on dubious legal grounds in order to overcome a statutory situation deemed inappropriate in some cases. The Federal Court did not touch upon constitutional law issues, so it can be concluded that it at least did not see a blatant violation of constitutional law. Nevertheless, the Federal Court also articulated concerns about the outcome, declaring a revision of the statutes desirable and emphasizing the legislator’s responsibility to consider corresponding amendments of the law. Even though the judgement is final and binding, the claimant may seek additional relief by lodging a constitutional complaint.

The decision does not affect the right of the competent authorities to request disclosure of user data in the course of a criminal prosecution, i.e. in cases where the content of a user review does not merely constitute a violation of personality rights as protected by civil law, but also reaches the threshold of a criminal offence, such as defamation or slander.

UK to introduce emergency data retention measures

Posted on July 15th, 2014

The UK Prime Minister David Cameron announced last week that the Government is taking emergency measures to fast-track new legislation, the Data Retention and Investigatory Powers Bill, which will force communications service providers (i.e. telecommunications companies and internet service providers, together “CSPs”) to store communications data (including call and internet search metadata) for 12 months.

This announcement follows the CJEU’s ruling in April that the Data Retention Directive 2006/24/EC (the “Directive“), which requires companies to store communications data for up to two years, is invalid because it contravenes the right to privacy and data protection and the principle of proportionality under the EU Charter of Fundamental Rights (the “Charter“). The CJEU was particularly concerned about the lack of restrictions on how, why and when data could be used. It called for a measure which was more specific in terms of crimes covered and respective retention periods.

The PM said that the emergency law was necessary to protect existing interception capabilities, and that without it, the Government would be less able to protect the country from paedophiles, terrorists and other serious criminals. Cameron said the new legislation will respond to the CJEU’s concerns and provide a clear legal basis for companies to retain such communications data, and stressed that the new measures would cover the retention of metadata only, such as the time, place and frequency of communications, and would not cover the content of communications. The emergency Bill is intended as a temporary measure and is to expire in 2016. The Government intends that the legislation will ensure that, in the short term, UK security and law enforcement agencies can continue to function whilst Parliament has time to examine the Regulation of Investigatory Powers Act 2000 (RIPA) and make recommendations on how it could be modernised and improved. Whilst Cameron stressed that the measures do not impose new obligations on CSPs and insisted they would not authorise new intrusions on civil liberties, the Bill faces criticism that it extends the already far-reaching interception powers under RIPA and that, in light of the CJEU decision, the temporary measure itself also contravenes the Charter.

At present, in order to comply with their obligations under the Directive, CSPs already operate significant storage and retrieval systems to retain data from which they can derive no further use or revenue. If the draft Bill is enacted with little further amendment, the UK’s Secretary of State could be issuing new retention notices later this year. Those CSPs already subject to retention obligations will be reading these carefully as they arrive. It is not yet clear whether the legislative burden and cost of compliance will spread to additional CSPs not previously notified under the current retention regime; from the Bill’s drafting it appears this could conceivably happen. What is clear is that there is no mechanism for CSPs to recoup these costs other than from their general business operations.

Britain is the first EU country to seek to rewrite its laws to continue data retention since the CJEU decision, and the Government said it was in close contact with other European states on the issue.

By comparison, in Germany, when the Directive was initially implemented, the German courts took the view that the German implementation far exceeded the limits set by the German constitutional right to informational self-determination, in that it did not sufficiently narrow down the scope of use of the retained data, e.g., by not limiting it to the prosecution or prevention of certain severe criminal acts. In Germany’s new Telecommunications Act, enacted in 2012, the provisions pertaining to data retention were deleted and not replaced by the compulsory principles of the Directive. Treaty violation proceedings against Germany by the EU Commission ensued; however, these proceedings have now become moot as a result of the CJEU ruling.

Meanwhile, the Constitutional Court of Austria last month declared that Austrian data retention laws were unconstitutional. Austria is the first EU Member State to annul data retention laws in response to the CJEU decision. Austrian companies are now only obliged to retain data for specific purposes provided by law, such as billing or fault recovery.

Whether other EU countries will now follow the UK’s lead, potentially introducing a patchwork of data retention standards for CSPs throughout the EU, remains to be seen. If this happens, then equally uncertain is the conflict this will create between, on the one hand, nationally-driven data retention standards and, on the other, EU fundamental rights of privacy and data protection.


European Parliament votes in favour of data protection reform

Posted on March 21st, 2014

On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee’s report published in October 2013 (for more information on this report – see this previous article), this month’s vote means that the Council of the European Union (the “Council”) can now formally conduct its reading of the text based on the Parliament’s amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text in what is called the “ordinary legislative procedure” – a process in which the decisions of the Parliament and the Council carry the same weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament’s position; on the contrary, it will likely want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in recent months to discuss the data protection reform. Although there seems to be broad support among Member States for the proposal, they have not yet reached an agreement on some of the key provisions, such as the “one-stop shop” rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the speed of the development of the proposed regulation in the coming months largely depends on this being finalised. Once a position has been reached by the Council then there is also the possibility that the proposals could be amended further. If this happens, the Parliament may need to vote again until the process is complete.

Furthermore, with the elections to the EU Parliament coming up this May, the whole adoption process will be put on hold until a new Parliament is in place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.

CNIL issues new guidelines on the processing of bank card details

Posted on February 27th, 2014

On February 25, 2014, the French Data Protection Authority (“CNIL”) issued a press release regarding new guidelines, adopted last November, on the processing of bank card details relating to the sale of goods and the provision of services at a distance (the “Guidelines”). Due to the increase in online transactions and the higher number of complaints received by the CNIL from customers in recent years, the CNIL decided to repeal and replace its previous guidelines, which dated from 2003. The new Guidelines apply to all types of bank cards, including private payment cards and credit cards.

Purposes of processing

The CNIL defines the main purpose of using a bank card number as processing a transaction with a view to delivering goods or providing a service in return for payment. In addition, bank card details may be processed for the following purposes:

  • to reserve a good or service;
  • to create a payment account to facilitate future payments on a merchant’s website;
  • to enable payment service providers to offer dedicated payment solutions at a distance (e.g., virtual cards or wallets, rechargeable accounts, etc.); and
  • to combat fraud.

Types of data collected

As a general rule, the types of data that are strictly necessary to process online payments should be limited to:

  • the bank card number;
  • the expiry date; and
  • the 3-digit cryptogram on the back of the card.

The cardholder’s identity must not be collected, unless it is necessary for a specific and legitimate purpose, such as to combat fraud.

Period of retention

Bank card details may only be stored for the duration that is necessary to process the transaction, and must be deleted once the payment has taken place (or, where applicable, at the end of the period corresponding to the right of withdrawal). Following this period, the bank card details may be archived and kept for 13 months (or 15 months in the case of a deferred debit card) for evidence purposes (e.g., in case of a dispute over a transaction).

Beyond this period, the bank card details may be kept only if the cardholder’s prior consent is obtained or to prevent fraudulent use of the card. In particular, the merchant must obtain the customer’s prior consent in order to create a payment account that remembers the customer’s bank card details for future payments.

However, the CNIL considers that the 3-digit cryptogram on the card is meant to verify that the cardholder is in possession of his/her card, and thus, it is prohibited to store this number after the end of the transaction, including for future payments.

Security measures

Due to the risk of fraud, controllers must implement appropriate security measures, including measures preventing unauthorized access to, or use of, the data. These security measures must comply with applicable industry standards and requirements, such as the Payment Card Industry Data Security Standard (PCI DSS), which applies to all organizations handling payment card data.

The CNIL recommends that the customer’s bank card details not be stored on his/her terminal equipment (e.g., computer, smartphone) due to the lack of appropriate security measures. Furthermore, bank card numbers must not be used as a means of customer identification.

For security reasons (including those that are imposed on the cardholder), the controller (or processor) cannot request a copy of the bank card to process a payment.

Finally, the CNIL recommends notifying the cardholder if his/her bank card details are breached in order to limit the risk of fraudulent use of the bank card details (e.g., to ask the bank to block the card if there is a risk of fraud).

Future legislation

In light of the anticipated adoption of the Data Protection Regulation, organizations will face more stringent obligations, including privacy-by-design, privacy impact assessments and more transparent privacy policies.

CNIL amends legal framework for whistleblowing schemes in France

Posted on February 25th, 2014

In France, the legal framework for whistleblowing schemes is based on a decision of the French Data Protection Authority (the “CNIL”) of 2005 adopting a “single authorization” AU-004 for the processing of personal data in the context of whistleblowing schemes. In principle, companies must obtain the CNIL’s approval prior to implementing a whistleblowing scheme. The single authorization AU-004 allows companies to do so simply via a self-certification procedure, whereby they make a formal undertaking that their whistleblowing hotline complies with the pre-established conditions set out in the authorization.

Initially, companies could only self-certify to the CNIL’s single authorization AU-004 if they were required to adopt a whistleblowing scheme to comply with legal or regulatory requirements in specific and limited areas (i.e., finance, accounting, banking, the fight against corruption), or if they could demonstrate a legitimate purpose, which, at the time, was limited to complying with Section 301(4) of the U.S. Sarbanes-Oxley Act. In 2010, the CNIL broadened the scope of its single authorization by expanding the “legitimate purpose” condition to cover two new areas: compliance with the Japanese Financial Instruments and Exchange Act and the prevention of anti-competitive practices (i.e., antitrust matters). On January 30th, 2014, the CNIL amended its single authorization AU-004 a second time, essentially to add the following areas to the scope of whistleblowing schemes: the fight against discrimination and harassment at work, compliance with health, hygiene and safety measures in the workplace, and the protection of the environment.

These successive amendments show that the CNIL’s view on whistleblowing schemes has evolved over time and it has adopted a more realistic and pragmatic approach given that, in today’s world, many multinational organizations require their affiliates to implement a streamlined and globalized whistleblowing scheme across multiple jurisdictions. Under the revised framework, whistleblowing schemes are still limited to pre-defined areas and cannot be used for general and unlimited purposes. Nevertheless, the broadened scope of whistleblowing schemes allows companies and their employees to act more in line with an organization’s internal code of business conduct and the various areas that it covers. The CNIL’s decision should therefore enable companies to use their whistleblowing schemes more consistently across jurisdictions and to streamline the reporting process in areas that are commonly recognized as fraudulent or unethical.

The CNIL also clarified its position regarding anonymous reporting. Historically, the CNIL has considered that anonymous reporting creates a high risk of slanderous reporting and can have a disruptive effect on companies. In its decision of January 30, 2014, the CNIL states that organizations must not encourage individuals to make anonymous reports; on the contrary, anonymous reporting should remain exceptional. The CNIL also specifies the conditions that apply to anonymous reporting, namely:

– the seriousness of the facts that were reported must be established and the facts must be sufficiently precise; and

– the anonymous report must be handled with specific precautions. For example, the initial receiver of the report should assess whether it is appropriate to disclose the facts within the whistleblowing framework prior to doing so.

The CNIL’s intention here is to limit the risk of slanderous reporting by encouraging companies to establish a clear and transparent system for employees, while ensuring that the appropriate security and confidentiality measures have been implemented, particularly to protect the identity of the whistleblower.

In effect, the revision of the CNIL’s single authorization AU-004 can also be viewed as a tactical move by the CNIL to funnel companies through the self-certification approval process, rather than having them seek ad hoc approval. It also encourages companies to be more transparent regarding the purposes for which their whistleblowing schemes are used and allows the CNIL to enforce compliance with the Data Protection Act more efficiently.

The CNIL’s decision does not specify any date of entry into force. The amendments therefore came into force on January 30th, 2014, the date of publication of the decision in the Official Journal. The decision also does not specify any grace period for complying with the new conditions; companies are therefore required to comply with them immediately.

This article was initially published in the March 2014 edition of The Privacy Advisor.

EU Parliament’s LIBE Committee Issues Report on State Surveillance

Posted on February 19th, 2014

Last week, the European Parliament’s Civil Liberties Committee (“LIBE”) issued a report into the US National Security Agency (“NSA”) and EU member states’ surveillance of EU citizens (the “Report”). The Report, which among other things questions whether data protection rules should be included in the trade negotiations with the US, was passed by 33 votes to 7, with 17 abstentions. The release of the Report comes at a crucial time for both Europe and the US, but what does this announcement really tell us about the future of international data flows in the eyes of the EU, and about the EU’s relationship with the US?

Background to the Report

The Report follows the US Federal Trade Commission (“FTC”)’s recent response to criticisms from the European Commission and European Parliament following the NSA scandal and subsequent concerns regarding Safe Harbor (for more information on the FTC – see this previous article). The Report draws on recent revelations by whistleblowers and journalists about the extent of mass surveillance activities by governments. In addition, the LIBE Committee argues that the extent of the blanket data collection highlighted by the NSA allegations goes far beyond what would reasonably be expected to counter terrorism and other major security threats. The Report also criticises the international arrangements between the EU and the US, stating that these mechanisms “have failed to provide for the necessary checks and balances and for democratic accountability”.

LIBE Committee’s Recommendations

In order to address the deficiencies highlighted in the Report and to restore trust between the EU and the US, the LIBE Committee proposes several recommendations with a view to preserving the right to privacy and the integrity of EU citizens’ data, including:

  • US authorities and EU Member States should prohibit blanket mass surveillance activities and bulk processing of personal data;
  • The Safe Harbor framework should be suspended, and all transfers currently operating under this mechanism should stop immediately;
  • The status of New Zealand and Canada as ‘adequate’ jurisdictions for the purposes of data transfers should be reassessed;
  • The adoption of the draft EU Data Protection Regulation should be accelerated;
  • The establishment of the European Cloud Partnership must be fast-tracked;
  • A framework for the protection of whistle-blowers must be established;
  • An autonomous EU IT capability must be developed by September 2014, including ENISA minimum security and privacy standards for IT networks;
  • The EU Commission must present a European strategy for democratic governance of the Internet by January 2015; and
  • EU Member States should develop a coherent strategy with the UN, including support of the UN resolution on ‘the right to privacy in the digital age‘.

Restoring trust

The LIBE Committee’s recommendations were widely criticised by politicians for being disproportionate and unrealistic. EU politicians also commented that the Report sets unachievable deadlines and appears to be a step backwards in the debate and, more importantly, in achieving a solution. One of the most controversial proposals in the Report consists of effectively ‘shutting off‘ all data transfers to the US. This could have the counterproductive effect of isolating Europe and would not serve the purpose of achieving an international free flow of data in a truly digital society as is anticipated by the EU data protection reform.

Consequences for Safe Harbor?

The Report serves to communicate further public criticism of the NSA’s alleged intelligence overreach. Whatever the LIBE Committee’s position, it is highly unlikely that Safe Harbor will be suspended or repealed as a result – far too many US-led businesses depend on it for their data flows from the EU, meaning a suspension of Safe Harbor would have a very serious impact on transatlantic trade. Nevertheless, as a consequence of these latest criticisms, it is now more likely than ever that the EU/US Safe Harbor framework will undergo some changes in the near future. As to what, precisely, these will be, only time will tell – though more active FTC enforcement of Safe Harbor breaches now seems inevitable.


FTC in largest-ever Safe Harbor enforcement action

Posted on January 22nd, 2014

Yesterday, the Federal Trade Commission (“FTC“) announced that it had agreed to settle with 12 US businesses for alleged breaches of the US Safe Harbor framework. The companies involved were from a variety of industries and each handled a large amount of consumer data. But aside from the surprise of the large number of companies involved, what does this announcement really tell us about the state of Safe Harbor?

This latest action suggests that the FTC is ramping up its Safe Harbor enforcement in response to recent criticisms from the European Commission and European Parliament about the integrity of Safe Harbor (see here and here) – particularly given that one of the main criticisms about the framework was its historic lack of rigorous enforcement.

Background to the current enforcement

So what did the companies in question do? The FTC’s complaints allege that the companies involved ‘deceptively claimed they held current certifications under the U.S.-EU Safe Harbor framework‘. Although participation in the framework is voluntary, if you publicise that you are Safe Harbor certified then you must, of course, maintain an up-to-date Safe Harbor registration with the US Department of Commerce and comply with your Safe Harbor commitments.

Key compliance takeaways

In this instance, the FTC alleges that the businesses involved had claimed to be Safe Harbor certified when, in fact, they weren’t. The obvious message here is don’t claim to be Safe Harbor certified if you’re not!  

The slightly more subtle compliance takeaway for businesses who are correctly Safe Harbor certified is that they should have in place processes to ensure:

  • that they keep their self-certifications up-to-date by filing timely annual re-certifications;
  • that their privacy policies accurately reflect the status of their self-certification – and if their certifications lapse, that there are processes to adjust those policies accordingly; and
  • that the business is fully meeting all of its Safe Harbor commitments in practice – there must be actual compliance, not just paper compliance.

The “Bigger Picture” for European data exports

Despite this decisive action by the FTC, European concerns about the integrity of Safe Harbor are likely to persist.  If anything, this latest action may serve only to reinforce concerns that some US businesses are either falsely claiming to be Safe Harbor certified when they are not or are not fully living up to their Safe Harbor commitments. 

The service provider community, and especially cloud businesses, will likely feel this pressure most acutely.  Many customers already perceive Safe Harbor to be “unsafe” for data exports and are insisting that their service providers adopt other EU data export compliance solutions.  So what other solutions are available?

While model contracts have the benefit of being a 'tried and tested' solution, the suite of contracts required for global data exports is simply unpalatable to many businesses.  The better solution is, of course, Binding Corporate Rules (BCR) – a voluntary set of self-regulatory policies, adopted by businesses, that satisfy EU data protection standards and which are submitted to, and authorised by, European DPAs.  Since 2012, service providers have been able to adopt processor BCR, and those that do find that this provides them with a greater degree of flexibility to manage their internal data processing arrangements while, at the same time, continuing to afford a high degree of protection for the data they process.

It’s unlikely that Safe Harbor will be suspended or disappear – far too many US businesses are dependent upon it for their EU/CH to US data flows.  However, the Safe Harbor regime will likely change in response to EU concerns and, over time, will come under increasing amounts of regulatory and customer pressure.  So better to consider alternative data export solutions now and start planning accordingly rather than find yourself caught short!


In search of global privacy compliance

Posted on December 19th, 2013 by

In the same way that most activities involving data are global, complying with the rules and regulations affecting those activities is a markedly global endeavour. Whether we are talking of multinational corporations with hundreds of thousands of employees or of a humble start up with a clever idea, an app or a website, the ambitions are the same: tapping into the opportunities of the global marketplace. A digital marketplace that is free from the physical constraints attached to distance, cultures and infrastructure. A marketplace that is huge and that has already turned college dorm ideas into some of the most successful and influential businesses on the planet. But, we must not forget that going global and using personal information collected from all over the world carries equally huge responsibilities which expand well beyond filing forms and sweet talking regulators.

One of the challenges faced by anyone operating globally is the fragmentation of legal regimes affecting the handling of personal information. Today, no single privacy model has emerged as the one to follow universally. Some regimes take an all-encompassing approach, applying principles, obligations and rights to all possible activities involving personal information. In some cases – think Europe – this approach is not only comprehensive, but unashamedly strict. Other regimes go for a more down-to-earth, but still meaningful, approach to regulating privacy, allowing users of data a greater degree of discretion in terms of the precise compliance steps to take. There are jurisdictions where the use of data within some sectors is firmly regulated whilst other sectors are entirely off the hook. This colourful variety of legal regimes and data privacy obligations makes the task of managing privacy on a global scale even more challenging.

An obvious route to take is to look at things on a country-by-country basis and simply try to do whatever it takes to get it right within each jurisdiction, whatever the differences. The trouble here is that compliance often becomes a prohibitively expensive exercise whose only advantage is not falling foul of each local law. The reality is that only a very limited number of organisations have the energy, resources and budget to do this. An insurmountable drawback of this approach is not just the cost of compliance, but the inability to operate globally in a truly consistent way. It is frustrating to see valuable resources devoted to tailoring practices to local demands, which contributes to an inefficient and unproductive way of addressing global privacy needs.

This is exacerbated by the limitations on international data transfers and the finicky ways in which such transfers are meant to be legitimised. Take the standard contractual clauses approved by the European Commission for these purposes, for example. Although the clauses have the seal of approval of the Commission, more than half of the EU Member States still require organisations to submit their data transfer agreements for review and authorisation by the relevant data protection authorities. That is simply absurd. Then, the fact that approvals are restricted to a single contractual document covering a defined set of transfers makes the concept completely unworkable for multiple and evolving data flows. A static contractual agreement is likely to become out of date between the time it is signed and the time it is filed with the authorities – hardly a solid ground on which to build a compliance programme.

Against this background, an unfortunate, but popular, choice is to do nothing. Lawyers and regulators will cringe at the thought of thousands – if not hundreds of thousands – of situations where nothing is actually done to properly address the legal restrictions affecting international data flows. Some organisations manage to spend a small fortune legitimising transfers of data across jurisdictions – both within their own international structures and to third parties – but I have the suspicion that these are a minority in the whole scheme of things. Amongst that minority, only a select group will actually get their act together and implement a workable set of global privacy safeguards. The system seems to tolerate this, and regulators appear content with their ability to scrutinise those who do something about it. But this cannot be right. Global data privacy compliance is neither optional nor a pastime for the select few with the guts and stamina to go public about their practices. It is an essential need that requires a combination of fresh thinking, a workable global framework, a team approach and the right tools.

This article was first published in Data Protection Law & Policy in December 2013 and is an extract from Eduardo Ustaran’s new book The Future of Privacy.

Shifting sands – From consent to user accountability

Posted on December 18th, 2013 by

My colleague Phil Lee pointed out in an earlier blog that we are entering a brave new world and that it demands brave new thinking – thinking that moves away from seeing consent as some kind of data collection panacea. At the IAPP Europe Data Protection Congress in Brussels last week, some of that brave new thinking was evident. One of the buzz phrases was "data user accountability", a term used in a speech given by Viktor Mayer-Schonberger – as the term suggests, at the heart of this concept is the idea that data users should be held to account for the uses that they make of personal data.

One thing that struck me in the context of a separate debate about wearable technology was the sense of outrage in some quarters at the idea of unfair collection of personal data per se – the idea that personal data might be collected about individuals without their knowledge or consent. Increasingly, however, it is becoming evident that trying to itemize each data collection scenario and stamp it with a "Notice given/consent provided" label is like trying to count grains of sand on the beach or stop the inexorable ebb and flow of the tides.

In any event, the nature of a valid consent is that it must be informed. In other words, it is not so much about the fact that personal data is collected but rather about the purposes for which it is used. If we divorce collection of personal data from its use, informed consent becomes an impossible task. If, on the other hand, we focus on use, we can have a sensible discussion about the constraints that can be placed on use and the ways in which data users can be held to account for their actions. It is only in the context of a discussion about use that reference to concepts such as legitimacy, proportionality and the domestic purpose exemption really begin to make sense. If there is a case to be made for some form of notice and choice mechanism, we could also more sensibly explore this issue at the point of use of personal data rather than at the point of collection.

Against the background of this type of discussion, the idea of consent being at the heart of data protection is largely a red herring: the focus will move away from the automatic ticking of a consent box at the point of collection, and towards a discussion about how to restrict the uses of personal data in a meaningful way, or how to give individuals an effective way of opting out of particular data uses where it is feasible to do so.

Aside from some extreme scenarios involving breaches of civil liberties, if the benefits of technological innovation have as a side-effect the collection of personal data about us, who really cares if it is just sitting in an electronic storage bunker for a limited period? The key concerns are that appropriate restrictions are placed on its use, data quality standards are adhered to, appropriate security measures are in place and suitable retention periods are applied.

Let’s stop fooling ourselves that consent mechanisms offer a way of exercising genuine choice about how our personal data is used. Let’s stop trying to account for every grain of sand but instead begin to discuss how we can hold accountable those who use it in ways for which it was never intended to be used.