Archive for the ‘Mobile telecoms’ Category

Data security breach notification: it’s coming your way!

Posted on July 2nd, 2013



Data breach notification laws have existed in the US for several years. California was the first state to introduce a data breach notification law in 2002, and forty-five other US states soon followed. In 2012, the US Senate introduced a Data Security and Breach Notification Act which, if enacted, would establish a national data security and breach notification standard for the protection of consumers' electronic personal information across the US.

In Europe, data breach notification has only drawn attention at a political and legislative level following recent press coverage of data breach scandals. Nevertheless, the numerous debates, initiatives and legislative proposals that have appeared in recent months are evidence of Europe's growing interest in this topic, and of recognition of the need to regulate. As an example, the EU Commission's Directorate General for Communications Networks, Content and Technology (DG CONNECT) recently proposed to "explore the extension of security breach notification provisions, as part of the modernisation of the EU personal data protection regulatory framework" in its Digital Agenda for Europe (action 34).

From a legislative perspective, things have been moving forward rather steadily for several years. In 2009, the European legislator adopted a pan-European data breach notification requirement for the first time, under the amended ePrivacy directive 2002/58/EC (the "ePrivacy directive"). True, the directive only applies to "providers of publicly available electronic communications services" (mainly telecom operators and ISPs), but in a limited number of EU Member States the ePrivacy directive was implemented with a much broader scope (e.g., Germany). In June 2013, the European Commission released a new regulation setting out the technical implementing measures for data breach notification by telecom operators and ISPs.

Following this first legislative step, the European Commission has recently made two further legislative proposals. The first, which has drawn the most attention, is its proposal for a new regulation to replace the current Data Protection Directive 95/46/EC. If adopted, this Regulation would introduce a general obligation for all data controllers, across business sectors, to notify the regulator of a breach without undue delay, and not later than 24 hours after having become aware of it. Companies would also have to notify affected individuals without undue delay where a breach could adversely affect them. This Regulation would apply not only to organizations established on the territory of the EU, but also to those that are not established within the EU but target EU citizens, either by offering them goods and services or by monitoring their behaviour.

Needless to say, stakeholders and lobbyists in Brussels have been actively campaigning against the proposed data breach provisions for months on the grounds that they are unfriendly to business, cumbersome and impractical. Following the debates at the European Parliament and the Council of Ministers on the proposed Regulation, a less prescriptive, more business-friendly version of the data breach provisions may end up being adopted. Currently, discussions are ongoing in an attempt to limit the scope of the data breach requirements to breaches that are "likely to severely affect the rights and freedoms of individuals". The deadline for reporting breaches could also be extended to 72 hours. At this point, it is impossible to predict with certainty what the final wording of those provisions will be. However, there does seem to be a consensus among the EU institutions and member states that, one way or another, a data breach notification requirement must be introduced in the Regulation.

Secondly, the European Commission has proposed a directive that aims to impose new measures to ensure a high common level of network and information security across the EU. The Directive concerns public administrations and market operators, namely “providers of information society services” (i.e., e-commerce platforms, internet payment gateways, social networks, search engines, cloud computing services, application stores) and “operators of critical infrastructure that are essential for the maintenance of vital economic and societal activities in the fields of energy, transport, banking, stock exchanges and health.” The Directive would require them to report significant cyber incidents (e.g., an electricity outage, the unavailability of an online booking engine, or the compromise of air traffic control due to an outage or a cyber attack) to a national competent authority.

So what does this tell companies?

First, that data security in general and data breach notification in particular are drawing more and more attention, and thus cannot be ignored. As was the case a few years ago in the US, data breach notification is bound to become one of the hottest legal issues in Europe in the coming years. The legal framework for data breach notification may still be a work-in-progress, but nevertheless it is becoming a reality in Europe. Second, companies should not wait until data breach laws come into force in Europe to start implementing an action plan for handling data breaches. While data breach notification may not yet be a legal requirement for all companies in Europe, the reputational damage caused by a single data breach should motivate companies to implement robust data breach handling procedures. Finally, data breach notification can be viewed as a competitive advantage that enables companies to be more forthcoming and transparent vis-à-vis clients and customers who entrust them with their personal data.

For more information on data security breach notification rules in France, view my article in English: "Complying with Data Breach Requirements in France" (first published in BNA's World Data Protection Report); and in French: "La notification des violations de données à caractère personnel: analyse et décryptage" (first published in Lamy Droit de l'Immatériel).

The conflicting realities of data globalisation

Posted on June 17th, 2013



The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data travels every second that goes by. But data is always on the move, and we don't even need to leave home to contribute to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some more public than others.

The Internet is global by definition. Or, more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly reassembled at their destination. That concept developed very quickly into a borderless network, and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

This trend has led visionaries like Google's Eric Schmidt to affirm that, thanks to mobile technology, the number of digitally connected people will more than triple very soon – going from the current 2 billion to 7 billion people – with a corresponding multiplication of the data they generate. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world's professionals.

As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on the current and forthcoming legislative restrictions on international data flows, which are precisely one of the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future 'adequacy findings' by the European Commission, as well as Safe Harbor, will be negatively affected. We can assume that if the European Commission decides to seek a re-negotiation of Safe Harbor, this affair will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons in a way that allows no blanket disclosures. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

The conflicting realities of data globalisation could not be more striking. On the one hand, everyday practice shows that data is geographically neutral and simply flows across global networks, making itself available to those with access to it. On the other, it is going to take a fair amount of convincing to ensure that any restrictions on international data flows are both measured and realistic. To address these conflicting realities, we must acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. Since we cannot stop the technological evolution of our time or the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

This article was first published in Data Protection Law & Policy in June 2013.

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013



Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, privacy issues abound, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men's-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes, whatever else you care to think of—will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.

Why? 

Because when you build compliance around explicit consent notices, it's inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices come to be seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply 'bolt on' a detailed consent notice as a 'take it or leave it' proposition on installation or first use, just as terms of service are now. And, as technology becomes ever more complicated, it becomes ever more likely that consumers won't really understand what it is they're consenting to anyway, no matter how well it's explained. It's also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don't believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post was first published on the IAPP Privacy Perspectives here.

Privacy pointers for appreneurs

Posted on May 31st, 2013



While parts of the global economy continue to suffer serious economic shocks, an individual with a computer, internet access and the necessary know-how can join the increasing ranks of the appreneurs – people developing and hoping to make money from apps. Buoyed by the stories of wunderkinds such as 17-year-old Nick D'Aloisio, who sold his Summly app to Yahoo for around £18m earlier this year, many are seeking to become appillionaires! And undoubtedly a rosy future will beckon for those fortunate enough to hit on the right app at the right time.

As the popularity of mobile and tablet devices rises, the proliferation of apps will continue. But some apps will sink without a trace and some will become global hits. Amidst all the excitement, those developing apps would do well to consider certain essential privacy pointers, in order both to anticipate any potential obstacles to widespread adoption and to avoid any unwelcome regulatory attention down the road. These include:

1. Think privacy from the beginning – design your app so that it shows an understanding of privacy issues from the start, e.g., by including settings that give individuals control over what data you collect about them, usually through providing an opt-out (a short sketch of such a setting follows this list);

2. Tell individuals what you’re doing – include a notice setting out how you use their data, make sure that the notice is accessible and in a language that people can understand, and adopt a ‘surprise minimisation’ approach so that you can reasonably argue that individuals would not be surprised by the data you collect on them in a given context;

3. Decide whether you're sharing the data you collect with anyone else – if so, make sure that there's a good reason to share the data, tell individuals about the data sharing and check to see whether there are any rules that require you to obtain individuals' consent before sharing their data, e.g., for marketing purposes;

4. Check to see whether you’re collecting special types of data – be aware that certain types of data (such as location data or health data) are considered more intrusive and you may need to obtain an individual’s consent before collecting this data;

5. Implement an implied consent solution when using cookies or other tracking technologies in the EU – the debate is pretty much over on how to comply with the EU cookie rule, since implied consent is increasingly being accepted by regulators (see Phil Lee's recent blog).
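By way of illustration of pointer 1, here is a minimal sketch of an in-app analytics opt-out, assuming an Android app written in Kotlin; the class name, preference key and backend call are illustrative, not a real SDK:

```kotlin
import android.content.Context

// Illustrative only: a tiny preference store that could back a privacy
// settings screen, giving the user a persistent analytics opt-out.
class PrivacySettings(context: Context) {
    private val prefs = context.getSharedPreferences("privacy", Context.MODE_PRIVATE)

    var analyticsOptOut: Boolean
        get() = prefs.getBoolean("analytics_opt_out", false)
        set(value) = prefs.edit().putBoolean("analytics_opt_out", value).apply()
}

// Check the user's choice before any collection happens, not after.
fun trackEvent(settings: PrivacySettings, eventName: String) {
    if (settings.analyticsOptOut) return
    // ... send eventName to your (hypothetical) analytics backend ...
}
```

Wiring the flag to a visible settings toggle, rather than burying the choice in a privacy policy, is what gives individuals the control that pointer 1 describes.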

While an initiative scrutinising app privacy policies and practices (similar to the 'Internet Sweep Day' recently initiated by the Global Privacy Enforcement Network) is probably some time off, appreneurs who can get privacy 'right' from the start will have a competitive advantage over those who do not.

Is BYOD secure for your company?

Posted on May 24th, 2013



Over the past few years, BYOD has developed rapidly and has even become common practice within some companies. More and more employees are using their own electronic devices (e.g., smartphones and tablets) at work. The benefits for companies are indisputable in terms of cost savings, work productivity and the functionalities that smart devices can offer to employees. However, BYOD can also pose a threat to the security of a company's information network and systems when used without the proper level of security. On May 15, 2013, the French Agency for the Security of Information Systems (ANSSI) released a technical paper advising companies to implement stronger security measures when authorizing their employees to use such devices.

The Agency notes that the security standards currently used by companies are insufficient to protect their professional data effectively. Electronic devices store large amounts of data obtained either directly (e.g., emails, agendas, contacts, photos, documents, SMS) or indirectly (navigation data, geolocation data, browsing history). Some of this data may be considered sensitive by companies (e.g., access codes and passwords, security certificates) and may be used fraudulently to access business information stored on the company's professional network. The use of electronic devices in the workplace thus carries a risk that business data may be modified, destroyed or disclosed unlawfully. In particular, the risk of a data security breach deriving from the use of an electronic device is quite high due to the numerous functionalities such devices offer. This risk is generally explained by the vulnerability of the operating systems installed on electronic devices, but also by the behaviour of employees who are not properly informed about the risks.

The Agency acknowledges that it is unrealistic to expect to reach a high level of security when using mobile devices, regardless of the security parameters applied. Nevertheless, it recommends that companies implement certain security parameters in order to mitigate the risk of a security incident. These parameters should be installed on the employee's device within a single profile that he or she cannot modify. In addition to the technical measures, companies should also implement organizational measures, such as a security policy and an internal document explaining to employees the authorized uses of IT systems and devices. Finally, those security measures should be reassessed throughout the lifecycle of the electronic device (i.e., the inherent security of the device, the security of the information system before the device is used by the employee, the security conditions applied to the entire pool of electronic devices, and the reinitialization of devices before they are reallocated).

The twenty-one security measures outlined in the Agency's paper are categorized as follows (a brief configuration sketch follows the list):

– access control: renew the password every three months; lock the device automatically after five minutes of inactivity; use a PIN code when sensitive data are stored on the device; limit the number of attempts to unlock the device;

– security of applications: prohibit the by-default use of online application stores; prohibit the unauthorized installation of applications; block the geolocation functionality for applications that do not need it; switch off the geolocation functionality when not in use; install security patches on a regular basis;

– security of data and communications: deactivate wireless connections (e.g., Bluetooth, Wi-Fi) when not in use; avoid connecting to unknown wireless networks where possible; apply robust encryption to the internal storage of the device; share sensitive data only over encrypted communication channels in order to maintain the confidentiality and integrity of the data;

– security of the information system: keep the operating system up to date by installing security patches on a regular basis; if needed, reinitialize the device entirely once a year.
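To make this more concrete, the sketch below shows how a few of these parameters could map onto Android's device administration API. It is a minimal illustration only, not ANSSI's prescription: the CompanyAdminReceiver component, the wipe threshold of ten attempts and the choice of Kotlin are all assumptions.

```kotlin
import android.app.admin.DeviceAdminReceiver
import android.app.admin.DevicePolicyManager
import android.content.ComponentName
import android.content.Context

// Hypothetical admin receiver, declared in AndroidManifest.xml.
class CompanyAdminReceiver : DeviceAdminReceiver()

// Illustrative only: enforce a handful of ANSSI-style parameters via
// Android's device admin API.
class PolicyEnforcer(context: Context) {
    private val dpm =
        context.getSystemService(Context.DEVICE_POLICY_SERVICE) as DevicePolicyManager
    private val admin = ComponentName(context, CompanyAdminReceiver::class.java)

    fun applyBaseline() {
        // Access control: force password renewal roughly every three months.
        dpm.setPasswordExpirationTimeout(admin, 90L * 24 * 60 * 60 * 1000)
        // Access control: lock the device automatically after five minutes.
        dpm.setMaximumTimeToLock(admin, 5L * 60 * 1000)
        // Access control: require at least a numeric PIN.
        dpm.setPasswordQuality(admin, DevicePolicyManager.PASSWORD_QUALITY_NUMERIC)
        // Access control: limit unlock attempts (wipe after 10 failures).
        dpm.setMaximumFailedPasswordsForWipe(admin, 10)
        // Data security: request encryption of the device's internal storage.
        dpm.setStorageEncryption(admin, true)
    }
}
```

Enforcing these settings through a managed profile, rather than relying on employees to configure them, reflects the Agency's point that the parameters should sit in a profile the user cannot modify.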

The Agency explains that these security parameters are incompatible with a BYOD policy involving the combined use of a device for both private and professional purposes. It recommends that professional devices be used exclusively for that purpose (meaning that employees should have a separate device for private use) and, if the same device is used both professionally and privately, that the two environments be separated effectively.

The Agency’s paper is available (in French) by clicking on the following link: NP_Ordiphones_NoteTech[1]

The familiar perils of the mobile ecosystem

Posted on March 18th, 2013



I had not heard the word 'ecosystem' since school biology lessons.  But all of a sudden, someone at a networking event dropped the 'e' word, and these days no discussion about mobile communications takes place without the word 'ecosystem' being uttered in almost every sentence.  An ecosystem is normally defined as a community of living things helping each other out (some more willingly than others) in a relatively contained environment.  The point of an ecosystem is that completely different organisms – each with different purposes and priorities – are able to co-exist in a more or less harmonious but eclectic way.  The parallel between that description and what is happening in the mobile space is evident.  Mobile communications have evolved around us to adopt a life of their own, separate from traditional desktop-based computing and web browsing.  Through the interaction of very different players, our experience of communications on the go via smart devices has become an intrinsic part of our everyday lives.

Mobile apps in particular have penetrated our devices and lifestyles in the most natural of ways.  Studies suggest that the average smartphone user downloads 37 apps.  The fact that the term 'app' was listed as Word of the Year in 2010 by the American Dialect Society is quite telling.  Originally conceived to provide practical functions like calendars, calculators and ring tones, mobile apps now bring us anything that can be digitised and has a role to play in our lives.  In other words, our use of technology has never been as close and personal.  Our mobile devices are an extension of ourselves, and mobile apps are an accurate tool to record our every move (in some cases, literally!).  As a result, the way in which we use mobile devices tells a very accurate story of who we are, what we do and what we are about.  Conspiracy theories aside, it is a fact that smartphones are the perfect surveillance device – and most of us don't even know it!

Policy makers and regulators throughout the world are quickly becoming very sensitive to the privacy risks of mobile apps.  Enforcement is the loudest mechanism for showing that nervousness, but the proliferation of guidance on complying with the law in relation to the development, provision and operation of apps has been a clear sign of the level of concern.  Regulators in Canada, the USA and, more recently, Europe have voiced serious concerns about such risks.  The close and intimate relationship between the (almost always on) devices and their users is widely seen as an aggravating factor in the potential for snooping, data collection and profiling.  Canadian regulators are particularly concerned about the lightning speed of the app development cycle and the ability to reach hundreds of thousands of users within a very short period of time.  Another generally shared concern is the fragmentation between the many players in the mobile ecosystem – telcos, handset manufacturers, operating system providers, app stores, app developers, app operators and, of course, anybody else who wants a piece of the rich mobile cake – and the complexity that this adds.

All of that appears to compromise undisputed traditional principles of privacy and data protection: transparency, individuals’ control over their data and purpose limitation.  It is easy to see why that is the case.  How can we even attempt to understand – let alone control – all of the ways in which the information generated by our non-stop use of apps may potentially be used when all such uses are not yet known, the communication device is undersized and our eagerness to start using the app acts as a blindfold?  No matter how well intended the regulators’ guidance may be, it is always going to be a tall order to follow, particularly when the expectations of those regulators in terms of the quality of the notice and consent are understandably high.  In addition, the bulk of the guidance has been targeted at app developers, a key but in many cases insignificant player in the whole ecosystem.  Why is the enthusiastic but humble app developer the focus of the compliance guidelines when some of the other parties – led by the operator of the app, which is probably the most visible party to the user – play a much greater role in determining which data will be used and by whom?

Thanks to their ubiquity, physical proximity to the user and personal nature, mobile communications and apps pose a massive regulatory challenge to those who make and interpret privacy rules, and an even harder compliance conundrum to those who have to observe them.  That is obviously not a reason to give up and efforts must be made by anyone who plays a part to contribute to the solution.  People are entitled to use mobile technology in a private, productive and safe way.  But we must acknowledge that this new ecosystem is so complex that granting people full control of the data generated by such use is unlikely to be viable.  As with any other rapidly evolving technology, the privacy perils are genuine but attention must be given to all players and, more importantly, to any mechanisms that allow us to distinguish between legitimate and inappropriate uses of data.  Compliance with data protection in relation to apps should be about giving people what they want whilst avoiding what they would not want.

This article was first published in Data Protection Law & Policy in March 2013.

Designing privacy for mobile apps

Posted on March 16th, 2013



My phone is my best friend.  I carry it everywhere with me and entrust it with vast amounts of my personal information, for the most part with little idea about who has access to that information, what they use it for, or where it goes.  And what's more, I'm not alone.  There are some 6 billion mobile phone subscribers out there, and I'm willing to bet that most – if not all – of them are every bit as unaware of their mobile data uses as I am.

So it’s hardly surprising that the Article 29 Working Party has weighed in on the issue with an “opinion on apps on smart devices” (available here).  The Working Party splits its recommendations across the four key players in the mobile ecosystem (app developers, OS and device manufacturers, app stores and third parties such as ad networks and analytics providers), with app developers receiving the bulk of the attention.

Working Party recommendations

Many of the Working Party's recommendations don't come as a great surprise: provide mobile users with meaningful transparency, avoid data usage creep (data collected for one purpose shouldn't be used for other purposes), minimise the data collected, and provide robust security.  But other recommendations will raise eyebrows, including that:

(*)  the Working Party doesn’t meaningfully distinguish between the roles of an app publisher and an app developer – mostly treating them as one and the same.  So, the ten man design agency engaged by Global Brand plc to build it a whizzy new mobile app is effectively treated as having the same compliance responsibilities as Global Brand, even though it will ultimately be Global Brand who publicly releases the app and exploits the data collected through it;

(*)  the Working Party considers EU data protection law to apply whenever a data collecting app is released into the European market, regardless of where the app developer itself is located globally.  So developers who are based outside of Europe but who enjoy global release of their app on Apple’s App Store or Google Play may unwittingly find themselves subjected to EU data protection requirements;

(*)  the Working Party takes the view that device identifiers like UDID, IMEI and IMSI numbers all qualify as personal data, and so should be afforded the full protection of European data protection law.  This has a particular impact on the mobile ad industry, which typically collects these numbers for ad serving and ad tracking purposes but aims to mitigate regulatory exposure by carefully avoiding collection of "real world" identifiers (a sketch of one common mitigation follows this list);

(*)  the Working Party places a heavy emphasis on the need for user opt-in consent, and does not address situations where the very nature of the app may make it so obvious to the user what information the app will collect as to make consent unnecessary (or implied through user download); and

(*)  the Working Party does not address the issue of data exports.  Most apps are powered by cloud-based functionality and supported by global service providers meaning that, perhaps more than in any other context, the shortfalls of common data export solutions like model clauses and safe harbor become very apparent.
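On the device identifier point, a mitigation sometimes used in practice is to transform identifiers such as the UDID or IMEI before they leave the device. Below is a minimal Kotlin sketch, on the assumption that a salted SHA-256 hash is used; note that, on the Working Party's analysis, the output would arguably still be personal data, since it consistently singles out the same device, so this reduces exposure rather than eliminating it.

```kotlin
import java.security.MessageDigest

// Illustrative only: derive a pseudonymous, app-specific identifier so the
// raw hardware ID is never transmitted. "appSalt" is an assumed secret,
// unique to the app, that prevents trivial cross-party matching of IDs.
fun pseudonymousDeviceId(rawId: String, appSalt: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
    return digest.digest((appSalt + rawId).toByteArray(Charsets.UTF_8))
        .joinToString("") { "%02x".format(it) }
}
```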

Designing for privacy
Mobile privacy is hard.  In her guidance on mobile apps, the California Attorney-General rightly acknowledged that: “Protecting consumer privacy is a team sport. The decisions and actions of many players, operating individually and jointly, determine privacy outcomes for users. Hardware manufacturers, operating system developers, mobile telecommunications carriers, advertising networks, and mobile app developers all play a part, and their collaboration is crucial to enabling consumers to enjoy mobile apps without having to sacrifice their privacy.”
Building mobile apps that are truly privacy compliant requires a privacy by design approach from the outset.  But, for any mobile app build, there are some top tips that developers should be aware of:
  1. Always, always have a privacy policy.  The poor privacy policy has been much maligned in recent years but, whether or not it's the best way to tell people what you do with their information (it's not), it still remains an expected standard.  App developers need to make sure they have a privacy policy that accurately reflects how they will use and protect individuals' personal information, and make this available both prior to download (e.g. published on the app store download page) and in-app.  Not having one is a sure-fire way to fall foul of privacy authorities – as evidenced in the ongoing Delta Air Lines case.
  2. Surprise minimisation.  The Working Party emphasises the need for user consents and, in certain contexts, consent will of course be appropriate (e.g. when accessing real-time GPS data).  But, to my mind, the better standard is that proposed by the California Attorney-General of "surprise minimisation", which she explains as the use of "enhanced measures to alert users and give them control over data practices that are not related to an app's basic functionality or that involve sensitive information." Just-in-time privacy notices combined with meaningful user controls are the way forward (a sketch follows this list).
  3. Release "free" and "premium" versions.  The Working Party says that individuals must have real choice over whether or not apps collect personal information about them.  However, developers will commonly complain that real choice simply isn't an option – if they're going to provide an app for free, then they need to collect and monetise data through it (e.g. through in-app targeted advertising).  An obvious solution is to release two versions of the app – one for "free" that is funded by exploiting user data, and one that is paid for but collects only the user data necessary to operate the app.  That way, users who don't want to have their data monetised can choose to download the paid-for "premium" version instead – in other words, they have choice.
  4. Provide privacy menu settings.  It's surprising how relatively few apps offer this, but privacy settings should be built into app menus as a matter of course – for example, offering users the ability to delete app usage histories, turn off social networking integration, restrict location data use, etc.  Empowered users are happy users, and happy users mean happy regulators.
  5. Know Your Service Providers.  Apps serve as a gateway to user data for a wide variety of mobile ecosystem operators – and any one of those operators might, potentially, misuse the data it accesses.  Developers need to be particularly careful when integrating third party APIs into their apps, making sure that they properly understand their service providers’ data practices.  Failure to do proper due diligence will leave the developer exposed.
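As an illustration of tip 2, here is a minimal sketch of a just-in-time location notice on Android, written in Kotlin. The dialog wording, the REQUEST_LOCATION constant and the use of the runtime-permission API are assumptions for illustration, not anything mandated by the Working Party or the Attorney-General:

```kotlin
import android.Manifest
import android.app.Activity
import android.app.AlertDialog
import androidx.core.app.ActivityCompat

const val REQUEST_LOCATION = 1 // hypothetical request code

// Illustrative only: explain the purpose of collection at the moment of
// use, and let the user decline without losing the rest of the app.
fun requestLocationWithNotice(activity: Activity) {
    AlertDialog.Builder(activity)
        .setTitle("Use your location?")
        .setMessage(
            "Your location is used only to show nearby results. " +
            "It is never shared with advertisers."
        )
        .setPositiveButton("Allow") { _, _ ->
            // The actual grant/deny result arrives in the activity's
            // onRequestPermissionsResult callback.
            ActivityCompat.requestPermissions(
                activity,
                arrayOf(Manifest.permission.ACCESS_FINE_LOCATION),
                REQUEST_LOCATION
            )
        }
        .setNegativeButton("Not now", null) // app keeps working without location
        .show()
}
```

Surfacing the purpose in context like this, rather than in an install-time wall of text, is what makes the notice "just-in-time" and keeps the data practice from surprising the user.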

Any developer will tell you that you don’t build great products by designing to achieve compliance; instead, you build great products by designing a great user experience.  Fortunately, in privacy, both goals are aligned.  A great privacy experience is necessarily part and parcel of a great user experience, and developers need to address users’ privacy needs at the earliest stages of development, through to release and beyond.

2013 to be the year of mobile regulation?

Posted on January 4th, 2013



After a jolly festive period (considerably warmer, I'm led to understand, for me in Palo Alto than for my colleagues in the UK), the New Year is upon us and privacy professionals everywhere will no doubt be turning their minds to what 2013 has in store for them.  Certainly, there are plenty of developments to keep abreast of, ranging from the ongoing EU regulatory reform process through to the recent formal recognition of Binding Corporate Rules for processors.  My partner, Eduardo Ustaran, has posted an excellent blog outlining his predictions here.

But one safe bet for greater regulatory attention this year is mobile apps and platforms.  Indeed, with all the excitement surrounding cookie consent and EU regulatory reform, mobile has remained largely overlooked by EU data protection authorities to date.  Sure, we’ve had the Article 29 Working Party opine on geolocation services and on facial recognition in mobile services.  The Norwegian Data Protection Inspectorate even published a report on mobile apps in 2011 (“What does your app know about you?“).  But really, that’s been about it.  Pretty uninspiring, not to mention surprising, when consumers are fast abandoning their creaky old desktop machines and accessing online services through shiny new smartphones and tablets: Forbes even reports that mobile access now accounts for 43% of total minutes spent on Facebook by its users.

Migration from traditional computing platforms to mobile computing is not, in and of itself, enough to guarantee regulator interest.  But there are plenty of other reasons to believe that mobile apps and platforms will come under increased scrutiny this year:

1.  First, meaningful regulatory guidance is long overdue.  Mobiles are inherently more privacy-invasive than any other computing platform.  We entrust more data to our mobile devices (in my case, my photos, address books, social networking, banking and shopping account details, geolocation patterns, and private correspondence) than to any other platform, and generally with far less security – that 4-digit PIN really doesn't pass muster.  We download apps from third parties we've often scarcely ever heard of, with no idea as to what information they're going to collect or how they're going to use it, and grant them all manner of permissions without even thinking – why, exactly, does that flashlight app need to know details of my real-time location?  Yet despite the huge potential for privacy invasion, there persists a broad lack of understanding as to who is accountable for compliance failures (the app store, the platform provider, the network provider or the app developer) and what measures they should be implementing to avoid privacy breaches in the first place.  This uncertainty and confusion makes regulatory involvement inevitable.

2.  Second, regulators are already beginning to get active in the mobile space – if this were not the case, the point above would be pure speculation.  It's not, though.  On my side of the Pond, we've recently seen the California Attorney General file suit against Delta Air Lines for its failure to include a privacy policy within its mobile app (this action itself following letters sent by the AG to multiple app providers warning them to get their acts together).  Then, a few days later, the FTC released a report on children's data collection through mobile apps, in which it indicated that it was launching multiple investigations into potential violations of the Children's Online Privacy Protection Act (COPPA) and the FTC Act's unfair and deceptive practices regime.  The writing is on the wall, and it's likely EU regulators will begin following the FTC's lead.

3.  Third, the Article 29 Working Party intends to do just that.  In a press release in October, the Working Party announced that “Considering the rapid increase in the use of smartphones, the amount of downloaded apps worldwide and the existence of many small-sized app-developers, the Working Party… [will] publish guidance on mobile apps… early next year.” So guidance is coming and, bearing in mind that the Article 29 Working Party is made up of representatives from national EU data protection authorities, it’s safe to say that mobile privacy is riding high on the EU regulatory agenda.

In 2010, the Wall Street Journal reported: “An examination of 101 popular smartphone “apps”—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone’s unique device ID to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone’s location in some way. Five sent age, gender and other personal details to outsiders… Many apps don’t offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn’t provide privacy policies on their websites or inside the apps at the time of testing.”  Since then, there hasn’t been a great deal of improvement.  My money’s on 2013 being the year that this will change.

Technology issues that will shape privacy in 2013

Posted on December 13th, 2012



Making predictions as we approach a new year has become a bit of a tradition.  The degree of error is typically proportional to the level of boldness of those predictions, but as in the early days of weather forecasting, the accuracy expectations attached to big statements about what may or may not happen in today’s uncertain world are pretty low.  Having said that, it wouldn’t be particularly risky to assume that during 2013, the EU legislative bodies will be thinking hard about things like whether the current definition of personal data is wide enough, what kind of security breach should trigger a public disclosure, the right amount for monetary fines or the scope of the European Commission’s power to adopt ‘delegated acts’.  But whilst it is easy to get distracted by the fascinating data protection legislative developments currently taking place in the EU, next year’s key privacy developments will be significantly shaped by the equally fascinating technological revolution of our time.

A so far low-profile issue from a regulatory perspective has been the ever-growing mobile app phenomenon.  Like having a website in the late 90s, launching a mobile app has become a 'must do' for any self-respecting consumer-facing business.  However, even the simplest app is likely to be many times more sophisticated than the early websites and will collect much more useful and clever data about its users and their lifestyles.  That is a fact and, on the whole, apps are a very beneficial technological development for the 21st-century homo-mobile.  The key issue is how this development can be reconciled with the current data protection rules dealing with information provision, grounds for processing and data proportionality.  Until now, technology has, as usual, led the way and the law is clumsily trying to follow, but in the next few months we are likely to witness much more legal activity on this front than we have seen to date.

Mobile data collection via apps has been a focus of attention in the USA for a while, but recent developments are a clue to what is about to happen.  The spark may well have been ignited by the California Attorney General who, in the first ever legal action under the state's online privacy law, is suing Delta Air Lines for distributing a mobile application without a privacy policy.  Delta had reportedly been operating its mobile app without a privacy policy since at least 2010 and did not manage to post one after being ordered by the authorities to do so.  On a similar although slightly more alarming note, children's mobile game company Mobbles is being accused by the Center for Digital Democracy of violating COPPA, which establishes strict parental consent rules affecting the collection of children's data.  These are unlikely to be isolated incidents, given that app operators tend to collect more data than is necessary to run the app.  In fact, these cases are almost certainly the start of a trend that will extend to Europe in 2013 and lead EU data protection authorities and mobile app developers to lock horns on how to achieve a decent degree of compliance in this environment.

Speaking of locking horns, next year (possibly quite early on) we will see the first instances of enforcement of the cookie consent requirement.  What is likely to be big about this is not so much the amount of the fines or the volume of enforcement actions, but the fact that we will see for real what the regulators’ compliance expectations actually are.  Will ‘implied consent’ become the norm or will websites suddenly rush to present their users with hard opt-in mechanisms before placing cookies on their devices?  Much would need to change for the latter to prevail but at the same time, the ‘wait and see’ attitude that has ruled to date will be over soon, as the bar will be set and the decision to comply or not will be based purely on risk – an unfortunate position to be in, caused by an ill-drafted law.  Let that be a lesson for the future.

The other big technological phenomenon that will impact on privacy and security practices – probably in a positive way – will be the cloud.  Much has been written on the data protection implications of cloud computing in the past months.  Regulators have given detailed advice.  Policy makers have made grand statements.  But the real action will be seen in 2013, when a number of leaders in the field start rolling out Binding Safe Processor Rules programmes and regulators are faced with the prospect of scrutinising global cloud vendors’ data protection offerings.  Let us hope that we can use this opportunity to listen to each other’s concerns, agree a commercially realistic set of standards and get the balance right.  That would be a massive achievement.


This article was first published in Data Protection Law & Policy in December 2012.

Mobile privacy – is there an app for that?

Posted on April 20th, 2012



Next week I’ll be chairing a session at the IAPP’s Data Protection Intensive in London on mobile privacy. In advance of my session (and without giving too much away – I highly recommend attending the event!), I thought I’d set out a few key thoughts on the issues mobile operators and developers need to consider when launching mobile apps:

  • Why does m-privacy matter? It's simple: if you're anything like me, your mobile device has become your closest, most trusted friend. No one knows more about you: your phone knows where you go, who you know, and the passwords to your banking, shopping and social networking accounts. It looks after your diary and has access to all your most treasured and personal photos. This is all very sensitive information – and your phone holds an awful lot of it.
  • Why is m-privacy hard (practically)? Because the actors, devices and consumer expectations are so many and so varied. In the course of downloading, installing and running an app, a consumer will share data with or through the device platform, the relevant app marketplace, the application developer, and various ad networks, analytics providers, payment processors and mobile carriers. Consumers can access apps through smartphones, tablets, netbooks or other mobile devices – each platform with its own data access permissions, device-unique data types, and screen sizes and resolutions, making efforts to design a simple 'one size fits all' privacy notice a real challenge. Adopting a privacy by design approach is not a nice-to-have in the mobile environment – it's a necessity.
  • Why is m-privacy hard (legally)? From a privacy perspective, data protection, e-privacy, communications interception and data retention laws – both in the EU and beyond – can all apply to data collected from mobile devices. Widen the picture out into general consumer law, and issues arise around applicable law, mandatory consumer terms, liability and enforceability of terms (to name but a few). As a few press reports have highlighted recently, just because you CAN access data doesn't mean you should – the recent furore surrounding the Girls Around Me app being a very good case in point (see here). And to make matters more complicated, the data protection laws we have can often apply in surprising and unexpected ways – remember, many of them date back to before any of us even had a mobile. Should device ID data really be considered 'personal data'? Why do 'cookie consent' rules apply to mobile apps? Do SoLoMo applications REALLY need to get opt-in consent for location data use?

If you’re attending the IAPP Intensive next week, then do come along and join my session to answer all of these questions – and more!