Archive for the ‘Geolocation’ Category

New law on real-time geolocation creates concerns over right to privacy in France

Posted on February 26th, 2014



On February 24, 2014, the French Parliament adopted a new law regulating the real-time geolocation of individuals for law enforcement purposes (the “Law”). The Law was adopted in response to two decisions of the Court of Cassation of October 22nd, 2013, which ruled that the use of real-time geolocation devices in judicial proceedings constitutes an invasion of privacy that must be authorized by a judge on the basis of an existing law. A similar ruling was handed down by the European Court of Human Rights on September 2nd, 2010 (ECtHR, Uzun v. Germany).

Essentially, the Law authorizes law enforcement authorities to use technical means to locate an individual in real time anywhere on French territory, without that individual’s knowledge, or to locate a vehicle or any other object, without the owner’s or possessor’s consent. These methods may be applied in the course of a preliminary inquiry or a criminal investigation into:

- felonies and misdemeanours against individuals that are punishable by at least 3 years’ imprisonment, as well as aiding or concealing a criminal or assisting a convict’s escape;

- crimes and felonies (other than those mentioned above) that are punishable by at least 5 years’ imprisonment;

- the cause of a death or disappearance;

- locating an individual on the run against whom a search warrant has been issued; and

- investigating and establishing a customs felony punishable by at least 5 years’ imprisonment.

The use of real-time geolocation may only be conducted by a police officer for a maximum period of 15 days (in the case of a preliminary inquiry) or 4 months (in the case of an investigation) and must be authorized respectively by the public prosecutor conducting the inquiry or the judge authorizing the investigation.

However, there are serious concerns within the legal profession that this Law itself invades privacy. According to the European Court of Human Rights, a public prosecutor is not an independent judicial authority; on that reasoning, the use of real-time geolocation in the context of a preliminary inquiry, supervised only by a prosecutor, would violate the individual’s civil liberties and freedoms. Real-time geolocation is considered such a serious interference with privacy that it should be used only in exceptional circumstances, for serious crimes and felonies, and should remain at all times under the control and authority of a judge. As a consequence, the French Minister of Justice, Christiane Taubira, has asked the Presidents of the French National Assembly and Senate to refer the Law to the Constitutional Council before it comes into force, so that it can rule on the Law’s constitutionality.

The French Data Protection Authority (“CNIL”) stated in a press release that real-time geolocation of individuals is comparable to the interception of electronic communications, and therefore, identical safeguards to those that apply to the interception of electronic communications (in particular, the conditions for intercepting electronic communications in the course of criminal proceedings) should equally apply to geolocation data. The CNIL also considers that the installation of a geolocation device in an individual’s home, without that individual’s knowledge, should be supervised and authorized by a judge at all times, regardless of whether that operation takes place during the day or at night.

In a previous press release, the CNIL raised similar concerns over the adoption of the Law of December 18th, 2013, on military programming, which authorizes government authorities to request real-time access to any information or documents (including location data) stored by telecoms and data hosting providers on electronic communications networks for purposes of national security.

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013



Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, privacy issues abound, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes and whatever else you care to think of will soon all be Internet-connected. And it doesn’t stop at wearable computing either. Think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.

Why? 

Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices come to be seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just like terms of service are now. And, as technology becomes ever more complicated, it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post was first published on the IAPP Privacy Perspectives blog here.

Designing privacy for mobile apps

Posted on March 16th, 2013



My phone is my best friend.  I carry it everywhere with me and entrust it with vast amounts of my personal information, for the most part with little idea about who has access to that information, what they use it for, or where it goes.  And what’s more, I’m not alone.  There are some 6 billion mobile phone subscribers out there, and I’m willing to bet that most – if not all – of them are every bit as unaware of how their mobile data is used as I am.

So it’s hardly surprising that the Article 29 Working Party has weighed in on the issue with an “opinion on apps on smart devices” (available here).  The Working Party splits its recommendations across the four key players in the mobile ecosystem (app developers, OS and device manufacturers, app stores and third parties such as ad networks and analytics providers), with app developers receiving the bulk of the attention.

Working Party recommendations

Many of the Working Party’s recommendations come as no great surprise: provide mobile users with meaningful transparency, avoid data usage creep (data collected for one purpose shouldn’t be used for other purposes), minimise the data collected, and provide robust security.  But other recommendations will raise eyebrows, including that:

-  the Working Party doesn’t meaningfully distinguish between the roles of an app publisher and an app developer, mostly treating them as one and the same.  So the ten-man design agency engaged by Global Brand plc to build it a whizzy new mobile app is effectively treated as having the same compliance responsibilities as Global Brand, even though it will ultimately be Global Brand that publicly releases the app and exploits the data collected through it;

-  the Working Party considers EU data protection law to apply whenever a data-collecting app is released into the European market, regardless of where the app developer itself is located globally.  So developers who are based outside of Europe but who enjoy global release of their app on Apple’s App Store or Google Play may unwittingly find themselves subjected to EU data protection requirements;

-  the Working Party takes the view that device identifiers like UDID, IMEI and IMSI numbers all qualify as personal data, and so should be afforded the full protection of European data protection law.  This has a particular impact on the mobile ad industry, which typically collects these numbers for ad serving and ad tracking purposes but aims to mitigate regulatory exposure by carefully avoiding collection of “real world” identifiers (see the sketch following this list);

-  the Working Party places a heavy emphasis on the need for user opt-in consent, and does not address situations where the very nature of the app may make it so obvious to the user what information the app will collect as to make consent unnecessary (or implied through user download); and

-  the Working Party does not address the issue of data exports.  Most apps are powered by cloud-based functionality and supported by global service providers, meaning that, perhaps more than in any other context, the shortfalls of common data export solutions like model clauses and safe harbor become very apparent.
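
On the device identifier point, here is a minimal sketch (in Kotlin, with entirely hypothetical names – it is not drawn from any ad SDK or from the opinion itself) of the salted one-way hashing that ad networks commonly use to avoid handling raw hardware identifiers. Note that, on the Working Party’s analysis, even the hashed value may remain personal data, since it still singles out one device:

```kotlin
import java.security.MessageDigest

// Illustrative only: a salted one-way hash reduces the linkability of a raw
// hardware identifier, but the output can still be "personal data" on the
// Working Party's view because it remains unique to a single device.
fun pseudonymiseDeviceId(rawId: String, appSalt: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
    val hashed = digest.digest((appSalt + rawId).toByteArray(Charsets.UTF_8))
    return hashed.joinToString("") { "%02x".format(it) }
}

fun main() {
    // Hypothetical IMEI-style value; better practice is not to collect it at all.
    println(pseudonymiseDeviceId("356938035643809", "my-app-specific-salt"))
}
```

A per-app salt at least stops the same device being matched across different companies’ datasets; rotating the salt periodically further limits long-term tracking.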

Designing for privacy

Mobile privacy is hard.  In her guidance on mobile apps, the California Attorney-General rightly acknowledged that: “Protecting consumer privacy is a team sport. The decisions and actions of many players, operating individually and jointly, determine privacy outcomes for users. Hardware manufacturers, operating system developers, mobile telecommunications carriers, advertising networks, and mobile app developers all play a part, and their collaboration is crucial to enabling consumers to enjoy mobile apps without having to sacrifice their privacy.”

Building mobile apps that are truly privacy compliant requires a privacy by design approach from the outset.  But, for any mobile app build, there are some top tips that developers should be aware of:
1. Always, always have a privacy policy.  The poor privacy policy has been much maligned in recent years but, whether or not it’s the best way to tell people what you do with their information (it’s not), it still remains an expected standard.  App developers need to make sure they have a privacy policy that accurately reflects how they will use and protect individuals’ personal information, and make it available both prior to download (e.g. published on the app store download page) and in-app.  Not having one is a surefire way to fall foul of privacy authorities – as evidenced in the ongoing Delta Air Lines case.

2. Surprise minimisation.  The Working Party emphasises the need for user consents and, in certain contexts, consent will of course be appropriate (e.g. when accessing real-time GPS data).  But, to my mind, the better standard is that proposed by the California Attorney-General of “surprise minimisation”, which she explains as the use of “enhanced measures to alert users and give them control over data practices that are not related to an app’s basic functionality or that involve sensitive information.”  Just-in-time privacy notices combined with meaningful user controls are the way forward (see the sketch after this list).

3. Release “free” and “premium” versions.  The Working Party says that individuals must have real choice over whether or not apps collect personal information about them.  However, developers will commonly complain that real choice simply isn’t an option – if they’re going to provide an app for free, then they need to collect and monetise data through it (e.g. through in-app targeted advertising).  An obvious solution is to release two versions of the app – one for “free” that is funded by exploiting user data, and one that is paid for but only collects the user data necessary to operate the app.  That way, users who don’t want their data monetised can choose to download the paid-for “premium” version instead – in other words, they have choice.

4. Provide privacy menu settings.  It’s surprising how relatively few apps offer this, but privacy settings should be built into app menus as a matter of course – for example, offering users the ability to delete app usage histories, turn off social networking integration, restrict location data use and so on (again, see the sketch after this list).  Empowered users are happy users, and happy users mean happy regulators.

5. Know Your Service Providers.  Apps serve as a gateway to user data for a wide variety of mobile ecosystem operators – and any one of those operators might, potentially, misuse the data it accesses.  Developers need to be particularly careful when integrating third-party APIs into their apps, making sure that they properly understand their service providers’ data practices.  Failure to do proper due diligence will leave the developer exposed.
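
To make points 2 and 4 concrete, here is a minimal sketch in plain Kotlin. The names and structure are my own invention rather than any platform API; the point is simply that a just-in-time notice fires at the moment a sensitive capability is first used, and that the user’s choices live in an editable privacy menu:

```kotlin
// Hypothetical names, not from any SDK: a just-in-time location notice plus
// the kind of in-app privacy menu actions discussed above.
data class PrivacySettings(
    var locationAllowed: Boolean? = null,   // null = user has not been asked yet
    var socialSharingEnabled: Boolean = false,
)

class PrivacyManager(
    private val settings: PrivacySettings,
    private val showNotice: (String) -> Boolean,  // renders a notice, returns the user's choice
) {
    private val usageHistory = mutableListOf<String>()

    // Just-in-time consent: ask when location is first needed, in context.
    fun locationPermitted(): Boolean {
        if (settings.locationAllowed == null) {
            settings.locationAllowed = showNotice(
                "This screen uses your current location to show nearby results. Allow?"
            )
        }
        return settings.locationAllowed == true
    }

    fun recordUsage(event: String) { usageHistory += event }

    // Privacy menu actions: users can clear their history or revoke consent at any time.
    fun clearUsageHistory() = usageHistory.clear()
    fun revokeLocation() { settings.locationAllowed = false }
}

fun main() {
    val manager = PrivacyManager(PrivacySettings(), showNotice = { msg ->
        println(msg); true  // stand-in for a real dialog
    })
    println(manager.locationPermitted())  // first call triggers the notice
    println(manager.locationPermitted())  // choice is remembered; no second prompt
    manager.recordUsage("nearby-search")
    manager.clearUsageHistory()           // e.g. user taps "Delete my history"
}
```

Nothing here is specific to Android or iOS; in a real app the just-in-time notice would sit alongside the operating system’s own permission prompt rather than replace it.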

Any developer will tell you that you don’t build great products by designing to achieve compliance; instead, you build great products by designing a great user experience.  Fortunately, in privacy, both goals are aligned.  A great privacy experience is necessarily part and parcel of a great user experience, and developers need to address users’ privacy needs at the earliest stages of development, through to release and beyond.

2013 to be the year of mobile regulation?

Posted on January 4th, 2013



After a jolly festive period (considerably warmer, I’m led to understand, for me in Palo Alto than for my colleagues in the UK), the New Year is upon us and privacy professionals everywhere will no doubt be turning their minds to what 2013 has in store for them.  Certainly, there are plenty of developments to keep abreast of, ranging from the ongoing EU regulatory reform process through to the recent formal recognition of Binding Corporate Rules for processors.  My partner, Eduardo Ustaran, has posted an excellent blog outlining his predictions here.

But one safe bet for greater regulatory attention this year is mobile apps and platforms.  Indeed, with all the excitement surrounding cookie consent and EU regulatory reform, mobile has remained largely overlooked by EU data protection authorities to date.  Sure, we’ve had the Article 29 Working Party opine on geolocation services and on facial recognition in mobile services.  The Norwegian Data Protection Inspectorate even published a report on mobile apps in 2011 (“What does your app know about you?“).  But really, that’s been about it.  Pretty uninspiring, not to mention surprising, when consumers are fast abandoning their creaky old desktop machines and accessing online services through shiny new smartphones and tablets: Forbes even reports that mobile access now accounts for 43% of total minutes spent on Facebook by its users.

Migration from traditional computing platforms to mobile computing is not, in and of itself, enough to guarantee regulator interest.  But there are plenty of other reasons to believe that mobile apps and platforms will come under increased scrutiny this year:

1.  First, meaningful regulatory guidance is long overdue.  Mobiles are inherently more privacy-invasive than any other computing platform.  We entrust more data to our mobile devices (in my case, my photos, address books, social networking, banking and shopping account details, geolocation patterns, and private correspondence) than to any other platform, and generally with far less security – that 4-digit PIN really doesn’t pass muster.  We download apps from third parties we’ve often scarcely heard of, with no idea as to what information they’re going to collect or how they’re going to use it, and grant them all manner of permissions without even thinking – why, exactly, does that flashlight app need to know details of my real-time location?  Yet despite the huge potential for privacy invasion, there persists a broad lack of understanding as to who is accountable for compliance failures (the app store, the platform provider, the network provider or the app developer) and what measures they should be implementing to avoid privacy breaches in the first place.  This uncertainty and confusion makes regulatory involvement inevitable.

2.  Second, regulators are already beginning to get active in the mobile space – if this were not the case, the point above would be pure speculation.  It’s not, though.  On my side of the Pond, we’ve recently seen the California Attorney General file suit against Delta Air Lines for its failure to include a privacy policy within its mobile app (this action itself following letters sent by the AG to multiple app providers warning them to get their acts together).  Then, a few days later, the FTC published a report on children’s data collection through mobile apps, in which it indicated that it was launching multiple investigations into potential violations of the Children’s Online Privacy Protection Act (COPPA) and the FTC Act’s unfair and deceptive practices regime.  The writing is on the wall, and it’s likely EU regulators will begin following the FTC’s lead.

3.  Third, the Article 29 Working Party intends to do just that.  In a press release in October, the Working Party announced that “Considering the rapid increase in the use of smartphones, the amount of downloaded apps worldwide and the existence of many small-sized app-developers, the Working Party… [will] publish guidance on mobile apps… early next year.” So guidance is coming and, bearing in mind that the Article 29 Working Party is made up of representatives from national EU data protection authorities, it’s safe to say that mobile privacy is riding high on the EU regulatory agenda.

In 2010, the Wall Street Journal reported: “An examination of 101 popular smartphone “apps”—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone’s unique device ID to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone’s location in some way. Five sent age, gender and other personal details to outsiders… Many apps don’t offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn’t provide privacy policies on their websites or inside the apps at the time of testing.”  Since then, there hasn’t been a great deal of improvement.  My money’s on 2013 being the year that this will change.

Geolocation in the spotlight

Posted on May 23rd, 2011



No avid reader of Article 29 Working Party opinions would be surprised to see statements such as “location data from smart mobile devices are personal data” or “the combination of the unique MAC address and the calculated location of a WiFi access point should be treated as personal data”. However, when those statements appear alongside references to the night table next to someone’s bed, or the fact that specific locations reveal data about someone’s sex life, one can’t help wondering whether an intended clarification of the legal framework applicable to geolocation services on smart mobile devices is getting a bit sensationalist.

Let’s get the basic facts right first: every electronic exchange of information is recorded somewhere – emails sent, web pages visited, telephone calls made, credit card transactions, etc. It is in the nature of the digital age. Smartphones and the like represent the latest form of communications technology and, as such, mobile communications leave behind some of the most sophisticated records that digital technology can generate. So a full assessment of the rules affecting the use of smartphones should go beyond a textbook interpretation of European data protection law and look at whether the collection and use of this information has an impact on people’s privacy and data security.

Some of the information generated by our day to day use of mobile communication devices will no doubt be very private. For example, the concepts of “traffic data” and “location data” are carefully defined by EU law and their use is strictly regulated because it is perceived as sufficiently sensitive. Although there are some subtle differences, in both cases the lawful use of such data normally involves obtaining the consent of the individual. However, in the case of location data, consent is not required if the data is anonymous.

This is a crucial point in the context of smartphone-generated data which the Working Party does not fully appreciate in its recent opinion on geolocation services. This is unfortunate because, instead of acknowledging the different types of information that a smart mobile device may produce, the opinion dumps all data into the same bucket. The assumption seems to be that all data collected through a smartphone should be regarded as personal data, despite the fact that some of that data does not identify the device’s user, or that the uses made of it will never involve singling out an individual.
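
To illustrate the distinction, here is a minimal Kotlin sketch (the names are entirely hypothetical and not taken from the opinion) of one familiar approach: dropping the device identifier and coarsening a GPS fix into a grid cell, so that the record counts devices in an area rather than tracks a particular user. Whether the result is genuinely anonymous depends on the cell size and on what auxiliary data exists – which is precisely the contextual question a blanket consent requirement skips over:

```kotlin
import kotlin.math.floor

// Raw record as a device might report it, and the coarsened, identifier-free
// record that would be retained instead.
data class RawFix(val deviceId: String, val lat: Double, val lon: Double)
data class AnonymousCell(val latBucket: Double, val lonBucket: Double)

// Snap coordinates to a grid (0.01 degrees is roughly 1 km at mid-latitudes)
// and discard the device identifier entirely.
fun anonymise(fix: RawFix, cellDegrees: Double = 0.01): AnonymousCell =
    AnonymousCell(
        floor(fix.lat / cellDegrees) * cellDegrees,
        floor(fix.lon / cellDegrees) * cellDegrees,
    )

fun main() {
    val fix = RawFix("device-123", 48.85837, 2.29448)
    println(anonymise(fix))  // e.g. AnonymousCell(latBucket=48.85, lonBucket=2.29)
}
```

The legal question then becomes whether records like AnonymousCell still relate to an identifiable individual – a question the opinion’s one-size-fits-all consent answer never reaches.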

According to the Working Party, because location data from smart mobile devices reveals intimate details about the private life of their owner, the main applicable legitimate ground is prior informed consent. Again, this is a massive generalisation across the multiple modalities of geolocation services, many of which will rely on anonymous data or, at least, data which is not meant to identify or affect a particular user. Therefore, requiring consent from individuals may go further than what the EU legal framework intended.

For many human beings, life without a smart mobile device would be unimaginable. That is a slightly scary thought and regulators have a duty to scrutinise the data protection implications of new technologies that have the power to radically affect our lives. Clarifying how data protection law interacts with continuously evolving geolocation services is a laudable aim from which everyone can benefit. But unfortunately, a black and white approach to this issue conveys an unhealthy sense of panic and, even worse, distracts us from the fundamental challenge: spotting the real threats to our privacy and security that may be caused by rapid and imperfect technological development.

This article was first published in Data Protection Law & Policy in May 2011.

Let’s not panic about smartphones

Posted on May 18th, 2011



Today’s Metro headline, “Android phones all leak secrets” (placed next to a photo of a gloomy-looking Arnie for added dramatic effect), was a fitting prelude to the publication of the latest Article 29 Working Party Opinion on geolocation services on smart mobile devices. The message of both pieces seemed to be very similar: enjoy your smartphone at your peril! Is it really that bad?

Let’s get the basic facts right first: every electronic exchange of information is recorded somewhere – emails sent, web pages visited, telephone calls made, credit card transactions, etc. It is in the nature of the digital age. Smartphones and the like represent the latest form of communications technology and, as such, mobile communications leave behind some of the most sophisticated records that digital technology can generate. The issue is whether the collection and use of this information has an impact on people’s privacy and data security.

The concepts of “traffic data” and “location data” are defined by EU law and their use is strictly regulated because it is perceived as sufficiently sensitive. Although there are some subtle differences, in both cases the lawful use of such data involves obtaining the consent of the individual. However, in the case of location data, consent is not required if the data is anonymous.

This is a crucial point in the context of smartphone-generated data that the Working Party Opinion does not fully address. According to the Working Party, because location data from smart mobile devices reveals intimate details about the private life of their owner, the main applicable legitimate ground is prior informed consent. This is a massive generalisation across the multiple modalities of geolocation services, many of which will rely on anonymous data or, at least, data which is not meant to identify or affect a particular user. Therefore, requiring consent may go further than what the EU legal framework intended.

Unfortunately, a black and white approach to this issue conveys an unhealthy sense of panic and, even worse, distracts us from the real challenge: spotting the real threats to our privacy and security that may be caused by rapid and imperfect technological development.