Last week I was in Brussels, and during the time when I wasn’t wandering around the city marvelling at the pretty architecture, I was sitting in a conference centre with people from the European Commission and stakeholders from around Europe to discuss the ePrivacy Directive.
Throughout the day, we were split into groups. Each group had a “host”, who led the discussion and who was required to submit a report to the European Commission within two weeks of the event.
I was the host in my group (yes, I did think about Trill all the way through 😉 ). This is my report.
The group I was in at the workshop was discussing consent to access information stored in smart devices and terminal equipment.
Throughout the day, the group seemed split into two distinct camps, regardless of who was taking part. One camp was strongly pro-privacy, putting users’ choice above all else and advocating for more options that are both immediately available and comprehensible to the average internet user; the other came down on the side of advertisers and other data collectors, arguing that it is imperative to understand the potential financial impact of giving users more choice about what is done with their data.
The majority of our discussion focused on cookies, particularly in the morning sessions. Across all the groups, we noted bullet points on relevant topics of discussion, including the following concepts:
- State security and how it impacts on privacy regulations – how to define necessary exceptions to rules
- Defining ‘consent’ versus usability – whether users can truly consent to something they don’t understand, including problems with excessively long terms & conditions
- The differences between pseudonymous and anonymous data, and the importance of defining these differences in legislation
- Public versus private directories
- Advertising policies, including marketing communications and unsubscribe buttons
- Safety of people who have extra privacy needs, such as those who have been victims of serious crimes
- Theoretical definitions of privacy versus public understanding of privacy as a concept, and marrying these up with practicality
- Responsibility for security against leaks/hacking etc. – does it lie with the user? Companies? ISPs? Law enforcement? Someone else?
- How the definition of “clear and comprehensive information” changes based on users’ comprehension levels
Broadly speaking, these were the main topics we focused on throughout the day. An interesting point to note was that there seemed to be quite a generational divide: participants in the Millennial (Generation Y) age range were much more likely to advocate strongly for privacy and user choice. Delving into this a little deeper, it appeared that the (comparatively) older members of the group saw the internet as a tool, and therefore online privacy as a useful but not necessary condition, whereas younger generations saw the internet as the space where they lived their lives, and therefore prized privacy much more highly.
The priorities of the day were decided in the final session. The ones we came up with were the following.
- Defining what data / information we’re talking about in terms of implications, impacts and risks. Then defining consent and allowing users to give their informed consent at a granular or categorised level. Suggestions included media literacy training by data protection organisations, and more buy-in from ISPs, media providers and advertisers. Cookie categorisation was a big point of discussion which received enthusiastic agreement from many people in the groups.
- Context is important: for what purposes are data being used, and why are they needed? There is a balance to strike between businesses running ads to survive and consumers’ right to privacy – possibly mitigated by payment models, especially if these could operate at an internet access level. For example, users who object to any kind of advertising tracking could pay a premium in order not to be tracked. The cost of advertising consumption must also be taken into consideration: a recent study from a Canadian university found that 40% of data consumption was being used on advertising, making it very cost-effective for users to run ad blockers. There are lots of bad ads, and users shouldn’t have to pay (literally or figuratively) for these. There are also issues with industry self-regulation. One solution could be to communicate purposes clearly, but sometimes those purposes change, so we need a dynamic consent model.
- Consistency with existing legislation. Argument that it’s all covered by the GDPR anyway. Do we need the ePrivacy Directive? In which circumstances is it overruled? We must work out how much legislation is strictly necessary, and simplify.
Following on from the workshop, I posted a call-out on my blog asking for any further comments and received the following from a reader:
“I have two primary interests:
* Preserving the right of anonymity. I should be able to surf to whatever site I like without worrying that my government can access my history without a warrant. This also covers data-retention laws that effectively make anonymising proxy services useless. In other words, I want my privacy to be protected by warrants again.
* The cookie law. It’s at best ridiculous and ineffective, at worst, dangerous and exploited by bad people. It shouldn’t exist at all.”
I’ll be sending this early tomorrow morning, so you still have time to submit any further comments. You can email them through to me, or you can go to the European Commission’s website and submit your comments via their public consultation page until the 5th of July 2016.