I’ve run across the promotional material for a new book by David Wright and Paul De Hert, Privacy Impact Assessment, Springer, Dordrecht, 2012. They argue that the book ‘is timely as the European Commission’s proposal for a new Data Protection Regulation would make privacy impact assessments mandatory for any organisation processing “personal data where those processing operations are likely to present specific risks to the rights and freedoms of data subjects”’. I find the whole idea of PIA to be far too uncritically accepted by far too many within the privacy community.
My own sense is that the idea sounds good, parallel to an ‘environmental impact assessment’ (EIA). But the history of EIA should alert us to the likelihood that impact assessments will not prevent risks to privacy and data protection. On the contrary, they are likely to cover the backsides of actors who can say they submitted a risk assessment, to amount to a primarily symbolic victory for privacy, and to raise the costs of all software and systems development, creating a new set of businesses employed to write PIAs for organizations.
The concept of a privacy impact assessment is one of those initiatives that sounds good, and rings all the right bells to be politically popular, but that will not accomplish its intended aims and will undoubtedly have negative, unintended consequences. I hope the privacy community takes a more critical look at the rhetoric in support of this bureaucratic silver bullet, which carries risks of its own.
Happy to receive comments, as I am sure my view is a minority opinion, but every discussion of the issue convinces me all the more that the PIA is a mistake. I hope some bright students begin to evaluate the actual impact of the PIA.
3 thoughts on “The Risk of ‘Privacy Impact Assessments’ – PIA in the Sky”
Wondering about this, and whether it is as easy as it appears. Those doing the risk assessment have to make a guess as to what the risk is, which is obviously subjective. Add to this that data will be transient in value: to the owner it may be worth much more than to those processing it (think of sentimental value as an analogy), and there are also time factors to take into consideration, as some data or information will lose its value over time. Different subjective values to different people at different times.
Another complication will be the aggregation of data: one piece may have little value, but when added to other snippets it can become valuable information, with a value that depends on varying factors. What is being painted is an infinitely variable set of parameters on which to base your risk. It may be possible to assign quantitative values to some data to capture, for example, the worst-case scenarios, but this will likely require that the most restrictive controls be applied, which will stifle working practices and place substantial costs on developing systems. That might lead to outsourcing the service to the cloud, in itself a danger, as it will encourage an out-of-sight mentality.
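The two complications above — value that decays over time, and aggregation that makes combined snippets worth more than their parts — can be sketched as a toy scoring model. This is purely illustrative: the function names, the half-life decay, and the `synergy` exponent are all invented for the example, not drawn from any actual PIA methodology.

```python
def item_risk(base_value: float, age_days: float, half_life_days: float = 365.0) -> float:
    """Risk contribution of a single data item, decaying over time.

    The exponential half-life is an assumed model of the comment's point
    that 'some data or information will lose its value over time'.
    """
    return base_value * 0.5 ** (age_days / half_life_days)


def aggregate_risk(item_risks: list[float], synergy: float = 1.5) -> float:
    """Combined risk of linked data items.

    A synergy exponent > 1 makes the total superadditive, modelling the
    aggregation effect: snippets that are harmless alone can become
    valuable (and risky) once combined.
    """
    return sum(item_risks) ** synergy if item_risks else 0.0


# A two-year-old item on its own scores low...
email = item_risk(2.0, age_days=730)  # 2.0 * 0.5**2 = 0.5
# ...but combined with fresher items, the aggregate exceeds the plain sum.
parts = [email, item_risk(3.0, age_days=0), item_risk(1.0, age_days=0)]
print(aggregate_risk(parts) > sum(parts))  # True
```

The point of the sketch is the commenter’s: even this toy model has several free parameters (base values, decay rates, synergy) that different people would set differently at different times, which is exactly the subjectivity problem.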
I think what could be useful here is definitive guidelines. For example, the current documentation from the ICO on data protection leaves many people who are not infosec experts asking for real-life examples, as they simply do not know whether an email address is personal data, or whether a list of 1,000 of them is, and so on. The guidance really needs to be more prescriptive.
I totally agree with the assertion that data/privacy protection is a token gesture in most cases and is box-ticking (à la PCI), and that breach notification works only in the initial stage, at least in my experience; after that its effect diminishes unless a serious breach occurs, and minor breaches are just BAU.
In information (cyber) security it is accepted practice to undertake a risk assessment, a key part of which is estimating the potential impact of a security event. Unfortunately, in both security and privacy, those who feel the impact — the victims — are not necessarily those who perpetrate the loss, so there is insufficient economic incentive for those taking risks with our personal data to do it well. For me there are at least three arguments in favour of PIAs:
1) To redress this balance and encourage more investment in privacy protection;
2) To provide guidance and help to organisations;
3) To ensure ‘due diligence’ is performed.
Of course, if ‘due diligence’ becomes box-ticking, or PIAs are seen as bureaucratic obstacles, then these benefits may not be realised. Ultimately what we want to encourage is ‘privacy by design’, and this can only be achieved if the right risk analysis is performed ab initio.
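The risk assessment this comment describes — impact weighed against likelihood, performed up front rather than as a tick-box exercise — can be illustrated with a minimal triage sketch. The 1–3 scales, the threshold, and the output labels are all assumptions made up for the example; real PIA screening criteria vary by framework.

```python
def screen(likelihood: int, impact: int, threshold: int = 6) -> str:
    """Toy likelihood-by-impact triage for a proposed data-processing operation.

    Both inputs are on an assumed 1-3 scale; a product at or above the
    threshold flags the operation for a full assessment instead of a
    tick-box sign-off.
    """
    if not (1 <= likelihood <= 3 and 1 <= impact <= 3):
        raise ValueError("likelihood and impact must be on a 1-3 scale")
    score = likelihood * impact
    if score >= threshold:
        return "full PIA"
    return "record rationale and proceed"


print(screen(3, 3))  # "full PIA"
print(screen(1, 2))  # "record rationale and proceed"
```

Whether such screening encourages genuine ‘privacy by design’ or simply becomes another box to tick is, of course, exactly the dispute between the post and this comment.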