Efforts to handle privacy regulations such as GDPR and equivalents in the US may encourage wealth managers – in charge of considerable data on clients – to seek ways of making information anonymous or pseudonymous. However, these moves are full of traps for the unwary, so the author of this article argues.
Sorcha Lorimer, founder of data protection consultancy Trace, explains how both wealth managers and the tech companies serving them can fall prey to dangerous assumptions about data access and sharing – particularly when it comes to the subtle but crucial differences between pseudonymisation and true anonymisation. (More data on the author below.)
The editors of this news service are grateful for this useful and cautionary contribution to an important subject. The usual editorial disclaimers apply. If readers want to respond, they can email email@example.com
State legislatures have data brokers in their sights, with new laws enacted in Texas, Oregon and California. California’s Delete Act, signed into law in October, means that Californians can ask data brokers to delete their personal information or forbid them to sell or share it – cementing California’s position as the US state leading the way on privacy regulation, with the California Privacy Protection Agency (CPPA) enforcing the regime.
Despite state-by-state efforts in the US and a wave of global privacy laws in the last five years, most notably the GDPR – regulations which are fundamentally at odds with the commoditisation of personal data – the global trade in personal data remains lucrative. We continue to pay for online services and social platforms with our personal data, with big tech and data brokers reaping huge rewards from the digital data economy. Recent high-profile legal rulings form part of an international drive to rein in sharp business practices, such as rather loose interpretations of consent, yet the data playing field remains asymmetric: access to data is skewed and individuals lose out on their information’s true worth.
High reward; high risk
As privacy consciousness grows we could see a dramatic shift towards a “zero-party data” culture where individuals are effectively their own data brokers, actively giving rich information to those data fiduciaries that they trust. For the present, however, companies are also grappling with how they use first-, second- and third-party data which has not been actively volunteered by the individual, and those in the wealth management space are very much included. They are right to be enthusiastic about leveraging all manner of client data to improve their services (and their profitability), including rich metadata and techniques such as behavioural profiling where that is lawful, but they should not forget that the high value of HNW individuals’ data means elevated risks as well.
A range of risks will be apparent to anyone who has really thought about the volume and sensitivity of data a wealth manager holds on each and every client. Yet there is one huge risk which I suspect is almost a total blind spot for the industry – namely that both wealth managers and the tech companies serving them very often misunderstand foundational legal definitions when it comes to pseudonymisation and anonymity, and in the worst case could be unwittingly breaking the law.
Real anonymisation is a relative rarity
With the data economy still booming, and opportunities for the tech sector compelling, there are those seeking to tread the line between compliance and usability, making the argument that privacy and security can be balanced with access to data. Sometimes they can, but the balance is nuanced and dynamic, and the technical details absolutely matter. There are still too many data brokers, insights companies and startups that blithely assure us that sensitive data can be shared or traded because it’s “anonymous,” that it’s safe and that time-consuming data protection compliance doesn’t apply. Judging from the brochures and press releases I have seen, I’m afraid that will hardly ever be the case.
Things are progressing rapidly in this area of course and there are certainly high-tech methodologies coming on line. However, the fact remains that true anonymity – rendering it impossible in practical terms to identify an individual – is very difficult to achieve. What many in the data industry will call ‘anonymous’ is actually only pseudonymised or de-identified. The difference is crucial: truly anonymised data is not subject to GDPR, whereas anything falling short of this bar absolutely is.
With anonymisation, masking or deletion is used and it is irreversible. Pseudonymisation, meanwhile, sees personal identifiers replaced with artificial identifiers (such as client codes) and the information necessary to re-identify the data kept separately. One example of pseudonymisation is tokenisation – although, perhaps inevitably, such solutions are very often shopped around as offering true anonymisation in the GDPR sense.
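The distinction can be made concrete with a minimal sketch. The record, field names and token scheme below are invented purely for illustration; real systems would use dedicated tokenisation platforms, but the structural point is the same: pseudonymisation retains a re-identification key somewhere, anonymisation does not.

```python
import secrets

# Hypothetical client record -- illustrative data only.
record = {"name": "Jane Doe", "account": "GB29NWBK60161331926819", "balance": 1_250_000}

# --- Pseudonymisation: reversible by design ---
# Identifiers are replaced with artificial tokens, and the mapping
# needed to re-identify the data is kept separately. Because that
# mapping exists somewhere, the result is still personal data
# under the GDPR.
token_vault = {}  # the re-identification key, held apart under strict access control

def pseudonymise(value: str) -> str:
    token = secrets.token_hex(8)
    token_vault[token] = value
    return token

pseudo_record = {
    "client_token": pseudonymise(record["name"]),
    "account_token": pseudonymise(record["account"]),
    "balance": record["balance"],
}

# --- Anonymisation: irreversible ---
# Direct identifiers are deleted outright and remaining values are
# generalised; no key is retained anywhere to undo the process.
anon_record = {"balance_band": "1M-2M"}

# The pseudonymised record can be reversed via the vault;
# the anonymised one cannot.
assert token_vault[pseudo_record["client_token"]] == "Jane Doe"
```

The practical test is the vault: if any party, anywhere, holds the means to reverse the mapping, the data has merely been pseudonymised, however confidently the sales literature says otherwise.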
Another claim you might hear is that “combining anonymised information doesn’t have compliance issues as it’s open data.” Here, it needs to be remembered that datasets which might be safe when used or kept in isolation can pose massive risks when combined with other information or – in the nightmare scenario – leaked. Many believe that metadata can simply be re-used and combined without concern for compliance or privacy risk, and this is frequent practice. But the reality is that anonymised big data sets can be subject to de-anonymisation attacks, and can expose privacy and confidentiality risks when combined with similar datasets. Given the incredible value and sensitivity of the data they hold, wealth managers should never underestimate the ingenuity and persistence of cybercriminals – nor the likely wrath of clients who are made to realise that ‘anonymised’ data in fact leads right back to them.
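How such a de-anonymisation attack works can be sketched in a few lines. All names, postcodes and figures below are invented; the point is simply that quasi-identifiers shared between two individually “harmless” datasets are enough to link them.

```python
# A "de-identified" holdings extract: no names, yet each row carries
# quasi-identifiers (postcode, birth year). Illustrative data only.
holdings = [
    {"postcode": "EC2A 1AA", "birth_year": 1962, "portfolio_gbp": 4_800_000},
    {"postcode": "SW1A 2BB", "birth_year": 1975, "portfolio_gbp": 950_000},
]

# A second, public-style directory containing the same quasi-identifiers
# alongside names.
directory = [
    {"name": "A. Client", "postcode": "EC2A 1AA", "birth_year": 1962},
    {"name": "B. Client", "postcode": "SW1A 2BB", "birth_year": 1975},
]

# A simple linkage attack: join the two datasets on the shared
# quasi-identifiers.
reidentified = [
    {**d, "portfolio_gbp": h["portfolio_gbp"]}
    for h in holdings
    for d in directory
    if (h["postcode"], h["birth_year"]) == (d["postcode"], d["birth_year"])
]
# Each 'anonymous' portfolio now leads straight back to a named person.
```

Real-world attacks use exactly this mechanism at scale, which is why data that is safe in isolation cannot be assumed safe once released alongside other sources.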
Regulatory responsibility remains squarely with wealth managers
This common misuse of terminology is no doubt overwhelmingly unintentional and down to confusion over what constitutes true anonymisation. Privacy is a sophisticated domain and there are divergences between regulations and terms across the Atlantic. There is also a great deal of enthusiasm for new technologies and the possibilities of big data which could be leading to wishful thinking around supposed compliance loopholes too.
However, I suspect there are also those whose ignorance is rather more wilful; data scientists and developers will of course know the difference between real anonymisation and its cousins, but whether that is reflected in the technology sales process (and sales literature) is another thing altogether.
Either way, wealth managers should always bear in mind that invariably it is they, as “Data Controllers,” who bear the brunt of regulatory responsibilities, not their providers. They should therefore be particularly wary of solutions promising anonymity, and especially so if this is in any way framed as a means of sidestepping regulatory obligations. Ask probing questions both of your prospective vendors and your own data protection team before proceeding down a potentially dangerous path. True anonymisation of data is a wonderful security measure. The trouble is, that is hardly ever what is really on offer – at least as things currently stand. As ever, caveat emptor.
About the author:
Sorcha Lorimer is the founder of Trace, a global privacy and data security startup, where she has extensive client, product and business-building experience. Dual-qualified in privacy and cybersecurity risk management, Sorcha takes a holistic and practical approach to the vCISO and vCPO roles to help her clients build trusted brands and data models, and leverage a by-design approach to privacy and security for competitive advantage. Her clients include Ooni, WWF, global drinks brands, and data-driven tech startups.