
Iain Mitchell QC: Biometric tech & fundamental rights

The use of biometric data is already a reality. There is a tendency to think of the use of such data as simply a means (of greater or lesser reliability) of identifying people. In 2004, the Metropolitan Police’s facial recognition technology got it wrong 81% of the time. Though the performance of such systems has improved considerably since then, the technology remains far from perfect.

That said, it is but a short step from being able to identify people to full-blown surveillance – an ability to track people, to identify them as wanted criminals, and so on. Such uses give rise to potential infringements of fundamental rights, potentially including issues of Legal Professional Privilege/Professional Secrecy if the technology is used to track visits by individuals to their lawyers.

However, it must be borne in mind that biometric systems – chiefly voice pattern identification and facial recognition technologies – are “black boxes”: AI systems trained on datasets, whose performance can be analysed only stochastically. As such, they present the same risks as other similar AI systems – for example, the tendency of facial recognition programs to identify black people as gorillas (see here), to confuse dogs with wolves, and to identify members of the US Congress as convicted criminals (see here).

If facial recognition technology interoperates with other data, then persons can be identified and actions follow – comparison with a database of wanted criminals, or tracking customers through a store both to gather data on their browsing habits and to link those data with other personal data held by the store so as to deliver personalised advertising. We are all familiar with the scene in Minority Report where Tom Cruise wanders into a Gap store and is assailed with personalised advertising (relating, as it happens, to the unfortunate individual whose irises have been transplanted) (see here), but this is no longer science fiction. It is estimated that 59% of UK fashion retailers already employ facial recognition technology and, to take a Canadian example, the capacity for the aggressive use of such technology was built into the new Saks store in Toronto when it opened in 2016 (see here).

This gives rise to important issues, such as whether the data subject consents to, or even knows about, the use of the technology on him or the use of his data in training the system: one research paper suggests that the biometric data of about half of all adults resident in the United States are stored on one or more police facial recognition databases (see here). Equally alarming is the practice of scraping websites for facial images in order to train AI systems without informing the data subjects, let alone obtaining their consent, as happened when IBM scraped Flickr in 2019 (see here).

The deployment of facial recognition may effectively impose limitations on freedom of movement – for example, its use on persons in the streets around the White House (see here), where the only mitigation consisted in the official advice that: “Individuals who do not wish to be captured by … cameras involved in this pilot may choose to avoid the area”.

Many of these examples relate to jurisdictions which have a more cavalier attitude to data protection than the UK and the EU, where individuals enjoy the protection of the GDPR, but we should not be complacent. For example, how many of the Flickr subjects were resident in the EU or UK? And when people sign up for a UK store loyalty card, do they read the small print to ascertain whether they are consenting to the use of facial recognition technology? And where consent is not obtained, how many UK stores seek instead to justify the intrusion on the basis of legitimate interest?

Just how far this can go in a country which places no value on privacy can be seen in the legal requirement for all persons resident in Xinjiang province in China (where there is a large Uighur community) to undergo a 3D scan of their faces, so as to facilitate the use of facial recognition cameras to track persons engaging in “extremist” behaviour, such as travelling more than 300 metres from one’s home, going to the mosque too often or filling up at the petrol station several times a week (see here).

There is also a major concern over the capacity of all of the actors who may hold our biometric information to keep it safe and to prevent it from being hacked. It is, after all, possible to change passwords, but not one’s biometrics (at any rate, not without adopting the extreme measures taken in Minority Report).

What, however, is wholly new, and brings its own significant challenges, is the use of biometrics for the purposes of “Affect Recognition”. It may be possible to determine sexual orientation from facial features (see here), and systems are even now being marketed which claim to be able to determine, from mass video recordings of job applicants, whether a particular applicant will make a good employee (see here). Chinese police are using photographs of drivers taken from roadside cameras to determine from their facial appearance whether to stop them for drunk driving (see here). There are systems which are asserted to be able to predict a person’s criminality (see here). Faception, a company based in Israel, is aggressively marketing a system which claims to be able to determine, from biometrics, whether a person is (amongst other things) a “terrorist” (see here).

Many of these claims are scientifically dubious and, in any event, what is the programmer’s definition of a “terrorist”? In many respects, this is but a 21st century revisiting of discredited 19th century pseudo-scientific theories such as phrenology but, because it is tricked up in the guise of hi-tech machines and “artificial intelligence”, the danger is that the credulous believe that what the computer says is true – and, as the Loomis case demonstrates, being a policeman, a prosecutor or even a state supreme court judge does not mean that you are not credulous (for a critical discussion, see here). You must be a criminal because your eyes are too close together: the computer says so – echoes of Little Britain.

Legal constraints

Some of the more egregious examples given above relate to states, such as China, which do not respect fundamental rights; others relate to states where there are inadequate data protection rules, such as most (though not all) of the states of the United States – witness the finding of the Georgetown University report, cited above, that the biometrics of about one half of US adults are held on police databases.

There is, however, some protection afforded in the EU and the UK as a result of the GDPR but, although personal biometric data are covered, the GDPR is not specifically targeted at such data. There is some academic writing on the interplay between the GDPR and the processing of biometric data – see, for example, a paper published by Queen’s University Belfast, The Impact of EU Privacy Legislation on Biometric System Deployment: Protecting citizens but constraining applications.

Up until now, the approach of the EU has tended to be ad hoc, as with, for example, the EDPS Opinion on the use of a computerised system by the European Parliament for the digitalisation of the Plenary and central attendance registers through biometric technology, and there has been a tendency at national level to view the matter solely in terms of GDPR compliance (see, for example, the ICO statement). However, the EU now has the opportunity to grasp the nettle in the context of the Commission’s new Artificial Intelligence Act proposals.

In the Council of Europe, the issue of biometrics has been the subject of detailed high-level review, with the publication in 2013 of the Declaration of the Committee of Ministers on Risks to Fundamental Rights stemming from Digital Tracking and other Surveillance Technologies, followed by the 2018 Elsinore Convention for the Protection of Individuals with Regard to the Processing of Personal Data. The Consultative Committee on the Convention has carried out work on the specific question of the processing of biometric data and has recently (28th January, 2021) published a set of Guidelines on Facial Recognition. The travaux préparatoires for the Guidelines included a paper, Facial Recognition: Current Situation and Challenges (Azria & Wickert, 13th November 2019), from which many of the examples given above have been drawn.

A significant initiative in the UK is the publication on 18th June, 2021 of the Information Commissioner’s Opinion on the use of live facial recognition technology in public places.

The CCBE approach

Up until now, the CCBE has not undertaken a full review of facial recognition and other biometric technology. However, it has from time to time commented on the subject in the course of making representations in relation to AI more generally. In particular, in the CCBE Response to the consultation on the European Commission’s White Paper on Artificial Intelligence (05.06.2020), the CCBE stated:

“Biometric identification systems technologies tend to have serious flaws that endanger civil rights. For example, facial recognition technology has been proven in multiple studies to be inaccurate at identifying people of different races. Also, there are grave concerns that the trigger words which are used by national security agencies are not sufficiently refined and thus the phone conversations and email correspondence of millions of people are monitored without a legal basis.

“Further, the widespread use of facial recognition may pose severe risks for an open and pluralistic society if not used proportionately with a proportionate intended aim such as ensuring public safety. In many situations, anonymity is the most important safeguard of freedom, and facial recognition techniques that cover major areas in the public space endanger this freedom. The more accurate they are and the more widespread their use, the more dangerous they become.”

With the increasing momentum in the adoption of biometric technology and its increasing sophistication, the challenges to fundamental rights have become more concerning. Xinjiang province gives us, in the present, a warning of a dystopian future. With initiatives such as the proposed AI Act, the time is now right for the CCBE to undertake a thorough examination of the issue. Accordingly, the Surveillance Working Party has now started work on a new study of Biometric technology with a view to developing recommendations which it is hoped will prove to be as influential as its previous Recommendations on the Protection of Client Confidentiality within the context of Surveillance Activities and Recommendations on the Protection of Fundamental Rights in the Context of ‘National Security’.

I look forward to leading that process.

Iain G. Mitchell QC

Iain G. Mitchell QC is the Faculty of Advocates representative on the UK Delegation to the CCBE, a member of the CCBE IT Committee and has just stepped down as Chair of the Surveillance Working Group on the conclusion of the six-year maximum term of office. He has now been appointed Vice-Chair of the Group. He is Chair of the Scottish Society for Computers and Law, a member of the IT Panel of the Bar Council of England & Wales and a Liveryman of the Worshipful Company of Information Technologists.

*First published 24th June 2021 in The Brussels Newsletter published by UK Law Societies’ Joint Brussels office.