Facial recognition technologies (FRT) once seemed like something from science fiction. Now they are in use across the globe and appear to be here to stay. They could be a game-changer for keeping the UK’s streets safe, but they have also sparked a legitimate debate about the acceptable balance between privacy and security.

Facial recognition is a biometric technology that maps facial features from a photograph or video in three steps: detection, capture and matching. FRT can be used either to identify a person or to authenticate a claimed identity. Its key benefit is that, unlike other biometric technologies such as fingerprint recognition, it requires no physical interaction, which makes it easy to deploy. This, of course, is also one reason why there is such a lively public debate about the technology.
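To make the three steps concrete, here is a minimal sketch using the open-source face_recognition Python library. The library choice, file names and watch list are assumptions for illustration only, not a description of any system deployed in the UK.

```python
# Illustrative detect -> capture -> match pipeline using the open-source
# `face_recognition` library. File names and the watch list are hypothetical.
import face_recognition

# Detection and capture: locate faces in a probe image and encode each one
# as a 128-dimensional feature vector (the facial "template").
probe_image = face_recognition.load_image_file("probe.jpg")
probe_locations = face_recognition.face_locations(probe_image)
probe_encodings = face_recognition.face_encodings(probe_image, probe_locations)

# A small, hypothetical watch list of pre-computed templates.
watchlist_names = ["person_a", "person_b"]
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in watchlist_names
]

for encoding in probe_encodings:
    # Matching for identification (1:N): compare one probe against many templates.
    distances = face_recognition.face_distance(watchlist_encodings, encoding)
    matches = face_recognition.compare_faces(watchlist_encodings, encoding, tolerance=0.6)
    for name, distance, matched in zip(watchlist_names, distances, matches):
        print(f"{name}: distance={distance:.2f}, match={matched}")
    # Authentication (1:1) is the same comparison made against a single
    # claimed identity rather than a whole watch list.
```

The tolerance threshold in the sketch illustrates the accuracy trade-off discussed below: tightening it reduces false matches but increases the chance that a genuine match is missed.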

The potential applications are vast. For law enforcement agencies, FRT offer the opportunity to detect and deter criminals and to find missing people, easing the burden on limited resources. At airports and major events, the technology can provide speedy identity verification, improving the customer experience while strengthening security. It is also being repurposed to help identify genetic disorders and to support the visually impaired. The technology is not perfect, however. Analysing the human face is enormously complex, so FRT rely on machine learning algorithms to improve their accuracy. Those systems are only as good as their training data, and the risks of inaccuracy and bias must be managed.

It would oversimplify this debate to turn it into an either/or choice between technology and security on one side and individual rights on the other. In fact, done correctly, technological innovation can itself be used to safeguard individual rights. For instance, during the Live Facial Recognition trials conducted by the Metropolitan Police, only faces that matched the police watch list were retained (and only for 30 days); the rest were deleted immediately. Software can also anonymise facial images, so that once an image is logged in a database it can no longer be linked to an individual, or pseudonymise them, so that a person can only be identified by referring to a separate database, as sketched below.
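As a rough illustration of the pseudonymisation idea, the sketch below uses only the Python standard library: the operational database stores facial templates under random tokens, while the link back to a name sits in a separate, more tightly controlled store. The function names and data are hypothetical.

```python
# Minimal pseudonymisation sketch (standard library only). Personal identifiers
# live in a separate, restricted store keyed by a random token; the operational
# store holds only the token and the facial template, so re-identification
# requires access to both.
import uuid

identity_store = {}   # restricted: token -> personal details
template_store = {}   # operational: token -> facial template (no name attached)

def enrol(name: str, template: list) -> str:
    """Store the template under a random pseudonym and the name separately."""
    token = uuid.uuid4().hex
    identity_store[token] = {"name": name}
    template_store[token] = template
    return token

def reidentify(token: str) -> str:
    """Only someone with access to the restricted identity store can do this."""
    return identity_store[token]["name"]

token = enrol("Jane Doe", [0.12, 0.87, 0.33])   # illustrative values
print(template_store[token])                     # template with no personal data
print(reidentify(token))                         # requires the separate store
```

Anonymisation goes a step further: the separate identity store is never created (or is destroyed), so the link to the individual cannot be restored at all.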

Ultimately, this debate will rumble on until Parliament and the Government step in. The current arrangements show why: oversight is confusingly split across four different independent commissioners. The technology offers huge benefits, and these should not be passed over lightly, but the public must have confidence in how it is used. There is therefore an urgent need for a new governance framework, so that the public, law enforcement and the biometrics industry can all move forward with a common understanding about the appropriate development and deployment of this significant technology.