The panoptic nature of biometric technology

John Xavier

Without proper laws protecting digital privacy, inappropriate use of facial recognition technology will enable mass surveillance

Facial recognition technology (FRT) has been deployed by social media platforms to tag a person in a photo or video, and by law enforcement agencies to nab criminals. Rights activists have long questioned its use, citing algorithmic bias and mass surveillance.

Facebook, now renamed Meta Platforms Inc., said earlier this month that it is shutting down facial recognition technology (FRT) on its platform. It will no longer auto-recognise faces in images and videos and, as a result, the company has hit the delete button on the more than a billion users’ faceprint templates it held.

THE GIST
  • Facebook, now renamed Meta Platforms Inc., said earlier this month that it is shutting down facial recognition technology (FRT) on its platform. This comes after a class action lawsuit against the company for failing to make the necessary disclosures about how it handles its users’ biometric data.
  • A growing body of research shows that biometric scanning technologies coupled with AI have an inherent bias.
  • India lacks a robust legal framework to address the use of biometric technology. Even so, the National Crime Records Bureau has already issued a request for proposals to create a National Automated Facial Recognition System.

Facebook has built this large dataset, pixel by pixel, since it launched the photo-tagging feature in 2010. The controversial feature automatically scans faces in uploaded photos and videos and suggests who the person might be. Although users were given an option to switch off the biometric feature, no explicit consent was sought before the feature was used in the first place.

In mid-2015, Facebook was sued in Illinois for failing to make the necessary disclosures about how it handles its users’ biometric data. The case was certified as a class action lawsuit in 2018.

Under an Illinois state law, the Biometric Information Privacy Act, it is illegal to gather or use “face geometry” data without disclosing the methods, intentions and guarantees regarding that data.

In February, Facebook settled the long-running dispute and agreed to pay $650 million to the 1.6 million members of the class in Illinois.

Facebook isn’t the first company to be accused of using biometric technology inappropriately, and it won’t be the last as several large tech firms build and sell FRT tools to government agencies.

FRT software made it possible to identify perpetrators of the infamous U.S. Capitol siege in January. Pictures of the attackers were run through facial recognition software to find similar faces in a database of social media headshots and other images scraped from the internet at large.

Using software built by Clearview AI, the police were able to identify perpetrators and submit their names in court. The technology has been hailed as a great tool that helps law enforcement agencies nab criminals.
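
To make the matching step described above concrete, here is a minimal, hypothetical sketch of how face identification by similarity search generally works: each face image is reduced to a numeric “faceprint” (an embedding vector), and a probe face is compared against a database of stored embeddings. This is not Clearview AI’s actual system; the embedding size, threshold and names are illustrative assumptions.

```python
# Hypothetical sketch of face identification by embedding similarity.
# NOT Clearview AI's actual pipeline; all data and parameters are made up.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, database, threshold=0.8):
    """Return the best-matching identity above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of 1,000 random 128-dimensional "faceprints".
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# A noisy re-capture of person_42's face should still match person_42.
probe = database["person_42"] + rng.normal(scale=0.1, size=128)
print(match_face(probe, database))
```

In a real deployment the embeddings would come from a trained neural network rather than random vectors, but the matching logic, nearest neighbour above a similarity threshold, is the same idea.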

Algorithmic bias

But a growing body of research shows that biometric scanning technologies coupled with AI have an inherent bias. A report by the U.S. National Institute of Standards and Technology (NIST) noted that facial recognition algorithms were up to 100 times more likely to misidentify Black, Brown and Asian individuals than white male faces.

A 2018 research paper, co-authored by former Google AI scientist Timnit Gebru and MIT Media Lab’s Joy Buolamwini, found that machine learning algorithms can discriminate on the basis of classes like race and gender.


Their study, titled ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, evaluated automated facial analysis systems and datasets with respect to phenotypic subgroups and found that the benchmark datasets were overwhelmingly composed of lighter-skinned individuals.
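
As an illustration of the kind of audit ‘Gender Shades’ describes, the sketch below computes a classifier’s error rate separately for each phenotypic subgroup rather than reporting one aggregate accuracy figure. The prediction records and numbers here are invented for illustration and are not data from the study itself.

```python
# Illustrative subgroup audit in the spirit of 'Gender Shades'.
# The prediction records below are invented, not data from the study.
from collections import defaultdict

# Each record: (phenotypic subgroup, predicted gender, true gender).
predictions = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("darker_male", "female", "male"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, predicted, actual in predictions:
    totals[subgroup] += 1
    errors[subgroup] += predicted != actual

# A single aggregate accuracy number can hide large per-subgroup disparities.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {errors[subgroup]}/{totals[subgroup]} misclassified ({rate:.0%})")
```

Breaking the results down this way is what exposes the disparity: a system that looks accurate overall can still fail far more often on darker-skinned or female faces.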

To curb the ill-effects of FRT, some U.S. lawmakers introduced the Facial Recognition and Biometric Technology Moratorium Act last year.

A legal bulwark needed

India lacks a robust legal framework to address the use of biometric technology, even as the Union Government has deployed over a dozen different FRT systems across the country that collect and use biometric data.

The National Crime Records Bureau (NCRB) has issued a request for proposals to create a National Automated Facial Recognition System (NAFRS), a national database of photographs to help identify criminals.

In the U.S., several states have banned its use, but there is no sweeping federal legislation that bars law enforcement agencies in the country from using FRT.

In the EU, the General Data Protection Regulation (GDPR) gives the bloc a first line of defence against FRT infringing on individuals’ privacy. Article 9 of the GDPR prohibits the processing of biometric data for the purpose of uniquely identifying an individual.


India still does not have a personal data protection law. A draft Personal Data Protection Bill, 2019 was introduced in the Lok Sabha two years ago, and the Select Committee of Parliament is yet to finalise its report on it.

Scanning technology and biometric tracking pose a grave threat to freedom of expression. Their use by law enforcement agencies in India during protests against the Citizenship (Amendment) Act, 2019, and in Hong Kong against pro-democracy protesters, highlights how this technology can be turned into a tool of mass surveillance.

A start

While Facebook’s decision to shut down FRT sounds like a blanket ban, it has grey areas. For instance, the embattled social media firm said it will keep its facial recognition services open “to help people gain access to a locked account, verify their identity in financial products or unlock a personal device.”

But as the digital rights organisation Electronic Frontier Foundation put it, “Facebook’s step is just one very large domino in the continued fight against face recognition technology.”
