The Orwellian State: Surveying Systemic Bias in Facial Recognition Technologies

Saloni Mishra is a law student and Editor at Vantage. Her primary areas of interest include theorizing about Surveillance, Gender, Tech Law, Socialist Politics and Literature.
- Monday, September 6, 2021

Introduction

Facial Recognition Technology (hereinafter, “FRT”) has aided in maintaining and further entrenching the prevailing status quo, i.e., historically racist and anti-activist surveillance systems. The contention surrounding the use of FRTs by legal and administrative institutions stems from a spectrum of valid concerns regarding their technological shortcomings and their infringement of basic human and political rights.

This article studies the rapid increase in the adoption of FRT systems by the Indian State. It puts forth a comprehensive analysis of the workings of facial recognition surveillance. Lastly, it delineates how FRTs violate principles that are imperative to ensuring people's civil liberties and rights.

Working of FRTs

FRT is a method of identifying or confirming a person's identity using their face. It can be utilized to identify persons in images, video clips or even in real time. The most commonplace example of FRTs is the Face ID technology used to unlock our smartphones and other devices. Usually, FRTs construct a “template” of a person's facial features and then run this template against a database of pre-existing pictures. This database includes images from government identification records such as driving licenses, census information, social security schemes, etc.
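
To make the matching step concrete, below is a minimal Python sketch of how a template comparison of this kind might work. Everything in it is an illustrative assumption: real systems derive templates from a trained face-encoding model rather than random vectors, and the 128-dimension size, the distance threshold and the gallery names are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical gallery of face "templates" (embeddings). In a real system,
# each 128-dimensional vector would be produced by a face-encoding model
# from an enrolled photograph; random vectors stand in for them here.
rng = np.random.default_rng(seed=42)
gallery = {name: rng.normal(size=128) for name in ("record_a", "record_b", "record_c")}

def match_face(probe, gallery, threshold=0.6):
    """Return the name of the closest gallery template,
    or None if nothing falls within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in gallery.items():
        dist = np.linalg.norm(probe - template)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A noisy re-capture of an enrolled face matches its own record...
probe = gallery["record_b"] + rng.normal(scale=0.01, size=128)
print(match_face(probe, gallery))  # record_b

# ...while an unenrolled face falls outside the threshold.
print(match_face(rng.normal(size=128), gallery))  # None
```

The threshold is the crucial design choice: set it too loosely and strangers are declared matches; set it too tightly and genuine matches are missed.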

Such technologies are used for various purposes: finding missing persons, identifying perpetrators, tracking criminals, and improving the user experience at airports, banks and shopping complexes, among others. The primary concerns regarding FRTs stem from their process of detection, analysis, and recognition, since these stages reproduce human biases. Though FRT has yielded accurate results in small populations and controlled settings, it has been argued that the present version of FRTs is unsuitable for use in uncontrolled and crowd settings.

Legislative Backing

It is important to note that no specific laws or guidelines have been established to govern the working of this potentially invasive technology. Many legal experts assert that it challenges fundamental and civil rights around privacy and expression, as it fails to satisfy the standards set by the Supreme Court in the landmark judgment of Justice K.S. Puttaswamy vs Union of India (hereinafter, “Aadhaar Judgment”). The Aadhaar Judgment also stated that enforcing a restriction on the population without any evident wrongdoing is a disproportionate solution. Under the IT Act, 2000, biometric data is categorised as sensitive personal data, with specific rules for its collation, disclosure, and dissemination.

Vidushi Marda, a lawyer and researcher, commented, “The Puttaswamy judgment ruled that privacy is a fundamental right even in public spaces... upon any violation of the same, the State must prove that the action in question was sanctioned by law, was proportionate to the necessary intervention, and in pursuit of a legitimate aim.” By this standard, the Delhi Police have failed to demonstrate how their adoption of FRT systems since 2018 satisfies the tests set out in the Aadhaar Judgment.
    
Big Brother is Always Watching

Despite the lack of legal backing, there are 16 FRT systems in active use by state governments in India. Though the Delhi Police became India's first law enforcement agency to adopt FRT in 2018, they failed to confirm its legal validity, declining to answer an RTI enquiry on whether they had performed a ‘privacy impact assessment’ before employing FRTs. A privacy impact assessment is used to identify the dangers and potential risks of collating and maintaining personal information, and to assess alternative methods of handling such information that would mitigate potential privacy risks. The adoption of FRTs is no longer unique to Delhi and has spread to Hyderabad, Lucknow, Mumbai, Coimbatore, and Patiala. Over the last five years, the increase in surveillance by the Indian State can be seen in the 40 government-financed projects tracked by the research organization AI Observatory.

One such project among the 40 is the Indian Railways' recent adoption of 500 facial recognition cameras to monitor daily commuters. Divij Joshi, the creator of AI Observatory, states that such projects show that surveillance comes before privacy. He further claims that the absence of appropriate safeguards could easily make FRTs a tool for moral policing, and concludes that their accuracy in the Indian demographic is hugely concerning.

When the Vidhi Centre for Legal Policy filed Right to Information applications with some of these police departments, asking for data on CCTV cameras and, separately, about the procedure used to implement FRT systems, the organization received either evasive replies or none at all.

The government authorities' evasive and ambiguous RTI responses have not instilled any confidence in the state's increasing use of FRTs. In response to an RTI enquiry regarding the standards and guidelines regulating the use of FRTs, the Delhi Police ambiguously responded, “The FRS (Facial Recognition System) technology may be used in investigation in the interest of safety and security of the general public”. Such surveillance of civilians is ethically troubling, as it entails deploying FRTs on people without their consent.

The Misuse Pandemic

There are several instances of the misuse of FRTs, the most recent being the arrests made in the aftermath of the Delhi riots. After a devastating pogrom in North-East Delhi, arbitrary arrests were made on the basis of FRT matches drawn from closed-circuit television (hereinafter, “CCTV”) data and open-sourced videos.[1] FRT has also been used to track down protestors during the anti-CAA demonstrations. Another concern has been voiced by Anushka Jain, an associate counsel (Transparency & Right to Information) at the Internet Freedom Foundation. She commented that the Delhi Police were permitted to use FRTs on the basis of a Delhi court order concerning the tracking of missing children.

However, the Delhi Police are erroneously using it for investigative purposes, facilitating a “function creep”: the use of information for a purpose other than the one originally sanctioned. According to Ms Jain, “…this might lead to problems where certain minorities are targeted without any legal backing or any oversight. Another problem that may arise is of mass surveillance, wherein the police are using the FRT system during protests.” Moreover, by logging people's actions, FRT challenges citizens' right to expression and dissent, as highlighted by the current regime of disproportionate arrests of activists and protestors. Further, there are several other possibilities of misuse of FRTs, as argued by the Internet Freedom Foundation (IFF), such as the threat of state-sponsored surveillance and the violation of freedom of speech and expression, among others.[2]

It has been deduced that any technology that increases the degree of surveillance and policing has the potential to aggravate historical, systemic prejudices. Apart from serious concerns regarding the right to privacy, FRT brings into play several issues regarding discrimination, criminalization, and scrutiny. Since FRTs in Delhi are likely to use data from CCTV footage, areas with relatively more CCTV cameras would be over-surveilled, over-policed and thus subject to more errors than other areas. Hence, Muslim communities residing in over-policed areas may end up with the short end of the stick, as they are over-represented in CCTV footage, a problem compounded by police biases. Following the same argument, Muslims inhabiting places like Nizamuddin and Old Delhi would be at a greater disadvantage, more so in the context of the blatant criminalization narrative peddled by the State. This claim is strengthened by a recent study that provides an empirical basis for understanding the probable discrimination resulting from the adoption of FRTs by the Delhi Police. Upon mapping the jurisdictions covered by police stations, it concluded that Muslims are the most likely targets if police forces implement FRTs at scale.

The existence of surveillance bias in Old Delhi and Nizamuddin has been attributed to two prominent factors, namely, the disproportionate positioning of police stations and the uneven placement of CCTV cameras. Consequently, facial recognition surveillance will disproportionately disadvantage the Muslim population residing in heavily monitored regions. Due to the uneven placement of CCTVs and police stations, combined with pre-existing prejudices, FRTs dangerously undermine these residents' right to equality. However, it should not be assumed that an even and equal spatial distribution of FRTs would benefit the public, as it can still negatively affect people's freedom, mobility, and privacy.

The adoption of surveillance technologies like FRTs and fingerprint recognition by police forces has increased tremendously. However, high error rates and the prejudices of coders result in misidentification, causing the erroneous incarceration of innocent civilians. Such intrusive surveillance is ethically controversial, as it transforms people into subjects of doubt and suspicion without any scope for consent. This creation of subjects makes individuals more vulnerable to State monitoring and thus brings India very close to Martin Moore's concept of surveillance democracy.

FRTs and Global Concerns

The various concerns surrounding the misuse of FRTs are not limited to India. The American Civil Liberties Union has noted that such technologies can be applied passively, stripping subjects of consent and even of the knowledge that they are being scanned. Such concerns mainly stem from the fact that FRTs lack a critical legal framework to govern their usage and adoption; thus, there are no accountability mechanisms or standards in place to keep their operations in check. Other civil society organizations like the Electronic Frontier Foundation, the Algorithmic Justice League, and Amnesty International have also called for a ban on the use of this technology, citing similar concerns.

Though the employment of FRTs by police forces is increasing, such technologies are rarely free of error. For instance, because training datasets over-represent specific communities, FRTs recognize some kinds of faces better than others, producing a training bias. Apart from this, they can misidentify facial structures, leading to innocent people being cast as suspects. FRTs may also result in the unequal targeting of disenfranchised communities due to increased surveillance, creating a perverse distribution of policing.
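
A toy simulation helps illustrate why training bias translates into unequal harm. Suppose a system's “impostor” scores (similarity scores between different people) run slightly higher for a group the model was trained on less; a single global match threshold then flags far more innocent members of that group. The score distributions and threshold below are invented for illustration, not drawn from any real system.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
THRESHOLD = 0.8  # one global "match" threshold applied to everyone

# Invented impostor-score distributions: similarity scores between pairs of
# *different* people. The under-represented group receives slightly higher
# scores because the model separates its faces less precisely.
well_represented = rng.normal(loc=0.40, scale=0.15, size=100_000)
under_represented = rng.normal(loc=0.60, scale=0.15, size=100_000)

def false_match_rate(impostor_scores, threshold):
    """Fraction of non-matching pairs wrongly declared a match."""
    return float(np.mean(impostor_scores >= threshold))

print(false_match_rate(well_represented, THRESHOLD))   # roughly 0.004
print(false_match_rate(under_represented, THRESHOLD))  # roughly 0.09
```

Even in this crude model, the same machinery exposes one community to more than twenty times as many wrongful “hits” as another, which is precisely the dynamic documented in the studies discussed below.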

The use of FRTs by law enforcement has historically been known to weaken marginalized communities. As per Kade Crockford of the American Civil Liberties Union, facial surveillance is a serious threat to Black people in three primary ways. Firstly, as per the seminal research by Black scholars Joy Buolamwini and Timnit Gebru, the technology itself can be racially prejudiced. The research found that facial analysis algorithms misidentified Black women up to 35 per cent of the time, while remaining almost perfectly accurate for white men. Moreover, a report by the National Institute of Standards and Technology also found that face identification algorithms work unfavourably towards people of colour when compared to middle-aged white men. Secondly, since the police usually run facial recognition against mugshot databases, historical racial bias gets embedded in these monitoring systems. To contextualise this with an example: although Black and white people use marijuana at similar rates, Black people are far more likely to be arrested for its possession. Each arrest generates an entry in the database, and due to the higher probability of Black people being arrested, the database becomes a cog in the systemic machinery of racism. Lastly, Crockford argues that even if an unbiased database were adopted, FRTs would still replicate oppressive mechanisms because “the entire system is racist”.

Against this backdrop, claims by human rights activists gain urgency. One such claim, voiced by Amnesty International researcher Matt Mahmoudi, states: “Facial recognition risks being weaponized by law enforcement against marginalized communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights.” The adverse repercussions of FRTs have ignited immense outrage throughout the world; international organizations like Amnesty have launched the ‘Ban the Scan’ campaign to resist facial recognition surveillance. It becomes increasingly evident that the use of FRTs is yet another divisive and oppressive tactic employed by states globally under the garb of growth, development and efficiency, even as it wrecks the already minimal upward mobility of stigmatised communities and communities that lack social capital.

Minimizing Big Brother’s Influence

Considering the ramifications of FRTs, there is a grave need to develop and substantiate legal mechanisms concerning digital surveillance and data protection. Studying different legal approaches will better equip India to develop a tailored framework to govern the technologies in question. Moreover, since cyber security is a fundamental facet of FRTs, the databases containing personal identification information should be rigorously secured to protect people's rights and privacy. By striking a balance between protecting national security (internal and external) and protecting the rights and liberties of the general public, India can potentially create a beneficial FRT model.

For instance, by considering the seminal developments introduced by Convention 108,[3] India can build a solid legal framework that addresses concerns of privacy and the violation of civil liberties. Convention 108 aims to rectify and improve data protection, i.e., legally safeguarding civilians with respect to the automatic processing of their personal information. It recognizes the influence of information stored in computerized data systems and seeks to ensure that it does not negatively impact or weaken the position of the persons on whom data are stored. Therefore, the makers of the Convention made it compulsory for authorities that hold “information power” to, inter alia, maintain the quality of the information they store, refrain from storing excess information that does not serve the stated purpose, and protect against the misuse of information. Indian policymakers must also address the issues of diversity and representation in training databases by adopting strategies that promote inclusion and accountability, such as regular and ethical auditing, especially considering intersecting identities.

Furthermore, as a first step, policymakers need to formulate meticulous rules governing FRTs, including mechanisms through which citizens can hold the State accountable. To minimise the undue representation of certain communities, the State should actively provide diverse and holistic training databases, thus reducing inherent biases and improving the accuracy of the algorithms. Changes addressing the technological and underlying framework of FRTs, especially their nefarious biases, could make the technology genuinely useful for law enforcement to safely and justly monitor and prevent crime.

The views expressed above are solely the author's.

Sources:

[1] Open-source videos refer to videos that are publicly accessible.

[2] Refer to Internet Freedom Foundation’s Project Panoptic for more information.

[3] Convention 108 is a legally binding instrument that was opened for signature by the Council of Europe on January 28, 1981. It is the only legally binding international instrument in the field of data protection.