Are Automated Recruitment Systems Just?

Parnika Sharma is a penultimate-year B.A. LL.B. student at Jindal Global Law School. She has developed a keen exploratory interest in ADR, Environmental Law, Human Rights Law, and Family Law.
- Mon July 12 2021

Introduction

Traditional recruitment processes are perceived to be marred by prejudice due to the involvement of human decision-makers whose judgments are governed by apparent or latent subjective biases. To counter this bias and to handle the enormous volume of data generated by recruitment applications, organizations are increasingly deploying automated recruitment systems. Such algorithmic decision-making facilitates recruitment by streamlining large sets of applicant-related information.

Different algorithms are used at different stages: directing advertisements towards prospective applicants, parsing and scoring resumes, assessing proficiencies, and so on. These algorithms are often trained using machine learning to predict outcomes on the basis of existing employees’ information or previous decision-making processes, thereby making recruitment less arduous and more expeditious. This article is a preliminary attempt to bring to light the socio-legal ramifications of algorithmic bias, which is especially prevalent in the recruitment realm, and to initiate deliberation over alternative perspectives on what a just process might look like. Further, I attempt to place the algorithmic bias perpetuated by automated recruitment within the human rights framework, before offering concluding remarks on the just or unjust nature of these systems.

Where Do Algorithms Pose Problems?

Whilst these algorithms simplify various aspects of recruitment and make hiring efficient, they have certain socio-legal ramifications. Algorithms are ostensibly used to reduce the bias that arises from human discretion; however, they arguably end up entrenching old biases and introducing new ones because of their inherently statistical nature. By default, their outcomes amplify and replicate institutional biases, since they reflect the patterns observed when human employers made the decisions. For example, if an organization’s recruiters have never recruited people from an underrepresented caste or community, it is implausible to expect an algorithm trained on that history to learn to evaluate such applicants fairly today.

The limited nature of the datasets used to train these algorithms leads to a replication of previously biased patterns. For example, resume-scanning algorithms like the one used by Amazon scored women and people of color differently from white male candidates because the training data fed to the algorithm primarily portrayed European male employees as markers of success. Further, some algorithmic advertisements have been found to replicate real-world stereotypes, without any intended or direct human instruction, by directing certain job postings towards a particular gender or race. According to a US study, advertisements for shop cashier openings were shown to an audience that was 85% female, ads for taxi jobs to an audience that was 75% Black, whereas ads for lumber industry jobs reached an audience that was 72% white and 90% male.
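
This replication mechanism is easy to demonstrate. The following minimal Python sketch uses entirely synthetic data (the groups, the keyword, and all probabilities are hypothetical, chosen only for illustration) to "train" a trivial resume scorer on historically skewed hiring decisions; the scorer reproduces the skew for new applicants even though group membership is never one of its inputs:

```python
import random

random.seed(0)

# Hypothetical historical data: 1,000 past applicants. Bias enters only through
# which resumes past recruiters marked as hires; the model never sees the group.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group A's resumes happen to contain a culturally loaded keyword more often.
    has_keyword = random.random() < (0.8 if group == "A" else 0.3)
    # Past recruiters strongly favoured resumes carrying that keyword.
    hired = has_keyword and random.random() < 0.9
    history.append((has_keyword, hired))

# A trivially "trained" scorer: a resume's score is the historical hire rate
# of resumes sharing its feature.
with_kw = [hired for kw, hired in history if kw]
without_kw = [hired for kw, hired in history if not kw]
rate_with = sum(with_kw) / len(with_kw)
rate_without = sum(without_kw) / len(without_kw)

def score(has_keyword: bool) -> float:
    return rate_with if has_keyword else rate_without

# New applicants drawn from the same two groups: the learned scores reproduce
# the historical skew even though group membership is never an input.
for group, kw_prob in [("A", 0.8), ("B", 0.3)]:
    avg = sum(score(random.random() < kw_prob) for _ in range(1000)) / 1000
    print(f"Average score, group {group}: {avg:.2f}")
```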

Moreover, since such algorithmic models are protected under intellectual property laws, companies may not be required to reveal details of the datasets used to train them. Further, not only are these algorithms often deployed without the applicant’s consent, but even their developers cannot foresee or explain the effective logic of an algorithm’s operations, owing to its black-box nature. If such is the level of unpredictability, it is reasonable to question whether any legal regulation or measure can govern algorithmic recruitment with complete certainty.

Legal & Theoretical Perspectives      

As established earlier, biased automated decisions do not result directly from employer interference but rather from proxy discrimination. For instance, while a dataset may not contain any details of applicants’ race or gender, the algorithm can infer them from postal codes or other proxy variables, thereby producing inadvertently discriminatory outcomes based on purely statistical predictions. Yet it does not seem legally reasonable to leave such algorithmic recruitment processes unaccountable merely because the biased outcomes do not result from human intervention and intention. Though algorithmic outcomes may clear the test of an intentionalist interpretation, the consequent mischief places them on a par with the discrimination that arises from human decision-making, calling their original purpose into question. While algorithms do not engage in direct discrimination and are, in that respect, better placed than human judgment, that does not negate the disparate impact that indirectly biased recruitment has on potential applicants.
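
A small sketch can make proxy discrimination concrete. In the following Python toy model (all group names, postcodes, and rates are invented for illustration), a screening rule that reads only an applicant’s postal code nonetheless sorts applicants almost perfectly by a protected attribute it never sees:

```python
import random

random.seed(1)

# Synthetic applicant pool. The protected attribute is recorded here only so we
# can audit the outcome; the screening rule below never reads it.
def make_applicant():
    group = random.choice(["X", "Y"])
    # Residential segregation: group X lives mostly in postcode P1, group Y in P2.
    home = "P1" if group == "X" else "P2"
    other = "P2" if group == "X" else "P1"
    postcode = home if random.random() < 0.9 else other
    return {"group": group, "postcode": postcode}

pool = [make_applicant() for _ in range(10_000)]

# Suppose historical hires clustered in P1, so a trained model ends up
# preferring applicants from P1 -- a rule that never mentions the group.
def shortlisted(applicant):
    return applicant["postcode"] == "P1"

# Audit: per-group selection rates show the postcode acting as a group proxy.
for g in ("X", "Y"):
    members = [a for a in pool if a["group"] == g]
    rate = sum(shortlisted(a) for a in members) / len(members)
    print(f"Shortlist rate, group {g}: {rate:.0%}")
```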

Such algorithms shape individuals’ access to recruitment and, subsequently, to employment opportunities. They therefore have a direct nexus with an individual’s livelihood, which constitutes an essential human right. Human rights law acknowledges every individual’s inalienable right not to be subjected to any form of disparate treatment or disparate impact, including in recruitment processes. Further, the Universal Declaration of Human Rights nowhere restricts its protection to direct human-to-human discrimination. The indirect discrimination that algorithmic hiring processes cause against applicants could therefore be brought within the purview of the human rights framework through creative legal reasoning, since recent judicial precedents also prohibit indirect discrimination.

In the Indian landscape, this can in one sense affect an individual’s right not to be discriminated against during recruitment under Section 5 of the Equal Remuneration Act, 1976. One could contend that this right is premised on a human employer acting in a discriminatory manner, with the employer defined as a person or authority, not an automated one. However, it is also plausible to rebut this, because recruitment processes are ultimately automated for and by humans, so algorithmic discrimination can, in a sense, be attributed to human employers. Similarly, such situations of disparate impact are open to constitutional challenge under Article 15 of the Indian Constitution, which prohibits discrimination against citizens on grounds of religion, race, caste, sex, or place of birth. Having said this, it remains imperative to acknowledge and remedy the ambiguities in the prevailing legal regulations.

Innovating Alternative Just Perspectives 

Viewing these algorithmic recruitment processes normatively through Amartya Sen’s approach to justice, it is vital not to quest for a single set of just principles, policies, or social arrangements for these institutional hiring practices. Rather, we should scrutinize the evidence and evaluate the process on the basis of its desirability and of the social outcomes it actually realizes, because competing lines of intellectual reasoning can never lead us to a shared, consensual understanding of justice. Therefore, while automating the recruitment process is a fairly new and still-emerging idea, one approach to achieving justice could be to focus on the actual realization of capabilities for all individuals.

While algorithmic analysis of huge datasets is enabling recruiters to expand the pool of candidates, it is implicitly enlarging many individuals’ capabilities in terms of access to recruitment options. However, algorithmic bias disparately impacts certain sets of individuals by reducing their access to recruitment, thereby disproportionately curtailing the actual realization of their capabilities. Automation and the resulting algorithmic bias place certain individuals in an inequitable position and arguably do not augment actual opportunities for employment in comparison with traditional recruitment processes. Further, favorable subjective determinations on several relevant factors can currently be made only through human discretion, and that discretion is discounted when employers place excessive reliance on algorithmic decision-making.

It then becomes necessary to deliberate, through a cost-benefit analysis, whether it is presently more reasonable to expect humans to overcome their direct and latent biases through rigorous training and education, or to continue with these automated recruitment systems. Several scholars argue for the latter, placing their confidence in data protection regulations, corporate transparency measures, bias detection mechanisms, and the like. Candidates can request explanations for algorithmic decisions, compelling developers to create justifiable algorithms and companies to analyze their datasets and outcomes thoroughly. However, not only is there a near-complete absence of legal policy for regulating algorithms in countries like India, it is also averred that, as of now, no algorithm can in principle be rendered completely explainable, even though efforts are being made to make them customizable and controllable. Further, de-biasing an algorithm is arguably a mere band-aid solution, leaving recruiters free to reuse the same algorithm later and to justify its biased outcomes if doing so better serves their commercial interests.
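
To give one concrete instance of the bias detection mechanisms mentioned above: the short Python function below computes a disparate impact ratio and applies the US EEOC’s “four-fifths” rule of thumb. The audit figures are hypothetical, and the sketch represents a single audit step, not a complete compliance tool:

```python
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's.
    Under the US EEOC 'four-fifths' guideline, a ratio below 0.8 is commonly
    treated as prima facie evidence of adverse impact."""
    low, high = sorted([selected_a / total_a, selected_b / total_b])
    return low / high

# Hypothetical audit figures for a single screening stage.
ratio = disparate_impact_ratio(selected_a=45, total_a=300,
                               selected_b=90, total_b=300)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.15 / 0.30 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: flag this stage for review.")
```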

In conclusion, the limitations of present-day technology that give rise to algorithmic bias cannot simply be done away with, for an algorithm trains itself to give biased results (mostly inadvertently). However, a multitude of alternatives can still be deliberated upon to achieve the final objective of non-prejudicial, convenient, and just recruitment. From another perspective, it seems logical to infer that algorithms reproduce a technical form of human bias, since they train themselves on datasets that more often than not contain implicit or explicit patterns of previously biased decision-making. Consequently, it can be argued that algorithmic functioning is closely tied to human functioning: what humans do, algorithms replicate.

Therefore, remedial action should lie not only at the level of the technology but also at the human level. While it is essential to find ways of perfecting algorithmic applications so that they do not perpetuate the very biases society is trying to tackle, it is equally important to reform the existing state of human affairs and actions. Since technology’s statistical rationality cannot itself be countered, given its inherent characteristics, major reform can be brought about at the human level by training and educating decision-makers to uncover and look beyond their latent biases. This would, in turn, improve the training data from which algorithms produce automated decisions. Lastly, algorithms currently operate as unregulated decision-makers; an immediate step, therefore, could be to draft guidelines to tackle this advanced form of discrimination.

Views expressed above are solely of the author.