Deconstructing the AI ‘Black Box’ Mystery and Its Legal and Ethical Dilemmas

Imagine a world where your resume is automatically deemed inferior because of your gender, where, as a convicted criminal, you are predicted to re-offend at a significantly higher probability because of your race, and where you are given a lower credit limit because you are a woman, despite having a better credit score than your husband.

Unfortunately, these are not hypothetical scenarios but real-life cases, involving trillion-dollar companies and even national courts: Amazon,[1] the US court system[2] and Apple.[3] The one common factor, the root of all these issues, is AI; more specifically, the AI black box problem.

This article aims to define the AI black box problem, provide detailed insights into real-life cases born of this mystery, set out the current legal provisions relating to it and, finally, explore possible routes for eliminating the problem in the future.

The AI black box dilemma arises from the complexity and convoluted nature of certain AI systems, to the point where their internal workings are not fully comprehensible or explainable, even by those who created them.[4]

In other words, there is a clear input fed into the AI system and a clear output generated by it; however, “black box predictive models can be such complicated functions of the variables that no human can understand how the variables are jointly related to each other to reach a final prediction.”[5]
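
To make the opacity concrete, here is a minimal sketch using synthetic data and an off-the-shelf neural network (purely illustrative, not drawn from any system discussed in this article). The input and the output are perfectly clear; the mapping between them is spread across thousands of jointly learned parameters:

```python
# Minimal sketch of the black box problem: inputs and outputs are clear,
# but the mapping between them is not. All data here is synthetic and
# illustrative, not drawn from any real system.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A clear input: 1,000 synthetic applicants described by 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A "black box": a neural network with thousands of learned weights.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# A clear output: a prediction for a new applicant...
print(model.predict(X[:1]))

# ...but no human-readable account of *why*. The decision is spread across
# every weight in every layer, jointly, with no single inspectable rule.
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"The prediction above depends jointly on {n_weights} learned parameters.")
```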

The mystery, then, lies in the processing that turns input into output. This can be extremely detrimental when that incomprehensible processing picks up biases in the input data without the developers realizing, and produces unreliable outputs that discriminate against certain groups, as the Amazon case demonstrates.

In that case, Amazon used an AI algorithm as a recruitment tool, which was trained on data from applications submitted over a 10-year period and was scrapped after three years of use.

According to those who worked on the system, candidates were not being evaluated in a gender-neutral way: the recruitment system had been built by feeding it data accumulated from CVs submitted to the firm, mostly by men.[6] The system had thus effectively taught itself that male candidates were preferable, and it penalized CVs containing the word “women”, disadvantaging applicants who had, for example, attended women’s colleges or joined women’s societies. A deliberately simplified sketch of this mechanism is given below.
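
The following toy model illustrates the mechanism under invented assumptions: synthetic CVs, a made-up “mentions_women” feature, and historical hiring labels that encode a bias against it. It is not Amazon’s system, only a minimal demonstration that a model trained on biased decisions reproduces the bias:

```python
# Hypothetical sketch of how historical bias leaks into a model. The data,
# feature names, and numbers are invented for illustration; this is not
# Amazon's system, only the general mechanism reported in the case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, a skills score, and a binary flag for
# whether the CV mentions the word "women" (e.g. a women's college).
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
mentions_women = rng.integers(0, 2, n)

# Historical labels reflect past human decisions that favored male CVs:
# the "women" flag lowers the odds of having been hired, independent of merit.
logits = 0.5 * experience + 1.0 * skills - 1.5 * mentions_women - 2.5
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([experience, skills, mentions_women])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the bias it was shown: the coefficient on
# the "women" flag comes out strongly negative, so it penalizes such CVs.
print(dict(zip(["experience", "skills", "mentions_women"], model.coef_[0].round(2))))
```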

This is a clear example of how human biases can slip into data and ultimately bias AI systems. The stakes are high: according to the BBC, 42% of UK tech firms were using AI to screen and recruit candidates in 2024,[7] so biased systems can harm the groups they discriminate against at scale.

As for the legal aspects of black box AI: on 1 August 2024, the European Artificial Intelligence Act (EU AI Act), described by the EU as ‘the world’s first comprehensive AI law’,[8] came into force. It specifically provides that “high-risk AI systems such as AI-based medical software or AI systems used for recruitment must comply with strict requirements, including risk-mitigation systems, high-quality of data sets, clear user information, human oversight, etc.”[9]

Legal provisions are thus being made specifically to prevent instances such as the Amazon recruitment bias and other similar cases. AI discrimination lawsuits have also begun to emerge: iTutorGroup, a tutoring company based in China, was made to pay $365,000 to over 200 applicants for using hiring software that automatically rejected older applicants, in violation of the US Age Discrimination in Employment Act (ADEA).[10]

Notably, the EU also plans to adopt a more human-centric approach to AI, to ensure that AI applications comply with fundamental rights legislation. It plans on “integrating accountability and transparency requirements into the development of high-risk AI systems, and improving enforcement capabilities”, so that these systems are designed to be legally compliant from the start. In the event of a breach, such requirements will give national authorities access to the information needed to investigate whether the use of the AI complied with EU law.[11]

Additionally, the EU AI Act classifies self-driving cars as high-risk AI systems,[12] and their ethical and legal dilemmas serve as another real-life example of complications arising from black box AI. Suppose you are the sole passenger of an automated vehicle, and its sensors detect that only two options remain: the AI can either guide the car into a barrier, killing you, or swerve to save you and kill a passerby.

What should the AI do? Does it owe a duty of care to you as the passenger? If so, who should bear liability? And who should have a say in what should be done: the car manufacturers, the AI programmers, ethical experts, the driver, the government?

(Photo credit: Moral Machine, accessed on https://www.moralmachine.net/)
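
None of these questions has a settled answer, yet in software they cannot remain open: some routine must ultimately select a maneuver. The toy sketch below is purely hypothetical (no real driving stack exposes a single “who to save” function) and only illustrates that the value judgment ends up encoded by someone:

```python
# Purely hypothetical toy, not how any real autonomous-driving stack is
# structured. It only illustrates that the ethical choice must be encoded
# somewhere in software, by someone.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str           # e.g. "stay course" or "swerve"
    passenger_harm: float   # estimated probability of serious harm (0..1)
    bystander_harm: float   # ditto, for the passerby

def choose(options: list[Outcome]) -> Outcome:
    # Whoever writes this line answers the moral question. Here: minimize
    # total expected harm. With equal totals, as in the scenario above,
    # the tie is broken silently by list order.
    return min(options, key=lambda o: o.passenger_harm + o.bystander_harm)

print(choose([Outcome("stay course", 0.9, 0.0),
              Outcome("swerve", 0.0, 0.9)]))
```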

What if data is fed in with a particular outcome in mind, but the incomprehensible black box processing produces a faulty outcome unbeknownst to the programmers? Who should be held responsible for any resulting accident? According to a Tesla engineer quoted by Reuters, it is nearly impossible to see what went wrong when the AI misbehaves and causes an accident.[13]

Consider another scenario, in which the AI of an autonomous vehicle must choose between saving a young person and an elderly one. Researchers at the Massachusetts Institute of Technology have found that countries’ preferences in such dilemmas “correlate highly with culture and economics.”[14]

For example, participants from collectivist cultures like China and Japan are more likely to spare the old over the young when presented with that choice; the researchers hypothesized that this may stem from the greater emphasis those cultures place on respecting the elderly.[15] The researchers also noted that participants from poorer countries with weaker institutions show greater tolerance of jaywalkers.[16]

The rule-makers and regulators of each country may have to take note of the cultural and economic factors that shape their citizens’ choices; these preferences could thus play a major role in the design and regulation of such vehicles.

It is important to note that these who-to-save scenarios are not mere hypotheticals but real perplexities faced by engineers at firms such as Tesla.[17] One platform that delves deeper into this moral dilemma is Moral Machine, a 2014 MIT Media Lab experiment.

Moral Machine has been described as “a game-like platform that would crowdsource people’s decisions on how self-driving cars should prioritize lives in different variations of the ‘trolley problem’”. After four years live, it had recorded over 40 million decisions by people in 233 countries, making it “one of the largest studies ever done on global moral preference.”[18]

So, whilst these scenarios are mainly found on playable platforms today, they will soon play out in reality more often: according to the UK government, self-driving vehicles could be on British roads by 2026, following the Automated Vehicles (AV) Act, which became law on 20 May 2024.[19]

Automated vehicles also raise the complicated concept of legal foreseeability, which has presented itself as a challenge. Legal foreseeability asks whether it was reasonable for the defendant to foresee that their actions would cause harm. Because an AI can reach counter-intuitive solutions through obscure patterns it picks up from the data, and can engage in conduct no human would undertake, its decisions and conduct may be legally unforeseeable by the creator or user of the AI.[20]

According to a Harvard journal, “if the creator of AI cannot necessarily foresee how the AI will make decisions, what conduct it will engage in, or the nature of the patterns it will find in data, what can be said about the reasonable person in such a situation?”[21]

(Photo credit: Kathleen Fu, accessed on https://www.kathleenfu.com/)

As for the future of AI, the EU AI Act has set out a legal framework that, according to the European Commission, is “responsive to new developments, easy and quick to adapt and allows for frequent evaluation.”[22] Measures are clearly being put in place for the legal system to keep pace with one of the fastest-developing technologies on the planet: the Act is to undergo constant evaluation so that “any need for revision and amendments is identified.”[23]

Whilst some appreciate the EU AI Act, such as Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM, who commended “the EU for its leadership in passing comprehensive, smart AI legislation”,[24] others are more skeptical and believe the UK’s more incremental approach to regulation is the better choice.[25]

However, the fact remains that it is crucial to make regulations that safeguard humanity, and the true challenge lies in pursuing this noble objective without limiting AI’s full potential.

In conclusion, whilst areas of uncertainty remain around the legal and ethical dimensions of the AI black box problem, measures are being implemented, and more are planned, to navigate them, in keeping with the words often attributed to Abraham Lincoln: “the best way to predict the future is to create it.”[26]

[1] BBC, ‘Amazon Scrapped “Sexist AI” Tool’ BBC News (10 October 2018) <https://www.bbc.co.uk/news/technology-45809919>.

[2] Jeff Larson and others, ‘How We Analyzed the COMPAS Recidivism Algorithm’ (ProPublica, 23 May 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.

[3] BBC, ‘Apple’s “Sexist” Credit Card Investigated by US Regulator’ BBC News (11 November 2019) <https://www.bbc.co.uk/news/business-50365609>.

[4] Nguyen Quan, ‘Black Box AI: What Is It and How Does It Work?’ (Eastgate Software, 18 August 2023) <https://eastgate-software.com/black-box-ai-what-is-it-and-how-does-it-work/>.

[5] Cynthia Rudin and Joanna Radin, ‘Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson from an Explainable AI Competition’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8>.

[6] BBC, ‘Amazon Scrapped “Sexist AI” Tool’ BBC News (10 October 2018) <https://www.bbc.co.uk/news/technology-45809919>.

[7] Charlotte Lytton, ‘AI Hiring Tools May Be Filtering out the Best Job Applicants’ (BBC, 16 February 2024) <https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination>.

[8] European Commission, ‘Artificial Intelligence – Questions and Answers’ (Press corner, 1 August 2024) <https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683>.

[9] ibid.

[10] Kara Dennison, ‘Could Lawsuits against AI Lead to a Shift in Job Searching?’ Forbes (22 March 2024) <https://www.forbes.com/sites/karadennison/2024/03/21/could-lawsuits-against-ai-lead-to-a-shift-in-job-searching/>.

[11] ibid.

[12] Tambiama Madiega, ‘Artificial Intelligence Act’ (European Parliamentary Research Service, EU Legislation in Progress briefing, 2024) <https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf>.

[13] Norihiko Shirouzu and Chris Kirkham, ‘Tesla’s Robotaxi Push Hinges on “Black Box” AI Gamble’ (Reuters, 10 October 2024) <https://www.reuters.com/technology/tesla-gambles-black-box-ai-tech-robotaxis-2024-10-10/>.

[14] Karen Hao, ‘Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From.’ (MIT Technology Review, 24 October 2018) <https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/>.

[15] ibid.

[16] ibid.

[17] Faiz Siddiqui, ‘How Elon Musk Knocked Tesla’s “Full Self-Driving” off Course’ Washington Post (19 March 2023) <https://www.washingtonpost.com/technology/2023/03/19/elon-musk-tesla-driving/>.  

[18] Karen Hao, ‘Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From.’ (MIT Technology Review, 24 October 2018) <https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/>.

[19] GOV.UK, ‘Self-Driving Vehicles Set to Be on Roads by 2026 as Automated Vehicles Act Becomes Law’ (GOV.UK, 2024) <https://www.gov.uk/government/news/self-driving-vehicles-set-to-be-on-roads-by-2026-as-automated-vehicles-act-becomes-law>.

[20] Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology <https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf>.

[21] ibid.  

[22] European Commission, ‘Artificial Intelligence – Questions and Answers’ (Press corner, 1 August 2024) <https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683>.

[23] ibid.

[24] Isabel Gottlieb, ‘EU AI Act’s Passage Starts the Clock for US Companies to Comply’ (Bloomberg Law, 13 March 2024) <https://news.bloomberglaw.com/artificial-intelligence/the-eu-parliament-just-voted-to-regulate-ai-what-happens-next>.

[25] Asress Adimi Gikay (2024) 32 International Journal of Law and Information Technology <https://academic.oup.com/ijlit/article/32/1/eaae013/7701544?login=false> accessed 22 November 2024.

[26] ‘December 2016 News’ (National University, 1 December 2016) <https://www.nu.edu/chancellors-page/december-2016/>.
