Thursday, June 15, 2023

How the UK is getting AI regulation right

Regulation must protect AI innovation while addressing risks, but what’s the right balance? ra2 studio / Shutterstock
Asress Adimi Gikay, Brunel University London

The latest generation of artificial intelligence (AI), such as ChatGPT, will revolutionise the way we live and work. AI technologies could significantly improve education, healthcare, transport and welfare. But there are downsides, too: jobs automated out of existence, surveillance abuses, and discrimination, including in healthcare and policing.

There’s general agreement that AI needs to be regulated, given its awesome potential for good and harm. The EU has proposed one approach, based on potential problems. The UK is proposing a different, pro-business, approach.

This year, the UK government published a white paper (a policy document setting out plans for future legislation) unveiling how it intends to regulate AI, with an emphasis on flexibility to avoid stifling innovation. The document favours voluntary compliance, with five principles meant to tackle AI risks.

Strict enforcement of these principles by regulators could be added later if it’s required. But is such an approach too lenient given the risks?

Crucial components

The UK approach differs from the EU’s risk-based regulation. The EU’s proposed AI Act prohibits certain AI uses, such as live facial recognition technology, where people shown on a camera feed are compared against police “watch lists”, in public spaces.

The EU approach creates stringent standards for so-called high-risk AI systems. These include systems used to evaluate job applications, student admissions, eligibility for loans and public services.

I believe the UK’s approach better balances AI’s risks and benefits, fostering innovation that benefits the economy and society. However, critical challenges need to be addressed.

Facial recognition in a crowd.
The EU’s AI Act would prohibit live face recognition by police forces in public spaces. Gorodenkoff / Shutterstock

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than implementing new AI-centred legislation.

Second, five general principles – each consisting of several components – would be applied by regulators in conjunction with existing laws. These principles are (1) “safety, security and robustness”, (2) “appropriate transparency and explainability”, (3) “fairness”, (4) “accountability and governance”, and (5) “contestability and redress”.

During initial implementation, regulators would not be legally required to enforce the principles. A statute imposing these obligations would be enacted later, if considered necessary. Organisations would therefore be expected to comply with the principles voluntarily in the first instance.

Third, regulators could adapt the five principles to the subjects they cover, with support from a central coordinating body. So, there will not be a single enforcement authority.

Promising approach?

The UK’s regime is promising for three reasons. First, it promises to use evidence about AI in its correct context, rather than applying an example from one area to another inappropriately.

Second, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. Third, there are advantages to its decentralised approach. For example, a single regulatory organisation, were it to underperform, would affect AI use across the board.

Let’s look at how it would use evidence about AI. As AI’s risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a specific use of AI could be appropriated to propose drastic and inappropriate regulatory solutions.

For instance, some US internet companies use algorithms to determine a person’s sex based on facial features. These showed poor performance when presented with photos of darker-skinned women.

This finding has been cited in support of a ban on law enforcement use of face recognition technology in the UK. However, the two areas are quite different and problems with gender classification do not imply a similar issue with facial recognition in law enforcement.

These US gender algorithms work under relatively lower legal standards. Face recognition used by UK law enforcement undergoes rigorous testing, and is deployed under strict legal requirements.

Driverless car.
Some AI applications, such as driverless cars, could fall under more than one regulatory regime. riopatuca / Shutterstock

Another advantage of the UK approach is its adaptability. It can be difficult to predict potential risks, particularly with AI that could be appropriated for purposes other than the ones foreseen by its developers and machine learning systems, which improve in their performance over time.

The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities would be spread between different organisations. Centralising AI oversight under a single national regulator could lead to inefficient enforcement.

Regulators with expertise in specific areas such as transport, aviation, and financial markets are better suited to regulate the use of AI within their fields of interest.

This decentralised approach could minimise the effects of corruption, of regulators becoming preoccupied with concerns other than the public interest and differing approaches to enforcement. It also avoids a single point of enforcement failure.

Enforcement and coordination

Some businesses could resist voluntary standards, so, if and when regulators are granted enforcement powers, they should be able to issue fines. The public should also have the right to seek compensation for harms caused by AI systems.

Enforcement needn’t undermine flexibility. Regulators can still tighten or loosen standards as required. However, the UK framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, resulting in overlaps. For example, transport, insurance, and data protection authorities could all issue conflicting guidelines for self-driving cars.

To tackle this, the white paper suggests establishing a central body, which would ensure the harmonious implementation of guidance. It’s vital to compel the different regulators to consult this organisation rather than leaving the decision up to them.

The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country’s position as a leader in the area, the framework must be aligned with regulation elsewhere, especially the EU.

Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the UK’s system of regulation for this transformative technology.

Asress Adimi Gikay, Senior Lecturer in AI, Disruptive Innovation and Law, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the UK data protection authority gives a free pass to big tech giants


Asress Adimi Gikay (PhD)

In the online space, one of the emptiest promises is "we value your privacy." Businesses promise to preserve our privacy rights, but there is neither a carrot nor a stick to make them respect data protection rules. So they flout data privacy laws, as regulators either struggle to adequately enforce the law or wilfully ignore infractions.

The UK's data protection authority, the Information Commissioner's Office (ICO), has succumbed more than most to its ambition of promoting innovation and economic growth while simultaneously protecting the public's personal data. The authority's enforcement record defies its primary objective of protecting the public's data privacy rights.

The ICO’s enforcement track record—the numbers don’t lie

During the 2021-2022 fiscal year, the ICO reported receiving 35,558 data privacy violation complaints. The complaints were diverse, including companies refusing to delete individuals' personal data or processing their data without consent. In some cases, organizations infringed individuals' right to access their own personal data, contrary to what data protection legislation requires.

Similarly, in the 2022-2023 financial year, a total of 27,130 complaints were filed with the ICO, excluding data from the most recent financial quarter, which the authority is yet to report. Out of the 62,688 complaints filed over the two years, the authority levied only 59 monetary penalties. Approximately 0.094% of complaints, fewer than one in a thousand, led to organizations being sanctioned for breaching data protection rules.
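
For readers who want to check the arithmetic, here is a minimal sketch using the figures cited above (the variable names are mine, added purely for illustration):

```python
# Sanction rate implied by the ICO figures cited above (illustrative arithmetic only).
complaints_2021_22 = 35_558
complaints_2022_23 = 27_130          # excludes the unreported final quarter
monetary_penalties = 59

total_complaints = complaints_2021_22 + complaints_2022_23   # 62,688
sanction_rate = monetary_penalties / total_complaints * 100  # as a percentage

print(f"Total complaints: {total_complaints:,}")   # 62,688
print(f"Sanction rate: {sanction_rate:.3f}%")      # roughly 0.094%
```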

The ICO closed most of the complaints on the grounds that there was insufficient information to proceed or no evidence of an infraction. It resolved numerous other cases through discussions with the infringing companies. In such cases, the authority recognises that the organization has committed an infringement but merely encourages it to rectify the violation, including by addressing the underlying complaint.

Because the ICO discloses only summaries of these cases rather than comprehensive details, the public tends to perceive the authority as prioritizing business interests over safeguarding data privacy rights. Interestingly, this public perception aligns with the available evidence.

The broader context

GDPR enforcement has been unsatisfactory across the EU since the implementation of what has been described as a breakthrough law, one that promised to empower people in the digital world by giving citizens more control over their personal data. Yet even against this forgiving standard, the ICO's enforcement record remains unsatisfactory. Between 2018 and 2022, it levied around 50 monetary penalties, while the German and Italian authorities imposed 606 and 228 penalties respectively between 2018 and 2021.

The ICO is generally passive compared to its European counterparts. In a notable case, the French authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), fined Meta and Google €60 million and €150 million respectively in 2021 for their illegal use of cookies. Although the companies engaged in similar unlawful data collection practices in the UK, they changed their cookie-based data collection in the UK only as part of complying with the French ruling. They faced no threat of sanction in the UK.

The ICO's consistently poor enforcement record clearly undermines public confidence in the authority. In its 2022 annual report, the authority itself acknowledged receiving the lowest score for complaint resolution in a 2021 customer survey it backed. On Trustpilot, an independent review platform, the authority is rated 1.1 out of 5, based on reviews submitted by members of the public, some of whom claim that the ICO prioritizes business interests rather than protecting privacy rights.

Unfit enforcement policy: a corporate free pass

The ICO's risk-based enforcement approach prioritizes a softer route to ensuring compliance, reserving enforcement actions for violations that are likely to pose the highest risk and harm to the public. Enforcement action includes requiring an offending organization to end violations and comply with the relevant rules through a so-called enforcement notice, as well as issuing monetary penalties.

The ICO considers several factors in determining whether imposing a penalty is appropriate, including the intentional or repeated nature of the breach, the degree of harm to the public, and the number of people affected. In practice, however, it exercises its discretion not to penalize even intentional and repeated violations.

In one fiscal year (2022/2023), Google UK violated the law more than 25 times, as the ICO acknowledged in separate complaints, yet the authority only advised the company to comply.

Google UK's infractions include refusing or delaying to delete personal data upon request by individuals exercising their right to be forgotten. Meta Platforms (formerly Facebook Inc.) received 20 compliance suggestions after evidence of its infringement was found, while Microsoft and Twitter each received the same soft compliance advice eight times in the same year.

In all these cases, taxpayers' data protection rights were violated and evidence of infringement by big tech companies was found, yet the ICO consistently chose to give the offenders a free pass rather than standing up for citizens and upholding the law.

The need for policy change

The ICO's enforcement policy relies on collaborating with regulated entities rather than effectively sanctioning them to deter repeat violations. This approach aims to support the digital economy by avoiding excessive enforcement of data protection rights and fostering data innovation. In theory, it should attract businesses to the UK, create jobs, and stimulate economic growth. However, the policy is currently being applied to serve the interests of big tech companies.

The companies that repeatedly violate data protection laws do not necessarily contribute to digital innovation in the UK, and most of them are not strategically positioned to provide job opportunities in the country. But the UK remains a crucial consumer market for them. As such, sanctioning them is unlikely to change their business decisions and behaviour to the detriment of the UK economy.

The ICO’s failure to effectively enforce data privacy laws erodes public trust. It could also discourage data innovation, as the public might refuse to provide data for research and innovation, which could in turn negatively affect the digital economy. 



I am a Senior Lecturer in AI, Disruptive Innovation and Law (Brunel University London). If you are interested in occasional updates like this, follow me on Twitter or LinkedIn.

Tuesday, June 6, 2023

If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen

This is not the first time that AI has been described as an existential threat. Nouskrabs/Shutterstock
Nello Cristianini, University of Bath

This week a group of well-known and reputable AI researchers signed a statement consisting of 22 words:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

As a professor of AI, I am also in favour of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.

As defined by Encyclopedia Britannica, extinction is “the dying out or extermination of a species”. I have met many of the statement’s signatories, who are among the most reputable and solid scientists in the field – and they certainly mean well. However, they have given us no tangible scenario for how such an extreme event might occur.

It is not the first time we have been in this position. On March 22 this year, a petition signed by a different set of entrepreneurs and researchers requested a pause in AI deployment of six months. In the petition, on the website of the Future of Life Institute, they set out as their reasoning: “Profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs” – and accompanied their request with a list of rhetorical questions:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?

A generic sense of alarm

It is certainly true that, along with many benefits, this technology comes with risks that we need to take seriously. But none of the aforementioned scenarios seem to outline a specific pathway to extinction. This means we are left with a generic sense of alarm, without any possible actions we can take.

The website of the Centre for AI Safety, where the latest statement appeared, outlines in a separate section eight broad risk categories. These include the “weaponisation” of AI, its use to manipulate the news system, the possibility of humans eventually becoming unable to self-govern, the facilitation of oppressive regimes, and so on.

Except for weaponisation, it is unclear how the other – still awful – risks could lead to the extinction of our species, and the burden of spelling it out is on those who claim it.

Weaponisation is a real concern, of course, but what is meant by this should also be clarified. On its website, the Centre for AI Safety’s main worry appears to be the use of AI systems to design chemical weapons. This should be prevented at all costs – but chemical weapons are already banned. Extinction is a very specific event which calls for very specific explanations.

On May 16, at his US Senate hearing, Sam Altman, the CEO of OpenAI – which developed the ChatGPT AI chatbot – was twice asked to spell out his worst-case scenario. He finally replied:

My worst fears are that we – the field, the technology, the industry – cause significant harm to the world … It’s why we started the company [to avert that future] … I think if this technology goes wrong, it can go quite wrong.

But while I am strongly in favour of being as careful as we possibly can be, and have been saying so publicly for the past ten years, it is important to maintain a sense of proportion – particularly when discussing the extinction of a species of eight billion individuals.

AI can create social problems that must really be averted. As scientists, we have a duty to understand them and then do our best to solve them. But the first step is to name and describe them – and to be specific.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, June 2, 2023

The UK's pro-innovation AI regulatory framework is a step in the right direction

Asress Adimi Gikay (PhD), Senior Lecturer in AI, Disruptive Innovation, and Law (Brunel University London) 

Twitter @DrAsressGikay


The Essence of the UK's pro-innovation regulatory approach  

After several years of evaluating the available options for regulating AI technologies, and following the publication of the National AI Strategy in 2021, which set out a regulatory plan, the UK government finally unveiled its pro-innovation regulatory framework in a white paper published in March of this year. The government is currently collecting responses to its consultation questions.

The white paper specifies that the country is not ready to enact statutory law governing AI in the foreseeable future. Instead, regulators will issue guidelines implementing five principles outlined in the white paper. According to the white paper, following the initial period of implementation, and when parliamentary time allows, 'introducing a statutory duty on regulators requiring them to have due regard to the principles' is anticipated. So an obligation to enforce the identified principles will be imposed on regulators if it is deemed necessary based on the lessons learned from the non-statutory compliance experience. But this will most likely not take place within the next two to three years, if not longer.

The UK's pro-innovation regime starkly contrasts with the upcoming European Union (EU) AI Act's risk-based regulation, which applies different legal standards to AI systems based on the risk they pose. The EU's proposed regulation bans specific AI uses, such as facial recognition technology (FRT) in publicly accessible spaces, while imposing strict standards for developing and deploying so-called high-risk AI systems, including detailed safety and security, fairness, transparency and accountability requirements. The EU's regulatory effort aims to tackle AI risks through a single legislative instrument overseen by a single national authority in each member state.

Undoubtedly, AI poses many risks, ranging from discrimination in healthcare to the reinforcement of structural inequalities and the perpetuation of systemic racism in policing tools that could use (il)literacy, race, and social background to predict a person's likelihood of committing crimes. Certain AI uses also pose risks to privacy, other fundamental rights, and democratic values. However, the technology also holds tremendous potential for improving human welfare by enhancing the efficient delivery of public services such as education, healthcare, transportation, and welfare.

But is the UK's self-proclaimed pro-innovation framework, which uses a non-statutory regulatory approach to tackle the potential risks of AI technologies, appropriate?

I contend that with additional fine-tuning, the approach taken by the UK better balances the risks and benefits of the technology, while also promoting socio-economically beneficial innovation.

Key components of the envisioned framework

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks relevant to each sector, such as privacy, data protection, consumer protection, and product liability laws, rather than implementing comprehensive AI-specific legislation. It assumes that much of the existing legislation, being technology neutral, would apply to AI technologies.

Second, the white paper establishes five principles to be applied by each regulator in conjunction with the existing regulatory framework relevant to the sector. These principles are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Third, rather than a single regulatory authority enforcing the framework, each regulator would implement it, supported by a central coordinating body that, among other things, facilitates consistent cross-sectoral implementation. As such, it is up to individual regulators to determine how they apply the fundamental principles in their sectors. This could be called a semi-sectoral approach: the principles apply to all sectors, but their implementation may differ across sectors.

Although the white paper does not envision prohibiting particular AI technologies, some of the principles could be used to effectively prohibit certain use cases, for example, unexplainable AI with potentially harmful societal impact. Regulators are given this leeway as a natural consequence of the flexibility of the approach adopted.

There will not be a single regulatory authority comparable to, for example, the Information Commissioner's Office, which enforces data protection law in all areas. Initially, no statute will require regulators to implement the principles. Actors in the AI supply chain will also have no legal obligation to comply with the principles unless the relevant principle is part of an existing legal framework.

For instance, the principle of fairness requires developing and deploying AI systems that do not discriminate against persons based on any protected characteristic. This means that a public authority must fulfil its Public Sector Equality Duty (PSED) under the Equality Act by assessing how the technology could impact different demographics. A private entity, on the other hand, has no PSED, as this obligation applies only to public authorities. Thus, private actors may avoid this particular aspect of the fairness principle unless they voluntarily choose to comply.
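
To make the idea of "assessing how the technology could impact different demographics" more concrete, here is a minimal sketch of one common quantitative check: comparing outcome rates across protected groups. This is purely illustrative; neither the white paper nor the Equality Act prescribes this method, and the function names and toy data are my own assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable_outcome) pairs; returns rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, favourable in decisions:
        counts[group][1] += 1
        if favourable:
            counts[group][0] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

# Toy data: outcomes of an automated eligibility check, broken down by group.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(sample)
disparity = max(rates.values()) - min(rates.values())
print(rates)                            # favourable-outcome rate per group
print(f"disparity = {disparity:.2f}")   # a large gap would prompt further scrutiny
```

A check like this would typically be one input into an equality impact assessment, not a substitute for it.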

Why is the UK's overall approach appropriate? 

The UK’s flexible framework is generally a suitable approach to the governance of an evolving technology. Three key reasons can be provided for this.

It allows evidence-based regulation

Sweeping regulation gives the sense of preventing and addressing risks comprehensively. However, as the technology and its potential risks are yet to be reasonably understood, most claims about AI risks today are a product of guesswork.

This is a significant issue in AI regulation, as insufficient and non-contextualised evidence is increasingly used to advocate for specific regulatory solutions. For instance, risks of inaccuracy and bias identified in gender classification AI systems are frequently cited to support a total ban on law enforcement use of FRT in the UK. 

Although FRT has been used by law enforcement authorities in the UK several times, no considerable risk of inaccuracy has been reported, because the context of law enforcement use of FRT, especially in the UK, differs from that of online gender classification AI systems. Law enforcement use of FRT is highly regulated, so the technology deployed is also more stringently tested for accuracy, unlike online commercial gender classification algorithms, which operate in less regulated environments. Ensuring that relevant and context-sensitive evidence is used in proposing regulatory solutions is crucial.

By augmenting existing legal frameworks with flexible principles, the UK's approach enables regulators to develop tailored frameworks in response to context-sensitive evidence of harm emerging from the real-world implementation of AI, rather than relying on mere speculation.

Better enforcement of sectoral regulation

Scholars have long debated whether sector-specific regulation enforced by sectoral regulators is suitable for algorithmic governance. In a seminal piece, 'An FDA for Algorithms', Andrew Tutt advocated creating a central regulatory authority for algorithms in the US, comparable to the Food and Drug Administration. The EU has adopted this approach by proposing a cross-sectoral AI Act, enforceable by a single national supervisory authority. The UK chose a different path, which is likely the more sensible way forward.

Entrusting AI oversight to a single regulator across multiple sectors could result in an inefficient enforcement system, lacking public trust. Different regulatory agencies possessing expertise in specific fields, such as transportation, aviation, drug administration, and financial oversight, are better placed to regulate AI systems used in their sectors. Centralising regulation may lead to corruption, regulatory capture, or misaligned enforcement objectives, impacting multiple sectors. In contrast, a decentralised approach allows specific regulators to set enforcement policies, goals, and strategies, preventing major enforcement failures and promoting accountability.

The ICO provides a good example. Its track record in enforcing data protection legislation is exceptionally poor, despite its opportunity to bring together all the resources and expertise needed to perform its tasks. It has failed miserably, and its failure impacts data protection in all sectors.

As the Center for Data Innovation asserted, "If it would be ill-advised to have one government agency regulate all human decision-making, then it would be equally ill-advised to have one agency regulate all algorithmic decision-making."

The UK's proposed sectoral approach avoids the risk of having a single inefficient regulatory authority by distributing regulatory power across sectors.

Non-statutory approach and flexibility to address new risks

The non-statutory regulatory framework allows regulators to swiftly respond to unknown AI risks, avoiding lengthy parliamentary procedures. AI technology's rapid advancement makes it difficult to fully comprehend real-world harm without concrete evidence. 

Predicting emerging risks is also challenging, particularly regarding "AI systems that have a wide range of possible uses, both intended and unintended by the developers" (known as general purpose AI) and machine learning systems. A flexible regulatory framework can be more easily adapted to the evolving nature of the technology and the new risks that result.

But two challenges need to be addressed   

The UK's iterative, flexible, and sectoral approach could successfully balance the risks and benefits of AI technologies only if the government implements additional appropriate measures.

Serious enforcement  

The iterative regulatory approach would be effective only if the relevant principles are enforceable by regulators. There must be a legally binding obligation on the relevant regulators to incorporate these principles into their regulatory remit and to create a reasonable framework for enforcement. This means that regulators should have the power to take administrative action, while individuals should be empowered to seek redress for the violation of their rights or to compel compliance with existing guidelines. If no such mechanism is implemented, the envisioned framework will not address the risks posed by AI technologies.

Without effective enforcement tools, companies like Google, Facebook, or Clearview AI that develop and/or use AI will have no incentive to comply with non-enforceable guidelines. There is no evidence that they would do so voluntarily, and there likely never will be.

Enforcing the principles does not require changing the flexible nature of the UK's envisioned approach, as how the principles are implemented is still left to regulators. The flexibility remains largely in the fact that the overall principles can be amended without a parliamentary process, so regulators can tighten or loosen their standards depending on the context. However, a statute requiring the relevant regulators to implement and enforce the essence of those principles is necessary.

Defining the role of the central coordinating body

The white paper emphasizes the need for a central function to ensure consistent implementation and interpretation of the principles, identify opportunities and risks, and monitor developments. But regulators should be required to consult this office when implementing the framework and issuing guidelines.

Although the power to issue binding decisions may not need to be conferred, the central office should be mandated to issue non-binding opinions on essential issues, similar to the European Data Protection Board. Regulators should also be required to formally request an opinion on certain matters. This would facilitate cross-sectoral consistency in implementing the envisioned framework and enable early intervention to tackle potential challenges.

Conclusion 

The UK has taken a step in the right direction in adopting a flexible AI regulatory framework that fosters innovation and mitigates the risks of AI technologies. However, the framework needs to be strengthened if the UK is to maintain its leadership in AI. The lack of a credible enforcement system and a solid coordination mechanism may undermine the objectives of the envisioned framework, deterring innovation and eroding public trust and international confidence in the UK regulatory regime.

Monday, May 22, 2023

AI is already being used in the legal system - we need to pay more attention to how we use it

shutterstock.
Morgiane Noel, Trinity College Dublin

Artificial intelligence (AI) has become such a part of our daily lives that it’s hard to avoid – even if we might not recognise it.

While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is law.

The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it’s one we now need to give serious consideration to.

That’s because it raises questions about the compatibility of AI with conducting fair trials. The EU has enacted legislation designed to govern how AI can and can’t be used in criminal law.

In North America, algorithms designed to support fair trials are already in use. These include Compas, the Public Safety Assessment (PSA) and the Pre-Trial Risk Assessment Instrument (PTRA). In November 2022, the House of Lords published a report which considered the use of AI technologies in the UK criminal justice system.

Supportive algorithms

On the one hand, it would be fascinating to see how AI can significantly facilitate justice in the long term, such as reducing costs in court services or handling judicial proceedings for minor offences. AI systems can avoid the typical fallacies of human psychology and can be subject to rigorous controls. For some, they might even be more impartial than human judges.

Also, algorithms can generate data to help lawyers identify precedents in case law, come up with ways of streamlining judicial procedures, and support judges.

On the other hand, repetitive automated decisions from algorithms could lead to a lack of creativity in the interpretation of the law, which could slow down or halt the development of the legal system.

Handcuffed man in prison.
In the US, algorithms have been used to calculate the risk of recidivism, continuing to commit crimes after previous sentencing. Brian A Jackson / Shutterstock

The AI tools designed to be used in a trial must comply with a number of European legal instruments, which set out standards for the respect of human rights. These include the Procedural European Commission for the Efficiency of Justice, the European Ethical Charter on the use of Artificial Intelligence in Judicial Systems and their Environment (2018), and other legislation enacted in past years to shape an effective framework on the use and limits of AI in criminal justice. However, we also need efficient mechanisms for oversight, such as human judges and committees.

Controlling and governing AI is challenging and encompasses different fields of law, such as data protection law, consumer protection law, and competition law, as well as several other domains such as labour law. For example, decisions taken by machine are directly subject to the GDPR, the General Data Protection Regulation, including the core requirement for fairness and accountability.

There are provisions in GDPR to prevent people being subject solely to automated decisions, without human intervention. And there has been discussion about this principle in other areas of law.

The issue is already with us: in the US, “risk-assessment” tools have been used to assist pre-trial assessments that determine whether a defendant should be released on bail or held pending the trial.

One example is the Compas algorithm in the US, which was designed to calculate the risk of recidivism – the risk of continuing to commit crimes even after being punished. However, there have been accusations – strongly denied by the company behind it - that Compas’s algorithm had unintentional racial biases.

In 2017, a man from Wisconsin was sentenced to six years in prison in a judgment based in part on his Compas score. The private company that owns Compas considers its algorithm to be a trade secret. Neither the courts nor the defendants are therefore allowed to examine the mathematical formula used.

Towards societal changes?

As the law is considered a human science, it is relevant that the AI tools help judges and legal practitioners rather than replace them. As in modern democracies, justice follows the separation of powers. This is the principle whereby state institutions such as the legislature, which makes law, and the judiciary, the system of courts that apply the law, are clearly divided. This is designed to safeguard civil liberties and guard against tyranny.

The use of AI for trial decisions could shake the balance of power between the legislature and the judiciary by challenging human laws and the decision-making process. Consequently, AI could lead to a change in our values.

And since all kinds of personal data can be used to analyse, forecast and influence human actions, the use of AI could redefine what is considered wrong and right behaviour – perhaps with no nuances.

It’s also easy to imagine how AI will become a collective intelligence. Collective AI has quietly appeared in the field of robotics. Drones, for example, can communicate with each other to fly in formation. In the future, we could imagine more and more machines communicating with each other to accomplish all kinds of tasks.

The creation of an algorithm for the impartiality of justice could signify that we consider an algorithm more capable than a human judge. We may even be prepared to trust this tool with the fate of our own lives. Maybe one day, we will evolve into a society similar to that depicted in the science fiction novel series The Robot Cycle, by Isaac Asimov, where robots have similar intelligence to humans and take control of different aspects of society.

A world where key decisions are delegated to new technology strikes fear into many people, perhaps because they worry that it could erase what fundamentally makes us human. Yet, at the same time, AI is a powerful potential tool for making our daily lives easier.

In human reasoning, intelligence does not represent a state of perfection or infallible logic. For example, errors play an important role in human behaviour. They allow us to evolve towards concrete solutions that help us improve what we do. If we wish to extend the use of AI in our daily lives, it would be wise to continue applying human reasoning to govern it.

Morgiane Noel, PhD Candidate, Environmental Law, Human Rights, European Law., Trinity College Dublin

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, May 21, 2023

On Facial Recognition Technology in Schools, Power Imbalance and Consent: European Data Protection Authorities Should Reexamine their Approach

 Asress Adimi Gikay, Brunel University London

Peder Severin Krøyer, via Wikimedia Commons
Facial Recognition Technology in Schools

In today's world, our privacy and personal data are controlled by Big Tech companies such as Facebook, Google, Twitter, Apple, Instagram and many others. They know almost everything about us: our location, addresses, phone numbers, private email conversations and messages, food preferences, financial conditions and many other intimate details that we would otherwise not divulge even to our close friends. Children are not immune from this overarching surveillance power of Big Tech companies.

In the UK, children as young as 13 can consent to the processing of their personal data, including through some form of Facial Recognition Technology (FRT), by the Big Tech companies that provide online services. Many of these companies likely know our children's preferences for movies, music, food and other details better than we do. But society has no meaningful way to influence these companies, whose God-like presence in our lives epitomizes the dystopia of a technology-driven world. Surveillance is the rule rather than the exception, and we have few tools to protect ourselves from pervasive privacy intrusion.

But the advent of FRT in schools in Europe has alarmed citizens, advocacy groups and Data Protection Authorities (DPAs) far more than the pervasive presence of Big Tech companies in our lives. It has prompted a strong response from DPAs, who have consistently blocked the deployment of the technology in schools on the grounds of privacy intrusion and breaches of the GDPR.

Facial recognition is a process by which a person can be identified or recognized by Artificial Intelligence (AI) software using their facial image or video. The software compares a digital image or video of the individual, captured by a camera, with an existing biometric image, estimating the degree of similarity between the two facial templates to identify a match.
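
To make the matching step concrete, here is a minimal sketch of how a verification check of this kind typically works: a model converts each face image into a numerical template (an embedding vector), and the system compares the two templates using a similarity score against a threshold. The embedding, the threshold value and the toy vectors below are illustrative assumptions, not a description of any particular vendor's system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(live_template: np.ndarray, enrolled_template: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Declare a match if similarity exceeds an operator-chosen threshold.

    The threshold trades false matches against false non-matches; tuning and
    testing it is part of what accuracy evaluation in regulated deployments
    is meant to cover.
    """
    return cosine_similarity(live_template, enrolled_template) >= threshold

# Toy example with made-up 4-dimensional templates (real embeddings are far larger).
enrolled = np.array([0.10, 0.80, 0.30, 0.50])
live = np.array([0.12, 0.79, 0.28, 0.55])
print(is_match(live, enrolled))  # True for these two similar vectors
```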

There have been multiple instances of this technology being used in schools: for attendance monitoring in Sweden, access control in France, and taking payment in canteens in the UK. According to the Swedish municipal school board concerned, monitoring attendance using FRT would save 17,280 hours per year at the school in question. The UK schools wanted to reduce queues in canteens by making payment faster and safer. But DPAs, and in the case of France the Administrative Court of Marseille, stepped in to block the technology, primarily due to privacy-related concerns, regardless of the appreciable benefits.

While the school authorities relied on the explicit consent of students and/or their legal representatives to use the technology, DPAs rejected explicit consent as a valid ground for processing personal data using FRT, due to the imbalance of power between the school authorities on the one hand and students and their guardians on the other. This raises the question of whether public institutions, including schools, could ever use FRT with the explicit consent of the data subject, and if not, whether that is an outcome society should aim for.

The Concerns about FRTs in Schools and the Fix

Scholars and advocacy groups point out that FRT poses certain risks, especially in the context of processing children's data. These range from the misuse of biometric data, whether by the companies providing or using the technology or by bad actors such as hackers, to the normalization of a surveillance culture stemming directly from individuals giving up their privacy rights.

More generally, it is argued that FRT is "an unnecessary and disproportionate interference with the students' right to privacy." As such, DPAs call for the deployment of less privacy-intrusive technological alternatives and take a strict approach to whether there is a valid legal basis for using the technology, including freely given consent.

In its 2019 decision to fine the secondary school board of Skellefteå Municipality, the Swedish DPA argued that although FRT was employed to monitor student attendance on the basis of explicit consent, consent could not be a valid legal basis given the clear imbalance of power between the data subject and the controller. In France, the Administrative Court of Marseille, agreeing with the French DPA (CNIL), concluded that the school councils had not provided sufficient guarantees to obtain the free and informed consent of students to use FRT for access control, despite the fact that specific written consent had been obtained.

In October 2021, as nine schools in North Ayrshire (UK) were preparing to switch their method of taking payment in canteens from fingerprint to facial recognition, the Information Commissioner's Office (ICO) wrote a letter urging the schools to use a "less intrusive" tool. The school councils were forced to pause the rollout of the technology. The content of the ICO's letter is not public, and the ICO has not responded to the author's Freedom of Information (FOI) request to access the letter.

But these decisions evidently suggest that the mere presence of a power relationship between the data controller and the data subject renders explicit consent invalid as a basis for processing biometric data. The UK schools' suspension of FRT merely upon receiving a letter from the ICO signals that the presumed power imbalance alone would defeat explicit consent. At the very least, schools are not willing to engage in the process of obtaining consent, as it would likely be regarded as insufficient and entail sanctions for breach of the GDPR.

While documents obtained from North Ayrshire Council under an FOI request do suggest there were flaws in obtaining consent (for instance, attempting to obtain consent directly from a 12-year-old child), the Council's overall effort to comply with data protection law seemed reasonable.

If the Council wishes to obtain valid consent, it should not be effectively prohibited from doing so ex ante. But the ICO's letter clearly had that effect. Subsequently, on November 4, 2021, the House of Lords held a debate sponsored by Lord Clement-Jones, who expressed his opposition to the use of FRT in schools, stating that "we should not use children as guinea pigs."

There is overwhelming evidence of pressure to categorically ban the use of FRT in schools, and indeed it is now effectively banned in Europe, although there is no legislation to that effect.

Imbalance of Power under the GDPR

Although the GDPR prohibits, as a rule, the processing of so-called special categories of personal data, including biometric data such as facial images, it provides exceptions under which such data can be processed. One exception allows the processing of biometric data to uniquely identify a natural person if the data subject has given explicit consent to the processing of such personal data for one or more specified purposes.

Consent should be given by a clear affirmative act establishing a freely given, specific, informed, and unambiguous indication of the data subject's agreement to the processing of personal data relating to her.

Where there is a power relationship, it is challenging to prove that consent has been obtained freely – the requirement which DPAs concluded was not met in Sweden and France. In this regard, the GDPR makes it clear that “consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation.”

The GDPR allows DPAs and courts to consider an imbalance of power in assessing whether consent has been obtained freely. But this is not a blanket prohibition on public authorities using explicit consent to process personal data. This is consistent with the European Data Protection Board's guidelines, which state that "Without prejudice to these general considerations, the use of consent as a lawful basis for data processing by public authorities is not totally excluded under the legal framework of the GDPR."

Regulators and courts can scrutinize, ex post facto, whether explicit consent was obtained freely and in an informed manner, but they have no power to invalidate validly given consent based on the mere existence of an imbalance of power that had no actual effect.

If a Member State of the European Union wishes to exclude consent as a basis for processing special categories of personal data, the GDPR allows that Member State to legislate that the prohibition on processing special categories of personal data may not be lifted on the basis of explicit consent under any circumstances. Absent such legislation, the validity of consent in the context of a power relationship can only be examined on a case-by-case basis rather than in categorical terms.

Thus, schools should be able to demonstrate that the presumed imbalance of power has not played a role in obtaining consent. 

The Current Approach Should Be Reexamined

The concerns raised by scholars and privacy advocates about the intrusive nature of FRT should be seen in the light of the data protection and privacy safeguards provided by the GDPR, which has a series of provisions guaranteeing that personal data is not used for a purpose different from that originally intended, and that personal data is kept confidential and secure.

Furthermore, data controllers and processors have no right to share personal data with third parties unless the data subject consents. In the presence of these and a number of other safeguards, what amounts to a blanket prohibition on the use of FRT in schools, based on unreasonable privacy anxiety and an irrebuttable presumption that a power imbalance per se vitiates consent, is not sensible. There are several reasons this approach needs to be reexamined.

First and foremost, it puts small companies and public institutions at a disadvantage with regard to the use of FRT. Big Tech companies can do almost as they please with our, and our children's, personal data. Facebook's opaque data-sharing practices have frequently been exposed, but there is still no meaningful way to control what Facebook does. The same is true of other Big Tech companies in the business of monetizing our personal data. Schools and the companies providing FRT should be the least of our concerns.

It is not difficult to make them abide by the GDPR, whereas Big Tech companies can hide behind complex legal and technical black boxes to get away with grossly illegal uses of our personal data. A blanket prohibition on small institutions using FRT creates a system that unfairly disadvantages small data controllers.

Furthermore, data innovation would be deterred by over-zealous DPAs and courts that see a superficial power imbalance without examining how it plays out in reality, while the real power imbalance society suffers vis-à-vis Big Tech companies remains inadequately challenged.

The future depends on innovating with data, and the use of FRT would be an essential component of that. We cannot, to satisfy our excessive anxiety about privacy intrusion, prevent small companies and institutions from benefiting from AI technologies and data-driven innovation while letting Big Tech companies take control of our lives. Data innovation should be by all and for all, not just for Big Tech.


This post was first published in EU Law Analysis