Thursday, June 15, 2023

How the UK is getting AI regulation right

Regulation must protect AI innovation while addressing risks, but what’s the right balance? ra2 studio / Shutterstock
Asress Adimi Gikay, Brunel University London

The latest generation of artificial intelligence (AI), such as ChatGPT, will revolutionise the way we live and work. AI technologies could significantly improve education, healthcare, transport and welfare. But there are downsides, too: jobs automated out of existence, surveillance abuses, and discrimination, including in healthcare and policing.

There’s general agreement that AI needs to be regulated, given its awesome potential for good and harm. The EU has proposed one approach, based on potential problems. The UK is proposing a different, pro-business, approach.

This year, the UK government published a white paper (a policy document setting out plans for future legislation) unveiling how it intends to regulate AI, with an emphasis on flexibility to avoid stifling innovation. The document favours voluntary compliance, with five principles meant to tackle AI risks.

Strict enforcement of these principles by regulators could be added later if it’s required. But is such an approach too lenient given the risks?

Crucial components

The UK approach differs from the EU’s risk-based regulation. The EU’s proposed AI Act prohibits certain AI uses, such as live facial recognition technology, where people shown on a camera feed are compared against police “watch lists”, in public spaces.

The EU approach creates stringent standards for so-called high-risk AI systems. These include systems used to evaluate job applications, student admissions, eligibility for loans and public services.

I believe the UK’s approach better balances AI’s risks and benefits, fostering innovation that benefits the economy and society. However, critical challenges need to be addressed.

Facial recognition in a crowd.
The EU’s AI Act would prohibit live face recognition by police forces in public spaces. Gorodenkoff / Shutterstock

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than implementing new AI-centred legislation.

Second, five general principles – each consisting of several components – would be applied by regulators in conjunction with existing laws. These principles are (1) “safety, security and robustness”, (2) “appropriate transparency and explainability”, (3) “fairness”, (4) “accountability and governance”, and (5) “contestability and redress”.

During initial implementation, regulators would not be legally required to enforce the principles. A statute imposing these obligations would be enacted later, if considered necessary. Organisations would therefore be expected to comply with the principles voluntarily in the first instance.

Third, regulators could adapt the five principles to the subjects they cover, with support from a central coordinating body. So, there will not be a single enforcement authority.

Promising approach?

The UK’s regime is promising for three reasons. First, it promises to use evidence about AI in its correct context, rather than applying an example from one area to another inappropriately.

Second, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. Third, there are advantages to its decentralised approach. For example, a single regulatory organisation, were it to underperform, would affect AI use across the board.

Let’s look at how it would use evidence about AI. As AI’s risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a specific use of AI could be appropriated to propose drastic and inappropriate regulatory solutions.

For instance, some US internet companies use algorithms to determine a person’s sex based on facial features. These showed poor performance when presented with photos of darker-skinned women.

This finding has been cited in support of a ban on law enforcement use of face recognition technology in the UK. However, the two areas are quite different and problems with gender classification do not imply a similar issue with facial recognition in law enforcement.

These US gender algorithms work under relatively lower legal standards. Face recognition used by UK law enforcement undergoes rigorous testing, and is deployed under strict legal requirements.

Driverless car.
Some AI applications, such as driverless cars, could fall under more than one regulatory regime. riopatuca / Shutterstock

Another advantage of the UK approach is its adaptability. It can be difficult to predict potential risks, particularly with AI that could be appropriated for purposes other than the ones foreseen by its developers and machine learning systems, which improve in their performance over time.

The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities would be spread between different organisations. Centralising AI oversight under a single national regulator could lead to inefficient enforcement.

Regulators with expertise in specific areas such as transport, aviation, and financial markets are better suited to regulate the use of AI within their fields of interest.

This decentralised approach could minimise the effects of corruption, of regulators becoming preoccupied with concerns other than the public interest and differing approaches to enforcement. It also avoids a single point of enforcement failure.

Enforcement and coordination

Some businesses could resist voluntary standards, so, if and when regulators are granted enforcement powers, they should be able to issue fines. The public should also have the right to seek compensation for harms caused by AI systems.

Enforcement needn’t undermine flexibility. Regulators can still tighten or loosen standards as required. However, the UK framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, resulting in overlaps. For example, transport, insurance, and data protection authorities could all issue conflicting guidelines for self-driving cars.

To tackle this, the white paper suggests establishing a central body, which would ensure the harmonious implementation of guidance. It’s vital to compel the different regulators to consult this organisation rather than leaving the decision up to them.

The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country’s position as a leader in the area, the framework must be aligned with regulation elsewhere, especially the EU.

Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the UK’s system of regulation for this transformative technology.

Asress Adimi Gikay, Senior Lecturer in AI, Disruptive Innovation and Law, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the UK data protection authority gives free pass to big tech giants


Asress Adimi Gikay (PhD)

In the online space, one of the emptiest promises is “we value your privacy.” Businesses promise to preserve our privacy rights, but regulators have neither the carrot nor the stick to make them respect data protection rules. So businesses flout data privacy laws, as regulators either struggle to enforce the law adequately or wilfully ignore infractions.

The UK’s data protection authority, the Information Commissioner's Office (ICO), has succumbed more than most to the ambition of promoting innovation and economic growth while simultaneously protecting the public’s personal data. Its enforcement record defies its primary objective of protecting the public's data privacy rights.

The ICO’s enforcement track record—the numbers don’t lie

During the 2021-2022 fiscal year, the ICO reported receiving 35,558 data privacy violation complaints. The complaints were diverse, including companies refusing to delete individuals’ personal data or processing their data without consent. Sometimes, organisations infringed individuals’ right to access their own personal data, contrary to what data protection legislation requires.

Similarly, in the 2022-2023 financial year, a total of 27,130 complaints were filed with the ICO, excluding data from the most recent financial quarter, which the authority is yet to report. Out of the 62,688 complaints filed over those two years, the authority levied only 59 monetary penalties. Only approximately 0.094% of the complaints led to organisations being sanctioned for breaching data protection rules.
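The arithmetic behind these figures is easy to verify; a quick sketch, using only the complaint and penalty counts reported above:

```python
# Complaints received by the ICO, per its annual reports
complaints_2021_22 = 35_558
complaints_2022_23 = 27_130  # excludes the most recent, unreported quarter

total_complaints = complaints_2021_22 + complaints_2022_23  # 62,688
monetary_penalties = 59

# Share of complaints that ended in a monetary penalty
sanction_rate = monetary_penalties / total_complaints * 100
print(f"{total_complaints} complaints, {sanction_rate:.3f}% led to a penalty")
# → 62688 complaints, 0.094% led to a penalty
```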

The ICO closed most of the complaints citing insufficient information to proceed or a lack of evidence of infraction. It resolved numerous cases through discussions with the infringing companies. In such cases, the authority recognises that the organisation has committed an infringement but merely encourages it to rectify the violation, including by addressing the underlying complaint.

Because the ICO does not disclose comprehensive details about these cases beyond summaries, the public tends to perceive the authority as prioritising business interests over safeguarding data privacy rights. Interestingly, this perception aligns with the available evidence.

The broader context

Enforcement of the GDPR has been unsatisfactory across the EU since the implementation of what was hailed as a breakthrough law that promised to empower people in the digital world by giving citizens more control over their personal data. Yet even by this more forgiving standard, the ICO's enforcement record remains unsatisfactory. Between 2018 and 2022, it levied around 50 monetary penalties, while the German and Italian authorities imposed 606 and 228 penalties respectively between 2018 and 2021.

The ICO is generally passive compared to its European counterparts. In a notable case, the French authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), fined Meta and Google €60 million and €150 million respectively in 2021 for their illegal use of cookies. Despite engaging in similar unlawful data collection practices in the UK, the companies changed their cookie-based data collection practices in the UK only as part of complying with the French ruling. They faced no threat of sanction in the UK.

The ICO's consistently poor enforcement record clearly undermines public confidence in the authority. In its 2022 annual report, the authority itself acknowledged receiving the lowest score for complaint resolution in a 2021 customer survey it backed. On the independent review site Trustpilot, members of the public rate the authority at 1.1 out of 5, with some reviewers claiming that the ICO prioritises business interests over protecting privacy rights.

Unfit enforcement policy: a corporate free pass

The ICO’s risk-based enforcement policy prioritises a softer approach to ensuring compliance, reserving enforcement action for violations likely to pose the highest risk and harm to the public. Enforcement action includes requiring an offending organisation to end violations and comply with the relevant rules through a so-called enforcement notice, as well as issuing monetary penalties.

The ICO considers several factors in determining whether a penalty is appropriate, including the intentional or repeated nature of the breach, the degree of harm to the public, and the number of people affected. In practice, however, it exercises discretion even in cases of intentional and repeated violations.

In a single fiscal year (2022/2023), Google UK violated the law more than 25 times, as acknowledged by the ICO in separate complaints, yet the authority only advised the company to comply.

Google UK's infractions include refusing, or delaying, the deletion of personal data upon request by individuals exercising their right to be forgotten. Meta Platforms (formerly Facebook Inc.) received 20 compliance suggestions after evidence of its infringement was found, while Microsoft and Twitter each received the same soft compliance advice eight times in the same year.

In all these cases, taxpayers' data protection rights were violated and evidence of infringement by big tech companies was found, yet the ICO consistently chose to give the offenders a free pass rather than standing up for citizens and upholding the law.

The need for policy change

The ICO's enforcement policy relies on collaborating with regulated entities rather than sanctioning them effectively to deter repeat violations. This approach aims to support the digital economy by avoiding excessive enforcement of data protection rights and fostering data innovation. In theory, it should attract businesses to the UK, create jobs, and stimulate economic growth. However, the policy is currently being applied to serve the interests of big tech companies.

The companies that repeatedly violate data protection laws don’t necessarily contribute to digital innovation in the UK, and most are not strategically positioned to provide jobs in the country. But the UK remains a crucial consumer market for them, so sanctioning them is unlikely to change their business decisions and behaviour to the detriment of the UK economy.

The ICO’s failure to effectively enforce data privacy laws erodes public trust. It could also discourage data innovation, as the public might refuse to provide data for research and innovation, which could in turn negatively affect the digital economy. 



I am a Senior Lecturer in AI, Disruptive Innovation and Law (Brunel University London). If you are interested in occasional updates like this, follow me on Twitter or LinkedIn.

Tuesday, June 6, 2023

If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen

This is not the first time that AI has been described as an existential threat. Nouskrabs/Shutterstock
Nello Cristianini, University of Bath

This week a group of well-known and reputable AI researchers signed a statement consisting of 22 words:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

As a professor of AI, I am also in favour of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.

As defined by Encyclopedia Britannica, extinction is “the dying out or extermination of a species”. I have met many of the statement’s signatories, who are among the most reputable and solid scientists in the field – and they certainly mean well. However, they have given us no tangible scenario for how such an extreme event might occur.

It is not the first time we have been in this position. On March 22 this year, a petition signed by a different set of entrepreneurs and researchers requested a pause in AI deployment of six months. In the petition, on the website of the Future of Life Institute, they set out as their reasoning: “Profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs” – and accompanied their request with a list of rhetorical questions:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?

A generic sense of alarm

It is certainly true that, along with many benefits, this technology comes with risks that we need to take seriously. But none of the aforementioned scenarios seem to outline a specific pathway to extinction. This means we are left with a generic sense of alarm, without any possible actions we can take.

The website of the Centre for AI Safety, where the latest statement appeared, outlines in a separate section eight broad risk categories. These include the “weaponisation” of AI, its use to manipulate the news system, the possibility of humans eventually becoming unable to self-govern, the facilitation of oppressive regimes, and so on.

Except for weaponisation, it is unclear how the other – still awful – risks could lead to the extinction of our species, and the burden of spelling it out is on those who claim it.

Weaponisation is a real concern, of course, but what is meant by this should also be clarified. On its website, the Centre for AI Safety’s main worry appears to be the use of AI systems to design chemical weapons. This should be prevented at all costs – but chemical weapons are already banned. Extinction is a very specific event which calls for very specific explanations.

On May 16, at his US Senate hearing, Sam Altman, the CEO of OpenAI – which developed the ChatGPT AI chatbot – was twice asked to spell out his worst-case scenario. He finally replied:

My worst fears are that we – the field, the technology, the industry – cause significant harm to the world … It’s why we started the company [to avert that future] … I think if this technology goes wrong, it can go quite wrong.

But while I am strongly in favour of being as careful as we possibly can be, and have been saying so publicly for the past ten years, it is important to maintain a sense of proportion – particularly when discussing the extinction of a species of eight billion individuals.

AI can create social problems that must really be averted. As scientists, we have a duty to understand them and then do our best to solve them. But the first step is to name and describe them – and to be specific.

Nello Cristianini, Professor of Artificial Intelligence, University of Bath

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, June 2, 2023

The UK's pro-innovation AI regulatory framework is a step in the right direction

Asress Adimi Gikay (PhD), Senior Lecturer in AI, Disruptive Innovation, and Law (Brunel University London) 

Twitter @DrAsressGikay


The Essence of the UK's pro-innovation regulatory approach  

After several years of evaluating the available options to regulate AI technologies, and the publication of the National AI Strategy in 2021 setting out a regulatory plan, the UK government finally set out its pro-innovation regulatory framework in a white paper published in March of this year. The government is currently collecting responses to consultation questions.

The white paper specifies that the country is not ready to enact statutory legislation governing AI in the foreseeable future. Instead, regulators will issue guidelines implementing five principles outlined in the white paper. According to the white paper, following the initial period of implementation, and when parliamentary time allows, 'introducing a statutory duty on regulators requiring them to have due regard to the principles' is anticipated. So, an obligation to enforce the identified principles will be imposed on regulators if it is deemed necessary based on lessons learned from the non-statutory compliance experience. But this will most likely not take place in the coming two to three years, if not longer.

The UK's pro-innovation regime starkly contrasts with the upcoming European Union (EU) AI Act's risk-based regulation, which applies different legal standards to AI systems based on the risk they pose. The EU's proposed regulation bans specific AI uses, such as facial recognition technology (FRT) in publicly accessible spaces, while imposing strict standards for developing and deploying so-called high-risk AI systems, including detailed safety and security, fairness, transparency and accountability requirements. The EU's regulatory effort aims to tackle AI risks through a single legislative instrument overseen by a single national authority in each member state.

Undoubtedly, AI poses many risks, ranging from discrimination in healthcare to reinforcing structural inequalities or perpetuating systemic racism through policing tools that could use (il)literacy, race, and social background to predict a person's likelihood of committing crimes. Certain AI uses also pose risks to privacy, other fundamental rights, and democratic values. However, the technology also holds tremendous potential for improving human welfare by enhancing the efficient delivery of public services such as education, healthcare, transportation, and welfare.

But is the UK's self-proclaimed pro-innovation framework, which uses a non-statutory regulatory approach to tackle the potential risks of AI technologies, appropriate?

I contend that with additional fine-tuning, the approach taken by the UK better balances the risks and benefits of the technology, while also promoting socio-economically beneficial innovation.

Key components of the envisioned framework

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks relevant to each sector, such as privacy, data protection, consumer protection, and product liability laws, rather than implementing comprehensive AI-specific legislation. It assumes that much of the existing legislation, being technology-neutral, would apply to AI technologies.

Second, the white paper establishes five principles to be applied by each regulator in conjunction with the existing regulatory framework relevant to the sector. These principles are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Third, rather than a single regulatory authority, each regulator would implement the regulatory framework, supported by a central coordinating body that, among other things, facilitates consistent cross-sectoral implementation. As such, it is up to individual regulators to determine how they apply the fundamental principles in their sectors. This could be called a semi-sectoral approach: the principles apply to all sectors, but their implementation may differ across them.

Although the white paper does not envision the prohibition of particular AI technologies, some of the principles could be used to effectively prohibit certain use cases, for example unexplainable AI with potentially harmful societal impact. Regulators are given leeway, as a natural consequence of the flexibility offered by the approach adopted.

There will not be a single regulatory authority comparable to, for example, the Information Commissioner's Office that enforces data protection law in all areas. Initially, a statute will not require regulators to implement the principles. Actors in the AI supply chain will also have no legal obligation to comply with the principles unless the relevant principle is part of an existing legal framework. 

For instance, the principle of fairness requires developing and deploying AI systems that do not discriminate against persons based on any protected characteristic. This means that a public authority must fulfil its Public Sector Equality Duty (PSED) under the Equality Act by assessing how the technology could impact different demographics. A private entity, on the other hand, has no PSED, as this obligation applies only to public authorities. Thus, private actors may avoid this particular aspect of the fairness principle unless they voluntarily choose to comply.

Why is the UK's overall approach appropriate? 

The UK’s flexible framework is generally a suitable approach to the governance of an evolving technology. Three key reasons can be provided for this.

1. It allows evidence-based regulation

Sweeping regulation gives the sense of preventing and addressing risks comprehensively. However, as the technology and its potential risks are not yet reasonably well understood, most accounts of AI risk today are a product of guesswork.

This is a significant issue in AI regulation, as insufficient and non-contextualised evidence is increasingly used to advocate for specific regulatory solutions. For instance, risks of inaccuracy and bias identified in gender classification AI systems are frequently cited to support a total ban on law enforcement use of FRT in the UK. 

Although FRT has been used by law enforcement authorities in the UK several times, no considerable risk of inaccuracy has been reported, because the context of law enforcement use of FRT, especially in the UK, differs from that of online gender classification AI systems. Law enforcement use of FRT is highly regulated, so the technology deployed is more stringently tested for accuracy, unlike an online commercial gender classification algorithm operating in a less regulated environment. Ensuring that relevant and context-sensitive evidence is used in proposing regulatory solutions is crucial.

By augmenting existing legal frameworks with flexible principles, the UK's approach enables regulators to develop tailored frameworks in response to context-sensitive evidence of harm emerging from the real-world implementation of AI, rather than relying on mere speculation.

2. Better enforcement of sectoral regulation

Scholars have long debated whether sector-specific regulations enforced by sectoral regulators are suitable for algorithmic governance. In a seminal piece, 'An FDA for Algorithms', Andrew Tutt advocated creating a central regulatory authority for algorithms in the US, comparable to the Food and Drug Administration. The EU has adopted this approach by proposing a cross-sectoral AI Act enforceable by a single national supervisory authority. The UK chose a different path, which is likely the more sensible way forward.

Entrusting AI oversight to a single regulator across multiple sectors could result in an inefficient enforcement system, lacking public trust. Different regulatory agencies possessing expertise in specific fields, such as transportation, aviation, drug administration, and financial oversight, are better placed to regulate AI systems used in their sectors. Centralising regulation may lead to corruption, regulatory capture, or misaligned enforcement objectives, impacting multiple sectors. In contrast, a decentralised approach allows specific regulators to set enforcement policies, goals, and strategies, preventing major enforcement failures and promoting accountability.

The ICO provides a good example. Its track record in enforcing data protection legislation is exceptionally poor, despite its opportunity to bring together all the resources and expertise needed to perform its tasks. It failed miserably, and its failure affects data protection in all sectors.

As the Center for Data Innovation put it, “If it would be ill-advised to have one government agency regulate all human decision-making, then it would be equally ill-advised to have one agency regulate all algorithmic decision-making.”

The UK's proposed sectoral approach avoids the risk of having a single inefficient regulatory authority by distributing regulatory power across sectors.

3. Non-statutory approach and flexibility to address new risks

The non-statutory regulatory framework allows regulators to swiftly respond to unknown AI risks, avoiding lengthy parliamentary procedures. AI technology's rapid advancement makes it difficult to fully comprehend real-world harm without concrete evidence. 

Predicting emerging risks is also challenging, particularly regarding "AI systems that have a wide range of possible uses, both intended and unintended by the developers"(known as general purpose AI) and machine learning systems. Implementing a flexible regulatory framework allows the framework to be easily adapted to the evolving nature of the technology and the resulting new risks.

But two challenges need to be addressed

The UK's iterative, flexible, and sectoral approach can successfully balance the risks and benefits of AI technologies only if the government implements additional appropriate measures.

Serious enforcement  

The iterative regulatory approach will be effective only if the relevant principles are enforceable by regulators. There must be a legally binding obligation on the relevant regulators to incorporate these principles into their regulatory remit and to create a reasonable framework for enforcement. This means that regulators should have the power to take administrative action, while individuals should be empowered to seek redress for violations of their rights or to compel compliance with existing guidelines. If no such mechanism is implemented, the envisioned framework will not address the risks posed by AI technologies.

Without effective enforcement tools, companies like Google, Facebook, or Clearview AI that develop and/or use AI will have no incentive to comply with non-enforceable guidelines. There is no evidence that they would comply voluntarily, and there likely never will be.

Enforcing the principles does not require changing the flexible nature of the UK’s envisioned approach, as how the principles are implemented is still left to regulators. The flexibility remains largely in the fact that the overall principles can be amended without a parliamentary process, so regulators can tighten or loosen their standards depending on the context. However, a statute requiring the relevant regulators to implement and enforce the essence of those principles is necessary.

Defining the Role of the central coordinating body

The white paper emphasises the need for a central function to ensure consistent implementation and interpretation of the principles, identify opportunities and risks, and monitor developments. But regulators should be required to consult this office when implementing the framework and issuing guidelines.

Although the power to issue binding decisions may not need to be conferred, the central office should be mandated to issue non-binding opinions on essential issues, similar to the European Data Protection Board. Regulators should also be required to formally request an opinion on certain matters. This would facilitate cross-sectoral consistency in implementing the envisioned framework and enable early intervention to tackle potential challenges.

Conclusion 

The UK has taken a step in the right direction in adopting flexible AI regulation that fosters innovation and mitigates the risks of AI technologies. However, the framework needs strengthening if the UK is to maintain its leadership in AI. The lack of a credible enforcement system and a solid coordination mechanism could undermine the objectives of the envisioned framework, deter innovation, and erode public trust and international confidence in the UK regulatory regime.