Asress Adimi Gikay (PhD), Senior Lecturer in AI, Disruptive Innovation, and
Law (Brunel University London)
Twitter @DrAsressGikay
The essence of the UK's pro-innovation regulatory approach
After several years of evaluating the available options for regulating AI technologies, and following the publication of the National AI Strategy in 2021, which set out a regulatory plan, the UK government finally set out its pro-innovation regulatory framework in a white paper published in March 2023. The government is currently collecting responses to the consultation questions.
The white paper specifies that the country is not ready to enact statutory law governing AI in the foreseeable future. Instead, regulators will issue guidelines implementing five principles outlined in the white paper. According to the white paper, following the initial period of implementation, and when parliamentary time allows, 'introducing a statutory duty on regulators requiring them to have due regard to the principles' is anticipated. So, an obligation to enforce the identified principles will be imposed on regulators if it is deemed necessary based on the lessons learned from the non-statutory compliance experience. But this will most likely not take place in the coming two to three years, if not longer.
The UK's pro-innovation regime starkly contrasts with the upcoming European Union (EU) AI Act's risk-based regulation, which applies different legal standards to AI systems based on the risk they pose. The EU's proposed regulation bans specific AI uses, such as facial recognition technology (FRT) in publicly accessible spaces, while imposing strict standards for developing and deploying so-called high-risk AI systems, including detailed safety and security, fairness, transparency, and accountability requirements. The EU's regulatory effort aims to tackle AI risks through a single legislative instrument overseen by a single national authority in each member state.
Undoubtedly, AI poses many risks, ranging from discrimination in healthcare to reinforcing structural inequalities or perpetuating systemic racism through policing tools that could use (il)literacy, race, and social background to predict a person's likelihood of committing crimes. Certain AI uses also pose risks to privacy and other fundamental rights, as well as to democratic values. However, the technology also holds tremendous potential for improving human welfare by enhancing the efficient delivery of public services such as education, healthcare, transportation, and welfare.
But is the UK's self-proclaimed pro-innovation framework, which uses a non-statutory regulatory approach to tackle the potential risks of AI technologies, appropriate?
I contend that, with additional fine-tuning, the approach taken by the UK better balances the risks and benefits of the technology, while also promoting socio-economically beneficial innovation.
Key components of the envisioned framework
The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks relevant to each sector, such as privacy, data protection, consumer protection, and product liability laws, rather than implementing comprehensive AI-specific legislation. It assumes that much of the existing legislation, being technology-neutral, would apply to AI technologies.
Second, the white paper establishes five principles to be applied by each regulator in conjunction with the existing regulatory framework relevant to the sector. These principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Third, rather than a single regulatory authority, each regulator would implement the regulatory framework, supported by a central coordinating body that, among other things, facilitates consistent cross-sectoral implementation. As such, it is up to individual regulators to determine how they apply the fundamental principles in their sectors. This could be called a semi-sectoral approach, as the principles apply to all sectors, but their implementation may differ across sectors.
Although the white paper does not envision the prohibition of certain AI technologies, some of the principles could be used to effectively prohibit certain use cases, for example, unexplainable AI with potentially harmful societal impact. Regulators are given leeway, as a natural consequence of the flexibility offered by the approach adopted.
There will not be a single regulatory authority comparable to, for example, the Information Commissioner's Office (ICO), which enforces data protection law in all areas. Initially, a statute will not require regulators to implement the principles. Actors in the AI supply chain will also have no legal obligation to comply with the principles unless the relevant principle is part of an existing legal framework.
For instance, the principle of fairness requires developing and deploying AI systems that do not discriminate against persons based on any protected characteristics. This means that a public authority must fulfil its Public Sector Equality Duty (PSED) under the Equality Act by assessing how the technology could impact different demographics. On the other hand, a private entity has no PSED, as this obligation applies only to public authorities. Thus, private actors may avoid the obligation to comply with this particular aspect of the fairness principle unless they voluntarily choose to comply.
Why is the UK's overall approach appropriate?
The UK's flexible framework is generally a suitable approach to the governance of an evolving technology. Three key reasons can be provided for this.
It allows evidence-based regulation
Sweeping regulation gives the sense of preventing and addressing risks comprehensively. However, as the technology and its potential risks are yet to be reasonably understood, most assessments of AI risk today are a product of guesswork.
This is a significant issue in AI regulation, as insufficient and non-contextualised evidence is increasingly used to advocate for specific regulatory solutions. For instance, risks of inaccuracy and bias identified in gender classification AI systems are frequently cited to support a total ban on law enforcement use of FRT in the UK.
Although FRT has been used by law enforcement authorities in the UK several times, no considerable risk of inaccuracy has been reported, because the context of law enforcement use of FRT, especially in the UK, is different from that of online gender classification AI systems. Law enforcement use of FRT is highly regulated, so the technology deployed is also more stringently tested for accuracy, unlike an online commercial gender classification algorithm that operates in a less regulated environment. Ensuring that relevant and context-sensitive evidence is used in proposing regulatory solutions is crucial.
By augmenting existing legal frameworks with flexible principles, the UK's approach enables regulators to develop tailored frameworks in response to context-sensitive evidence of harm emerging from the real-world implementation of AI, rather than relying on mere speculation.
Better enforcement of sectoral regulation
Scholars have long debated whether sector-specific regulations enforced by a sectoral regulator are suitable for algorithmic governance. In a seminal piece, 'An FDA for Algorithms', Andrew Tutt advocated for creating a central regulatory authority for algorithms in the US, comparable to the Food and Drug Administration. The EU has adopted this approach by proposing a cross-sectoral AI Act, enforceable by a single national supervisory authority. The UK chose a different path, which is likely the more sensible way forward.
Entrusting AI oversight to a single regulator across multiple sectors could result in an inefficient enforcement system that lacks public trust. Different regulatory agencies possessing expertise in specific fields, such as transportation, aviation, drug administration, and financial oversight, are better placed to regulate AI systems used in their sectors. Centralising regulation may lead to corruption, regulatory capture, or misaligned enforcement objectives, impacting multiple sectors. In contrast, a decentralised approach allows individual regulators to set their own enforcement policies, goals, and strategies, preventing major enforcement failures and promoting accountability.
The ICO provides a good example. Its track record in enforcing data protection legislation is exceptionally poor, despite having the opportunity to bring together all the resources and expertise needed to perform its tasks. It has failed miserably, and its failure impacts data protection in all sectors.
As the Center for Data Innovation asserted, "If it would be ill-advised to have one government agency regulate all human decision-making, then it would be equally ill-advised to have one agency regulate all algorithmic decision-making."
The UK's proposed sectoral approach avoids the risk of having a single inefficient regulatory authority by distributing regulatory power across sectors.
Non-statutory approach and flexibility to address new risks
The non-statutory regulatory framework allows regulators to respond swiftly to unknown AI risks, avoiding lengthy parliamentary procedures. AI technology's rapid advancement makes it difficult to fully comprehend real-world harm without concrete evidence.
Predicting emerging risks is also challenging, particularly regarding "AI systems that have a wide range of possible uses, both intended and unintended by the developers" (known as general-purpose AI) and machine learning systems. A flexible regulatory framework can be easily adapted to the evolving nature of the technology and the resulting new risks.
But two challenges need to be addressed
The UK's iterative, flexible, and sectoral approach could successfully balance the risks and benefits of AI technologies only if the government implements additional appropriate measures.
Serious enforcement
The iterative regulatory approach would be effective only if the relevant principles are enforceable by regulators. There must be a legally binding obligation for relevant regulators to incorporate these principles into their regulatory remit and to create a reasonable framework for enforcement. This means that regulators should have the power to take administrative action, while individuals should be empowered to seek redress for the violation of their rights or to compel compliance with existing guidelines. If no such mechanism is implemented, the envisioned framework will not address the risks posed by AI technologies.
Without effective enforcement tools, companies like Google, Facebook, or Clearview AI that develop and/or use AI will have no incentive to comply with non-enforceable guidelines. There is no evidence that such companies comply with voluntary guidelines, and there likely never will be.
Enforcing the principles does not require changing the flexible nature of the UK's envisioned approach, as how the principles are implemented is still left to regulators. The flexibility remains largely in the fact that the overall principles can be amended without a parliamentary process, so regulators can tighten or loosen their standards depending on the context. However, a statute requiring the relevant regulators to implement and enforce the essence of those principles is necessary.
Defining the role of the central coordinating body
The white paper emphasises the need for a central function to ensure consistent implementation and interpretation of the principles, identify opportunities and risks, and monitor developments. But regulators should be required to consult this office when implementing the framework and issuing guidelines.
Although the power to issue binding decisions may not need to be conferred, the central office should be mandated to issue non-binding opinions on essential issues, similar to the European Data Protection Board. Regulators should also be required to formally request an opinion on certain matters. This would facilitate cross-sectoral consistency in implementing the envisioned framework and enable early intervention in tackling potential challenges.
Conclusion
The UK has taken a step in the right direction in adopting a flexible AI regulatory framework that fosters innovation and mitigates the risks of AI technologies. However, the framework needs to be enhanced to maintain the UK's leadership in AI. The lack of a credible enforcement system and a solid coordination mechanism may undermine the objectives of the envisioned framework, deterring innovation and eroding public trust and international confidence in the UK regulatory regime.