Monday, May 22, 2023

AI is already being used in the legal system - we need to pay more attention to how we use it

Morgiane Noel, Trinity College Dublin

Artificial intelligence (AI) has become such a part of our daily lives that it’s hard to avoid – even if we might not recognise it.

While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is law.

The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it’s one we now need to give serious consideration to.

That’s because it raises questions about the compatibility of AI with conducting fair trials. The EU has enacted legislation designed to govern how AI can and can’t be used in criminal law.

In North America, algorithms designed to support fair trials are already in use. These include Compas, the Public Safety Assessment (PSA) and the Pre-Trial Risk Assessment Instrument (PTRA). In November 2022, the House of Lords published a report which considered the use of AI technologies in the UK criminal justice system.

Supportive algorithms

On the one hand, it would be fascinating to see how AI could significantly facilitate justice in the long term, for example by reducing the costs of court services or by handling judicial proceedings for minor offences. AI systems can avoid the typical fallacies of human psychology and can be subject to rigorous controls. For some, they might even be more impartial than human judges.

Also, algorithms can generate data to help lawyers identify precedents in case law, come up with ways of streamlining judicial procedures, and support judges.

On the other hand, repetitive automated decisions from algorithms could lead to a lack of creativity in the interpretation of the law, which could slow down or halt development of the legal system.

Handcuffed man in prison. In the US, algorithms have been used to calculate the risk of recidivism – of continuing to commit crimes after previous sentencing. Brian A Jackson / Shutterstock

The AI tools designed to be used in trials must comply with a number of European legal instruments that set out standards for the respect of human rights. These include standards from the European Commission for the Efficiency of Justice (CEPEJ), such as its European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (2018), and other legislation enacted in past years to shape an effective framework on the use and limits of AI in criminal justice. However, we also need efficient mechanisms for oversight, such as human judges and committees.

Controlling and governing AI is challenging and encompasses different fields of law, such as data protection law, consumer protection law, and competition law, as well as several other domains such as labour law. For example, decisions taken by machine are directly subject to the GDPR, the General Data Protection Regulation, including the core requirement for fairness and accountability.

There are provisions in GDPR to prevent people being subject solely to automated decisions, without human intervention. And there has been discussion about this principle in other areas of law.

The issue is already with us: in the US, “risk-assessment” tools have been used to assist pre-trial assessments that determine whether a defendant should be released on bail or held pending the trial.

One example is the Compas algorithm in the US, which was designed to calculate the risk of recidivism – the risk of continuing to commit crimes even after being punished. However, there have been accusations – strongly denied by the company behind it – that Compas's algorithm had unintentional racial biases.

In 2017, a man from Wisconsin was sentenced to six years in prison in a judgment based in part on his Compas score. The private company that owns Compas considers its algorithm to be a trade secret. Neither the courts nor the defendants are therefore allowed to examine the mathematical formula used.

Towards societal changes?

As law is a human science, it is important that AI tools help judges and legal practitioners rather than replace them. In modern democracies, justice follows the separation of powers. This is the principle whereby state institutions such as the legislature, which makes law, and the judiciary, the system of courts that apply the law, are clearly divided. This is designed to safeguard civil liberties and guard against tyranny.

The use of AI for trial decisions could shake the balance of power between the legislature and the judiciary by challenging human laws and the decision-making process. Consequently, AI could lead to a change in our values.

And since all kinds of personal data can be used to analyse, forecast and influence human actions, the use of AI could redefine what is considered wrong and right behaviour – perhaps with no nuances.

It’s also easy to imagine how AI will become a collective intelligence. Collective AI has quietly appeared in the field of robotics. Drones, for example, can communicate with each other to fly in formation. In the future, we could imagine more and more machines communicating with each other to accomplish all kinds of tasks.

The creation of an algorithm for the impartiality of justice could signify that we consider an algorithm more capable than a human judge. We may even be prepared to trust this tool with the fate of our own lives. Maybe one day, we will evolve into a society similar to that depicted in the science fiction novel series The Robot Cycle, by Isaac Asimov, where robots have similar intelligence to humans and take control of different aspects of society.

A world where key decisions are delegated to new technology strikes fear into many people, perhaps because they worry that it could erase what fundamentally makes us human. Yet, at the same time, AI is a powerful potential tool for making our daily lives easier.

In human reasoning, intelligence does not represent a state of perfection or infallible logic. For example, errors play an important role in human behaviour. They allow us to evolve towards concrete solutions that help us improve what we do. If we wish to extend the use of AI in our daily lives, it would be wise to continue applying human reasoning to govern it.

Morgiane Noel, PhD Candidate in Environmental Law, Human Rights and European Law, Trinity College Dublin

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, May 21, 2023

On Facial Recognition Technology in Schools, Power Imbalance and Consent: European Data Protection Authorities Should Reexamine their Approach

 Asress Adimi Gikay, Brunel University London

Peder Severin Krøyer, via Wikimedia Commons
Facial Recognition Technology in Schools

In today's world, our privacy and personal data are controlled by Big Tech companies such as Facebook, Google, Twitter, Apple, Instagram and many others. They know almost everything about us: our location, addresses, phone numbers, private email conversations and messages, food preferences, financial conditions and many other intimate details that we would otherwise not divulge even to our close friends. Children are not immune from this overarching surveillance power of Big Tech companies.

In the UK, children as young as 13 can consent to the processing of their personal data, including through some form of Facial Recognition Technology (FRT), by the Big Tech companies that provide online services. Many of these companies likely know our children's preferences for movies, music and food better than we do. But society has no meaningful way to influence these companies, whose God-like presence in our lives epitomizes the dystopia of a technology-driven world. Surveillance is the rule rather than the exception, and we have few tools to protect ourselves from pervasive privacy intrusion.

But the advent of FRT in schools in Europe has alarmed citizens, advocacy groups and Data Protection Authorities (DPAs) far more than the pervasive presence of Big Tech companies in our lives. It has prompted a strong response from DPAs, who have consistently blocked the deployment of the technology in schools on the grounds of privacy intrusion and breach of the GDPR.

Facial recognition is a process by which a person can be identified or recognized by Artificial Intelligence (AI) software using their facial image or video. The software compares the individual's digital image or video captured by a camera to an existing biometric image, estimating the degree of similarity between the two facial templates to identify a match.
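
To make the comparison step concrete, here is a minimal Python sketch of the matching logic. It assumes the camera image and the stored biometric image have already been converted into fixed-length embedding vectors by a face-recognition model; the vectors, function names and the 0.6 threshold below are illustrative assumptions, not any vendor's actual implementation.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Degree of similarity between two face-embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
        # The threshold trades false accepts against false rejects;
        # 0.6 is an arbitrary illustrative value, not an industry standard.
        return cosine_similarity(probe, enrolled) >= threshold

    # Illustrative 128-dimensional embeddings. A real system would obtain
    # these from a face-recognition model applied to the camera capture
    # and to the enrolled biometric template.
    rng = np.random.default_rng(0)
    enrolled_template = rng.normal(size=128)
    camera_capture = enrolled_template + rng.normal(scale=0.1, size=128)

    print(is_match(camera_capture, enrolled_template))  # True: the templates are close

Where the threshold is set matters in practice, because it determines how often the system wrongly accepts a stranger or wrongly rejects the enrolled person.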

There have been multiple instances of the use of this technology in schools – for attendance monitoring in Sweden, access control in France and taking payments in canteens in the UK. According to the Swedish municipal school board concerned, monitoring attendance using FRT would save 17,280 hours per year at the school in question. The UK schools wanted to shorten canteen queues by taking payments faster and more safely. But DPAs – and in the case of France, the Administrative Court of Marseille – stepped in to block the technology, primarily due to privacy-related concerns, regardless of the appreciable benefits.

While the school authorities relied on the explicit consent of students and/or their legal representatives to use the technology, DPAs rejected explicit consent as a valid ground for processing personal data using FRT, due to the imbalance of power between the school authorities on the one hand and students and their guardians on the other. This raises the question of whether public institutions, including schools, could ever use FRT with the explicit consent of the data subject and, if not, whether that is an outcome society should aim for.

The Concerns about FRTs in Schools and the Fix

Scholars and advocacy groups point out that FRT poses certain risks, especially in the context of processing children's data. These range from the misuse of biometric data, whether by the companies providing or using the technology or by bad actors such as hackers, to the normalization of a surveillance culture stemming directly from individuals giving up their right to privacy.

More generally, it is argued that FRT is “an unnecessary and disproportionate interference with the students’ right to privacy.” As such, DPAs call for the deployment of less privacy-intrusive technological alternatives and take a strict approach to whether there is a valid legal basis for using the technology, including whether consent was freely obtained.

In its 2019 decision to fine the secondary school board of Skellefteå Municipality, the Swedish DPA argued that although FRT was employed to monitor student attendance based on explicit consent, consent could not be a valid legal basis given the clear imbalance of power between the data subject and the controller. In France, the Administrative Court of Marseille, agreeing with the French DPA (CNIL), concluded that the school councils had not provided sufficient guarantees of the free and informed consent of students to the use of FRT for access control, despite the fact that specific written consent had been obtained.

In October 2021, as nine schools in North Ayrshire (UK) were preparing to replace fingerprint scanning with facial recognition for taking payments in canteens, the Information Commissioner's Office (ICO) wrote a letter urging the schools to use a "less intrusive" tool. The school councils were forced to pause the rollout of the technology. The content of the ICO's letter is not public, and the ICO has not responded to the author's Freedom of Information (FOI) request to access it.

But these decisions evidently suggest that the mere presence of a power relationship between the data controller and the data subject renders explicit consent invalid as a basis for processing biometric data. The UK schools' suspension of FRT upon merely receiving a letter from the ICO signals that the presumed power imbalance alone would defeat explicit consent – at the very least, schools are unwilling to engage in the process of obtaining consent, as it would likely be regarded as insufficient and entail sanctions for breach of the GDPR.

While documents obtained from North Ayrshire Council under an FOI request do suggest there were flaws in obtaining consent (for instance, attempting to obtain consent directly from a 12-year-old child), the Council's overall effort to comply with data protection law seemed reasonable.

If the Council wishes to obtain valid consent, it should not be effectively prohibited from doing so ex ante. But the ICO's letter clearly had that effect. Subsequently, on November 4, 2021, the House of Lords held a debate sponsored by Lord Clement-Jones, who expressed his opposition to the use of FRT in schools, stating that “we should not use children as guinea pigs.”

There is overwhelming evidence of pressure to categorically ban the use of FRT in schools, and indeed it is now effectively banned in Europe, albeit without legislation to that effect.

Imbalance of Power under the GDPR

Although the GDPR prohibits, as a rule, the processing of so-called special categories of personal data, including biometric data such as facial images, it provides exceptions under which such data can be processed. Under one of these exceptions, biometric data may be processed to uniquely identify a natural person if the data subject has given explicit consent to the processing of such data for one or more specified purposes.

Consent should be given by a clear affirmative act establishing a freely given, specific, informed, and unambiguous indication of the data subject's agreement to the processing of personal data relating to her.

Where there is a power relationship, it is challenging to prove that consent has been obtained freely – the requirement which DPAs concluded was not met in Sweden and France. In this regard, the GDPR makes it clear that “consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation.”

The GDPR allows DPAs and courts to consider an imbalance of power in assessing whether consent has been obtained freely. But this is not a blanket prohibition on public authorities using explicit consent to process personal data. This is consistent with the European Data Protection Board's guidelines, which state that “Without prejudice to these general considerations, the use of consent as a lawful basis for data processing by public authorities is not totally excluded under the legal framework of the GDPR.”

Regulators and courts can scrutinize, ex post facto, whether explicit consent was obtained freely and in an informed manner, but they have no power to invalidate validly given consent based on the mere existence of an imbalance of power that had no actual effect.

If a Member State of the European Union wishes to exclude consent as a basis for processing special categories of personal data, the GDPR allows it to legislate that the prohibition on processing such data may not be lifted by explicit consent under any circumstances. Absent such legislation, the validity of consent in the context of a power relationship can only be examined on a case-by-case basis rather than in categorical terms.

Thus, schools should be able to demonstrate that the presumed imbalance of power has not played a role in obtaining consent. 

The Current Approach should be Reexamined

The concerns raised by scholars and privacy advocates about the intrusive nature of FRT should be seen in the light of the data protection and privacy safeguards provided by the GDPR, which has a series of provisions guaranteeing that personal data is not used for a purpose different from that originally intended, and that personal data is kept confidential and secure.

Furthermore, data controllers and processors have no right to share personal data with third parties unless the data subject has consented. In the presence of these and a number of other safeguards, what appears to be a blanket prohibition on the use of FRT in schools, based on unreasonable privacy anxiety and an irrebuttable presumption that a power imbalance per se vitiates consent, is not sensible. There are several reasons this approach needs to be reexamined.

First and foremost, it puts small companies and public institutions at a disadvantage with regard to the use of FRT. Big Tech companies can do almost as they please with our and our children's personal data. Facebook's opaque data-sharing practices have frequently been exposed, but there is still no meaningful way to control what Facebook does. The same is true of other Big Tech companies in the business of monetizing our personal data. Schools and companies providing FRT should be the least of our concerns.

It is not difficult to make them abide by the GDPR, whereas Big Tech companies can hide behind complex legal and technical black boxes to get away with grossly illegal uses of our personal data. The blanket prohibition on the use of FRT by small institutions creates a system that unfairly disadvantages small data controllers.

Furthermore, data innovation would be deterred by over-zealous DPAs and courts that see a superficial power imbalance without examining how it plays out in reality, while the real power imbalance society suffers vis-à-vis Big Tech companies remains inadequately challenged.

The future depends on innovating with data, and the use of FRT will be an essential component of that. To satisfy our excessive anxiety about privacy intrusion, we cannot prevent small companies and institutions from benefiting from AI technologies and data-driven innovation while letting Big Tech companies take control of our lives. Data innovation should be by all and for all, not just for Big Tech.


This post was first published in EU Law Analysis

 

Saturday, May 20, 2023

Algorithmic Consumer Creditworthiness Assessment in the European Union and the United States


Asress Adimi Gikay (PhD), Brunel University London

Over the years, creditworthiness assessment has evolved from interview-based evaluation and decision-making by loan officers to automated decision-making (ADM) with minimal or no human intervention. ADM in financial services presents opportunities and potential risks, including bias and unfairness against individuals and groups. The European Union's General Data Protection Regulation (GDPR) contains provisions regulating ADM, including in the consumer credit industry, while the United States lacks a specific law in the field, leading some to propose the GDPR as a model for the regulation of algorithmic consumer credit risk assessment in the US. In my forthcoming article, 'The American Way – Until Machine Learning Beats the Law', I argue that consumers in both jurisdictions are protected similarly despite the lack of a special law in the US.

On many levels, the GDPR provisions governing ADM lack the desired efficacy, both in terms of protecting consumers and of encouraging data innovation. The GDPR prohibits solely automated decisions with legal or similarly significant effects on the consumer, while creating three exceptions to the prohibition. First, the data controller can make a fully automated decision with the consumer's consent, subject to implementing suitable measures to safeguard the rights, freedoms and legitimate interests of the consumer. While consent-based decision-making should protect the consumer from adverse automated decisions, evidence shows that the majority of European consumers do not utilize consent as a tool of consumer protection, as they do not read privacy policies carefully enough to guard themselves against potentially unfair algorithmic decisions. In the second exception, the GDPR allows EU Member States to authorize solely automated decision-making by law; how Member States implement this provision can have adverse effects on data innovation and consumer protection. (The third exception, not discussed here, applies where the decision is necessary for entering into, or the performance of, a contract between the data subject and the controller.)

Germany has used this exception to permit solely automated decisions in the context of insurance service contracts where the consumer's request, for instance for reimbursement, is granted. The German approach is unnecessarily restrictive of ADM, even in cases where the harm to consumers is appreciably low or non-existent. The UK's Data Protection Act 2018 has taken the opposite approach, permitting fully automated decisions across all sectors subject to ex post facto procedural safeguards, including notice to the consumer that the decision in question was fully automated. In the UK, the consumer has the right to request a new decision that is not fully automated. The data controller must comply with the request and notify the consumer of the steps taken as well as the outcome. The UK's approach permits solely automated decision-making even in cases that could be considered high risk (for instance, visa processing). The ex post facto procedural safeguards could be abused by a non-compliant data controller, while the procedure may put a burden on the consumer wanting to challenge adverse decisions.

In the US, automated consumer creditworthiness assessment is governed by older consumer credit laws, the most relevant federal statutes being the Financial Services Modernization Act of 1999 (the Gramm-Leach-Bliley Act), the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act. While incremental changes are being made to update these laws in line with technological advances and data innovation, the core of these statutes remains unchanged and applicable to algorithmic credit risk assessment. These statutes, inter alia, prohibit discrimination in consumer credit provision, require accurate credit reporting and impose transparency requirements.

In 2017, the Consumer Financial Protection Bureau (CFPB) fined Conduent LLC $1.1 million under the FCRA for inaccurate consumer credit reporting using an automated process. Conduent supplied automated auto-loan consumer credit reports to lenders and credit reporting agencies containing various categories of errors in the files of over one million consumers. Similarly, in 2018 the Federal Trade Commission imposed a large fine on RealPage for inaccurate algorithmic credit reporting related to rental home applicants. These cases illustrate that, with a technology-neutral interpretation of legal rules, algorithmic decisions can be tackled without a tailored legal regime.

Machine learning (ML) decisions require significant regulatory change on both sides of the Atlantic. While the GDPR's general approach to ADM fails to strike a balance between encouraging innovation and protecting consumers, its provisions requiring transparency in ADM, including the right to explanation, are considered unfit for ML decisions. The European Commission's White Paper on Artificial Intelligence (AI) acknowledges some of the flaws in the GDPR and envisions some changes. The white paper adopts a risk-based approach to AI regulation. It proposes a two-step analysis: identifying sectors in which AI applications are generally regarded as high risk, and determining whether a given application within an identified sector is likely to pose a significant risk. If implemented appropriately, the risk-based approach to AI regulation protects fundamental rights, safeguards individuals from risky and unexplainable AI-driven decisions, and strikes a balance between the protection of ethical values and innovation.
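
Schematically, the white paper's two cumulative criteria can be read as a short decision procedure. The Python sketch below illustrates only that logic: the sector list echoes examples the white paper mentions (healthcare, transport, energy, parts of the public sector), while the significant-risk test is a hypothetical placeholder.

    # Schematic rendering of the white paper's two-step, cumulative test for
    # "high-risk" AI. The sector list and the risk test are illustrative only.
    HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

    def poses_significant_risk(application: dict) -> bool:
        # Step 2 (placeholder): does this specific use create significant
        # risks, e.g. legal effects for individuals or a risk of injury?
        return application.get("has_legal_effects", False) or application.get("risk_of_injury", False)

    def is_high_risk(application: dict) -> bool:
        # Step 1: the application belongs to a sector generally regarded as
        # high risk, AND Step 2: the specific use is likely to pose
        # significant risks. Both steps must be satisfied.
        return application["sector"] in HIGH_RISK_SECTORS and poses_significant_risk(application)

    print(is_high_risk({"sector": "healthcare", "has_legal_effects": True}))  # True
    print(is_high_risk({"sector": "retail", "risk_of_injury": True}))         # False: sector test fails

The cumulative structure is the point: an application escapes the high-risk label, and the heavier obligations that come with it, if either step fails.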

The evidence undoubtedly demonstrates that the call for GDPR-inspired legal rules for automated consumer creditworthiness assessment in the US is based on an unwarranted assumption of the efficient functioning of the GDPR.

ChatGPT can’t think – consciousness is something entirely different to today’s AI

Illus_man / Shutterstock
Philip Goff, Durham University

There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence systems created with what are known as large language models (LLMs). These systems can produce text that seems to display thought, understanding and even creativity.

But can these systems really think and understand? This is not a question that can be answered through technological advance, but careful philosophical analysis and argument tells us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.

In 1950, the father of modern computing, Alan Turing, published a paper which laid out a way of determining whether a computer thinks. This is now called “the Turing test”. Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one another human being, the other a computer. The game is to work out which is which.

Turing's criterion was that if, after five minutes of questioning, an average interrogator has no more than a 70% chance of making the right identification – that is, if the computer fools the judges at least 30% of the time – the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

Chess challenge

Turing dismissed this question as hopelessly vague, and replaced it with a pragmatic definition of “thought”, whereby to think just means passing the test.

Turing was wrong, however, when he said the only clear notion of “understanding” is the purely behavioural one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of “understanding” that’s tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.

In 1997, the Deep Blue AI beat chess grandmaster Garry Kasparov. On a purely behavioural conception of understanding, Deep Blue had knowledge of chess strategy that surpassed any human being. But it was not conscious: it didn't have any feelings or experiences.

Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person.

It doesn’t consciously understand the meaning of the words it’s spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything.

Time to pay up

How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years.

By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.

This is because consciousness can't be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects' testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.

Chess player. Unlike computers, humans consciously understand the rules of chess and the underlying strategy. LightField Studios / Shutterstock

Some scientists believe there is a close connection between consciousness and reflective cognition – the brain’s ability to access and use information to make decisions. This leads them to think that the brain’s prefrontal cortex – where the high-level processes of acquiring knowledge take place – is essentially involved in all conscious experience. Others deny this, arguing instead that it happens in whichever local brain region that the relevant sensory processing takes place.

Scientists have a good understanding of the brain's basic chemistry. We have also made progress in understanding the high-level functions of various bits of the brain. But we are almost clueless about the bit in between: how the high-level functioning of the brain is realised at the cellular level.

People get very excited about the potential of scans to reveal the workings of the brain. But fMRI (functional magnetic resonance imaging) has a very low resolution: every pixel on a brain scan corresponds to 5.5 million neurons, which means there’s a limit to how much detail these scans are able to show.

I believe progress on consciousness will come when we understand better how the brain works.

Pause in development

As I argue in my forthcoming book “Why? The Purpose of the Universe”, consciousness must have evolved because it made a behavioural difference. Systems with consciousness must behave differently, and hence survive better, than systems without consciousness.

If all behaviour was determined by underlying chemistry and physics, natural selection would have no motivation for making organisms conscious; we would have evolved as unfeeling survival mechanisms.

My bet, then, is that as we learn more about the brain’s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behaviour that can’t be explained by currently known chemistry and physics. Already, some neuroscientists are seeking potential new explanations for consciousness to supplement the basic equations of physics.

While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious.

There are many dangers posed by AI, and I fully support the recent call by tens of thousands of people, including tech leaders Steve Wozniak and Elon Musk, to pause development to address safety concerns. The potential for fraud, for example, is immense. However, the argument that near-term descendants of current AI systems will be super-intelligent, and hence a major threat to humanity, is premature.

This doesn’t mean current AI systems aren’t dangerous. But we can’t correctly assess a threat unless we accurately categorise it. LLMs aren’t intelligent. They are systems trained to give the outward appearance of human intelligence. Scary, but not that scary.The Conversation

Philip Goff, Associate Professor of Philosophy, Durham University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, May 18, 2023

If AI is to become a key tool in education, access has to be equal

Sam Illingworth, Edinburgh Napier University

The pandemic forced many educational institutions to move to online learning. Could the rise of chatbots, including OpenAI’s ChatGPT and Google’s Bard, now further improve the accessibility of learning and make education more obtainable for everyone?

Chatbots are computer programmes that use artificial intelligence to simulate conversation with human users. They work by analysing the context of a conversation and generating responses they believe to be relevant. They have been trained on massive data sets of human language, allowing them to generate responses to a wide range of questions.
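
To make "analysing the context of a conversation" concrete, here is a minimal sketch of a tutoring chatbot loop written against OpenAI's Python client as it existed at the time of writing (the pre-1.0 ChatCompletion interface). The model name, system prompt and loop structure are illustrative choices rather than recommendations.

    # A minimal chat loop: the growing message history is the "context"
    # the model analyses before generating each reply.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the reader

    messages = [{"role": "system", "content": "You are a patient study tutor."}]

    while True:
        user_input = input("Student: ")
        messages.append({"role": "user", "content": user_input})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=messages,      # the full history supplies the context
        )
        reply = response.choices[0].message["content"]
        messages.append({"role": "assistant", "content": reply})
        print("Tutor:", reply)

Each turn resends the accumulated history, which is why a chatbot appears to "remember" the conversation.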

Chatbots like ChatGPT and Bard can be used in a variety of educational settings, from primary and secondary schools to universities and adult education courses. One of their greatest strengths is in promoting individualised learning.

For example, they can support students in research and writing tasks, while also promoting the development of critical thinking and problem-solving abilities. They can generate text summaries and outlines, aiding with comprehension and organising thoughts for writing. They can also provide students with resources and information about specific topics, highlighting unexplored areas and current research topics, thus enhancing research skills and encouraging agency in learning.

Similarly, research has shown that chatbots can help to maintain students' motivation and involvement, in part by promoting self-directed learning and autonomy. This means they can potentially be used to help address the low engagement in education that has been made worse by COVID-19 and the move to remote online learning.

Digital poverty

While chatbots have the potential to enhance learning, it’s important to acknowledge the dangers they might also pose in relation to digital poverty and the digital divide. Students who lack reliable internet access or other resources needed to participate in online classes may not have access to chatbots or other digital learning tools.

Results from the 2021 census show that in January to February 2020, 96% of households in Great Britain had internet access, up from 93% in 2019 and 57% in 2006 when comparable records began. However, these statistics do not tell the whole story.

A 2020 Ofcom Survey found that before COVID-19, 9% of UK households with children lacked a laptop, desktop or tablet, and 4% had only smartphone access. A higher percentage of children in lower-income households were affected by lack of access to digital devices. Specifically, 21% of households where the main earner held a semi-skilled or unskilled occupation had no access to a laptop, desktop or tablet for their children’s education at home.

This situation is clearly worse in countries where access to any form of internet provision is much lower than it is in the UK. Recent statistics from the US Central Intelligence Agency (CIA) for example, highlight that in many African countries, less than 10% of the total population has access to the internet at any speed.

Likewise, while ChatGPT is a publicly available tool that users do not need to pay to use, there is a paid version which unlocks privileged access. Similarly, Bard, also free to use, is currently only available in certain countries. Put simply, like any other technology, chatbots have the potential to worsen pre-existing inequalities if they are not implemented carefully.

Fixing the problem

To address this, educational institutions must take proactive measures to ensure that all students have equal access to chatbots and other digital resources. Another challenge is ensuring that students understand that not everyone has the same access to digital tools as they do. Educators can help to promote this understanding by incorporating lessons on digital poverty and equal access into their curriculum.

Here are five tips for educators to ensure equity in the use of chatbots in educational settings:

1. Provide equal access to chatbots. Educational institutions should ensure that all students have the same access to digital resources by providing loaner laptops, offering free or discounted internet access, or providing offline options for students with limited internet access.

2. Partner with community organisations. Universities and schools can link up with community organisations that provide internet access or lend computers to students in need.

3. Offer technology training. Some students may not be familiar with using chatbots or other technology tools, so schools and universities should offer technology training to help students develop the skills they need.

4. Provide support for students with disabilities. Students with disabilities may face unique challenges when it comes to accessing and using chatbots. For instance, visually impaired students may face difficulties reading chatbot text, while students with cognitive disabilities may require additional support to understand and use chatbots effectively. Educators should ensure support is available for students who require extra help.

5. Raise awareness of digital equity. Educators can also help ensure equity in the use of chatbots by educating students to understand that not everyone has the same access and privileges in a digital setting. By encouraging empathy and awareness of digital poverty, students can learn to be mindful of their peers who may face challenges in accessing and using chatbots. This can be done through class discussions, assignments and activities that encourage students to think critically about digital equity and social justice.

Chatbots have the potential to revolutionise learning. However, educational institutions must address the potential dangers posed by chatbots with regard to further deepening the digital divide, and instead foster a culture of empathy and understanding for those who need training and supported access to the technology.

Sam Illingworth, Associate Professor, Edinburgh Napier University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Reasons for South Omo People’s Demand for Statehood


A. The demand for Statehood in Ethiopia

The government of Ethiopia under the premiership of Abiy Ahmed has recorded outstanding achievements and faced multiple political challenges. One of the most memorable political events of Abiy Ahmed's premiership is the granting of statehood to the people of Sidama Zone in 2019, in response to a decades-long sustained struggle.

The granting of statehood to Sidama Zone has opened a Pandora’s box of political conundrums and a potential for instability in the Southern Nations, Nationalities and People’s Region (SNNPR). As Hawassa city, which has been the administrative capital of the SNNPR, becomes the administrative capital of Sidama State, other Zones are in the process of determining a new administrative capital. This process has a disproportionate impact on the South Omo Zone, which is located in the southern periphery of the SNNPR.

Due to a combination of historical, socio-political, economic and cultural reasons, the people of South Omo have been demanding statehood for some time now. The government of Ethiopia, in typical fashion, has delayed tackling the question through condescending political discussions. If the federal government does not address the demand for statehood from one of the most peaceful zones in the whole country, the consequences may be far greater than one might imagine: a sense of marginalization, a lack of belonging and possibly sustained protests and violence, all of which are inimical to economic development and prosperity.

B. Five Reasons in Support of South Omo’s Statehood

Today, it appears that reasoned political discourse has no place in the current political environment of Ethiopia. Legitimate pleas for a better and more efficient governance structure are conveniently branded as ethno-nationalism. For three decades, under the government of the Ethiopian Peoples' Revolutionary Democratic Front (EPRDF), run by the Tigray People's Liberation Front (TPLF), demanding statehood was labelled as an attempt to disintegrate the country. This political strategy of misbranding, aimed at suppressing legitimate popular demand, is still prevalent among political elites who over-zealously want to create a large government wherever possible. They ignore historical, socio-economic, cultural and local political contexts to achieve their ambition of big, corrupt and inefficient government structures. The time has come to reject such an anachronistic political strategy and to examine alternative political ideas through reasonable and civil discourse.

In that spirit, this short memorandum invites the federal government to closely examine South Omo Zone’s demand for statehood rather than arrogantly dismissing it. The zone’s plea for statehood is a measured and appropriate response to an unjust political system and decades of marginalization. It is aimed at creating a more stable and sustainable self-administration. The memorandum provides five key reasons why South Omo Zone’s demand for statehood warrants an affirmative response from the federal government.

1. Historically, South Omo has been a State

South Omo was directly accountable to the central government from 1891 to 1937 E.C. From 1938 to 1979 E.C., it was under the administration of Gamo Gofa Region (Kifle Hager). After the establishment of the Southern Region in 1980 E.C., it was renamed Region 10 (Kilil 10) and resumed its status as a regional state. Subsequent to the 1984 regional council election, the Southern Nations, Nationalities and Peoples Region (SNNPR) was formed through the merger of Regions 7-11 (Kilil 7-11), including South Omo (Kilil 10).

The historical evidence suggests that South Omo witnessed stronger economic development and prosperity when it enjoyed the status of state. On the contrary, it witnessed significant economic disadvantages and marginalization under the SNNPR. The people of South Omo should not be denied their historical right to self-administration recognized by the Constitution of the Federal Democratic Republic of Ethiopia. According to Article 47(3), any Ethiopian Nation, Nationality or People has the right to form its own state. This right of the Nations, Nationalities and Peoples of South Omo that has existed throughout history and is recognized by the constitution should be restored to them.

2. Unproductive Cycles of State Restructuring

When South Omo Zone became part of Gamo Gofa Region, its people made significant contributions to the development of the city of Arbaminch and its surrounding areas. Subsequent to the restructuring that made South Omo part of the SNNPR, South Omo integrated with the SNNPR, yet its investment of resources in the development of Arbaminch has never been recognized.

Over the past several decades, the people of South Omo have similarly contributed massively to the development and prosperity of Hawassa. Hawassa, as the administrative capital of SNNPR, has for decades been a symbol of hope and unity for the people of South Omo. Subsequent to the granting of statehood to Sidama Zone, South Omo is once again on the verge of losing its hope and access to invaluable infrastructure in which it has invested for decades.

During these cycles of state restructuring, Jinka, the administrative capital of South Omo, has suffered severe economic disadvantages as resources were directed to building the capacity of the regional capitals. Universities, technical training institutions, regional government offices and private investments were all concentrated in those cities.

South Omo should no longer be required to endure this cycle of unproductive state restructuring, which unfairly consumes its resources while undermining the people's historical and constitutional right to self-governance and economic prosperity.

3. Lack of Strong Cultural Ties with the Rest of the Nations and Nationalities in the Region

The various ethnicities in South Omo Zone do not share linguistic and cultural similarities with Gamo Gofa, Konso and other ethnicities in the Southern region. Nor do they share a common cultural heritage and psychological makeup with these neighboring nations and nationalities. In terms of economic development, South Omo is one of the Zones that still lags behind. The people of South Omo legitimately believe that it is unfair to organize peoples with significant differences in language, culture, psychological makeup and economic development under the same state structure. The current state structure will perpetuate the existing gap in the distribution of infrastructure and overall economic development, and will prolong South Omo's slow economic development.

4. Robust Capacity to Self-Administer

South Omo Zone is endowed with natural resources including minerals, agricultural and grazing lands, water resources, forests, national parks and fisheries, as well as a young labor force. The Zone is also one of the top tourist destinations in the country. The efficient and sustainable utilization of these resources for local and national development and prosperity requires a strong state of self-governing nations, nationalities and peoples. Granting statehood to South Omo would ensure that a relatively smaller regional government mobilizes these resources without mismanagement and maladministration.
South Omo Zone is well situated to be a state in terms of population size (estimated at over one million people), natural resources, a strong sense of patriotism and Ethiopianism, a history of long-standing peaceful co-existence among various ethnicities, and commitment to local and national development.

5. Unfair Distribution of Infrastructure and Public Services

In South Omo Zone, youth unemployment is increasing at an alarming rate due to the lack of specialized training institutions for young people who drop out of school or who are unable to attend university and thus seek vocational training. This has created a gap in employability, where new job posts are filled by youths coming from other regions while those born and raised in South Omo are considered unqualified and remain unemployed. In addition to the above challenges, the zone generally has poor infrastructure and public services, including electricity, roads, health services, financial services and many other essential services. Granting statehood to South Omo would create an opportunity to invest in infrastructure and youth employment programs.

C. A Call for More Transparent and All-inclusive Dialogue
South Omo people's demand for statehood should not be easily dismissed. The question is raised by the younger generation, and it will define the future of politics in the Southern region. It is a justifiable demand, not only from the perspective of the right to self-administration but also from the standpoint of efficient administration of natural resources, good governance and democracy. It has been shown in other countries that smaller governments manage their resources better, fight corruption more effectively and deliver public services, including responding to epidemics, more quickly and effectively.

South Omo Zone can be considered one of the most peaceful zones in Ethiopia, where different ethnic groups co-exist. It has also had long-standing friendly relationships with neighboring zones. The Zone will continue to represent modern Ethiopia for centuries to come, demonstrating the strength that lies in diversity in its quintessential form. However, all of this will happen only if its nations, nationalities and peoples are given back their historical and constitutional right to self-governance and prosperity. South Omo must regain its statehood for the good of its people and the people of Ethiopia, and it should regain it now.

If the government continues to treat the people with condescension and engage in opaque, behind-closed-doors dialogues with selected individuals, it will be a consequential mistake that the government will regret in the future. True democracy, sustainable development and prosperity cannot be achieved by arbitrary decision-making about government structure and an over-centralized government.

This article was first published on Ethiopia Observer.