“People will come to love their oppression, to adore the technologies that undo their capacities to think.”
– Neil Postman, Amusing Ourselves to Death, on Aldous Huxley's vision
Artificial Intelligence (AI) is a disruptive technology that is rapidly transforming our world, impacting everything from healthcare and transportation to communication, education and entertainment. AI refers to programs that can autonomously identify patterns through data analysis, ‘learning’ and adapting from input data without being explicitly given instructions on how to do so [i]. It can create content, make predictions and offer recommendations, all of which have upended and revolutionized technology and pushed us to think in fundamentally different ways.
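To make that definition concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn; the toy messages and labels are invented for this example. No rule such as ‘a message mentioning prizes is spam’ is ever written down; the model infers the distinguishing patterns from labelled examples on its own.

```python
# A minimal sketch of 'learning without explicit instructions'
# (toy, invented data; not a production system).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled examples only; no hand-written classification rules anywhere.
messages = [
    "You won a free prize, claim now",
    "Meeting moved to 3pm tomorrow",
    "Claim your free lottery winnings",
    "Lunch on Thursday?",
]
labels = ["spam", "ham", "spam", "ham"]

# The pipeline learns which word patterns separate the two classes.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize waiting for you"]))  # likely ['spam']
```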
While AI holds immense potential for progress, its development and deployment raise critical ethical and social concerns, especially in the Global South. In examining the challenges that hinder AI adoption, we find the traditional issues of the digital divide: limited infrastructure, an absence of local AI innovation, and a lack of training and skilled personnel. The consequences of this divide are far-reaching, and each compounds the next. Without access to AI-powered tools and resources, businesses in the Global South will struggle to compete in the global marketplace, hindering economic growth and job creation.
Access to information and services is also impacted. AI is increasingly used to deliver essential services like healthcare and government programs, and the digital divide can leave people in the Global South without access to them. AI is also transforming education, yet that same divide excludes many students in the Global South, perpetuating existing educational inequalities and limiting their future opportunities. Furthermore, as AI plays a larger role in warfare and cyber attacks, the Global South’s lack of resources and expertise could leave it more vulnerable to these threats. The digital divide can also exacerbate existing social inequalities within developing nations themselves: those with access to technology will have a significant advantage over those without.
But AI also brings perils, so we need to put the hype aside, examine those perils, and discuss the need for responsible AI development and deployment.
From Loss of Control to Inherent Flaws
In my view, there are six key concerns surrounding the development and deployment of AI:
1. Inability to Opt Out: A Loss of Control
As AI becomes integrated into everything from infrastructure to finance, a single point of failure or malicious actor could have devastating consequences. Who controls this critical infrastructure? Can we ensure its security and prevent misuse? Should there be centralized control for data security? Will states have independent control over their own AI development and deployment?
Here is a possible scenario: Imagine a nation reliant on AI for essential services being cut off from its own data as ‘punishment’. This chilling scenario highlights the potential for AI to be weaponized against entire populations who never had the opportunity to opt out in the first place. While this scenario may seem extreme, it is quite plausible given the current state of global instability.
Furthermore, the inability of countries to opt out of AI, given the financial and social consequences of doing so, exacerbates the problem. If we cannot opt out of it, we will be forced to use it as it comes.
2. Ethics Left to Big Tech? A Recipe for Trouble
Companies often tout ethical principles in AI, but history shows a gap between words and actions. We cannot rely solely on the good intentions of corporations to safeguard our data and privacy. Transparency is therefore crucial. We need to know how our data is being used, not just for commercial and marketing purposes, but also in potentially harmful applications like military operations.
The opacity of AI, particularly in its early stages of deployment even in developed nations, is a cause for concern. We often have little insight into how these algorithms reach their conclusions, making it difficult to assess their fairness and potential bias.
Time and again we have read promises and statements of principles from Google, Meta, Amazon, and other big corporations. Time and again we have seen them violated in the most outrageous and egregious ways.
Beyond data collection, the concept of “data colonialism” describes a troubling trend. Governments and corporations may claim ownership of and privatize data generated by users and citizens [ii]. This can lead to a lack of control over personal information and further disadvantage developing nations as well as the marginalized.
Additionally, the failure to recognize local contexts within the broader digital ecosystem raises concerns. AI solutions designed for one region may not translate well to another, leading to increased surveillance and disproportionate influence over economics, politics, and culture in developing nations.
3. Privacy Eroded, Repression Amplified
AI is a powerful tool for surveillance and censorship in the hands of authoritarian governments. We cannot trust such regimes to use AI responsibly, which raises concerns for freedom of expression and dissent and for the safety of human rights defenders. In these hands, AI algorithms can amplify existing biases, leading to the suppression of specific viewpoints. Free speech requires a diversity of voices, not just those deemed “safe” by AI.
We must also take into consideration the potential weaponization of data against the poor. In a 2019 report, the former United Nations Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, stated that “systems of social protection and assistance are increasingly driven by digital data and technologies … are used to automate, predict, identify, surveil, detect, target and punish” the poor (A/74/48037, 2019) [iii]. He added that “Big Tech operates in an almost human rights free zone, and that this is especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state.”
4. From Surveillance to Warfare: The Dark Side of AI
The use of AI in warfare and conflict zones raises serious ethical concerns. Project Nimbus, Project Lavender, and Where’s Daddy, all used by Israel in Gaza, along with other opaque AI projects, highlight the potential for harm in the hands of militaries. Key negative consequences include the loss of human control and accountability. If AI makes critical decisions about whom to target and engage in combat, what happens when things go wrong? Who is accountable if an AI system causes civilian casualties or makes a devastating mistake? With humans removed from the decision-making loop, the issue of accountability becomes murky.
Some AI decision-making processes are opaque, making it difficult to understand how they arrive at their conclusions. This is what I would call a ‘black box’ effect, where the lack of transparency makes it hard to hold anyone accountable for AI mistakes and hinders proper oversight.
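As a purely illustrative sketch of this black box effect, assuming Python with scikit-learn and synthetic data (nothing here comes from any real deployed system), a trained model hands back decisions with no human-readable rationale, and the introspection it offers is coarse at best:

```python
# A minimal sketch of the 'black box' problem (synthetic data only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 500 synthetic samples with 10 anonymous features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The model returns a decision, but no reasoning a human can audit.
print(model.predict(X[:1]))  # e.g. [1], but why?

# Feature importances give only a coarse, global hint; they do not
# explain any single decision, which is the oversight gap at issue.
print(model.feature_importances_)
```

Even here, the importances describe the model as a whole, not why one particular case was flagged; with more complex systems, that gap only widens.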
Autonomous weapons systems powered by AI could also make it easier for countries to go to war. The threshold for using military force might become lower, potentially leading to escalation and wider conflicts with catastrophic consequences. Additionally, just like the nuclear arms race, there’s a potential for an AI arms race. Countries might compete to develop ever-more sophisticated autonomous weapons systems, leading to a dangerous and destabilizing situation in an already destabilized world.
As with any use of AI, military data sets can be biased. This bias can be reflected in how AI systems identify targets, potentially leading to the dehumanization of enemies and to civilian casualties, much as in violent video games, where the user does not truly feel the consequences of death and destruction on real human lives. The use of AI in warfare is more likely than not to violate international law and lead to unethical targeting practices.
But even those who are using the system may be vulnerable themselves. Imagine a scenario where an autonomous weapon controlled by AI is hacked by an adversary. The potential for these systems to fall into the wrong hands and be misused by malicious actors is a significant concern.
As AI technology advances, these concerns will only grow in importance. International cooperation and regulations are crucial to ensure responsible development and use of AI in warfare.
5. The All-Knowing AI: A Partner or a Threat?
Emerging AI like ChatGPT-4o (‘o’ meaning omni), with its ability to analyze voice and emotions, blurs the line between assistant and partner. As AI developer Ghazi Khan suggests, AI may become deeply integrated into our lives, raising questions about control and manipulation.
Already we suffer from marketing practices that use our data and treat us as commodities – a commodification of people in large databases with targeted advertisements. AI will only exacerbate this, further infringing upon individual rights.
6. Inherent Flaws and Security Concerns
The race for early deployment of AI has led to what are known as AI hallucinations, where a system generates false information, and sycophancy, where it gives users the answers they want to hear. Google’s CEO acknowledges that AI hallucinations are an inherent feature of this technology. This raises concerns about the reliability of AI outputs and the potential for their manipulation.
With a public still grappling with understanding technology in general, AI will require awareness campaigns and training to navigate its complexities.
These six concerns highlight the need for a multifaceted approach to AI development and deployment. The proposed solution of international organizations and data actors committing to building sustainability in local data production and knowledge generation in low-income countries sounds promising. However, the effectiveness of such a solution hinges on the level of commitment and the willingness to share power and resources, and that remains an open question.
In my view, AI tools should meet seven basic criteria: they should be private, ethical, transparent and open, sustainable, secure, affordable and, most importantly, constantly monitored.
- Private: protecting individual data privacy.
- Ethical: AI development and use must be grounded in ethical principles and human rights frameworks, including International Humanitarian Law.
- Transparent and open: clear and open communication about how AI works and how data will be used is essential. It is wise to remember that we can be neither careful with nor properly wary of that which we do not understand.
- Sustainable: AI solutions should be environmentally and socially sustainable.
- Secure: robust security measures are critical to prevent misuse and ensure data integrity.
- Affordable: unless an AI solution is affordable, it will remain in the hands of the few and thus controlled by the few.
- Monitored: we need to constantly evaluate and re-evaluate our usage of AI.
The true challenge before us lies in finding a balance between regulation and innovation. We need to ensure responsible AI development and deployment without stifling its potential benefits. By acknowledging the challenges of AI and working collaboratively to address them, we can ensure that this powerful technology serves as a force for good, bridging the digital divide and fostering a more equitable future for all.
[i] SAS (n.d.). Machine Learning: What it is and why it matters. https://www.sas.com/en_us/insights/analytics/machine-learning.html
[ii] New America. Chapter 3: Government Access to Personal Data Held by the Private Sector.
[iii] Alston, P. (2019). Report of the Special Rapporteur on extreme poverty and human rights (A/74/48037). United Nations. https://www.ohchr.org/Documents/Issues/Poverty/A_74_48037_AdvanceUneditedVersion.docx
Article written by M. Tadros with the help of AI