Research Article - Unexplored tool for sustainable development: Can artificial intelligence promote good health and well-being in Africa?

[This is an excerpt from an article in The Round Table: The Commonwealth Journal of International Affairs and Policy Studies.]

Artificial intelligence increases openness in healthcare services

The study found that utilising AI in healthcare systems can enhance transparency and boost public engagement. As Bostrom (2018) notes, transparency in AI development can denote different aspects. It might relate to open-source software, open research, open datasets, openness about safety methods, capabilities, and organisational objectives, or to a generally non-proprietary development framework. Openness can also mean making all pertinent source code and platforms available to the public, regularly and as quickly as feasible, and freely sharing information about algorithms, scientific findings, and concepts developed during the research. Machine intelligence offers significant potential for beneficial uses in multiple areas of the economy and society, such as transportation, healthcare, the environment, entertainment, security, and scientific research.

Artificial intelligence increases accessibility and lowers financial costs

This paper reveals that integrating AI into healthcare systems improves both the accessibility and affordability of health services. A fundamental objective of AI is to leverage computational hardware and software to address complex challenges in ways that rival human capabilities. Practically, this means AI can be assigned tasks or solve problems that humans either cannot perform or lack the time to handle (Olson et al., 2018). Ensuring AI is accessible also involves designing with consideration for people with disabilities. Accessibility demands user-centred systems that accommodate individuals regardless of age, gender, ability, or other characteristics. Rather than adopting a one-size-fits-all model, AI solutions should follow Universal Design principles and comply with relevant accessibility standards to reach the broadest possible user base. This approach promotes equitable access and encourages active participation in both current and emerging computer-mediated activities and assistive technologies (Xiong, 2020). In this regard, Morris (2020) highlights AI’s potential to eliminate many barriers to accessibility. For instance, computer vision can assist visually impaired individuals in perceiving their surroundings, speech recognition and translation tools can provide real-time captions for those with hearing impairments, and advanced robotic systems can enhance the abilities of people with limited mobility.

Artificial intelligence increases revenue management efficacy and accountability

According to Zhang and Kamel Boulos (2023), artificial intelligence has transformed how healthcare organisations handle billing, claims, and financial operations by streamlining intricate processes. By automating tasks such as billing and medical coding, AI minimises errors, speeds up claims processing, and boosts revenue generation. As these financial workflows become more efficient, the accuracy and speed provided by AI not only improve the financial stability of healthcare institutions but also create a smoother experience for both patients and staff.
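To make the claims-processing point concrete, the sketch below shows a simple, rule-based pre-submission check on billing records. It is illustrative only and not drawn from the article or the cited study; in practice such rules would be complemented or replaced by a trained coding model, and the billing codes, tariff amounts, and claim fields used here are hypothetical.

```python
# Illustrative only: a minimal, rule-based pre-submission check for medical
# billing claims. The billing codes, tariff amounts, and claim fields are
# hypothetical stand-ins for a real coding standard and fee schedule.

# Hypothetical fee schedule mapping billing codes to maximum allowable amounts.
FEE_SCHEDULE = {
    "CONSULT-01": 350.00,   # general consultation
    "LAB-CBC": 120.00,      # full blood count
    "XRAY-CHEST": 480.00,   # chest radiograph
}

def validate_claim(claim: dict) -> list[str]:
    """Return a list of problems found in a single claim; empty means clean."""
    problems = []
    if claim.get("patient_id") in (None, ""):
        problems.append("missing patient identifier")
    code = claim.get("billing_code")
    if code not in FEE_SCHEDULE:
        problems.append(f"unknown billing code: {code!r}")
    elif claim.get("amount", 0) > FEE_SCHEDULE[code]:
        problems.append(
            f"amount {claim['amount']} exceeds tariff {FEE_SCHEDULE[code]} for {code}"
        )
    return problems

if __name__ == "__main__":
    claims = [
        {"patient_id": "P-001", "billing_code": "LAB-CBC", "amount": 110.00},
        {"patient_id": "", "billing_code": "XRAY-CHEST", "amount": 900.00},
    ]
    for claim in claims:
        issues = validate_claim(claim)
        status = "OK" if not issues else "; ".join(issues)
        print(claim.get("patient_id") or "<no id>", "->", status)
```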

The integration of AI into healthcare systems also strengthens accountability in service delivery. Millar et al. (2018) highlight the pressing need to address accountability frameworks as AI technologies advance. Accountability, which forms the bedrock of societal trust, entails the transparent acknowledgement and assumption of responsibility, or ‘answerability’, for actions, decisions, and policies. In the context of AI, three distinct interpretations of accountability emerge in scholarly discourse, each directing focus to a different area of intervention. The first interpretation, as noted by Villani et al. (2018), centres on accountability as an inherent feature of AI systems, emphasising transparency and auditability in their design. By adopting AI in healthcare, institutions can partially improve both accountability and operational efficiency within public health systems.


Accountability in AI is often narrowly interpreted as assigning responsibility for specific outcomes within socio-technical systems. However, a broader perspective frames accountability as a systemic property of the entire ecosystem involved in designing, deploying, and utilising AI. For instance, the AI Now Institute advocates for algorithmic impact assessments, akin to privacy impact assessments, to institutionalise accountability across AI deployment processes, including clarity on responsibility. Similarly, the World Wide Web Foundation outlines principles like fairness, explainability, auditability, and accuracy as foundational to algorithmic accountability (Millar et al., 2018). These frameworks emphasise that accountability must extend beyond individual actors to encompass the socio-technical infrastructure shaping AI’s development and use.

Artificial intelligence serves as an anti-corruption tool

The results of this paper show that AI applied in healthcare can function as an anti-corruption tool. Employing AI can tackle corruption within the health sector and advance the rule of law that allows individuals to attain good health and well-being. Technology increasingly shapes many aspects of life in highly digitised economies with a degree of e-government, where interactions and transactions with authorities mostly occur online. Although automated decision systems spark debate, they continue to be used in social security programmes, the legal field, law enforcement, insurance, and security. AI and machine learning are also used to detect money laundering, while tax agencies use AI to forecast risks related to tax evasion and to monitor and flag questionable tenders or bids in public procurement.
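As a minimal illustration of how machine learning can flag questionable tenders of the kind described above, the sketch below applies scikit-learn's IsolationForest, an unsupervised anomaly detector, to fabricated procurement records. The features and values are invented for illustration and are not taken from the article or its sources.

```python
# Illustrative only: flagging unusual procurement bids with an unsupervised
# anomaly detector (scikit-learn's IsolationForest). The feature values below
# are fabricated; a real deployment would use audited procurement records.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bid amount relative to budget, number of competing bidders,
# days between tender publication and award]. Hypothetical data.
bids = np.array([
    [0.95, 5, 40],
    [0.90, 6, 35],
    [0.97, 4, 42],
    [1.60, 1, 3],    # over budget, single bidder, awarded almost immediately
    [0.93, 5, 38],
    [0.96, 7, 45],
])

model = IsolationForest(contamination=0.2, random_state=0)
model.fit(bids)

# predict() returns -1 for anomalies and 1 for inliers.
labels = model.predict(bids)
for row, label in zip(bids, labels):
    flag = "REVIEW" if label == -1 else "ok"
    print(row.tolist(), "->", flag)
```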

AI can be used to prevent the circulation of counterfeit medications, which pose a risk to people’s health. Fake medications hinder the achievement of good health and well-being for the people of Africa. The drug markets in the affected countries are flooded with counterfeit and low-quality medications, mainly sourced from abroad, especially from India and China, which are regarded as the primary suppliers (Mhando et al., 2016). Most of those affected by counterfeit medicines are impoverished and uneducated individuals who are unaware of the health risks associated with these drugs. The lack of an online system for tracking and monitoring these medications has hindered progress towards good health and threatens the advancement of sustainable development in Africa. AI systems can help track and monitor the distribution of these medications to patients.
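The following sketch illustrates one way such a tracking system could verify serialised medicine packs at the point of dispensing. It is a simplified, hypothetical example: the registry contents and serial numbers are invented and do not describe any real track-and-trace scheme.

```python
# Illustrative only: a minimal serialisation check for dispensed medicine packs.
# The registry contents and serial numbers are hypothetical; real track-and-trace
# systems rely on manufacturer-issued serialised codes and a national registry.

# Serials registered by licensed manufacturers (hypothetical).
REGISTERED_SERIALS = {"RX-1001", "RX-1002", "RX-1003"}

# Serials already scanned at dispensing points; a repeat scan suggests cloning.
dispensed: set[str] = set()

def verify_pack(serial: str) -> str:
    """Classify a scanned pack as verified, suspected counterfeit, or duplicate."""
    if serial not in REGISTERED_SERIALS:
        return "suspected counterfeit: serial not in registry"
    if serial in dispensed:
        return "suspected counterfeit: serial already dispensed elsewhere"
    dispensed.add(serial)
    return "verified: pack released to patient"

if __name__ == "__main__":
    for scanned in ["RX-1001", "RX-9999", "RX-1001"]:
        print(scanned, "->", verify_pack(scanned))
```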

Legitimate pharmaceutical companies are also feeling the adverse effects of drug counterfeiting, with approximately 40% of their market share lost to counterfeiters, harming their brand reputations (Seiter, 2009, pp. 576–578). AI can assist in mitigating the financial losses that pharmaceutical companies face due to counterfeit medications. Moreover, fake medications are linked to numerous health hazards: they can contain dangerous, toxic, or hazardous ingredients such as paint, antifreeze, brick dust, floor wax, heavy metals, boric acid, rat poison, diethylene glycol, and polychlorinated biphenyls (Adjei & Ohene, 2015), further worsening the living conditions of affected communities.

Anslem Wongibeh Adunimay is with the Department of Politics and International Relations and the 4IR and Digital Policy Research Unit, University of Johannesburg, Johannesburg, South Africa.