In a recent article published in the journal BMJ Global Health, researchers highlighted threats arising from deliberate, inadvertent, or inconsiderate abuse of artificial intelligence (AI) and artificial general intelligence (AGI), a self-improving form of AI. Additionally, they discussed the grave consequences of not anticipating and adapting to AI-driven transformation in society.
Study: Threats by artificial intelligence to human health and human existence.
Major threats from AI misuse
The researchers described three ways in which AI could threaten human existence.
First, AI could expand opportunities for the manipulation of people. In China and 75 other countries, governments are expanding AI-based surveillance. AI rapidly cleans, organizes, and analyzes massive amounts of personal data, including video footage captured by cameras deployed in public places. While this capability could help counter acts of terrorism, on the downside, the same data could fuel rising polarization and extremist views.
AI could also help build an enormous, powerful personalized marketing infrastructure that manipulates consumer behavior and generates commercial revenue for social media companies. Experimental evidence suggests that during the 2016 United States presidential election, political parties used AI tools to manipulate political beliefs and voter behavior.
China's Social Credit System analyzes individuals' financial transactions, police records, and social relationships to produce evaluations of their behavior. Based on these evaluations, it can automatically deny people access to banking and insurance services, levy fines, and prevent them from traveling or sending their children to certain schools.
AI has wide applications in military and defense systems, which raises a second threat: advances in Lethal Autonomous Weapon Systems (LAWS). These autonomous weapons could locate, visually recognize, and engage human targets without any human control, making them a novel class of lethal weapons of mass destruction.

Disruptive actors, such as terrorist organizations, could cheaply mass-produce LAWS, which come in many sizes and forms, and deploy them to kill at scale. For instance, it is feasible to equip millions of quadcopter drones (small, mobile devices) with explosives, visual identification, and autonomous navigation, and program them to kill without human supervision.
Thirdly, the extensive deployment of AI-driven tools could result in the loss of tens to hundreds of millions of jobs in the coming decades. However, the breadth of that deployment will depend largely on policy decisions by governments and society, and on the pace of development of AI, robotics, and other complementary technologies.
Nonetheless, the impact of AI-driven automation would fall hardest on people in lower-skilled jobs in low- and middle-income countries (LMICs). Eventually, it would not spare the upper segments of the skill ladder of the global workforce, including workers in high-income countries.
For many decades, humans have envisioned and pursued machines more intelligent, conscious, and powerful than ourselves. This pursuit has led to the concept of AGI: theoretical AI-based machines that learn, intuitively improve their own code, and begin to develop purposes of their own.
Once connected to the real world, for example via robots, weapons, vehicles, or digital systems, the effects and outcomes of AGI become difficult to envision or predict with any certainty. Yet, deliberately or not, such machines could harm and subjugate humans. Accordingly, in a recent survey of members of the AI community, 18% of participants raised concerns that AGI development could prove existentially catastrophic.
Furthermore, the authors highlighted that while AI holds the potential to revolutionize healthcare by improving diagnostics and helping develop new treatments, some of its applications could be detrimental. For instance, most AI systems are trained on datasets in which populations subject to discrimination are under-represented.

Owing to such incomplete and biased datasets, an AI-driven pulse oximeter overestimated blood oxygen levels in patients with darker skin. Similarly, facial recognition systems more often misclassify the gender of darker-skinned subjects.
Conclusions
The medical and public health community should raise the alarm about the risks and threats posed by AI, similar to how the International Physicians for the Prevention of Nuclear War presented evidence-based arguments about the threat of nuclear war.
Another possible intervention is to ensure adequate checks and balances on AI, which requires strengthening public-interest organizations and democracy. Only then could AI fulfill its promise to benefit humanity and society.
Most importantly, there is a need for evidence-based advocacy for a radical overhaul of social and economic policies over the coming decades. Alongside this, we should begin preparing future generations to live in a world where human labor may no longer be needed to produce goods and services, because AI will have dramatically transformed work and employment.