Artificial Intelligence in the Military
Autonomous Weapons Systems

Artificial intelligence (AI) has rapidly become an increasingly valuable asset to the militaries of several nations. With AI at the focal point, the COVID-19 pandemic and the war in Ukraine have driven a global technological revolution. Army General Paul M. Nakasone, Director of the National Security Agency, notes that "the most recent strategies guiding U.S. national security, defence and intelligence emphasize the increasingly consequential role of AI". However, the use of AI weapons, otherwise referred to as Autonomous Weapons Systems (AWS), is a heated matter of debate.

AWS are weapons that incorporate AI, and they are already being used and developed by militaries. They include autonomous stationary sentry guns and remote weapon stations programmed to fire at humans and vehicles, as well as drones used in Ukraine equipped with autonomous targeting capabilities. The rapid innovation of AWS poses several ethical concerns that are exposing the insufficiency of existing governance frameworks. While AWS are immensely efficient tools as unmanned machines, the United Nations outlines that artificial intelligence is both an enabling and a disruptive technology, increasingly integrated into a broad array of civilian, military, and dual-use applications, often with unforeseen implications. Here the UN refers to the danger of AWS falling into the hands of terrorists and being turned to anti-humanitarian ends if not safeguarded immediately.

Advocacy groups such as Article 36, which is focused on reducing harm from AWS, are strongly campaigning against these weapons and raising public awareness of their ethical issues. These issues include the ability of AWS to operate without human control and the lack of cognitive awareness in their human deployment. Intergovernmental meetings are taking place to establish an immediate legal and ethical global framework for this technology.
These meetings aim to regulate and strengthen oversight mechanisms for the use of data-driven technology, including artificial intelligence, for counterterrorism purposes.

Recent Events on AWS

Recent developments concerning AWS include a General Assembly debate, a debate convened by the President of the Security Council, and the new Artificial Intelligence Security Centre. The General Assembly debate, held on 22 September, addressed the rapid advancement of artificial intelligence. Notable participants included Robert Abela (Prime Minister of Malta), Vivian Balakrishnan (Minister for Foreign Affairs of Singapore), and Sheikh Hasina (Prime Minister of Bangladesh). Abela called for the utilization of AI for the "global good", naming public services, Malta's pilot projects, health care, and traffic management as systems that could all be enhanced. Balakrishnan, however, highlighted the inherent risks of AI to international peace. He echoed the ethical issue outlined by Article 36: "the speed at which autonomous weapons systems can almost instantaneously be deployed will dramatically reduce decision times for leaders". This lack of cognitive awareness in the human deployment of AI weaponry would "disrupt assumptions on military doctrines and strategic deterrence", he said.

The President of the Security Council convened a debate on 18 July to discuss the role of AI in international peace. António Guterres (the UN Secretary-General), Jack Clark (co-founder of Anthropic), and Yi Zeng (of the Chinese Academy of Sciences) attended. Guterres strongly criticised AWS, calling for "a ban on killer robots on moral and technological grounds" and referring to the Summit of the Future (2024) and the New Agenda for Peace. These forums will make progress towards establishing, by 2026, "a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control [and create] national strategies to mitigate the peace and security implications of artificial intelligence".
On 28 September, the US National Security Agency announced the creation of an Artificial Intelligence Security Centre, which will oversee the integration of AI into U.S. security-related activities. The creation of this centre invokes the USA's political declaration on the responsible military use of artificial intelligence, which seeks to codify norms for the responsible use of the technology. Upon the announcement, Nakasone said: "We must build a robust understanding of AI vulnerabilities, foreign intelligence threats to these AI systems, and ways to counter the threat in order to have AI security".

Activist Efforts on AWS

The debate over AWS is accelerating, founded on the ethical principle that a machine applying force and operating without any human control whatsoever is broadly considered unacceptable. Two strategies developed by activists to contain the use of AI in weapons are the Meaningful Human Control (MHC) policy and an Autonomous Weapons Systems treaty.

The MHC policy reinforces the existing laws of war by ensuring that the human behind the machine can be held accountable. Machines applying lethal force devoid of human judgement create an accountability gap, which has become a major theme of the AWS debate. There is no clear definition of what should constitute meaningful human control. To help advance the policy, however, researchers Dr. Michael Horowitz and Paul Scharre have outlined three essential components for the use of AI weaponry:

- Human operators make informed, conscious decisions about the use of weapons.
- Human operators have sufficient information to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action.
- The weapon is designed and tested, and human operators are properly trained, to ensure effective control over the use of the weapon.
The Autonomous Weapons Systems treaty likewise responds to concerns over a dehumanised future in which machines are tasked with applying force and killing without people understanding, or being fully responsible for, the consequences. This new international treaty is being developed and pushed by the advocacy group Article 36. It seeks to address five fundamental problems caused by AWS: dehumanisation, danger to civilians, undermining of the law, opaque technologies, and risks to peace and security.

Edited by: Anwen Venn