An AI war dog can identify friend or foe using programmed parameters. Photograph: public domain
TECHTALKS

Battlefield AI: A double-edged sword


The modern battlefield is rapidly being transformed by artificial intelligence (AI) in ways that go far beyond the robots of science fiction. Its applications range from analyzing massive amounts of data to inform troop movements to developing autonomous weapons systems.

While artificial intelligence promises to revolutionize fighting, ethical concerns and the possibility of unforeseen consequences cast doubt on this new era of warfare.

One of the most important ways AI is influencing warfare is through improved intelligence collection and targeting. AI systems can fuse data from satellites, drones, and other sources to identify enemy positions, predict troop movements, and even recognize individual faces.

Proponents of AI warfare quoted by Reuters argue this can enable more focused strikes, potentially reducing civilian casualties. Defense News reports that Lockheed Martin’s Legion autonomous combat vehicle uses AI to detect and engage targets more precisely, potentially limiting collateral damage.

Artificial intelligence is also altering combat logistics and decision-making. By analyzing massive amounts of data on weather patterns, geography, and troop movements, AI can optimize supply chains, identify future dangers, and even recommend courses of action to commanders.

This can result in faster, better-informed decision-making, potentially providing a substantial advantage on the battlefield.

Killer robots

However, the deployment of AI in warfare creates significant ethical considerations. One of the most contentious issues is the development of autonomous weaponry, also known as “killer robots.”

These weapons could select and engage targets without human involvement, raising concerns about accountability and the risk of unintended escalation.

The International Committee of the Red Cross (ICRC) recommended a prohibition on fully autonomous weapons in 2018, citing the ethical and legal quandaries they present.

Despite advances in artificial intelligence, military professionals emphasize the importance of human judgment and oversight in combat. AI systems remain prone to errors and bias.

For example, an AI system trained on biased data sets may produce discriminatory targeting. Furthermore, as a recent Foreign Policy piece emphasized, the decision to use lethal force should ultimately rest with humans.
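The targeting-bias problem described above can be illustrated with a deliberately naive sketch. The data, region names, and "model" here are entirely hypothetical and bear no relation to any real targeting system; the point is only that if past data was collected mostly where operations already occurred, a model can learn group membership itself as a predictor of threat:

```python
from collections import Counter

# Hypothetical records: (region, observed_activity, label).
# The training set over-represents one region among "threat" labels because
# of where past operations happened -- a sampling bias, not ground truth.
train = [
    ("region_a", "convoy", "threat"),
    ("region_a", "convoy", "threat"),
    ("region_a", "market", "threat"),   # ordinary activity, but labeled by location
    ("region_b", "convoy", "civilian"),
    ("region_b", "market", "civilian"),
]

def train_model(rows):
    """Learn the majority label per region -- a deliberately naive classifier."""
    by_region = {}
    for region, _activity, label in rows:
        by_region.setdefault(region, Counter())[label] += 1
    return {region: counts.most_common(1)[0][0] for region, counts in by_region.items()}

model = train_model(train)

# A civilian market-goer from region_a is flagged purely because of region.
print(model["region_a"])  # -> 'threat'
print(model["region_b"])  # -> 'civilian'
```

Real targeting models are vastly more complex, but the failure mode scales with them: skewed collection produces skewed predictions, which is why critics insist on human review of any lethal decision.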