The landscape of modern warfare is increasingly shaped by autonomous weapons controlled by artificial intelligence (AI).
UC Berkeley Computer Science Professor Stuart Russell warns of the serious implications of this advance.
According to him, these weapons can identify, select, and attack human targets, or objects carried by humans, without direct intervention from other people.
‘Killer’ drones worry experts
Cardboard drones used in the Ukraine war. Image: Olhar Digital/Reproduction
As the conflict in Ukraine has already demonstrated, remotely controlled drones are redefining battlefield dynamics, forcing troops to seek refuge underground.
The cardboard drones used in that conflict are one example.
Russell points out that deploying autonomous weapons means, in practice, that being visible in a combat zone becomes a death sentence: these devices can identify targets and attack them with lethal force.
Despite their undeniable efficiency, these AI-controlled weapons raise a series of pressing ethical questions.
One of the main concerns is the virtually unlimited offensive power these weapons offer: because they are cheap to produce, they could be released en masse, potentially leading to the decimation of specific ethnic groups.
Another critical point is artificial intelligence's limited ability to distinguish between civilians and combatants. The risk of error in making this distinction raises profound ethical questions about the legitimacy of attacks and the protection of civilians in conflict zones.
The Pentagon is already testing the use of thousands of drones to deliver food and other supplies autonomously. However, this does not prevent the technology from being used lethally in the future.