05 Mar Robot Armies: The Future of National Security
You may not feel it fully, but Artificial Intelligence is certainly changing the world we live in. From educating our youth to shaping how economies operate, experts believe there are plenty more fields where AI can have a significant impact. Among these, national security is one of the most debated.
When it comes to Artificial Intelligence's possible impact on national security, the talk usually centers on the operational level of war. That is, how future wars would be fought given the advanced capabilities that different military powers could have by then, and how those advances could influence battlefield conflict. There are many layers to this, after all.
In considering the operational level of war, ethics in national security becomes the key discussion.
In particular: what role will be played by the decision makers, how much independence will they be given in employing these systems, and how much of the actual work can they delegate to the machines themselves? Again, ethics is key here, though we must also not set aside the tendency of people to become too greedy or too dependent on something that provides them with convenience, and in this case, power.
Artificial Intelligence, should it grow to this level of advancement, will have a profound effect when it comes to the balance of global economies and military competition. With its rapid and continuous progress in the realms of computing power and learning, as well as the increasing availability of data, who knows where we might be in the next ten or even twenty years when it comes to its uses in the military field.
What it can provide us with when it comes to national security strategy formulation is still largely unknown.
Things to Consider:
It can influence who joins and eventually succeeds within the profession, how familiar these people are with what the machines can and cannot tell us, and how responsible oversight is carried out.
To help make this a bit easier to digest, consider how the field of national security has handled nuclear weapons. Within that niche exists a small group of experts, often referred to as the priesthood, who have promoted the idea that before they can contribute meaningfully to the discussion of nuclear weapons, policymakers must first become experts on the subject. They must have a deep understanding of how things work and overcome the challenges in order to have a say in any related decisions.
There are pros and cons to this particular idea.
Having knowledgeable policymakers at the helm of decision making means they will respond better to issues and make more informed choices, instead of simply relying on the data they are fed or the information they have managed to gather themselves.
On the other hand, limiting who can and cannot contribute to the decision-making process creates a very narrow perspective.
Applied to the influence of artificial intelligence on a broad spectrum of national security affairs, this approach presents very real issues should it actually come to pass.
One is a further rise in potential human rights issues. Even today, this is an ongoing debate, given the progress in using UAVs, or unmanned aerial vehicles, in warfare. Sometimes referred to as "killer robots", many question whether their use places innocent human lives at stake, with possible collateral damage resulting from robots that cannot discriminate between friend and foe the way a human soldier can.
So this begs the question: are the lives of soldiers placed in the line of fire worth less than those of civilians? A question of morals, no doubt, and the answer should be an obvious one: their lives count just as much.
Artificially intelligent robots can be superior to humans at certain skill- and rule-based tasks. Utilizing them can certainly help lessen potential loss of life in the military, but can the right balance be found when dividing the tasks between man and machine?
Just how reliable are these AI, anyway?
There's still plenty to consider, after all: important points that often get set aside amid the enthusiasm from both the public and private sectors. If we are to adopt AI for national security successfully, the appropriate AI ecosystem is essential. That ecosystem requires:
- Knowledgeable management and a skilled workforce
- The digital capability for capturing, processing, and utilizing data
- Building a technical foundation of security, trust and reliability
- A steady investment environment as well as a solid policy framework that’s necessary for AI to thrive in this field
Among these, AI reliability should be the top priority. This pertains to trustworthiness, validation, verification, and security. Another critical issue that tends to be overlooked in many of these discussions is how military AI systems, particularly those that fire weapons, could be tested in a safe and controlled manner.
The idea of simply sending these machines into active battlefields with little to no testing is worrisome. Granted, no progress can be made without ample observation and study, but it should never come at the expense of civilians and military personnel.
As for investment, there is no shortage of interest from both the military and commercial sectors, with the latter posing additional complications, especially in policy discussions on the use of autonomous weapons. If anything, this can hinder progress and make it a challenge for governments to begin deploying and managing these AI systems.
Whether that is a good thing or not depends entirely upon one's perspective on the matter. That said, it is worth remembering Stephen Hawking's warning about becoming overly dependent on AI systems: how it could eventually bring about mass extinction and even the end of mankind. Given the trajectory we're on when it comes to warfare and our overall sense of security, these words are worth keeping in mind.