Since the beginning of armed conflict, humanity has looked for ways to improve its ability to wage war. The progression of technology has provided militaries, most notably that of the United States, with a myriad of strategic and tactical advantages, among them the unmanned aerial vehicle (UAV). The integration of the UAV into the battle space, however, has not come without controversy. The notion of a platform that divorces militaries and governments from the reality of war and challenges the ethics of full autonomy is cause for concern. The Fire Scout, a UAV newly adopted by the U.S. Navy, does not stray from this controversy, but rather, adds to the ethical dilemmas that challenge our military forces today.
The Fire Scout has come into its own as a UAV and has proven itself to be the cutting edge of military technology. Unlike other UAVs such as the Predator drone, the Fire Scout can take off and land without any human assistance. In 2006 the Fire Scout became the first aerial platform to land on and take off from the deck of a Navy ship completely unaided. In addition to its artificial intelligence, the Fire Scout carries "two four-packs of 2.75-inch rocket launchers," which are "designed to fire advanced precision-kill weapon-system laser-guided rockets." Nonetheless, the Navy has indicated that firing rockets "would call for human control," for now.1
And herein lies the problem. With the advent of autonomous vehicles, the ethics of war begin to blur. That is not to say that a platform like the Fire Scout should not be used by the Navy. Clearly it performs missions, such as reconnaissance and precision targeting, that benefit the war fighter. But that does not remove the concern of having a platform that could potentially use force without human control.
Artificial Intelligence, Artificial Ethics
One of the most common ethical arguments against the use of fully autonomous platforms is the potential for violation of the Just War Theory,2 more specifically, justice in war, or jus in bello. One of the two main principles of jus in bello is the discrimination principle: "only military targets (personnel and installations) can be attacked." To be ethically sound, fully autonomous vehicles would have to be able to comply with this principle. Unfortunately, the probability that an autonomous vehicle would be able to fulfill this ethical requirement seems low. As Diederik W. Kolff writes about UAVs, "As long as one sticks to the ROE [rules of engagement] and international law, the fact that there is a human in the cockpit of the weapon platform does not change the outcome." And while this assertion is correct, it rests on the assumption that artificial intelligence could be developed with ethical norms that match those of human beings. This would mean "proper programming" to avoid violating the principle of distinction, which is voiced not only by the Just War Theory but also by the Geneva Conventions.
This would pose no problem if the programming given to the Fire Scout were flawless and if the platform had no glitches or probability of mistakes. But it is well known that no feat of engineering, no matter how great, is immune to error. Granted, human beings are not immune to error, either. Nevertheless, to claim that a UAV could conduct strike missions without human judgment for support is to say that the Fire Scout is capable of distinguishing and reasoning as a human would. This is clearly not the case.
Another concern regarding the use of UAVs is that it increases the risk of war. It is clear that advances in UAV technology will lower the political cost of going to war and will save American lives as UAVs assume combat roles originally performed by humans. But what does this say about the risk of going to war?3 One might answer that by replacing humans with automated machines we are reaching the point of war without casualties. Another answer might be that we are saving the lives of future generations, as they will go to combat through a computer screen rather than fly over the battlefield.
Those answers seem correct, insofar as the use of force remains in human hands. The utility of limited war and saving lives becomes irrelevant once the risk of international conflict depends on the judgment of a robot. Take the killing of Abu Ali al-Harithi, the suspected mastermind of the USS Cole (DDG-67) bombing. His death marked an interesting transition of UAVs from a "surveillance drone to a hunter-killer asset." It is believed that the killing took place with the cooperation of Yemeni officials, or at least with their knowledge. Yet despite praise from the Pentagon, Yemen's government never officially acknowledged either cooperation in or knowledge of the matter.4
Or take the case of current operations in Pakistan. In December 2009 Taliban leader Mullah Omar was reported to be hiding in or around the city of Quetta. Pakistani officials denied the claims and disputed the intelligence that pointed to Omar's whereabouts. Moreover, Pakistan does not publicly condone Predator strikes, but they occur nonetheless in authorized areas such as the Taliban tribal belt; an attack anywhere else would constitute a violation of Pakistan's national sovereignty.5
The Risks of Autonomy
Granted, it is highly unlikely that armed conflict would erupt between the United States and Yemen or Pakistan based on unauthorized Predator strikes. What is important to take from these two cases, however, is that the tensions between the countries were the product of UAV strikes based on human discretion. Had the Predator been replaced with a fully autonomous Fire Scout, the outcome could have been much different. Given the right set of conditions, or perhaps a technical malfunction, a platform like the Fire Scout might have attacked Abu Ali al-Harithi without approval from Yemeni officials, or attacked Mullah Omar in territories not authorized by Pakistani officials. Either scenario might not have led directly to war, but it could have caused irreparable damage to relationships with countries that provide valuable support in the war on terrorism. The question is whether the United States, or the Navy for that matter, is willing to take that gamble.
Identifying the issue is not enough, though. Perceiving the road ahead does not mean you cannot change direction. Various arguments have surfaced regarding a remedy for the current ethical dilemma. The consensus is that the use of UAVs in combat is justifiable, but that full autonomy is a cause for concern. In his article "Unmanned Aerial Vehicles," James Kunz contends that the use of autonomous vehicles may be justifiable but "stands on unstable ground." He argues that "UAVs should be banned, even at the development stage, until the global economy is equivalent enough to allow armed conflict to reside in the hands of machines only."6
On the other hand, Robert Sparrow asserts in "Predators or Plowshares?"7 that arms control might be the next step in curtailing the negative effects of autonomous vehicles, though it is unlikely to occur in the near future. Sparrow recognizes that because the United States has the lead in UAV technology, it will take the development of such programs by China and Russia to induce a motion to regulate their development and employment.
Both solutions seem rather extreme. Banning UAVs will suffer the same fate as any other arms ban: it will be impossible to enforce, given the anarchic nature of the international system and the fact that such technology is not nearly as threatening as, say, nuclear technology, which itself struggles with regulation. But waiting for UAV technology to get out of hand before providing a solution is not the answer, either.
Defining Ethical Parameters
A preemptive solution can be sought, though. One would hope that the United States realizes the Pandora's box that could be opened by the employment of fully autonomous strike UAVs. The Navy could begin by setting the example and committing to prohibit the Fire Scout from using weapons without human control. Much as China declared a no-first-use policy after acquiring the nuclear bomb, this could set an international precedent that would preempt the need for UAV arms control in the future.
The effort to protect the lives of American service members and increase military capabilities is worthy in almost all of its facets. But to remove the decision to take life from human beings borders on the unethical. Moreover, the accidental start of conflict by UAVs might put even more American lives at risk, which would defeat the original purpose of the UAV altogether. Moral dilemmas and ethical uncertainty will inevitably come to light with the emergence of new technology, but it is the responsibility of the United States and the international community to recognize and mediate coming challenges to avoid unethical and unnecessary conflict.
1. Erik Sofge, "Robot Chopper: The Navy's Smartest UAV," Popular Mechanics, March 2007, www.popularmechanics.com/science/air_space/4213071.html; "Fire Scout VTUAV Unmanned Aerial Vehicle," Naval-Technology, www.naval-technology.com/projects/firescout/.
2. Diederik W. Kolff, "Missile Strike Carried out With Yemeni Cooperation - Using UCAVs to Kill Alleged Terrorists: A Professional Approach to the Normative Bases of Military Ethics," Journal of Military Ethics, Vol. 2, No. 3 (2003), pp. 240–244.
3. Robert Sparrow, "Predators or Plowshares?" IEEE Technology and Society Magazine, Spring 2009, p. 26.
4. Kolff, p. 243.
5. David Montero, "Will US Drones Start Attacking Mullah Omar in Pakistan?" Christian Science Monitor, 6 December 2009.
6. James Kunz, "Unmanned Aerial Vehicles," http://cseserv.engr.scu.edu/StudentAccounts/ENGR019Winter2004/JKunz/Jkunz_ResearchPaper.pdf.
7. Sparrow, p. 28.