Youth Herald: Explainable AI in the US Navy


The San José Youth Commission is the official youth advisory group to the mayor and city council. We represent the youth of San José and work to:

  • Empower youth to pursue their careers and encourage them to be civically engaged through local and city-wide events and initiatives.
  • Foster a safe, inclusive, and accessible space for youth in San José to express their passions and interests.
  • Provide equitable access and support to marginalized youth communities.
  • Promote awareness about various issues and opportunities to San José youth.
  • Advise and prompt City Council to act upon youth priorities and input.

Written by Aroshi Ghosh (Senior at Leland High School)

The video Autonomy for Unmanned Systems, featuring Amir Quayumi’s work with smart robots, demonstrated that intelligent systems have become integral to the operations of the U.S. Navy and Marine Corps. In fact, as early as 1917, the U.S. Navy used radio-controlled drones to counter German U-boats. After the September 11th attacks in 2001, the CIA also used unmanned autonomous systems to conduct lethal covert operations against militant groups. Former Secretary of Defense James N. Mattis predicted that artificial intelligence (AI), the underlying technology behind these unmanned vehicles, will change the “fundamental nature of war,” with our military becoming mere spectators of combat.

Why is Ethical AI So Important?

The Peter Parker principle stipulates that “with great power comes great responsibility,” and ethics must be the cornerstone of how AI technology is deployed by our armed forces. After all, the technology that makes science fiction come to life also poses serious challenges for our troops because of its unique applications and unpredictable human impact. Hollywood science fiction movies, such as 2001: A Space Odyssey and Blade Runner, present an alarmist, dystopian vision of AI that does not do justice to the potential of these intelligent technologies to address issues such as climate change, sustainability, and disaster relief while keeping our troops safe.

While there is considerable excitement about using robotic vehicles and autonomous weapons to protect human combatants and simplify decision-making for analysts, there is an equal measure of anxiety over killer robots and inaccurate facial recognition technologies that mislabel targets. Manipulated data, imperfect data recording, and the difficulty of selecting quality training data are some of the challenges facing the effective deployment of AI in the military. Therefore, it is imperative that we think beyond using AI to achieve combat excellence and consider the legal, social, and ethical ramifications of deploying this promising technology.

As Colonel Katherine Powell says in the 2015 movie Eye in the Sky, “There is a lot more at stake than you see in this image,” a line that highlights the moral dilemma of modern warfare’s dependence on autonomous aircraft and technology. The movie’s drone strike on an al-Shabaab terrorist camp not only opens a debate about acceptable collateral damage but also walks us through a step-by-step decision-making process, amid shifting realities, that negotiates life and death for the characters involved.

A similar conundrum faces our Navy: defining effective AI solutions that protect our nation’s “freedom, preserves economic prosperity, keeps the seas open and free…(and) defends American interests around the globe…” while ensuring that the technology is “responsible, equitable, traceable, reliable, and governable.”

What are Some Current Problems With AI?

One cannot help but wonder whether AI-driven super-intelligent machines can truly duplicate the essence of human intelligence, emotions, thoughts, and feelings, or whether we are merely projecting computing solutions onto every problem because AI has become the most fashionable buzzword in the technology world. We often overlook that AI systems can also be leveraged for the more mundane, repetitive tasks in the forces, such as predicting vehicle maintenance schedules, detecting anomalies to prevent enemy attacks, and deploying drones for improved threat detection. Through the featured videos, I learned how autonomous submarines could be leveraged to map seabeds, detect mines, identify invasive species of seagrass, and perform rescue operations. I am inspired by expert systems and their applications to the real-world problems that face us today.

The video on Autonomous Ships and Boats, which showcased the work of Dr. Bob Brizzolara, also demonstrated how naval researchers were inspired to imitate natural life forms, such as the wing and fin designs of birds, fish, and dolphins, when developing structural features to enhance the mobility, dexterity, and intelligence of their autonomous systems. Dr. Brizzolara explained how autonomous vehicles function in uncharted environments and can be deployed in dangerous situations because they are expendable.

Despite these pioneering achievements, decision-making in critical situations still involves human beings, who are naturally more capable of contextualizing information and interpreting metadata. These observations reinforced my belief that AI must be leveraged appropriately and cost-effectively to solve problems in the armed forces. AI is not a panacea, but it can improve efficiency in specific areas and make our lives easier; in other areas, human intelligence still has the edge.

What is the Solution?

The optimal strategy for modernizing the Navy, until AI can be trusted in all combat situations, is to identify tasks with clear rules and patterns so that we can automate use cases that are predictable and non-disruptive. By incorporating technology from the private sector into non-combat and support functions instead of designing solutions from scratch, we can effectively use AI in areas such as field training, maintenance, administration, war-game strategy, supply chain logistics, and personality assessments. By prioritizing the integration of legacy databases into cloud and AI systems, we can ensure that the data feeding those systems is accurate, reliable, inexpensive, operational, and secure. Finally, identifying how machines and human beings can work together seamlessly helps to create ideal AI solutions. Several initiatives, such as the Avenger Naval AI Grand Challenge, have been launched by the Department of Defense in collaboration with the service, naval laboratories, and the Joint Artificial Intelligence Center to better incorporate AI technologies into the fleets.

Self-learning AI that uses biology-inspired neural networks with a “watch and learn” approach to “accumulate experience, knowledge, and skills” can, to a degree, define actions and provide context explaining why those actions are necessary. Nevertheless, these machine learning algorithms are also susceptible to data poisoning and bias.

What is the Effect of Data Poisoning and Bias?

A fundamental requirement of viable AI solutions based on machine learning is constant access to error-free and diverse data. This is a significant challenge for the Navy due to several factors, such as storage in legacy databases, poor Internet connectivity, dependency on open-source software, outdated security systems, poor user interfaces, and the cost of designing proprietary technology. Training deep learning models can also be a slow and resource-intensive process that requires enormous computational power.
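
To make the idea of data poisoning concrete, here is a minimal sketch in Python using scikit-learn. The synthetic dataset, the simple classifier, and the 15 percent poisoning rate are illustrative assumptions, not details of any Navy system; the point is simply that flipping a small fraction of training labels measurably degrades a model trained in good faith.

```python
# A minimal sketch of label-flipping data poisoning (illustrative only:
# synthetic data, a simple classifier, and a hypothetical 15% attack rate).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# "Poison" the training set by flipping 15% of its labels (0 <-> 1).
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.15 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

# Same model, same features, corrupted labels.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Comparing the two printed scores shows the damage that a quiet corruption of training data can do, even when the model and features are untouched.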

One option is to initially adapt AI solutions from the private sector to build the infrastructure needed to adopt the technology more broadly. The video on Data Science with Reece Koe highlighted this effort to create better intelligent systems. Project Maven, initiated by the Pentagon, uses machine learning to sort through intelligence, surveillance, and reconnaissance data, including video from unmanned systems, paper documents, computer hard drives, thumb drives, and so on, which analysts can subsequently review.

What is the Black Box Effect?

Prior to the large-scale deployment of autonomous systems in the field, a thorough understanding of the logic and sequence of their decision-making is essential. The challenge of “making machine learning explain its decision-making process to a human” is the next frontier of development. Without control, traceability, and accountability, the armed forces cannot use AI technology reliably. When it comes to real warfare, decision-making cannot be easily delegated to machines. As Stephen Thaler of Imagination Engines, an expert in AI for robots and drones, said, “Admirals want somebody to court-martial when things go awry.” Additionally, the legal accountability of robots is currently unclear under international law, and robotic decisions must conform to the ethical standards defined by society and international law.

The black box surrounding machine learning can be demystified to a certain extent. Decision trees can be used to run simulations over numerous data points and associate inputs with corresponding actions, and specific parts of a neural network can be isolated by functionality and their assigned weights examined to trace the logic of decision-making. However, this approach does not explain the logic behind an AI’s decisions so much as map out a predictable route under certain conditions. Machine learning can therefore also produce many false positives and simplistic, unviable explanations based on limited data, which brings the reliability of AI into question.
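
One common way to put the decision-tree idea above into practice is a surrogate model: train a small, readable tree to imitate the predictions of an opaque model, then read off its if/else rules. The sketch below is a minimal illustration with scikit-learn; the synthetic dataset and the random forest standing in for the black box are assumptions for demonstration, not any actual naval system.

```python
# A minimal sketch of a surrogate explanation: fit a shallow, readable
# decision tree to mimic an opaque model's predictions, then print its
# rules. The dataset and the random-forest "black box" are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)

# Stand-in black box: an ensemble whose internals are hard to read.
black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Surrogate: a depth-3 tree trained on the black box's *outputs*, not y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box here.
print("fidelity:", surrogate.score(X, black_box.predict(X)))

# Human-readable if/else rules that approximate the black box's behavior.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

Note that the surrogate only approximates the black box as far as its fidelity score indicates, which echoes the caveat above: the mapped-out rules are a predictable route, not the model’s true logic.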

Trust is the most important component of an optimal human-machine synergy in the Navy because the cost of a bad decision there is so high. Today, AI systems are unable to adequately explain their decisions and actions to human users, and inaccurate data and unreliable algorithms prevent an effective partnership between humans and machines.
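
In its simplest form, “explaining a decision to a human user” can mean reporting which inputs mattered most. The sketch below uses permutation importance, a standard technique that ranks input features by how much a model’s accuracy falls when each one is randomly shuffled; the synthetic data and generic feature names are illustrative assumptions, not any fielded system.

```python
# A minimal sketch of permutation importance: shuffle one input feature
# at a time and measure how much the model's accuracy drops. Synthetic
# data and generic feature names stand in for a real system's inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5, random_state=2)
model = RandomForestClassifier(random_state=2).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=2)
for i in range(5):
    # A larger mean drop means the model leans more heavily on this feature.
    print(f"feature f{i}: {result.importances_mean[i]:.3f}")
```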

Explainable AI models can pair effectively with smart human-computer interface techniques and symbolic logic to explain the reasoning behind AI decisions without compromising prediction accuracy or performance. The future of AI in the armed forces will be stronger if we can successfully create transparent AI solutions that prevent explicit or illegal bias, understand the context and environment of operation, communicate with other intelligent machines, and engage in understandable explanation dialogues with the end user. These are exciting times in the field of AI and autonomous systems, and our quest must be to identify the appropriate opportunities to use this technology ethically.

