In 1996, Scottie Pippen, Dennis Rodman, and Michael Jordan led the Chicago Bulls to an 87–75 victory over the Seattle SuperSonics, winning the NBA Championship. It was not actually a great game for Jordan, who shot just 26.3 percent from the field compared with his career average of nearly 50 percent, but the Bulls were able to capitalize on nearly a decade of team building among long-time members such as Jordan and Pippen, as well as among newcomers Rodman, James Edwards, and John Salley, who joined the team that year but who previously had played together for many years with the Detroit Pistons. The key takeaway? Championship teams are not built overnight, but through years of practicing together.
Like the Chicago Bulls, if the Department of the Navy (DoN) wants to produce and employ winning artificial intelligence (AI) tools, it needs to intentionally build teams of AI experts and enable them to train together over extended periods. These teams need players with diverse skills—uniformed subject matter experts, network engineers, development, security, and operations (DevSecOps) or software engineers, data engineers, data scientists, machine learning engineers, and robotics experts—who can train and work together over the service life of an AI tool.
It Takes a Team and Time
There are a number of reasons DoN is failing to field AI tools at the desired rate. First, unlike the Defense Advanced Research Projects Agency, the U.S. Army Software Factory, and some of the military research labs, few DoN organizations have bothered to build AI-capable teams with all the requisite personnel for the project being pursued. At present, most of the personnel with AI-relevant skills are distributed in ones and twos across service headquarters, logistics organizations, and acquisition organizations in “analyst” positions or in tactical-level communication and intelligence units. These individuals also are usually the only people in their organizations with formal training in AI, computer programming, data science, or robotics.
Both the Navy and Marine Corps have a bad habit of having their uniformed master's-level computer scientists, data scientists, and information technology (IT) managers serve as generals' or admirals' aides or in other high-profile, demanding, but non-IT-related positions. When actually serving in analyst positions, these individuals sometimes can develop a small prototype or justify investment in a one- or two-year project, but the projects die once the officers rotate: the initial funding runs out; a real-time data feed, user interface, or maintenance plan is needed for tool deployment; or the senior leader who authorized the project moves on. A few computer and data scientists exist in DoN, but data engineers and software or DevSecOps engineers—key components of any AI team—are nonexistent in most organizations.
In addition, many senior leaders do not have the patience to wait for an AI team to collect data, prototype and determine the most effective algorithms, and test them under deployment or combat conditions. Many expect results on par with what is being produced by commercial companies such as Google and Amazon—companies that are decades into their efforts—without understanding the years of data collection, research and development funding, and man-hours that went into generating those results.
Because they rotate to new positions after two or three years, most uniformed senior leaders are unwilling to commit funding to long-term AI projects unless they see proof of a validated application of the AI tool within a year or two. They prefer “quick wins” that highlight their leadership abilities and help their chances for promotion.
Two examples of this are the Marine Corps’ enlisted retention model and supply chain digital twin efforts. The enlisted retention model was a machine learning project intended to identify correlating factors for and forecast enlisted retention rates. From initiation, the project team indicated it would need at least five years of data collection before it could produce statistically viable results, but the project is stalled at the three-year mark and at risk of losing funding.
The supply chain digital twin effort sought to provide visibility into the supply chain and optimize the flow of parts from suppliers, through the Defense Logistics Agency, all the way to tactical-level units based on forecast demand levels. The team working on it had just gotten access to enough of the requisite data to develop prototype algorithms when Marine Corps leaders decided not to renew its funding.
Indeed, the few Department of Defense AI projects that are beginning to show signs of success are past the five-year and, in some cases, approaching the ten-year mark. Project Maven, for example, is a computer vision project intended to “autonomously extract objects of interest from moving or still imagery.”1 Originally overseen by the Under Secretary of Defense for Intelligence, it struggled for years because of both project leaders who lacked the requisite technical background and an insufficient amount of accurately labeled training data. It has survived numerous funding and personnel hurdles and, it is hoped, will become fully operational in its new home under the National Geospatial-Intelligence Agency’s Chief Digital and AI Office.2
Another example is the Army Special Operations Aviation Regiment’s predictive maintenance effort intended to reduce maintenance costs by optimizing when and what maintenance is performed on its aircraft. It took nearly ten years to install equipment, establish the necessary data flows, and get pilots and maintainers to buy in to the process, but successive unit leaders were committed to the project, and the team has now deployed several AI algorithms that are facilitating maintenance processes.
Outpaced by Competitors
In contrast to DoN, the People’s Liberation Army (PLA) has started developing and deploying AI-capable teams—known as strategic support teams—down to the battalion level. These are elements of the Strategic Support Force trained to “operate AI-enabled platforms to provide other branches of the Chinese military with sound situational awareness and support decision-making through rapid intelligence processing.”3 While these teams may not be AI experts at present, they are at least out practicing.
In addition, the PLA has been hosting a series of AI challenges targeted at young, innovative tech talent from both within and outside the PLA. Its 2020 “Stratagem at Heart, Jointness to Win” competition called for participants to “develop and train AI to carry out decision-making and operation planning for complex operations that include target reconnaissance, electromagnetic countermeasures and coordinated firepower strikes” during a joint island strike operation to resolve a “disputed sovereignty issue pertaining to an island that is currently occupied by the adversary.”4 That sounds a lot like what DoN leaders have called on the Navy and Marine Corps to do in publications such as Advantage at Sea and A Concept for Stand-In Forces. The difference is that DoN is struggling to simply collect, manage, and share its data, as evidenced by shortfalls in Joint All-Domain Command and Control efforts, much less apply AI to that data.5
If the Department of the Navy wants to develop, field, and maintain world-class AI tools that will help convince U.S. adversaries the fight is not worth it and ensure they will lose if they try, it needs to start intentionally building AI teams and providing them with the tools and equipment to train and operate as a team. A good starting point would be to put master’s and PhD-level graduates of the Naval Postgraduate School’s Computer Science, Operations Research, and Electrical Engineering Departments into teams, augmented by civilian data and DevSecOps engineers, as well as enlisted communication, intelligence, or autonomous vehicle operators as appropriate to the project. These teams then would need access to computing and programming resources, as well as sufficient time to develop and test algorithms.
Small projects intended to be quick wins and the individual efforts of an intrepid few will not produce the AI-capable workforce or the AI tools DoN will need to prevail in great power competition.

Major Weber, a recent graduate of the Naval Postgraduate School’s master’s in operations analysis program, is I Marine Expeditionary Force’s intelligence systems officer. She previously has served as both a UH-1 Huey pilot and financial management officer (comptroller) in multiple assignments stateside and overseas, including in Afghanistan, Pakistan, and Guatemala.
1. Cheryl Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” DoD News, 21 July 2017.
2. Jaspreet Gill, “Continuing Resolution the ‘One Thing’ Keeping CDAO from Taking Over DoD Project Maven AI Program,” Breaking Defense, 25 October 2022.
3. Jiayu Zhang, “China’s Military Employment of Artificial Intelligence and Its Security Implications,” The International Affairs Review blog, 16 August 2021.
4. Marcus Clay, “The PLA’s AI Competitions: Can the New Design Contests Foster a Culture of Military Innovation in China?” The Diplomat, 5 November 2020.
5. Brandi Vincent, “Hicks Wants More High-Level Oversight of Pentagon’s JADC2 Efforts,” FedScoop.com, 23 August 2022.