Staying ahead of our global adversaries is essential
By Terry Troy
While there have been a few articles on the potential dangers of artificial intelligence (AI) since the advent of programs like ChatGPT, AI technology has been around for decades, enhancing our lives while offering improved safety for both civilian and military populations.
One Ohio 501(c)(3) nonprofit research institute working in this space is Parallax Advanced Research in Dayton. We interviewed some of their experts in AI and got a glimpse into three ways their work is making an impact on national security.
AI Use Cases, from Military to Civilian
“You have to realize that with AI there is already a lot of transfer from military to civilian uses,” says Darrell Lochtefeld, Intelligent Systems division manager at Parallax Advanced Research. “If you have ever experienced a car braking automatically when you didn’t realize you were too close to the vehicle in front of you, that technology was partially developed by the military—at least initially. There are numerous examples of where AI technology has been transferred out of the military and into civilian applications.”
But a major concern, at least for the average person, is that AI will somehow become sentient, reproduce itself and eventually become a competitor. This is especially true when people think of AI’s use within the military. Don’t worry: those days are still far off, even though some of the technological advancements in AI have been revolutionary. Indeed, there are plenty of uses for AI within our military, and most are mundane in nature.
“AI is used for things like sifting through resumes or suggesting who should review a paper or proposal,” says Lochtefeld. “The military is a very big place, and it has a lot of capability. Sometimes just finding the right capability is a task within itself.”
But there are also more complex applications that keep our forces and non-combatants safe.
Parallax’s AI and Autonomy team specializes in creating smart solutions for tough problems using artificial intelligence. The team works closely with government clients, tackling challenges like airborne combat operations, processing contract data and even helping with medical decision-making during outbreaks. The team uses advanced technologies like natural language processing and computer vision to build custom software tailored to specific needs.
“The military uses AI to count planes on runways or the number of tanks on a battlefield to give us a sense of where things are at in the world,” says Lochtefeld. “The military uses it at times when no human could possibly look through all the data and get a realistic sense of what is going on. It’s used to process complex, real-time data that can come from anything from open sources to very complex sensors that may be in space or somewhere in the air.”
However, whether AI is applied in military or civilian settings, ensuring its accuracy across use scenarios requires rigorous research, testing and development. AI becomes far more useful when it consistently produces accurate responses, which is what makes it reliable and effective in practice.
Ensuring Ethical and Accurate AI Decision-Making
Nathaniel Hamilton is an AI scientist at Parallax whose research focuses on developing safe and trustworthy AI.
“My field focuses on autonomous control and reinforcement learning,” says Hamilton. “This involves infusing safety considerations into the learning process to ensure high-performing AI solutions also prioritize safety.”
Hamilton’s approach, applicable to control scenarios like spacecraft docking and autonomous driving, aims to enhance AI performance and reliability. This is a critical pursuit for both military and civilian applications, especially for the latter, given the rise of autonomous vehicles and the attention drawn by several headline-making crashes.
“You simply can’t always account for what happens in the real world,” says Hamilton. “You are constantly identifying new things that you thought could never happen.”
Hamilton offered one example from his college days when an autonomous vehicle was following a truck carrying a traffic light.
“The autonomous car kept recognizing the traffic signal as a functioning yellow light, rather than a traffic control system that was simply being transported,” he says. “That is something that you would never think could possibly happen. It is literally a one-in-a-billion case.”
But those one-in-a-billion cases happen all the time, especially in an open combat scenario or battlefield environment.
“One of our areas of research is identifying the entire space of possibilities and then making sure we remain safe within the entire realm of possibilities and that AI serves human beings through ethical and accurate decision-making,” he adds.
In addition to work in trustworthy autonomous control, the Parallax AI and Autonomy team is working on a Defense Advanced Research Projects Agency-funded project called “In the Moment,” which involves using AI systems to help humans make decisions in difficult scenarios where there is no right answer. The work is focused on small unit triage and mass casualty care.
“Our team wants to ensure AI systems are serving human beings through ethical decision-making,” says David Ménager, an AI researcher at Parallax. “We designed an AI system for In the Moment that incorporates the key attributes of trusted human medical professionals and makes treatment decisions that align to those individuals’ decision-making profile. Unlike most deployed AI systems—which are hard to interpret by humans—our system enables effective human-machine teaming by giving justifications and exposing the reasoning process behind the decisions it makes, thereby boosting trust in the system’s operation. We envision that this technology may be deployed as a shoulder-worn device to advise combat medics in the field.”
Harnessing Human-AI Collaboration
The In the Moment project is just one example of Parallax’s mission to deliver innovative research and technology solutions via The Science of Intelligent Teaming, benefiting government, industry and academic clients tackling critical challenges. This science explores the interaction, performance and understanding of diverse machine and human teams, leading to enhanced insight, context and explainability.
According to Lochtefeld, intelligent teaming is a dual term that refers to how Parallax optimally utilizes diverse human and machine capabilities and how it teams efficiently with complex systems that typically contain some form of artificial intelligence. Since machines and humans have different qualities and perform differently, intelligent teaming involves understanding the strengths and weaknesses of each member of a team, whether it’s a human or a machine, and using that understanding to develop solutions and improved ways for continued teaming.
“For example, AI excels at crunching numbers within a context but often struggles to recognize when it veers out of context,” Lochtefeld says. “In contrast, humans are adept at identifying contextual shifts. Pairing AI with a human enables the processing of vast amounts of data beyond human capacity, while the human provides crucial contextual understanding, resulting in a synergy where the whole is greater than the sum of its parts. It’s this synergy that creates a convenient and safer environment for AI used by our military, civilians, and business. The future of the technology and its applications are almost endless. It’s a great time to be an AI scientist.”