Written by Patrick D. Lewis
Generative AI such as ChatGPT has educators concerned about the future of academic integrity, but the technology is also raising alarms in other arenas, including national security.
The Department of Political Science, the Department of Theology and Religious Studies, and the Institute of Human Ecology hosted a conference on “Generative AI and National Security” on January 31 in Heritage Hall. The event was planned and moderated by political science professor Dr. Jon Askonas and sponsored by Leidos, a defense contractor and engineering research firm.
Across four panel sessions and keynote addresses, more than a dozen experts from technology, defense, national security, journalism, academia, and beyond discussed the most serious security risks raised by AI’s explosive rise in popularity. They also weighed the pros and cons of integrating that same technology into government agencies.
Ron Keesing, senior vice president of technology integration at Leidos, described use cases for generative AI in government, including helping complete routine tasks such as writing reports, filing documents, and drafting press releases and statements. All of these, he noted, still require oversight: the phrase “human in the loop” came up repeatedly to describe the need to supervise generative AI and ensure it is working as expected.
Generative AI, bots, and other forms of artificial intelligence have already proven some critics’ concerns well-founded. Psychological operations aimed at influencing elections and provoking social unrest are being run across the internet, particularly on social media platforms, and AI-enabled financial crime and fraud are proliferating.
Dr. Kirill Abramov, an assistant professor and director of the Global Disinformation Institute at the University of Texas at Austin, is one of many who have argued that generative AI is not going away, and that the right response is to accept the technology while keeping close watch on its potential for abuse.
“We must adapt, not panic,” Abramov said.
Conference speakers didn't just focus on the downsides of generative AI. They also highlighted its benefits.
“This is incredibly powerful and will impact everything in the world,” said Michael Kratsios, managing director of Scale AI.
He believes that generative AI, properly implemented, will change the world for the better, and that the Department of Defense and other government agencies should begin investing in it immediately.
That potential is already being realized: generative AI, and AI more broadly, is in use on the battlefields of Ukraine. Panelists discussed the myriad tasks commanders juggle during combat and argued that generative AI can take on some of them, giving officers more time to focus on operational considerations. They also contended that the U.S. military needs to adopt more AI, since adversaries are already using it extensively and the technology will only keep improving.
The dangerous implications of AI ran through every discussion at the conference. The fourth and final panel examined how to govern AI and how each agency should handle its implementation and internal use. Concerns about generative AI programs smarter than humans are legitimate, explained Lt. Col. Joe Chapa, the U.S. Air Force’s chief AI ethics officer. Neither he nor the other speakers ruled out scenarios in which lives would be at risk if superintelligent AI programs were given significant physical powers and responsibilities.
Dr. Bianca Adair, director of CUA’s Intelligence Research Program, said it is especially important to discuss issues like these at a Catholic university.
“Especially when you’re here at CUA, it’s good to keep in mind that there are positive AIs and negative AIs,” Dr. Adair said. “AI algorithms have no ethical core.”