
A new paradigm for defense research
The Global Security Initiative is unique in the defense research space because it works outside of traditional academic boxes. Its researchers bring many disciplines — engineering, social science, biology and more — to bear on complex problems. In fact, this interdisciplinary style was part of its design from the beginning.
In its first 10 years, GSI's design has proven highly effective, serving as a model for other institutions around the nation. It has led 46 projects and received funding from more than 50 federal agencies. That scale of work and breadth of investment signal a recognition that GSI's pioneering approach to defense research is a promising path toward solving the world's biggest security challenges.
Bio-inspired bots
At the Center for Human, Artificial Intelligence and Robot Teaming (CHART), researchers are developing autonomous systems to control robotic swarms — groups of small robots that work together to complete tasks.
To design these robot systems, the team is studying the ways that social creatures like bees, ants, birds and fish can coordinate and respond to challenges to achieve a common goal. Their robot swarms will mimic this unique behavior perfected by nature.
This technology has many applications, including for defense. The swarms could perform security surveillance, search-and-rescue missions, and detection of chemical, biological and nuclear materials. They could also help monitor weather and climate conditions, transport materials, and collect and transmit data from remote places like under water or outer space.
Another goal is to enable these robots to perform dependably in distant, challenging environments where communications could be limited and unreliable.
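The article does not describe CHART's actual control algorithms, but the kind of decentralized coordination it references is often introduced with Reynolds-style "boids" flocking, in which each agent follows three local rules: cohesion (move toward neighbors), separation (avoid crowding) and alignment (match headings). The sketch below is a minimal, hypothetical illustration in plain Python, not GSI's system; the rule weights and damping factor are invented for the example.

```python
import math
import random

def step(agents, cohesion=0.01, separation=0.05, alignment=0.05,
         min_dist=1.0, damping=0.9):
    """Advance a 2-D flock one tick. Each agent is (x, y, vx, vy)."""
    new_agents = []
    for i, (x, y, vx, vy) in enumerate(agents):
        others = [a for j, a in enumerate(agents) if j != i]
        # Rule 1: cohesion -- steer toward the flock's center of mass
        cx = sum(a[0] for a in others) / len(others)
        cy = sum(a[1] for a in others) / len(others)
        vx += (cx - x) * cohesion
        vy += (cy - y) * cohesion
        # Rule 2: separation -- push away from agents that are too close
        for ox, oy, _, _ in others:
            if math.hypot(ox - x, oy - y) < min_dist:
                vx -= (ox - x) * separation
                vy -= (oy - y) * separation
        # Rule 3: alignment -- drift toward the flock's average velocity
        avx = sum(a[2] for a in others) / len(others)
        avy = sum(a[3] for a in others) / len(others)
        vx += (avx - vx) * alignment
        vy += (avy - vy) * alignment
        # Damping so the flock settles instead of oscillating
        vx *= damping
        vy *= damping
        new_agents.append((x + vx, y + vy, vx, vy))
    return new_agents

def spread(flock):
    """Mean distance of agents from the flock's centroid."""
    cx = sum(a[0] for a in flock) / len(flock)
    cy = sum(a[1] for a in flock) / len(flock)
    return sum(math.hypot(a[0] - cx, a[1] - cy) for a in flock) / len(flock)

random.seed(0)
flock = [(random.uniform(0, 50), random.uniform(0, 50), 0.0, 0.0)
         for _ in range(20)]
before = spread(flock)
for _ in range(100):
    flock = step(flock)
after = spread(flock)  # the scattered agents have drawn together
```

Because each agent reacts only to its neighbors, there is no central controller to lose, which is one reason such designs appeal for remote environments where communications are limited and unreliable.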
Sleuthing for AI-generated news
The Center for Narrative, Disinformation and Strategic Influence frequently brings together experts in cybersecurity, AI, communications, psychology and narrative studies to understand the sociotechnical aspects of information warfare.
In a project that aimed to detect AI-generated text in DARPA’s Semantic Forensics (SemaFor) program, researchers from the center partnered with ASU’s Walter Cronkite School of Journalism and Mass Communication to add a journalism perspective.
The team was tasked with determining whether news articles were written by a person or by an AI-powered text generator. While the computer scientists were stuck, the journalists could pick up clues from elements like language, story structure and grammatical style. Once the team encoded style-guide conventions as features in their detection algorithm, their tool achieved the highest accuracy scores among the competing algorithms.
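The article does not publish the team's actual feature set, but the idea of turning style-guide knowledge into machine-readable signals can be sketched with a few toy features. For instance, AP style spells out the numerals one through nine, a convention a text generator may violate; the feature names and example sentences below are hypothetical illustrations, not SemaFor's detector.

```python
import re

def style_features(text):
    """Toy stylometric features of the kind a journalism-informed
    detector might feed to a classifier (illustrative only)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text)
    # AP style spells out one through nine; bare digits 1-9 are violations
    small_digits = re.findall(r"\b[1-9]\b", text)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
        "ap_numeral_violations": len(small_digits),
        "serial_comma_rate": text.count(", and") / max(text.count(" and "), 1),
    }

human = "The mayor met nine residents. They spoke, argued, and agreed."
machine = "The mayor met 9 residents. They spoke, argued and agreed."
```

In a real system, feature vectors like these would be one input among many to a trained classifier; the point of the anecdote is that domain knowledge, not just raw text statistics, supplied the discriminating signal.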
Protecting the border
Researchers in the Center for Accelerating Operational Efficiency created a simulation toolkit to help detect, deter and disrupt illegal passage into the country more efficiently. It works by identifying probable pathways individuals might take to cross national borders and is currently being tested in a pilot effort in several sectors in Texas.
The toolkit combines data visualization, human factors engineering and mathematical modeling to meet specific security needs on the ground. The underlying technology is based on the same modeling and algorithm approach that Google Maps uses to get people from point A to point B.
When using the toolkit, each station or sector will be able to change the modeling criteria and build its own customized profile for how individuals in that area may act, based on factors such as landscape, prior apprehension locations, and sensor and security camera positions.
Additionally, the toolkit utilizes open-source software so it can be field-deployed on a government laptop in environments that lack internet access.
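The routing approach the article compares to Google Maps belongs to the family of least-cost shortest-path algorithms, of which Dijkstra's is the classic example. Below is a minimal sketch on a terrain-cost grid; the grid, costs and scenario are invented for illustration and are not drawn from the actual toolkit.

```python
import heapq

def dijkstra(grid, start, goal):
    """Least-cost path on a grid where grid[r][c] is the cost of
    traversing that cell (e.g., rough or monitored terrain costs more).
    Returns (total_cost, path) or (inf, []) if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            # Walk the predecessor chain back to the start
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []

# Hypothetical terrain: 1 = easy ground, 9 = steep or monitored area
terrain = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
    [9, 9, 1, 1],
]
cost, path = dijkstra(terrain, (0, 0), (3, 3))
```

Changing the cell costs is the toy equivalent of a sector building its customized profile: raising the cost of cells near sensor or camera positions reroutes the predicted pathways accordingly.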