Our Research

Research at AIDA3 aims to significantly enhance safety, efficiency, and collaborative capability in critical real-world applications across multiple sectors. Windracers is providing Purdue with two fixed-wing UAVs, valued at $1.5 million, for use by AIDA3 researchers.

WHAT DOES AIDA3 DO?

AIDA3 performs research that leads to scientific discoveries by tackling two primary challenges in leveraging AI for aviation:

  • Increasing the autonomy and intelligence of uncrewed aerial vehicles (UAVs) and other systems used throughout the aerial value chain, while
  • Ensuring economically efficient, safe and trustworthy human involvement.

AIDA3’s team will develop new models and systems that allow UAVs to sense data in real time and take independent actions that remain trustworthy not just in simulated environments but in the physical world. Further, researchers will design and validate systems that pair humans with autonomous systems to ensure safe and scalable operations while augmenting humans to perform novel remote tasks.

RESEARCH AT AIDA3

Problems We Solve

The problems AIDA3 is solving with advanced AI and drone technology:

  • 100:1 Efficiency – One human “operating” 100 Ultras at the same time.
  • Simple to Use – A “standard” human operator 100% proficient after two hours of training.
  • Always Flying – 70% asset utilization.
  • Sensing Dynamically – Sensing without constraints.
  • Attack Resilience – No outsider can take control.

Our Research Pillars

In the pursuit of discoveries and solutions, research at AIDA3 is organized into five pillars:

  1. Technology enabling teams of humans to work in unison with many autonomous vehicles.
  2. AI for onboard autonomous sensing, intelligence and control.
  3. AI for supply chain innovation and optimization.
  4. AI for meteorological sensing and weather hazards.
  5. Cybersecurity for AI-enabled aviation.

OUR CURRENT PROJECTS

Automated Taxiing for BVLOS Operations

The FAA mandates safety operators for taxiing, creating inefficiencies for beyond-visual-line-of-sight (BVLOS) operations. This project aims to develop a control algorithm for automated taxiing (e.g., navigating UAVs to and from the hangar and runway). The framework will integrate localization, path planning, collision avoidance, and taxiing conflict resolution.

Autonomous taxiing vehicles must navigate both static and dynamic obstacles, each requiring its own mitigation strategy. For example, while a path planning algorithm can handle static obstacles, dynamic challenges, such as interactions with other autonomous UAVs, demand more advanced conflict resolution schemes. This project will contribute to solving an autonomous taxiing problem that supports high-traffic “hotspots” at any airfield, ensuring the safe and efficient execution of large, autonomous fixed-wing UAV taxiing scenarios in support of end-to-end autonomous fixed-wing UAV operations.
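
To make the division of labor concrete, here is a minimal, hypothetical Python sketch: an A* planner handles static obstacles on a taxiway grid, and a naive priority rule resolves dynamic conflicts by making lower-priority UAVs hold position. The grid abstraction and hold rule are illustrative assumptions, not AIDA3's actual framework.

```python
# Hypothetical sketch: A* for static obstacles plus a priority-based
# hold rule for dynamic conflicts. Not AIDA3's actual framework.
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks a static obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan
    frontier, seen = [(h(start), 0, start, [start])], set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

def resolve_conflicts(paths):
    """Lower-priority UAVs hold position while their next cell is reserved."""
    occupied, schedules = set(), []           # (time, cell) reservations
    for path in paths:                        # earlier UAVs get priority
        t, schedule = 0, [path[0]]
        occupied.add((0, path[0]))
        for cell in path[1:]:
            while (t + 1, cell) in occupied:  # next cell taken: hold here
                t += 1
                schedule.append(schedule[-1])
                occupied.add((t, schedule[-1]))
            t += 1
            schedule.append(cell)
            occupied.add((t, cell))
        schedules.append(schedule)
    return schedules

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]      # 1 = blocked taxiway cell
p1 = astar(grid, (0, 0), (2, 0))
p2 = astar(grid, (0, 2), (2, 2))
print(resolve_conflicts([p1, p2]))
```

A real system would plan over taxiway centerlines with vehicle dynamics and far richer conflict resolution; the hold rule here only illustrates where dynamic conflicts enter the problem.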

Estimating Operator Cognitive Load with Computer Vision

This project aims to answer four research questions: 1) how to define and quantify cognitive load (CL) using computer vision; 2) how to predict CL; 3) whether operators’ expertise can be quantified using CL and situation awareness (SA) during a mission; and 4) what the minimal setup is for answering questions 2) and 3).

Recent advances in vision-based facial analysis have enabled new directions of research for capturing subtle facial dynamics. This work has primarily focused on face recognition and expression recognition, with a recent rise in the popularity of micro-expression recognition, a harder task since micro-expressions are fleeting, involuntary changes in facial muscles that last less than 0.5 seconds. Extending this body of work to unobservable states such as stress, fatigue, and cognitive load is a novel task, since these states are not directly visible on the face. Studies have shown that physiological data such as electroencephalography (EEG) and eye tracking capture indicators of these states. However, these devices are expensive and too intrusive for routine use.

A promising alternative is applying computer vision techniques to webcam data that captures facial information. Webcams are standard equipment on most computers and laptops, making them an ideal candidate for non-invasive cognitive load estimation. Computer vision algorithms can analyze facial expressions, eye movements, and other visual cues to infer cognitive states without requiring additional hardware. Deep networks combined with time-series modeling can be trained to predict the mental state of a subject directly from the video feed.
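
As one illustration of this direction, the sketch below pairs a small per-frame CNN encoder with an LSTM and regresses a cognitive-load score in [0, 1] from a short webcam clip. The architecture, shapes, and hyperparameters are assumptions for the example, not the project's actual model.

```python
# Illustrative sketch (not the project's actual model): a per-frame CNN
# encoder followed by an LSTM, regressing a cognitive-load score in [0, 1]
# from a short webcam clip.
import torch
import torch.nn as nn

class CLEstimator(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.temporal = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.temporal(feats)         # last state summarizes clip
        return self.head(h[-1]).squeeze(-1)      # CL score per clip in [0, 1]

clip = torch.randn(2, 16, 3, 64, 64)             # two 16-frame webcam clips
print(CLEstimator()(clip).shape)                 # torch.Size([2])
```

In a real pipeline, the frame encoder would likely be pretrained on face data, and training labels would come from synchronized physiological signals or task-difficulty annotations.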

LLM-Assisted Route Planning (LLMAP)

As the demand for rapid, cost-effective, and environmentally friendly solutions for delivery, asset inspection, and disaster response continues to grow, multi-agent drone systems are becoming essential in both urban and rural operations. However, these systems face several practical challenges, including (i) the difficulty for non-expert users to effectively communicate commands to drones, (ii) the challenge of balancing multi-step, multi-objective decision-making while incorporating human preference trade-offs, and (iii) the complexity of coordinating multiple drones to accomplish intricate tasks.

To address these issues, our objective is to develop a multi-agent drone system tailored for non-expert users, leveraging Large Language Models (LLMs) to understand and process: (i) commands from non-expert users, (ii) dynamic trade-offs between efficiency, safety, and resource utilization in complex, evolving scenarios, and (iii) task-specific requirements that require adjustments based on human preference. Motivated by this, we propose the LLM-Assisted route Planning (LLMAP) system, which operates under the centralized guidance of an LLM to optimize drone paths, enhance delivery efficiency, and ensure operational safety in complex environments.

By dynamically weighing competing objectives—such as speed, energy consumption, and risk avoidance—while incorporating human preferences at key decision points, LLMAP enables a balance between automation and human control. Remarkably, with just two hours of basic training, a non-expert user can effectively manage and control up to 100 drones simultaneously. This system significantly reduces operational complexity, enabling ordinary users to execute sophisticated tasks without requiring specialized knowledge while maintaining human oversight of critical decisions to ensure optimal trade-offs between competing priorities.
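
The sketch below illustrates the general pattern rather than LLMAP itself: a hypothetical text-completion call (fake_llm here, so the example runs offline) turns a plain-English request into normalized objective weights, and candidate routes are then scored against those weights.

```python
# Hedged sketch of the idea, not LLMAP's implementation: an LLM maps a
# natural-language request to objective weights; routes are scored on them.
import json

PROMPT = ('Convert the user\'s request into a JSON object with weights in '
          '[0, 1] for "speed", "energy", and "risk". Request: ')

def parse_request(request, llm_complete):
    """llm_complete stands in for any text-completion API (an assumption)."""
    weights = json.loads(llm_complete(PROMPT + request))
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}   # normalize weights

def best_route(candidates, weights):
    """Pick the route minimizing the preference-weighted cost."""
    return min(candidates,
               key=lambda r: sum(weights[k] * r["costs"][k] for k in weights))

routes = [
    {"id": "direct",  "costs": {"speed": 0.2, "energy": 0.8, "risk": 0.6}},
    {"id": "coastal", "costs": {"speed": 0.6, "energy": 0.3, "risk": 0.2}},
]
# A canned "LLM" reply so the example runs offline.
fake_llm = lambda prompt: '{"speed": 0.2, "energy": 0.3, "risk": 0.5}'
weights = parse_request("avoid flying over people; battery is low", fake_llm)
print(best_route(routes, weights)["id"])    # -> "coastal"
```

Keeping the LLM at the preference-and-weighting layer, rather than in the low-level control loop, is one way to balance automation against human oversight as described above.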

3D Path Planning for UAV Mid-Mile Delivery

Recent advancements in UAVs bring innovation across different industries and inspire a wide range of applications. Among them, mid-mile delivery stands out for its potential to greatly increase efficiency and to reach regions that are poorly served by existing transportation systems. However, because of their relatively small size and mass, limited visual and sensor information, and low flight altitudes, UAVs are generally: 1) susceptible to weather, 2) risky to property on the ground, and 3) restricted from various airspaces.

This project aims to find the optimal 3D path for UAV mid-mile delivery by considering these factors. Weather forecasts, ground risks, and airspace information are integrated to build costs and constraints for the 3D path planning algorithm. The simulation results show that the generated 3D path can effectively reduce total mission time, fuel consumption, and risk while ensuring restricted airspace and areas are avoided.
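
As a minimal sketch of the kind of cost model this implies (the weights and field names are illustrative, not those of the paper cited below): time, fuel, and ground-risk terms are weighted against each other, while restricted airspace enters as a hard constraint.

```python
# Illustrative edge-cost model for a 3D grid planner. Weights, the wind
# penalty, and the airspeed value are assumptions for the example.
import math

W_TIME, W_FUEL, W_RISK = 1.0, 0.5, 2.0        # assumed trade-off weights

def edge_cost(a, b, wind, ground_risk, restricted, airspeed=20.0):
    """Cost of flying from 3D cell a to b; inf if b is restricted airspace."""
    if b in restricted:
        return math.inf                        # hard airspace constraint
    dist = math.dist(a, b)
    groundspeed = max(airspeed - wind.get(b, 0.0), 1.0)   # headwind slows us
    t = dist / groundspeed
    fuel = t * (1.0 + 0.05 * wind.get(b, 0.0))            # crude wind penalty
    return W_TIME * t + W_FUEL * fuel + W_RISK * ground_risk.get(b, 0.0)

a, b = (0, 0, 100), (400, 0, 120)              # 3D waypoints (m)
print(edge_cost(a, b, wind={b: 5.0}, ground_risk={b: 0.3}, restricted=set()))
```

A graph search (e.g., A* or Dijkstra) over such edge costs then yields the 3D path; the paper below develops the full formulation.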

Deng, C., Sribunma, W., Brunswicker, S., Goppert, J. M., & Hwang, I. (2025). 3D Path Planning With Weather Forecasts, Ground Risks, and Airspace Information for UAV Mid-Mile Delivery. In AIAA SCITECH 2025 Forum (p. 1806).

Reverse Engineering Network Protocols with Deep Learning

Network protocol reverse engineering is crucial for scenarios involving customized, undocumented, or proprietary protocols. Precise protocol modeling is also vital for security research tasks, such as vulnerability analysis, fuzzing, attack detection, and malware analysis.

Existing reverse engineering techniques typically involve analyzing source code, binary code, or packet captures. However, the first two approaches often prove infeasible due to limited access, obfuscation, and packing methods.

To address this limitation, we propose an approach that extracts protocol semantics from network packets using an encoder-decoder model. First, we construct a dataset consisting of a diverse range of protocols. Next, we design a neural network architecture that we pretrain to understand the structure of network packets. We then fine-tune this model to accurately identify semantic fields within the packets. Finally, we compare our method with state-of-the-art reverse engineering techniques across multiple protocols and highlight its performance based on various metrics.
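
For illustration only, the sketch below shows one way the fine-tuning stage could look: a small byte-level Transformer encoder with a per-byte classification head that assigns each packet byte a semantic field label. The field classes, model sizes, and random inputs are assumptions for the example, not the project's architecture.

```python
# Illustrative sketch of per-byte semantic field labeling with a small
# Transformer encoder. Field classes and sizes are assumptions.
import torch
import torch.nn as nn

FIELDS = ["length", "checksum", "address", "timestamp", "payload"]

class FieldTagger(nn.Module):
    def __init__(self, d_model=64, n_classes=len(FIELDS)):
        super().__init__()
        self.embed = nn.Embedding(256, d_model)          # one token per byte value
        self.pos = nn.Embedding(512, d_model)            # packets up to 512 bytes
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)        # per-byte field logits

    def forward(self, packets):                          # packets: (B, L) byte ids
        pos = torch.arange(packets.size(1), device=packets.device)
        x = self.embed(packets) + self.pos(pos)
        return self.head(self.encoder(x))                # (B, L, n_classes)

pkts = torch.randint(0, 256, (4, 40))                    # four 40-byte packets
labels = FieldTagger()(pkts).argmax(-1)                  # predicted field per byte
print(labels.shape)                                      # torch.Size([4, 40])
```

The pretraining stage described above could, for instance, use a masked-byte objective on unlabeled captures before this supervised fine-tuning step.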

Safety Verification for Autonomous UAS Landing

Safety assurance for Uncrewed Aerial Systems (UASs) throughout their operations is crucial as UASs take on a growing role in everyday aerial operations. During the landing phase of flight in particular, the low-speed environment can pose serious risks to a UAS in the presence of external disturbances such as wind and noise. Our analysis kickstarts safety verification methods that ensure a UAS landing can succeed, minimizing the need for human intervention, unnecessary go-arounds, and forced landings.

Our work on safety verification for fixed-wing vehicle robustness applies Linear Matrix Inequality (LMI) techniques to verify a fixed-wing aircraft’s Total Energy Control System (TECS) and to demonstrate the Bounded-Input Bounded-Output (BIBO) stability of the system. Our approach focuses on a continuous-time longitudinal model of a fixed-wing aircraft with bounded wind disturbances. The TECS controller contains cascaded Proportional-Integral (PI) controllers, which complicate safety verification because the integrator states must also be considered. We recast the TECS controller as a Linear Quadratic Regulator (LQR) with output feedback and show that the PI controller gains can be autotuned. We also simulate the longitudinal states of the fixed-wing aircraft under bounded disturbances to validate the method.
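
As a toy illustration of the LMI step (using the cvxpy modeling library as one possible tool, and a 2x2 matrix standing in for the actual closed-loop TECS dynamics): finding P > 0 with A^T P + P A < 0 certifies asymptotic stability of the linear closed loop, which for a linear system implies a bounded response to bounded disturbances.

```python
# Toy LMI sketch: certify stability of an assumed stable closed-loop A by
# finding P > 0 with A^T P + P A < 0. The matrix below is a stand-in, not
# the actual TECS longitudinal model.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0,  0.5],
              [-0.5, -2.0]])                  # illustrative closed-loop dynamics

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                    # strictness margin for the LMIs
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)                         # 'optimal' => stability certified
print(np.round(P.value, 3))                   # the Lyapunov certificate
```

The verification in our work additionally accounts for the integrator states and the bounded wind disturbance terms, which enlarge the LMIs beyond this minimal feasibility check.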

Computing Operator Situation Awareness in Real Time

This project aims to answer three research questions: 1) how to define and quantify situation awareness (SA); 2) how to compute SA in real time; and 3) how to maintain and maximize the operator’s SA.

BVLOS UAV operations mostly rely on a computer interface through which an operator monitors the UAV’s states, position, and environment, and makes decisions to complete the mission. Decades of study have established that the operator’s SA, defined as the perception of critical elements in the environment, the comprehension of their meaning and significance, and the projection of future events based on this understanding, is critical for ensuring safety. In this work, we formally and mathematically define level-1 SA, perception, as a real number between 0 and 1 computed from the operator’s memory, the SA elements, and their corresponding SA values. We then leverage real-time streaming eye-tracking data to compute level-1 SA during real operations.

Finally, we account for the operator’s memory and attention constraints and derive policies that maintain and maximize the operator’s level-1 SA value by making the interface react to it.
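
As a minimal sketch of how such a score could be computed from streaming gaze data: the toy model below refreshes an element's SA value to 1 when the eye tracker shows a fixation on it and decays it exponentially otherwise. The decay constant, equal element weights, and fixation rule are all illustrative assumptions, not the project's actual formulation.

```python
# Toy level-1 SA model: per-element values in [0, 1] refreshed by gaze
# fixations and decayed by a simple exponential memory model.
import math

class Level1SA:
    def __init__(self, elements, decay=0.1):
        self.sa = {e: 0.0 for e in elements}   # per-element SA in [0, 1]
        self.decay = decay                     # memory forgetting rate (1/s)

    def update(self, fixated, dt):
        """fixated: element the gaze currently rests on (or None)."""
        for e in self.sa:
            if e == fixated:
                self.sa[e] = 1.0               # perceived: refresh to full value
            else:
                self.sa[e] *= math.exp(-self.decay * dt)  # memory decay
        return sum(self.sa.values()) / len(self.sa)       # aggregate level-1 SA

sa = Level1SA(["altitude", "battery", "traffic"])
for gaze in ["altitude", None, "battery", None]:
    print(round(sa.update(gaze, dt=1.0), 3))
```

An SA-reactive interface could then, for example, highlight elements whose values have decayed below a threshold to draw the operator's attention back to them.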

Raven: Testing Collision Avoidance Against False Data Injection

Finding collision-free paths is essential for autonomous multi-robot systems (AMRs) to complete missions ranging from search operations to delivery tasks. To achieve this, AMRs rely on cooperative collision avoidance algorithms. Unfortunately, the robustness of these algorithms against false data injection attacks (FDIAs) remains unexplored.

In this work, we introduce Raven, a tool that identifies effective and stealthy semantic attacks (e.g., herding). Effective attacks minimize positional displacement and the number of false data injections by using temporal logic and stochastic optimization techniques; stealthy attacks remain within sensor noise ranges and maintain spatiotemporal consistency. We evaluate Raven against two state-of-the-art collision avoidance algorithms, ORCA and GLAS.

Our results show that a single false data injection can impact multi-robot systems by causing position deviations or even collisions. We evaluate Raven on three testbeds: a numerical simulator, a high-fidelity simulator, and Crazyflie drones. Our results reveal five design flaws in these algorithms and underscore the importance of developing robust defenses against FDIAs. Finally, we propose countermeasures to mitigate the attacks we have uncovered.
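
For intuition, the sketch below shows the kind of injection Raven searches for, in a much-simplified form: a smoothly drifting offset on a neighbor's reported position that stays inside an assumed sensor-noise bound. The bounds and the drift rule are illustrative; the real tool derives attacks via temporal logic and stochastic optimization.

```python
# Conceptual sketch of a stealthy, spatiotemporally consistent false data
# injection. Noise bound and drift rate are assumptions for the example.
import numpy as np

NOISE_BOUND = 0.05   # assumed sensor-noise radius (m); attack stays inside it
DRIFT_STEP = 0.01    # assumed per-step drift toward the herding direction (m)

def stealthy_offset(t, direction):
    """Injection that grows smoothly over time but stays inside the bound."""
    unit = direction / np.linalg.norm(direction)
    return min(DRIFT_STEP * t, NOISE_BOUND) * unit

true_pos = np.array([1.0, 0.0])         # neighbor's true position
herd_dir = np.array([0.0, -1.0])        # direction the attacker biases reports
for t in range(8):
    reported = true_pos + stealthy_offset(t, herd_dir)
    print(t, np.round(reported, 3))     # what the victim's avoidance loop sees
```

Because each spoofed report stays within plausible noise and changes gradually, simple range or consistency checks on the victim would not flag it; this is what makes such attacks stealthy.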
