Research at AIDA3
What does AIDA3 do?
We perform research that leads to scientific discoveries.
The groundbreaking research we do at AIDA3 will revolutionize multiple aviation sectors, significantly enhancing safety, efficiency, and collaborative capabilities in critical real-world applications. Windracers is providing Purdue with two fixed-wing UAVs, valued at $1.5 million, to be used by AIDA3 researchers.
We’re tackling two primary challenges in leveraging AI for aviation: increasing the autonomy and intelligence of uncrewed aerial vehicles (UAVs) and other systems used throughout the aerial value chain, and ensuring economically efficient, safe, and trustworthy human involvement.
Our team will develop new models and systems that allow UAVs to sense data in real time and take independent actions, producing trustworthy behavior not just in simulated environments but in the physical world. Further, researchers will design and validate systems that pair human operators with autonomous systems to ensure safe, scalable operations while augmenting humans to perform novel remote tasks.
Current Projects
Safe Landing Controller
Robustness and Safety Verification of Total Energy Control System Under Disturbances
Safety assurance for Unmanned Aerial Systems (UASs) throughout their operations is crucial as UASs play a growing role in daily aerial operations. The landing phase is especially critical: its low-speed regime can pose serious risks to a UAS in the presence of external disturbances such as wind and noise. Our analysis kickstarts safety verification methods that help ensure a UAS landing succeeds, minimizing the need for human intervention, unnecessary go-arounds, and forced landings.
Our work on safety verification of fixed-wing vehicle robustness applies Linear Matrix Inequality (LMI) techniques to verify a fixed-wing aircraft’s Total Energy Control System (TECS) and to demonstrate Bounded-Input Bounded-Output (BIBO) stability of the system. Our approach focuses on a continuous-time longitudinal model of a fixed-wing aircraft with bounded wind disturbances. The TECS controller contains cascaded Proportional-Integral (PI) controllers, which complicates safety verification because the integrator states must also be considered. We recast the TECS controller as a Linear Quadratic Regulator (LQR) with output feedback and show that the PI controller gains can be autotuned. We also simulate the longitudinal states of the fixed-wing aircraft under bounded disturbances to validate the method.
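To illustrate the LQR-based autotuning idea, the sketch below computes a stabilizing gain for a toy two-state longitudinal model by solving the continuous-time algebraic Riccati equation. The model matrices, weights, and channel names are illustrative assumptions, not the actual TECS or aircraft parameters.

```python
# Minimal sketch: LQR gain synthesis for a hypothetical linearized
# longitudinal model x_dot = A x + B u (states: airspeed error,
# flight-path-angle error; inputs: throttle, elevator). All numbers
# are placeholders, not real aircraft data.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-0.02, -9.81],
              [0.001, -0.5]])
B = np.array([[1.0, 0.0],
              [0.0, 0.1]])

Q = np.diag([1.0, 10.0])   # state-error weights
R = np.diag([0.5, 0.5])    # control-effort weights

# Solve the continuous-time algebraic Riccati equation, form the LQR gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Stability check: closed-loop eigenvalues must have negative real parts
eig = np.linalg.eigvals(A - B @ K)
print(np.all(eig.real < 0))  # True for a stabilizing gain
```

The LMI/BIBO analysis in the project goes further (bounding the state response under bounded wind inputs), but the Riccati solution above is the core of autotuning the gains.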
Dynamic Path Planning
3D Path Planning With Weather Forecasts, Ground Risks, and Airspace Information for UAV Mid-Mile Delivery
Recent advancements in unmanned aerial vehicles (UAVs) are driving innovation across industries and inspiring a wide range of applications. Among them, mid-mile delivery stands out for its potential to leverage UAVs to greatly increase efficiency and reach regions poorly served by existing transportation systems. However, because of their relatively small size and mass, limited visual and sensor information, and low flight levels, UAVs are generally: 1) susceptible to weather, 2) risky to property on the ground, and 3) subject to airspace restrictions. This project aims to find the optimal 3D path for UAV mid-mile delivery by accounting for these factors: weather forecasts, ground risks, and airspace information are integrated into the costs and constraints of the 3D path-planning algorithm. Simulation results show that the generated 3D path effectively reduces total mission time, fuel consumption, and risk while avoiding restricted airspace and areas.
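The cost-and-constraint formulation can be sketched with A* on a small 3D grid: each cell's traversal cost combines a wind penalty and a ground-risk term, and no-fly cells are excluded outright. The wind, risk, and no-fly maps below are synthetic stand-ins for real forecast, population, and airspace data.

```python
# Minimal sketch: 3D grid A* where cell cost = base + wind penalty + ground
# risk, and a no-fly "wall" at low altitude forces the path to climb.
import heapq

N = 6  # grid is N x N x N
# Hypothetical restricted airspace: x == 2 is blocked at altitudes z in {0, 1}
nofly = {(2, y, z) for y in range(N) for z in range(2)}

def cell_cost(c):
    x, y, z = c
    wind_penalty = 0.3 * z                 # stand-in: stronger winds aloft
    ground_risk = 2.0 if z == 0 else 0.0   # stand-in: property risk when low
    return 1.0 + wind_penalty + ground_risk

def neighbors(c):
    x, y, z = c
    for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
        n = (x + dx, y + dy, z + dz)
        if all(0 <= v < N for v in n) and n not in nofly:
            yield n

def heuristic(c, goal):
    # Manhattan distance; admissible since every cell cost is >= 1.0
    return sum(abs(a - b) for a, b in zip(c, goal))

def astar(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        for n in neighbors(cur):
            ng = g + cell_cost(n)
            if ng < best.get(n, float("inf")):
                best[n] = ng
                heapq.heappush(frontier, (ng + heuristic(n, goal), ng, n, path + [n]))
    return None

path = astar((0, 0, 0), (5, 5, 0))  # climbs over the no-fly wall, then descends
```

A real planner would use weighted sums of forecast wind, population-density risk, and airspace layers, but the structure (costs shape the path, constraints forbid cells) is the same.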
Cognitive Modeling
Multi-modality Based Cognitive Modeling of UAV Remote Operators
Abstract
This project aims to answer four related research questions: 1) how to define and quantify cognitive load (CL) and situation awareness (SA); 2) how to classify and predict CL and SA in real time; 3) whether operators’ expertise during a mission can be quantified using CL and SA; and 4) what minimal setup suffices to answer questions 2) and 3).
CL and SA are two separate concepts; together, they describe a person’s cognitive state. CL can be interpreted as the amount of mental effort and resources required to process information, perform tasks, or solve problems, while SA can be interpreted as the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. In recent years, estimating CL and SA via various sensors and machine learning techniques has become popular. Among all sensors, electroencephalography (EEG) and eye trackers are the most common due to their non-intrusive nature, and extensive research has been done.

Nevertheless, several research gaps remain. First, human cognitive states span multiple modalities; for instance, CL is related to both brain activity and eye movements, yet most work considers only one modality when modeling CL and SA. Second, although machine learning algorithms such as the support vector machine and random forest are widely used to estimate CL and SA from physiological data, deep learning models such as convolutional or recurrent neural networks are rarely used, because experimental datasets are usually too small to take full advantage of them. Third, current work mostly estimates CL and SA in short, simple scenarios and does not demonstrate real-time applicability. To address these gaps, we design an experiment, collect data using sensors covering multiple modalities (EEG, eye tracker, webcam, and microphone), and propose a multimodal deep learning model.
The experiment comprises two tasks: a simple visual-tracking task that collects physiological data in a controlled environment, and a mission-planning task that collects physiological data in a realistic environment. The multimodal deep learning model combines the features extracted from each sensor to estimate and predict CL and SA in real time.
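The fusion step can be sketched as follows: per-modality feature vectors are concatenated and fed to a single classifier. Here the deep per-modality encoders are reduced to raw feature vectors, the classifier is a logistic regression trained by gradient descent, and the "EEG" and "eye-tracker" features and labels are synthetic; a real pipeline would use learned encoders per sensor and all four modalities.

```python
# Minimal sketch: feature-level (early) fusion of two modalities for a
# binary cognitive-load label. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 200
eeg = rng.normal(size=(n, 8))    # stand-in EEG band-power features
gaze = rng.normal(size=(n, 4))   # stand-in eye-tracking features
# Synthetic label: "high load" when a combined signal exceeds a threshold
y = (eeg[:, 0] + gaze[:, 0] > 0).astype(float)

X = np.hstack([eeg, gaze])             # fusion: concatenate modality features
X = np.hstack([X, np.ones((n, 1))])    # bias column

w = np.zeros(X.shape[1])
for _ in range(500):                   # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

acc = np.mean(((X @ w) > 0) == (y == 1))  # training accuracy of fused model
```

The design choice being illustrated is that fusion happens before classification, so cross-modal interactions (e.g. brain activity plus eye movement) can inform a single prediction.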
Autonomous Taxiing
Autonomous Taxiing for Large, Fixed-Wing UAVs
The FAA requires safety operators for taxiing, creating inefficiencies for beyond-visual-line-of-sight (BVLOS) operations. This project aims to design a control algorithm for automated taxiing (hangar to runway before takeoff, runway to hangar after landing), realizing path planning and collision avoidance under disturbances and among multiple static and moving obstacles.
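A heavily simplified version of the collision-avoidance behavior is a stop-and-go rule: follow the taxi route, but hold position whenever a moving obstacle enters a safety radius. The geometry, speeds, and safety radius below are illustrative assumptions, not flight-test parameters or the project's actual controller.

```python
# Minimal sketch: stop-and-go taxi controller on a straight taxiway.
# The vehicle holds position while crossing traffic is within SAFETY_RADIUS,
# then resumes toward the runway. All numbers are placeholders.
import math

SAFETY_RADIUS = 5.0   # m: hold if an obstacle is closer than this
SPEED = 2.0           # m/s taxi speed
DT = 0.5              # s control step

def step(pos, goal, obstacle):
    """Advance one control step; hold if the obstacle is too close."""
    if math.dist(pos, obstacle) < SAFETY_RADIUS:
        return pos  # hold until the conflict clears
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d < SPEED * DT:
        return goal
    return (pos[0] + SPEED * DT * dx / d, pos[1] + SPEED * DT * dy / d)

# Simulate: a ground vehicle crosses the taxiway while the UAV taxis out
pos, goal = (0.0, 0.0), (40.0, 0.0)
min_sep = float("inf")
for t in range(120):
    obstacle = (20.0, 20.0 - 2.0 * t * DT)  # crossing traffic at x = 20 m
    pos = step(pos, goal, obstacle)
    min_sep = min(min_sep, math.dist(pos, obstacle))
# The UAV pauses short of x = 20, lets the traffic pass, and reaches the goal.
```

Note that separation can still dip below the trigger radius while holding, since the obstacle keeps moving toward the stopped vehicle; a fuller design would plan around predicted obstacle trajectories rather than only reacting to proximity.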
Work with us!
Our team has identified key problem areas that each of our research pillars expects to address. Get hands-on in world-class facilities with leaders in autonomous vehicle technologies. Email aida3@purdue.edu about opportunities to get involved.