AIrTonomy: A Cyber-Physical Research Infrastructure for Next-Generation Autonomous Aerial Vehicles
- Executive Summary
- Workshop Goals and Objectives
- AIrTonomy: Envisioning a new experimental infrastructure for aerial autonomy sciences
- Workshop Planning and Structure
- Technical Demonstrations
- Lightning Talks – Research Use Cases
- Panel Discussions – Research Use Cases
- Poster Sessions – Research Use Cases
- User Feedback
- Operations Model, Business Model, and Sustainability of AIrTonomy
- Broader Impacts
- AIrTonomy Sponsors
- References
Executive Summary
The AIrTonomy workshop, hosted September 11-12, 2024, brought together researchers, industry experts, policymakers and students to discuss research use cases for a new experimental infrastructure to demonstrate the safety and trustworthiness of artificial intelligence (AI) and machine learning (ML) for autonomous aerial vehicles (AAVs). Purdue University’s Center on AI for Digital, Autonomous and Augmented Aviation (AIDA3) hosted the workshop held in West Lafayette, Indiana.
The goal of the two-day workshop and its pre- and post-workshop activities was threefold: First, to share our vision of the experimental cyber-physical infrastructure of AIrTonomy and the progress toward advanced components and next-generation design capabilities. AIrTonomy will support experimental research on safe and trustworthy AI and data-driven ML for next-generation AAVs, using tangible demonstrations in existing research facilities and test beds and an interactive presentation of the envisioned advancement of the existing facilities.
Second, the workshop sought to identify the research needs, use cases and unique experimental infrastructure requirements of potential users of AIrTonomy infrastructure. We collected this information through participation in lightning talks, poster presentations, online discussions and the completion of a research use case survey.
Third, pre- and post-workshop activities were focused on recruiting and learning from a potential lead user community that would engage in pilot experiments in the near future.
The in-person workshop was structured around four interactive forms of participation by potential users and other stakeholders to achieve these goals: 1) demonstrations, 2) lightning talks to showcase research use cases, 3) poster presentations and 4) panel discussions to refine the targeted research user community and their research needs, to detail infrastructure requirements and to discuss a plan for the sustainability of the research infrastructure.
During pre-workshop activities, the AIrTonomy team detailed its vision of the AIrTonomy infrastructure and refined the specification of six physical and six cyberinfrastructure components, all tightly integrated to eventually provide all U.S. autonomy researchers access to its unique capability of performing experiments in physical environments remotely. After forming a steering committee with representatives from other R1 institutions as well as industry partners, the team focused on the physical components' specifications, namely the Purdue Unmanned aerial Proving ground (PUP), which brings together four different indoor and outdoor motion-capture testing facilities. These facilities are combined with a diverse fleet of small and large AAVs, including fixed-wing and multi-rotor vehicles, supported by a state-of-the-art communication and networking infrastructure, organized within a 15-mile triangular corridor. These physical facilities will be complemented with a cyberinfrastructure, including remotely accessible digital twins that support virtual experimentation in the cloud but also augment remote physical experimentation (e.g., sensor emulation).
During pre-conference webinars and through an online community, the AIrTonomy team communicated the key benefits and capabilities of existing and future infrastructure components: 1) real-time remote access through the cloud to ground truth data collected in physical facilities, 2) simulated experiments using high-fidelity digital twins of specific physical objects in the PUP (e.g., a vehicle, a facility) based on real-time data and 3) experimental workflows for designing and launching customized flights (e.g., choosing a specific vehicle) and quickly moving from simulated flights in the cloud to indoor and outdoor experiments.
The demonstrations performed by the AIrTonomy team showcased the unique research capabilities of existing facilities for designing and validating safe and trustworthy AI and ML models for aerial autonomy. They initiated robust discussions on AI/ML research use cases and infrastructure requirements to tackle research questions related to 1) complex missions (e.g., outdoor conditions such as beyond visual line of sight [BVLOS] missions), 2) operations in sensor-degraded environments (e.g., GPS-denied environments), 3) human-autonomy teaming and 4) remote sensing at scale in areas such as environmental monitoring (e.g., digital forestry, forest inventory management or wildfire monitoring).
Participants learned about and confirmed the importance of the capabilities of the facilities used for demonstrations: 1) real-time access to ground truth data, such as millimeter-level measurement of a vehicle's position using motion capture; 2) real-time sensor fusion and data streaming, such as integrating data from sensors embedded in wearable devices and brain-computer interfaces (e.g., eye tracking, EEG and video) during human-sensing experiments involving operators interacting with drones remotely, or fusing vehicle data such as position, velocity and propulsion collected via motion-capture cameras with GPS and other communication and networking infrastructure; and 3) the use of digital twins to perform high-fidelity simulation experiments in the cloud prior to physical experiments, enabling a successful sim-to-real transition in which algorithms perform robustly in the physical world, even under unseen conditions.
The 34 high-quality, detailed research use cases submitted resulted in eight insightful lightning talks, 12 poster presentations and the documentation of additional research use cases and needs collected via an in-depth research use case survey.
Research use cases emerged across five different problem areas: 1) perception and navigation, 2) remote sensing, 3) human-autonomy teaming and remote operations, 4) cybersecurity and 5) digital twins. Participants seek to explore the safety and trustworthiness of various technical focus areas related to AI and ML for autonomy. Computer vision was a common theme across many submissions, followed by reinforcement learning, digital twin design and high-fidelity simulations. Model-based control and formal methods also featured prominently in the proposals, particularly in contexts involving drone navigation and control systems, assured autonomy and cybersecurity. Researchers demonstrated significant interest in explainable AI approaches, with many submissions emphasizing the importance of interpretability in safety-critical unmanned aerial vehicle (UAV) operations. Physics-informed neural networks (PINNs) were specifically mentioned in the context of system identification and digital twin design, and federated learning emerged as central to the discussion for handling distributed sensing and control challenges. The integration of large language models (LLMs) was proposed for higher-level decision-making and human-computer/robot interaction scenarios, while generative AI techniques were suggested for data augmentation and simulation enhancement. Both probabilistic and deterministic approaches to safety verification (e.g., control barrier functions) were also frequently cited as crucial methodologies for ensuring reliable UAV operations in complex environments. Overall, the workshop clearly articulated the need to integrate formal, theory-based approaches with data-driven approaches, including deep learning, to advance the field of aerial autonomy. This is in line with the recommendations of a recent report published by the National Academies. Our analysis of the in-depth research use case surveys containing qualitative and quantitative information related to infrastructure needs is further detailed in Section 10 of this workshop report.
Experts provided additional insights into the infrastructure requirements during panel discussions, emphasizing the need for specific onboard sensors (including SAR, LiDAR, thermal and hyperspectral sensors) and onboard computing processors (e.g., for edge computing), as well as the need for an easy-to-use workflow that allows participants to quickly move back and forth between simulation and modeling using realistic data and physical experimentation in different test beds, ideally remotely. Finally, the importance of digital twins was widely emphasized. Specifically, participants stated that they would significantly benefit from digital twins not only of vehicles but also of human operators, facilities, airspaces and objects of interest to remote sensing scholars (e.g., trees). Further, participants emphasized that the infrastructure needs to clearly appeal to users at R1 institutions in order to build a new field of research, and it should be highly interoperable with other test sites, including those focused on both indoor and outdoor testing.
In a session on business models and sustainability, participants discussed a potential pricing model for academic users, following a tier-based approach with an hourly usage fee and a yearly usage fee. The pricing model is based on an operations model the AIrTonomy team has developed, which assumes that within two years of completion, AIrTonomy's physical infrastructure will support its remote users in launching and implementing at least 280 short physical experiments (flights) a month within the different facilities and along the 15-mile corridor, allowing operations integrated in real airspaces and BVLOS. An additional 400 digital experiments a month will support users in AI/ML training and validation along with "virtual" flights using digital twins. This operational plan will support AIrTonomy's vision to equip the U.S. with the research capacity to remotely test and validate the safety and trustworthiness of AI/ML for aerial autonomy.
In post-workshop activities, we identified further research use cases jointly with lead user candidates, developed a flexible workflow model, and identified and engaged experts in five specific interdisciplinary research use case categories who will help us with future lead user engagement.
Ongoing online outreach post-workshop, primarily on LinkedIn, YouTube and through opt-in email marketing, continues to engage individuals performing or promoting research on AI/ML and aerial autonomy. We are reaching our goal of building an active lead user community to help us further specify the AIrTonomy infrastructure. We also successfully increased public awareness of AIrTonomy: As of November 2024, our LinkedIn content had reached more than 4,300 individuals. Webinars and other videos introducing AIrTonomy concepts earned more than 500 views on YouTube. We are actively engaging with our audiences by regularly publishing additional videos from the workshop and our research facilities, sharing research demonstrations and hosting informational and focus-group events.
Workshop Goals and Objectives
Since 2023, an interdisciplinary team of researchers and engineers involved with the AIrTonomy project has been designing a unique new cyber-physical infrastructure for research on aerial autonomy. The vision is to build an experimental infrastructure that will allow the United States to lead in developing trustworthy AI and ML systems for truly autonomous aerial vehicles that can operate safely in real urban airspaces.
Using this infrastructure, researchers from various domains can converge on a new field of aerial autonomy science, leveraging unique research capabilities that no other infrastructure provides. Decadal surveys and high-impact publications indicate that scientists need such an infrastructure in the emerging fields of AI/ML, computer vision, robotics and control, human factors, communication networks and environmental engineering.
To ensure that AIrTonomy meets its users' research needs and to develop a sustainable plan for implementing the envisioned infrastructure, the AIrTonomy team organized a two-day workshop guided by the following goals and objectives.
The goals of the workshop were:
- Introduce participants to the capabilities of the AIrTonomy cyber-physical infrastructure to perform scientific research through presentations, demonstrations and breakout sessions
- Discuss and detail research use cases for AIrTonomy
- Refine requirements for existing and future components of AIrTonomy to make it unique and interoperable with other sites
- Capture input on AIrTonomy’s business model and sustainability plan
- Build the foundations for a lead user community
These goals relate to the following objectives:
- Establish a social media community and organize virtual lead user sessions and online webinars
- Successfully complete indoor and outdoor demonstrations during the workshop and discuss scientific value and infrastructure requirements with workshop participants
- Collect use case survey responses and poster submissions documenting such use cases
- Document requirements in user surveys and focus groups
- Document input and expectations of AIrTonomy’s business model and sustainability plan
AIrTonomy: Envisioning a new experimental infrastructure for aerial autonomy sciences
Autonomous aerial vehicles (AAVs), colloquially known as drones, impact society in areas such as search and rescue, humanitarian aid and many other domains. The estimated market potential of AAVs outside of security and defense is $90 billion by 2030. Yet despite advances in research on AI and ML, major research challenges remain. AAVs are unable to engage in self-adaptive behavior that allows them to safely perform complex real-world tasks in interaction with other agents and their environments, in particular when faced with signal degradation, high uncertainty, exposure to new tasks and unseen data, as well as limited onboard processing.

Figure 1: Vision for the AIrTonomy infrastructure
To address this challenge, the U.S. has invested in infrastructure and research in this area through the NSF, the National Science and Technology Council (NSTC), Advanced Air Mobility (AAM) programs, the U.S. Department of Defense and others. However, existing efforts fail to translate into safe AI/ML for AAVs that can be trusted in scenarios such as natural disaster response. The reason for this failure is a lack of coordinated infrastructure to support the experimental research needed to establish the foundation for next-generation autonomous aerial systems (comprising AAVs, their environments and other agents interacting with them). Aerial autonomy researchers need an experimental infrastructure to design realistic experiments and to join forces in rigorously validating, verifying and quantifying uncertainties of the self-adaptive behavior of autonomous vehicles across various tasks and constraints – from anywhere in the world. AIrTonomy tackles this need.
Its vision, illustrated in Figure 1, is to implement an open and remotely accessible research experimentation infrastructure that tightly integrates a unique proving ground for realistic and scientifically rigorous physical experiments and high-quality data collection with a remotely accessible cyberinfrastructure. We will support researchers in a new converging field of aerial autonomy science to validate and verify that AAVs and humans can jointly leverage AI/ML to safely perform complex real-world tasks under constraints and unpredictable or unseen conditions and data, using advanced communication and sensing technologies.
This vision translates into four actionable goals: To ensure that researchers in this converging field can
- easily access high-quality ground truth data related to the structure, context, and behavior of AAVs and humans sampled during realistic physical experiments across various conditions and tasks using next-generation airspace communication, motion-capture and human sensing technologies;
- use and design high-fidelity and continuously updated digital twins of AAVs, humans and their environments to remotely launch simulated virtual experiments;
- benefit from easy-to-use workflows and tools to remotely launch experiments in physical indoor and outdoor environments supported by digital twins – to induce uncertainty, unseen data, and constraints (for example, by streaming degraded signals from the digital twin to the vehicle’s onboard software) during such physical experiments or to study real-time decision-making between humans and AAVs; and
- use common frameworks and metrics to validate and verify autonomous aerial systems.
The key physical and cyber components of the infrastructure are visualized in Figure 2. AIrTonomy will implement a physical infrastructure (components 1.1 to 1.6) tightly integrated with six cyberinfrastructure components (components 2.1 to 2.6). We will discuss them briefly next.
Figure 2: The key components of the AIrTonomy infrastructure
The Physical Components: The Purdue Unmanned aerial Proving ground (PUP)
Figure 3: The Purdue Unmanned Aerial Proving Ground (PUP)
The physical infrastructure of AIrTonomy, located near Purdue's regional airport (LAF), will be called the Purdue Unmanned aerial Proving ground (PUP). PUP offers researchers remote access to four facilities and supporting ground infrastructure. Geographically organized within a 15-mile triangular corridor, each physical component offers unique capabilities. However, the major value comes from the integrative use of all components, as it allows researchers to easily move back and forth from controlled indoor experiments to outdoor experiments in the form of real-world complex missions where AAVs fly BVLOS. The vision of PUP is summarized on our YouTube channel (see Video 1).
A Cyber-Physical Infrastructure for Next Generation Autonomous Aerial Vehicles
Video 1 – AIrTonomy
Indoor motion-capture lab (Purdue UAS Research and Test Facility, PURT)
PURT is unmatched in its size and precision. Its 600,000 cubic feet of volume allows tracking of many types of drones, including fixed-wing, in a space protected from weather. Its network of motion-capture cameras offers unmatched precision, accurately locating vehicles in its space to within 1 mm. This space supports research to develop drone navigation algorithms that correct for the shortcomings of GPS. The capture and virtual reality systems here offer significantly higher precision than commercially available VR equipment, and in a space of much larger volume: PURT's footprint has an area of 20,000 square feet and extends to a 30-foot-high ceiling.
Purdue Urban Canyon Lab (PUC)
In 2025, we plan to build the Purdue Urban Canyon Lab (PUC), a new 12-acre outdoor motion-capture lab within Purdue's Martell Forest, to study autonomy outdoors in challenging conditions, such as urban canyons and forests, that cause signal degradation. Like PURT, PUC will be equipped with motion-capture technology that uses active LED markers on the vehicle to capture and stream a vehicle's position at very high accuracy (<1 cm), even around buildings and natural objects where other methods of ground truth data collection fail (e.g., Real-Time Kinematic (RTK) GPS).
Earhart Airfield
The Earhart Airfield, a dedicated AAV airport to be built in 2025 in Romney, will have a 1200×130 ft runway. Both the airfield and the vehicles operating out of Earhart Airfield will be equipped with a sensing infrastructure for ground truth data collection and real-time data streaming, combining an RTK-based ground station with onboard Inertial Measurement Unit (IMU) sensors to perform sensor fusion (a minimal fusion sketch follows this paragraph). Researchers can launch experiments using AIrTonomy's fleet of large fixed-wing and small multi-rotor-based vehicles. The large fixed-wing vehicles offer new opportunities for studying complex problems of autonomy where longer ranges, greater payloads and higher altitudes are needed (e.g., environmental sensing). In addition, researchers can use a variety of small multi-rotor-based vehicles, e.g., for studying autonomy in dense urban canyons, inside buildings (e.g., building inspection) or underneath the canopy of trees (e.g., forestry). Our diverse fleet will be equipped with a modular sensor stack combining LiDAR and hyperspectral imaging. We plan to add advanced LiDAR sensor payloads to overcome limitations of existing LiDAR-enabled systems, such as their sensitivity to lighting conditions. This creates a new research capacity for studying perception-based sensing and navigation algorithms under harsh constraints (e.g., GPS signal degradation) and challenging tasks (e.g., real-time geometry reconstruction).
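To make the fusion idea concrete, the sketch below fuses a high-rate IMU acceleration stream with low-rate RTK position fixes along a single axis using a simple Kalman filter. This is a minimal illustrative sketch, not AIrTonomy's actual pipeline; the class name, update rates and noise values are hypothetical.

```python
# A minimal sketch of RTK/IMU fusion along one axis: a Kalman filter whose
# high-rate prediction step integrates IMU acceleration and whose low-rate
# update step corrects drift with RTK position fixes. Names, rates and
# noise values are hypothetical.
import numpy as np

class RtkImuFuser:
    def __init__(self, accel_noise=0.5, rtk_noise=0.02):
        self.x = np.zeros(2)          # state: [position (m), velocity (m/s)]
        self.P = np.eye(2)            # state covariance
        self.q = accel_noise ** 2     # IMU acceleration noise variance
        self.r = rtk_noise ** 2       # RTK position noise (~2 cm std. dev.)

    def predict(self, accel, dt):
        """High-rate step: integrate an IMU acceleration sample over dt."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt ** 2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + np.outer(B, B) * self.q

    def update(self, rtk_pos):
        """Low-rate step: correct accumulated drift with an RTK fix."""
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.r         # innovation covariance (1x1)
        K = (self.P @ H.T) / S                # Kalman gain (2x1)
        self.x = self.x + (K * (rtk_pos - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

fuser = RtkImuFuser()
for step in range(200):                        # 200 Hz IMU stream
    fuser.predict(accel=0.1, dt=0.005)
    if step % 20 == 0:                         # 10 Hz RTK fixes
        fuser.update(rtk_pos=fuser.x[0])       # placeholder measurement
print("fused position estimate (m):", fuser.x[0])
```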
Purdue XTM
The Purdue XTM, a next-generation traffic management system integrating crewed and uncrewed traffic management, will provide a communication and networking infrastructure to ensure airspace integration not only through wireless communication and 5G but also through active and passive radar systems. Complementary use of passive radars enables sensing and imaging of drones and other aircraft in frequency bands in which active radars are not permitted to operate. The data fusion of various sensing modalities and locations will yield greatly enhanced data fidelity.
Smart Operations Center (SOC) (now Augmented Aviation Lab or AAL)
The Augmented Aviation Lab (AAL), a highly immersive and reconfigurable space, offers truly new ways to study how ML can advance human capabilities and realize safe collaborative decision-making by optimizing the interaction between multiple AAVs and remotely located individuals using ground-based sensing technologies. The AAL is equipped with a gridded wall of screens, cameras for video and audio recording, virtual reality technologies and wearable brain-computer interfaces (BCIs). Researchers can use research-grade non-intrusive eye-tracking glasses (e.g., Pupil Labs Neon), mobile EEG devices (e.g., mBrainTrain Smarting Pro) and biometric sensors (e.g., Shimmer's Consensys GSR) to design ML-enabled autonomy systems for real-time human sensing and augmentation (e.g., through new conversational, haptic or even thought-based interaction). After its launch in 2024, we plan to upgrade the AAL to allow for even greater granularity in data capture and faster streaming.
Fleet and Onboard Sensors
Our fleet of configurable open-source vehicles will meet the needs of a diverse user group and allow for even more experimental set-ups. The fleet includes fixed-wing and multirotor vehicles of various sizes and capabilities, ranging from palm-sized quad-rotors to large-scale fixed-wing, BVLOS-capable vehicles with a 220-pound payload capacity and a range of 600+ miles. This offers researchers the ability to study autonomy in airspaces where autonomous aerial vehicles fly at higher altitudes. We also envision mixed-drone applications in the future, where a large fixed-wing vehicle carries multiple smaller drones. For example, a large vehicle may deploy a fleet of smaller drones during search and rescue missions, "dropping" them when approaching urban canyons to detect fires near buildings.
Cyber Components of AIrTonomy
The cyberinfrastructure components described in Figure 2 (items 2.1 to 2.6) will be tightly integrated with our physical components. One of the most essential components is the digital twin engine (2.1), which offers researchers the ability to simulate a digital model of a physical system, such as a specific AAV, a specific physical airspace environment (e.g., PUC or PURT), a natural object (e.g., a forest fire) or a human, based on continuous data exchange between the digital model and its physical counterpart. Digital twins offer a unique research capacity for designing and validating autonomous systems. Researchers can remotely design and launch virtual or physical experiments leveraging digital twins of our vehicles, their environment (both the airspace and the natural environment on the ground) and humans to emulate experimental conditions or to study digital twin-enabled decisions of humans.
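As a concrete illustration of this continuous data exchange, the sketch below mirrors a vehicle's telemetry into a digital model and streams an emulated, degraded GPS fix back out, the kind of condition injection described above. This is a minimal sketch, not AIrTonomy's digital twin engine; all names and values are hypothetical.

```python
# A minimal sketch of the continuous data exchange behind a digital twin:
# telemetry from the physical vehicle keeps the digital model in sync, and
# the twin streams emulated (here, degraded) sensor values back toward the
# onboard software. All names and values are hypothetical.
import random

class VehicleTwin:
    """Digital model kept in sync with its physical counterpart."""

    def __init__(self):
        self.state = {"position": (0.0, 0.0, 0.0), "battery": 1.0}

    def ingest_telemetry(self, telemetry):
        # Mirror the physical vehicle's latest reported state.
        self.state.update(telemetry)

    def emulate_degraded_gps(self, noise_std=5.0):
        # Inject noise into the twin's position to emulate GPS degradation;
        # the degraded fix could be streamed back to the onboard autopilot.
        return tuple(p + random.gauss(0.0, noise_std)
                     for p in self.state["position"])

twin = VehicleTwin()
for _ in range(3):  # stand-in for a live telemetry stream
    twin.ingest_telemetry({"position": (10.0, 4.0, 30.0), "battery": 0.93})
    print("degraded fix for the vehicle:", twin.emulate_degraded_gps())
```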
Additional cyber components, such as remote software used by human operators (2.2), software on board AAVs (2.3), an AI/ML workspace (2.4) and data and research services (2.5), are remotely accessible via a virtual experimentation bench (2.6). Through active lead user engagement of AI/ML researchers distributed across multiple disciplines of the natural, engineering and social sciences, the project will generate cross-disciplinary research use cases to iteratively evaluate and refine implementation efforts to realize true value for a broad community of future infrastructure users.
Workshop Planning and Structure
AIrTonomy’s steering committee met weekly to prepare for the September 2024 workshop. The steering committee discussed how to achieve the workshop’s goals using pre-workshop activities and in-person activities, as well as post-workshop engagement. Specifically, the steering committee focused on 1) refining the AIrTonomy concept with a focus on the physical infrastructure, detailing the unique capabilities of the PUP discussed in Section 3 and producing a video describing the uniqueness of PUP, 2) refining the invitation list for engaging stakeholders in the workshop and its pre-workshop activities, 3) planning and preparing the demonstrations for the in-person event to showcase the potential capabilities of AIrTonomy, 4) detailing a call for contributions for research ideas that articulate use cases for the AIrTonomy infrastructure via lightning talks and poster presentations, 5) recruiting speakers to contribute use case presentations and 6) planning panel discussions and breakout sessions. Additionally, the committee discussed the potential business model of AIrTonomy and the best ways to use the breakout sessions and panel discussions to receive feedback on proposed business models.
The AIrTonomy steering committee’s weekly meetings from July to September 2024 focused on the above activities. The steering committee included members from Purdue, other R1 institutions and professional societies, as well as industry partners.
Promotional strategies included social media, press releases and stakeholder outreach, culminating in a workshop to showcase AIrTonomy’s capabilities. The committee aimed to establish AIrTonomy as a cornerstone of national infrastructure for autonomous systems research, enabling U.S. researchers to lead in next-generation AAV discovery.
Event dates and locations
The AIrTonomy workshop was held across two days, structured around different themes exploring autonomous systems development and implementation. Day 1, focused on “sim-to-real” applications, took place on September 11, 2024, primarily at the Burton Morgan Center for Entrepreneurship at Purdue University, with technical demonstrations conducted at the PURT and AAL facilities. Day 2, September 12, 2024, centered on outdoor complexity and autonomous systems; the day began at the Purdue Martell Forest before transitioning back to the Burton Morgan Center in the afternoon. The workshop utilized multiple venues across Purdue’s campus, including specialized research facilities, reflecting the comprehensive and integrated nature of the research infrastructure being discussed. Transportation was provided between locations, and parking was available at the Convergence Center for all attendees. The full workshop agenda can be viewed here.
In total, 90 individuals participated in the workshop in addition to the 10 speakers/presenters. Attendees came from across sectors, with 55 participants from universities, 22 from industry and seven from government. Six participants attended as unaffiliated individuals or students.
Information Sessions and Keynote Speeches
The workshop’s primary goal was to develop research use cases and refine AIrTonomy’s infrastructure requirements through demonstrations, lightning talks, poster presentations and breakout sessions/panel discussions. In addition, the workshop agenda included information sessions and keynote speeches to frame these focused activities. Over the two days, keynote presentations provided high-level perspectives on AIrTonomy’s vision and trends in advanced air mobility and AI/ML for autonomous systems, along with challenges and opportunities of infrastructure development. Keynotes were delivered by academics and industry experts as well as representatives of funding organizations. On day one, after opening remarks, Sabine Brunswicker, founding director of AIDA3 and principal investigator (PI) of AIrTonomy, kicked off the workshop by presenting the vision of AIrTonomy and explaining the key components of its experimental infrastructure, followed by a keynote speech from Saab Inc CTO Julia Allen, who discussed important research considerations for safe scaling of human-in-the-loop robotic fleet management. On day two, NASA’s Advanced Air Mobility Mission Integration Manager, Parimal Kopardekar, discussed critical capabilities needed for advancing air mobility, with a focus on automation and digitalization. During a dinner speech, Windracers Group CEO Simon Thompson shared insights on the benefits of AIrTonomy’s infrastructure for validating the safety and trustworthiness of AI/ML for greater autonomy from the perspective of a drone start-up focused on mid-range BVLOS operations. Former Saab Airborne Systems CTO Gerald Charlwood provided detailed insights into how PUP’s unique capabilities in airspace integration and sensor/communication infrastructure facilitate rigorous data-driven testing and experimentation. In a final session on sustainability, Ismail Guvenc, PI of the NSF-funded mid-scale infrastructure AERPAW, a test bed for research on wireless communication and autonomy in North Carolina, shared his experience building a new infrastructure and highlighted opportunities for inter-site collaboration. Kyle Snyder from Best Autonomous Insights addressed the crucial aspect of infrastructure sustainability from an advanced air mobility perspective. AIrTonomy team members Carol Song and Vipin Chaudhary focused on the cyber components and articulated how AIrTonomy would support remote “digital” experimentation in the cloud using virtual flights and address researchers’ need for high-quality data, real-time access and high computational performance for deep learning and simulation.
Technical Demonstrations
On day one, the technical demonstrations centered on the theme of “sim2real” and remote experimentation outdoors. They focused on two existing indoor facilities, PURT and AAL, showcasing advanced capabilities in autonomous system testing and human-machine interaction with the goal of moving from simulation to reality quickly. Demonstrations at PURT included signal degradation testing, autonomous taxiing and assured autonomy design projects. Demonstrations at AAL focused on human-in-the-loop research using real-time human sensing of cognitive load and situation awareness. The third demonstration featured remote operations of the largest fixed-wing UAV in the U.S., illustrating the capabilities of the research infrastructure to perform missions outdoors in real airspaces.
On day two, the demonstrations moved outdoors to the envisioned location of the Purdue Urban Canyon Lab (PUC), to be established close to Martell Forest. The demonstration at Martell Forest provided insights into the need for more advanced infrastructure for research on AAVs focused on environmental sensing and forestry management. These demonstrations effectively showcased the full spectrum of AIrTonomy’s capabilities, from more “controlled” indoor experiments to complex outdoor experiments where large AAVs fly in realistic environments and real airspaces.
Indoor Demonstrations (Day 1)
Indoor demonstrations during the AIrTonomy workshop showcased the existing and potential capabilities of the PURT and AAL facilities at Purdue University.
SOC/AAL: Human sensing demonstration
Demonstrations in the Augmented Aviation Lab (AAL), formerly named the Smart Operations Center (SOC), focused on monitoring remote operator performance through real-time human sensing of cognitive load and situational awareness. A key highlight was the implementation of VR/AR technology to showcase novel forms of interaction with autonomous systems.

Figure 6: Multimodal transformer model for sensing humans’ situation awareness and cognitive load in real time
The model was an intelligent system built using cutting-edge transformer models that estimated the cognitive load and situation awareness of a human operator when fed with electroencephalography, eye-tracker and webcam data. Details of the architecture were presented in the poster session. The system was capable of real-time estimation, and the data used for training the model was collected on the Purdue campus. A highlight video of the AAL demonstrations can be seen on our YouTube channel (see Video 4).
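For readers unfamiliar with this class of architecture, the sketch below shows one plausible shape of such a multimodal transformer: per-modality projections into a shared embedding space, a transformer encoder over the fused token sequence and a regression head for cognitive load and situation awareness. It is an illustrative stand-in, not the model demonstrated at the workshop; all channel counts and dimensions are hypothetical.

```python
# A minimal, illustrative sketch of a multimodal transformer for estimating
# cognitive load and situation awareness from EEG, eye-tracking and webcam
# streams. Channel counts and dimensions are hypothetical.
import torch
import torch.nn as nn

class MultimodalLoadEstimator(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.eeg_proj = nn.Linear(32, d_model)   # 32 EEG channels per frame
        self.eye_proj = nn.Linear(4, d_model)    # gaze x/y + pupil diameters
        self.cam_proj = nn.Linear(128, d_model)  # webcam face-feature vector
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)  # [cognitive load, situation awareness]

    def forward(self, eeg, eye, cam):
        # Fuse all modalities into one token sequence, encode, pool over time.
        tokens = torch.cat([self.eeg_proj(eeg), self.eye_proj(eye),
                            self.cam_proj(cam)], dim=1)
        return self.head(self.encoder(tokens).mean(dim=1))

model = MultimodalLoadEstimator()
batch, t = 8, 50  # 50 time steps per analysis window
out = model(torch.randn(batch, t, 32),   # EEG
            torch.randn(batch, t, 4),    # eye tracker
            torch.randn(batch, t, 128))  # webcam features
print(out.shape)  # torch.Size([8, 2])
```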
Demo Event Highlight
Video 4
PURT: Autonomous taxiing and assured autonomy
The PURT demonstrations centered on three critical aspects of autonomous vehicle operations: signal degradation and spoofing attacks, autonomous taxiing and assured autonomy design projects. Under the moderation of James Goppert (Co-PI AIrTonomy), these demonstrations highlighted the facility’s capability to test and validate autonomous systems in controlled environments. The activities particularly emphasized the challenges and solutions in developing reliable autonomous ground movement capabilities while maintaining security against potential signal interference.
Figure 7: Snapshot from a demonstration of NightVapor at PURT
The demos can be viewed on our YouTube Channel (see Video 5 and Video 6).
Test Demo of NightVapor at PURT from AIrTonomy Workshop ’24
Video 5
Drone Demonstration at the Purdue UAS Research and Test Facility (PURT)
Video 6
Demonstration of remote flight of a large fixed-wing vehicle monitored from the AAL
As part of the demonstrations, the Windracers ULTRA “Earhart” (N20PU) operated from Jasper County Airport, Indiana, under a Certificate of Waiver or Authorization (COA) issued by the FAA. The AAL is set up to provide live telemetry and video feeds for the automated flights, allowing remote operators to monitor and control the aircraft; these feeds were demonstrated to attendees via video display walls. This concept of operations demonstrated how remote control centers will become pivotal for the development of remotely piloted automated systems in the U.S. The activity consisted of an automated local flight, with automatic takeoff, circuit flying and automatic landing.
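The report does not specify the telemetry protocol used for these feeds. Purely as an illustration, the sketch below shows how a remote control center might consume a MAVLink-style position feed using pymavlink; the UDP endpoint and message choice are assumptions.

```python
# A minimal sketch of a remote control center consuming a MAVLink-style
# telemetry feed with pymavlink. The UDP endpoint and message type are
# assumptions for illustration only.
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # hypothetical port
conn.wait_heartbeat()  # block until a vehicle announces itself
print(f"connected to system {conn.target_system}")

for _ in range(10):
    msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    # GLOBAL_POSITION_INT reports lat/lon in 1e-7 degrees and altitude in mm.
    print(f"lat={msg.lat / 1e7:.6f} lon={msg.lon / 1e7:.6f} "
          f"alt={msg.alt / 1000:.1f} m")
```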
An example video of the aircraft operating, both on board and from a remote camera, can be seen via AIDA3’s YouTube channel (see Video 7).
AIDA3’s Earhart Flight
Video 7
Outdoor Demonstrations (Day 2)
Forestry and Agricultural Applications
The demonstrations at Purdue Martell Forest, led by Professor Songlin Fei, showcased advanced AAV applications in remote environmental sensing and forestry. Forests play a critical role in ecosystems, and understanding their composition, especially tree species, is essential for ecosystem management and conservation. However, identifying tree species is challenging and time-consuming. Recently, uncrewed aerial vehicles equipped with various sensors have emerged as a promising technology for this task due to their relatively low cost and high spatial and temporal resolutions. Despite these advancements, a thorough understanding and comparison of state-of-the-art AI models for remote sensing with AAVs are still needed before applying them to forestry and related fields. To showcase the opportunities of the envisioned Purdue Urban Canyon Lab (PUC), Songlin Fei and his team demonstrated the use of two different multi-rotor drones (Freefly Alta X and DJI M300) equipped with different sensors (thermal, LiDAR and multispectral) and showcased the use of these sensor stacks for forestry inventory management. During the demonstration, the team emphasized challenges related to signal degradation when flying close to or underneath the canopy, as well as the desire to find ways to increase the autonomy of the vehicle. In an ideal case, the remote pilot could operate multiple drones at the same time and focus on the sensing task rather than navigation and control. Participants asked questions about how the team currently tackles challenges of signal degradation, leading to a more detailed discussion on whether motion-capture technologies could help address those issues. Further, researchers also discussed the need for onboard processing to support more real-time monitoring in certain application areas. A video of a demonstration of remote sensing capabilities similar to the one shown can be found online (see Video 8).
Outdoor motion capture capabilities: PhaseSpace’s motion capture technology
To showcase the capabilities of the envisioned Purdue Urban Canyon Lab, the team invited PhaseSpace Inc. to give a remote (virtual) demonstration of their motion-capture technology. Kan Anant, Ph.D., and his team presented the system to the audience, combining a presentation with a remote demonstration showcasing how the Impulse X2E system is used in other facility settings (see Video 9). Impulse X2E employs active LED markers that flash unique ID patterns. Active LED markers are smart modules composed of an LED light source and a microprocessor that controls the modulation of the LED’s pulse duration and amplitude. The system generates a unique ID for each of up to 512 different markers, enabling it to quickly resolve gaps and occlusions. It offers the ability to capture high-quality data on a vehicle’s position in real time, even if other signals degrade. 3D coordinates, velocity, acceleration, roll, pitch and yaw data are available with 3 milliseconds of latency if needed. The system sets the standard with the fastest frame rate at 960 frames per second. During the live demonstrations, participants observed how the system was deployed in other application areas. Participants asked questions about where the cameras and the LED markers would be positioned and wondered how robust the system is in terms of measurement accuracy.
Lightning Talks – Research Use Cases
The AIrTonomy lead user workshop featured several pioneering research presentations that showcase the transformative potential of tackling research questions on AI and ML for next-generation autonomy. Further, the presentations highlighted diverse applications leveraging AIrTonomy’s comprehensive testing infrastructure, underscoring the need for more advanced experimental infrastructures and AIrTonomy’s crucial role in meeting it.
Collectively, these presentations underscore AIrTonomy’s vital role in advancing autonomous systems research across multiple domains. The facility’s unique combination of indoor testing capabilities, urban canyon simulation and sophisticated human-machine interaction laboratories provides an unprecedented platform for developing and validating next-generation autonomous systems. This integration of diverse research initiatives demonstrates AIrTonomy’s potential to accelerate the transition from theoretical concepts to practical, real-world applications in autonomous aerial systems.
The lightning talk sessions were divided between the two days. The first session was aligned with the theme of day one — indoor testing and sim-to-real — and focused on human-autonomy teaming, perception and navigation, and digital twins. The second session was aligned with the theme of day two — robust outdoor testing — and focused primarily on remote environmental sensing.
R&D use case 1 (Day 1): AI/ML for perception-based navigation in urban canyons under conditions of signal degradation — the role of indoor testing
Use case presenter: Karthik Dantu, Associate Professor, Department of Computer Science and Engineering, Drones Lab, University at Buffalo
Abstract: Continuously observing dynamic phenomena, such as crop growth at scale, requires continuous monitoring of large landscapes to find events of interest such as lack of moisture, evidence of crop disease or pests and timing of harvest. Operating UAVs continuously is extremely challenging and not feasible for large landscapes. Therefore, modern approaches use remote sensing to identify areas of interest and the intelligent use of specialized sensors (hyperspectral, thermal) to better characterize those interests. However, characterizing these methods requires in-field testing which is always challenging. The envisioned AIrTonomy test bed could be a great venue to test various pieces including UAV autonomy (e.g., navigation in GPS-denied scenarios), sensor characterization (e.g., tuning of the hyperspectral sensor for identifying foliage characteristics) and cross-sensor characterization (e.g., identify areas of interest from InSAR satellite images) separately before putting all of them together for the real deployment. We are putting this method to use in NASA’s Advanced Information Systems Technology (AIST) Program as part of the NASA New Observing Strategies (NOS) program from the NASA Earth Science and Technology Office (ESTO). Further, our recent work on identifying archeological remains follows a similar approach correlating satellite images with UAV visual and hyperspectral data to better identify regions of interest. In this talk, I’ll describe these two applications briefly and highlight how the AIrTonomy test bed could be used to develop our algorithms before real deployment.

Figure 8: Overview of UAV research opportunities with AIrTonomy related to remote sensing, navigation, and swarms
R&D use case 2 (Day 1): Explainable AI for aerodynamics/propulsion modeling — Why PURT is needed!
Use case presenter: Devesh Upadhyay, AI/ML Lead, Skapa, Saab, Inc.
Abstract: Developing PINNs for fast surrogate models of dynamical systems will enhance the ability to produce synthetic data at a rate faster than standard simulation, while being more physically accurate than purely data-driven surrogate models and reduced-order models (ROMs). This also gives PINNs better generalizability and explainability relative to purely data-driven approaches. In this talk we discuss the need for data in the development cycle of urban air mobility (UAM) and how PINNs can help reduce the burden of data needs.
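As a minimal illustration of the PINN idea, the sketch below trains a small network whose loss penalizes the residual of a damped-oscillator ODE, a hypothetical stand-in for an aerodynamics/propulsion model; beyond the physics residual and initial conditions, no simulation data is needed.

```python
# A minimal sketch of a physics-informed neural network (PINN): the network
# x(t) is trained so that the residual of a damped-oscillator ODE,
# x'' + 2*zeta*x' + x = 0, vanishes at random collocation points. The ODE
# is a hypothetical stand-in for an aerodynamics/propulsion model.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
zeta = 0.1
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True) * 10.0  # collocation points
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    physics_loss = ((ddx + 2 * zeta * dx + x) ** 2).mean()

    # Initial conditions x(0) = 1, x'(0) = 0 anchor a unique solution.
    t0 = torch.zeros(1, 1, requires_grad=True)
    x0 = net(t0)
    dx0 = torch.autograd.grad(x0, t0, torch.ones_like(x0), create_graph=True)[0]
    ic_loss = ((x0 - 1.0) ** 2 + dx0 ** 2).squeeze()

    loss = physics_loss + ic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("surrogate x(5.0):", net(torch.tensor([[5.0]])).item())
```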
R&D use case 3 (Day 1): How real-time human sensing will facilitate research on safe human-autonomy swarms
Use case presenter: Ehsan Esfahani, Associate Professor, Department of Mechanical and Aerospace Engineering, School of Engineering and Applied Sciences, University at Buffalo
Abstract: Advanced cyber-physical systems rely on human-AI teaming in which understanding the interaction dynamics between human cognition and AI agents is key to enhancing efficiency. This requires thoughtful design of feedback mechanisms, reasoning processes and compliance protocols that account for their impact on human cognition. The goal is to foster shared awareness between humans and AI while calibrating feedback to keep operators informed without overwhelming them with excessive information. Current research in explainable AI highlights that human acceptance of AI suggestions is influenced by factors such as cognitive biases, trust imbalances and the perceived cost-benefit balance of the task at hand. Explainable AI is thought to gain broader acceptance if it is easy to understand. However, much of this research focuses on interactions between a human operator and a single AI agent, often in unrealistic settings with limited real-world consequences for failure.
This case study explores human operators interacting with multiple UAVs to monitor an unknown environment. Due to differing environmental perceptions, human commands may be seen as infeasible or risky by AI, potentially leading to AI non-compliance. We aim to examine how non-compliance rates and feedback mechanisms affect human cognition, trust and the role of individual differences in this interaction.
Building on our previous studies, we propose using the AAL to collect physiological data (EEG, fNIRS and eye-tracking) from human operators working with multiple UAVs. We will estimate mental engagement, workload, distraction and attention from these modalities to better understand AI-human dynamics. Our goal is to develop explainable AI systems that enable effective and transparent dialogue between humans and AI, fostering mutual awareness and trust, thereby enhancing decision-making and human reliance on AI. This talk will review our prior works and outline plans for utilizing the SOC to achieve these objectives.
Figure 9: The inter-relation between human operators, cyber systems and physical systems
Discussion: The research makes significant contributions to understanding human-AI teaming in multi-UAV environments by examining non-compliance rates and feedback mechanisms’ effects on human cognition and trust. This work addresses a critical gap in current research by moving beyond single AI agent interactions to study complex multi-UAV scenarios with real-world consequences, while developing explainable AI systems that enable effective human-AI dialogue and mutual awareness.
R&D use case 4 (Day 1): NXP mobile robotics modules and solutions
Use case presenters: Iain Galloway and Gerald Peklar, Mobile Robotics, Systems Innovations CTO R&D, NXP, and Benjamin M. Perseghetti, Founder & CEO, Rudis Laboratories
Abstract: The NXP mobile robotics team is interested in providing robust and scalable robotics development platforms. NXP has been developing CogniPilot along with Purdue and is interested in testing the robustness of this software and related hardware platforms in real-world settings. We envision leveraging PURT for rapid development of mobile robotics solutions with motion-capture-provided ground truth. In addition, the PUC would be very advantageous for urban robotics testing with outdoor ground truth. NXP has not yet released a fixed-wing development platform for CogniPilot, but testing could be conducted at PURT and later deployed at the PUP airfield. We believe the SOC may also prove beneficial for studying human operator interaction with mobile robotics. AI/ML are critical tools for robotics, and the NXP hardware leveraged for CogniPilot is specifically designed to enable neural-net-based solutions. Testing at PUP as part of AIDA3 will help ensure that products under development leveraging AI/ML are robust. We also envision that the facility may serve as a location for future mobile robotics competitions and cutting-edge research and development for the future of AI/ML in mobile robots.
Figure 10: Systems block diagram of mobile robotics
R&D use case 5 (Day 2): Optimizing edge computing for precision agriculture using AIrTonomy infrastructure
Use case presenter: Christopher C. Stewart, Professor of Computer Science and Engineering, The Ohio State University
Abstract: Can a community of researchers discover novel drone management techniques that scale AAV swarms’ abilities in terms of sensing frequency, sensing resolution, sensing operations and mission duration without increasing the lifetime cost of AAV swarm deployments? AIrTonomy would provide multiple benchmark scenarios (digital agriculture, traffic surveillance, etc.) and real-world flight infrastructure for legal, long-distance, out-of-sight missions.
We envision a research community similar in spirit to the “Top500” and/or “Green500.” This community will develop rules to validate research platforms intended to deploy swarms for long-running land use surveillance of agriculture, forestry and urban environments. The platforms will be ranked by cost and along the dimensions in the AFFORDSS acronym (sampling frequency achieved, resolution supported, operations supported and duration of execution). The community would strive to achieve functional swarm deployments without increasing costs.
Long-term impact: The AFFORDSS Community would attract researchers to this emerging area. In the long term, the platforms developed would be integrated into industry products to reduce the cost and improve functionality of AAV systems broadly.

Figure 11: Overview of integration of multi-agent reinforcement learning with advanced sensing capabilities and intelligence
R&D use case 6 (Day 2): Deployment and use of private cellular networks in agriculture
Use case presenter: Scott A. Shearer, Professor and Chair, Food, Agricultural and Biological Engineering, The Ohio State University
Abstract: The Ohio State University is building wireless/edge computing infrastructure to drive the adoption of AI in Midwestern agriculture. The emergence of AI-driven product offerings in agriculture will be limited by an agricultural producer’s ability to collect and transmit data from ground and airborne platforms. Private cellular networks allow end users to prioritize communications between connected platforms, edge computing and cloud environments. Currently, 24 companies offer AI-driven vision systems for deployment in agriculture. As product offerings and producer adoption expand, the value of these systems will be limited by access to inference models trained for an expanding variety of use cases. The capability to move large amounts of data to processing pipelines, such as Intelligent CyberInfrastructure With Computational Learning in the Environment (ICICLE), will enhance system capabilities, thereby enabling agricultural producers to fine-tune inferencing to specific crops and specialized cropping practices.
Figure 12: Overview of ML architecture for AI-driven vision systems for precision in agriculture
Together, these projects introduce the AFFORDSS community concept, creating a standardized framework for validating research platforms in agricultural drone management; develop novel solutions for scaling AAV swarm capabilities without increasing deployment costs; and establish the private cellular networks and edge computing infrastructure crucial for AI adoption in Midwestern agriculture. This work directly addresses the growing need for efficient data collection and transmission in agricultural AI applications.
Insights from discussion and user feedback
The lightning talk sessions were held across the two days. After each session of three short lightning talks, a panel discussion considered the contributions, with additional experts on those panels providing perspectives on the use cases discussed. The panel discussions are described in more detail in the following section.
Panel Discussions – Research Use Cases
Panel discussion 1 (Day 1): Sim-to-real: Challenges and requirements for realistic demonstrations
After demonstrations in PURT and AAL, researchers at Purdue engaged with the audience via a panel discussion of the following questions: What are the requirements for realistic indoor testing for successful transition from lab to life? What are the challenges?
The following panelists were moderated by the PI of AIrTonomy, Sabine Brunswicker:
- Berkay Celik, Assistant Professor in the Department of Computer Science at Purdue University and co-director of Purdue Security Laboratory (PurSec Lab)
- James Goppert, Research Assistant Professor and Managing Director of Purdue UAS Research and Test Facility, School of Aeronautics and Astronautics
- Philip Pare, Rita Lane and Norma Fries Assistant Professor of Electrical and Computer Engineering
- Joseph Roberts, Purdue Liaison, Windracers Group
- Moderator: Sabine Brunswicker, Founding Director AIDA3
The panelists agreed that the transition from controlled laboratory settings and simulations to real-world environments necessitates rigorous indoor testing procedures. Such testing should prioritize creating a controlled environment that precisely manages variables such as wind and altitude. Furthermore, realistic sensor simulation, incorporating diverse and dynamic obstacle representation and ensuring scalability for various AAV sizes are crucial. Robust safety protocols mitigate potential risks. Yet challenges persist in bridging the gap between lab and life. Vehicles must be engineered to withstand unpredictable outdoor conditions, navigate complex and dynamic obstacles and utilize robust localization methods independent of GPS. Compliance with regulatory frameworks and addressing public perception regarding safety, security and privacy are equally critical for successful real-world deployment. From a scientific perspective, participants emphasized that data collected in realistic environments are essential from a theoretical standpoint. They facilitate validating theories but, most importantly, also help refine existing model-based approaches. For example, in the field of control theory and dynamical systems, realistic testing offers the ability to advance methods by integrating data-driven approaches with model-based approaches, an important scientific research endeavor, particularly for aerial vehicles and autonomy.
Panel discussion 2 (Day 1): Sim-to-real — Critical use cases and their infrastructure requirements
After the lightning talks on day one, a panel of experts assembled to discuss additional critical use cases and their infrastructure requirements. Questions and input from the audience further nurtured the discussions. Panelists included:
- Karthik Dantu, Associate Professor, Department of Computer Science and Engineering, School of Engineering and Applied Sciences, University at Buffalo
- Ziran Wang, Assistant Professor, Autonomous and Connected Systems Initiative, Lyles School of Civil Engineering, Purdue University
- Yuehwern Yih, Tompkins Professor of Industrial Engineering, Director of Smart Operations and Systems (SOS) Laboratory, Purdue University
- Nan Kong, Professor of Biomedical Engineering, Joint Appointment with Industrial Engineering, Purdue University
- Moderator: Sabine Brunswicker, Founding Director AIDA3, Purdue University
The panelists first focused their discussion on broader use cases in the area of emergency response and healthcare, conditions under which both greater autonomy and successful human-in-the-loop modeling are particularly important. The panel then discussed the relationship between digital twins and realistic simulations, and how digital twins can be used to perform remote research in AAL and PURT, as well as outdoors, operated remotely from the AAL. The requirements for data and computation on the edge and backend were also extensively discussed among the panelists. Finally, the panel answered several questions from the audience, including one about the biggest challenge in deploying a critical infrastructure for the AAL that supports remote experimentation. Further, panelists emphasized the need for a clear workflow to set up experiments remotely, collect and fuse data on human subjects and ensure that such data is streamed in real time so that safety-critical situations can be studied under realistic (or realistically emulated) conditions. Data augmentation was also mentioned as critical for building digital twins that allow exploration and counterfactual reasoning under rare conditions.
Panel discussion 3 (Day 2): Critical use cases and infrastructure requirements for research on remote sensing and complex AAV missions in outdoor test beds
After the lightning talks on day two, a panel of experts assembled to discuss additional critical use cases and their infrastructure requirements. Questions and input from the audience furthered the discussions. Panelists included:
- Christopher Brinton, Elmore Associate Professor of Electrical and Computer Engineering, Purdue University
- Dennis Buckmaster, Professor, Agricultural & Biological Engineering, Dean’s Fellow for Digital Agriculture, Purdue University
- Melba Crawford, Nancy Uridil and Frank Bossu Distinguished Professor in Civil Engineering, Purdue University
- Nan Kong, Professor of Biomedical Engineering, Joint Appointment with Industrial Engineering, Purdue University
- Scott Shearer, Professor and Chair, Food, Agricultural and Biological Engineering, The Ohio State University
- Moderator: Gerald Charlwood, Saab Airborne Systems, Former CTO
Panelists discussed additional critical use cases and infrastructure requirements for research on remote sensing and complex AAV missions in outdoor test beds. The experts called for more research on scaling AAV operations using AI/ML for forest inventory, optimizing edge computing for precision agriculture and deploying private cellular networks in agriculture. Indeed, reaching the 1:10 goal would significantly advance environmental monitoring research using AAVs. David Corman from NSF also participated actively in the discussion, challenging the panelists and the audience to think about the remaining gaps that university-led mid-scale infrastructure can bridge. Topics of discussion included the importance of longer-range real-time data processing, a flexible sensor stack, dynamic testing and an “easy” workflow to make the platform more accessible to a broader range of users.
Poster Session – Research Use Cases
The call for participation launched prior to the AIrTonomy workshop attracted diverse research proposals that demonstrate the infrastructure’s potential to advance autonomous systems across multiple domains. Research themes include human-autonomy teaming with cognitive load assessment, vehicle-to-vehicle communication protocols for collision avoidance, deep learning-based object detection, emergency response optimization and applications in remote sensing. Table 1 provides an overview of the 11 posters invited to present at the workshop. The research use case survey remains open for additional submissions.
Table 1: List of posters, their research foci, infrastructure needs and expected outcomes
Poster Title | Research Focus | Technical Approaches | Infrastructure Needs | Expected Outcomes/Goals |
---|---|---|---|---|
Threat and Situational Understanding with Networked-Online Machine Intelligence | BVLOS operations, urban airspace integration | Explainable ML, multimodal target recognition, sensor fusion | Earhart Airfield, hangar space, RTK systems | Automated tracking and prediction for multi-tiered air defense |
Autonomous Bamboo Propagation for CO2 Sequestration | Digital forestry, precision agriculture | Computer vision, digital twins, robotics | PURT, AAL, PUC, Fleet management, digital twin systems | Accelerated bamboo growth, improved carbon sequestration |
Human Sensing: Real-time Cognitive Load | Human-autonomy teaming, pilot monitoring | Computer vision, explainable AI | PURT, AAL, PUC, Earhart Airfield | Real-time cognitive load and situation awareness monitoring |
Edge with Intelligent Digital Twins with UAVs | Food production, supply chain optimization | Edge computing, generative AI, digital twins | Comprehensive infrastructure needs including PURT, AAL, fleet management | Food security solutions, workforce development |
Adaptive Sensor Fusion for Degraded Environments | GPS-denied operations, urban operations | Active SLAM, embodied AI, sensor fusion | PURT, AAL, PUC, Earhart Airfield | Enhanced performance in challenging environments |
UAV Real-time 3D Path Planning | Transport logistics, safety planning | AI/ML integration, real-time path planning | PURT, AAL, PUC, Fleet management | Safe and efficient autonomous flight paths |
Collision Avoidance Strategies | Advanced Air Mobility, traffic management | Generative AI, LLMs, digital twins | PURT, AAL, PUC | Efficient airspace management system |
Emergency Medical Response with UAVs | Emergency response optimization | AI, computer vision, LLMs | PURT, AAL, cyber infrastructure | Reduced emergency response times |
Optimal Function and Attention Allocation | Human-autonomy teaming | Reinforcement learning | PURT | Enhanced human-AI teamwork efficiency |
Robustness and Safe Verification | Autonomous landing operations | Non-linear control theory, digital twins | PURT, AAL, PUC, comprehensive facilities | Verified safe operations under disturbances |
YOLO-Air: Object Detection | High-resolution image processing | Object detection, computer vision | Not specified | Improved detection for resource-constrained scenarios |
Digital copies of the posters are available in our online summary.
User Feedback: Results of In-Depth Research Use Case Survey
As part of the call for participation (see section 4.1.1), we collected qualitative and quantitative information on 34 use case submissions, including those selected for a lightning talk or a poster presentation. Submissions included responses from researchers at leading R1 institutions such as Ohio State University, the University at Buffalo, Case Western Reserve University and others. Participants were asked to describe their research area and research questions as well as their methodological focus. Additionally, they were asked to select one of the six physical components of the AIrTonomy infrastructure they would like to use, describe how they would use those components to design and validate AI/ML models and rank the importance of remote access to PURT. Further, we asked them to specify additional requirements for each physical component. Completing the use case survey took about two hours.
Research Areas of Submitted Use Cases
Figure 13 describes the relevance of the four problem areas of aerial autonomy research we asked participants to choose from: 1) complex real-world AAV missions (e.g., BVLOS operations outdoors in real urban airspaces), 2) missions in signal-degraded or -denied environments (e.g., GPS denial), 3) human-autonomy teaming and 4) remote sensing at scale, enabled by AAVs. Overall, remote sensing attracted the greatest attention among the submitted research ideas.
The survey of problem areas shows that 38.7% of respondents consider real-time sensing capabilities crucial for their work, indicating this should be a primary focus of the facility’s infrastructure. The equal interest in research on operations under constraints (GPS denial) and complex missions (19.4% each) suggests users need a facility that can seamlessly transition between controlled experiments indoors and realistic experiments outdoors. At the same time, it underscores the importance of access to ground truth data for validating AI/ML models operating in signal-degraded settings.
In examining technical focus areas, navigation and perception emerged as the most important topic, with 28.1% of respondents indicating it as central to their research. The strong interest in UAV operations and human-autonomy teaming (23.4%) highlights the need for sophisticated human-in-the-loop testing facilities and access to human-related data. Research on control (17.2%) ranked third in priority, suggesting the facility should allow users to deploy both higher-level navigation and lower-level control algorithms remotely for testing and validation, and should ensure that these algorithms interoperate smoothly with the autopilots of our vehicles. Notably, only 3.1% of respondents indicated an immediate focus on urban airspace management research, though this may increase as researchers become able to deploy vehicles outdoors and BVLOS.
Figure 15 shows the relevance of different AI and ML methods in the respondents’ research. Computer vision was a common theme across many submissions, followed by reinforcement learning, digital twin design and high-fidelity simulations. Model-based control and formal methods were also featured prominently in the proposals, particularly in contexts involving drone navigation and control systems, assured autonomy and cybersecurity. Researchers demonstrated significant interest in explainable AI approaches, with many submissions emphasizing the importance of interpretability in safety-critical UAV operations. Physics-informed neural networks (PINNs) were specifically mentioned in the context of system identification and digital twin design, and federated learning emerged as central to the discussion of distributed sensing and control challenges. The integration of LLMs was proposed for higher-level decision-making and human-computer/robot interaction scenarios, while generative AI techniques were suggested for data augmentation and simulation enhancement. Both probabilistic and deterministic approaches to safety verification (e.g., control barrier functions) were also frequently cited as crucial methodologies for ensuring reliable UAV operations in complex environments.
A qualitative analysis of the free-text survey responses, poster submissions and abstracts using natural language processing revealed additional insights into the topical interest areas and supported the quantitative survey responses. Human-autonomy teaming was central to the discourse: researchers highlighted the need to leverage AI/ML for optimizing the interaction between human operators and autonomous systems, including cognitive load monitoring and attention management. Safety verification and uncertainty quantification were also important. Participants further highlighted remote sensing for environmental and social impact applications, such as CO2 sequestration and emergency medical response. Edge computing and sensor fusion were likewise central topics throughout the contributions focused on deep learning methods: the ability to easily fuse multiple data sources and to process data at the edge for real-time decision-making is essential for researchers. Finally, progress on digital twins was important: the implementation of high-fidelity simulations and digital models of a physical system, updated in real time for testing, validation and optimization of autonomous systems, was highlighted multiple times. For example, in the context of digital twins of humans, scholars emphasized the need for temporal accuracy in representing human decisions; current simulation models are not accurate enough to allow simulated experiments to validate models for task reallocation.
Importance and requirements related to physical components of AIrTonomy
The participants were also asked to rate the importance of each of the four facilities and the supporting communication and networking infrastructure (Purdue XTM) on a scale from 1 to 5, with 1 being not important at all and 5 being very important. PURT, AAL and PUC all received average ratings of important (4.0, 4.0 and 3.9, respectively), and the Earhart Airfield received a score of 3.7, supporting the research interest in studying operations with larger fixed-wing vehicles outdoors. The lower score of 2.7 for the Purdue XTM reflects the fact that, currently, researchers rarely study BVLOS missions in which vehicles operate in real urban airspaces.
For each facility, participants also articulated specific requirements for their research.
PURT: Researchers emphasized the need for highly accurate motion capture and real-time vehicle data streaming and asked for the ability to simulate environmental conditions indoors. The responses also indicated the need to study different communication and networking conditions. A key theme that emerged was the need for realistic emulation, especially of GNSS signal degradation, wind effects and urban canyon scenarios. Respondents specifically highlighted requirements for testing drone behavior and control systems in challenging communication environments, including outlier conditions.
AAL: Researchers emphasized human-computer interaction and cognitive research capabilities, including advanced sensor systems for human cognition studies, such as wearable, non-intrusive eye-tracking and brain-computer interfaces that allow study under realistic conditions. Highly accurate measurement and real-time streaming were also essential for research on neuro-informed control systems. VR/AR technology requirements featured prominently in the responses, with a specific focus on team-based research scenarios. Motion capture systems were frequently mentioned as critical tools for studying team behaviors and inferring stress or emotions from body movements and gestures, and as important for gesture-based human-robot interaction.
PUC: Researchers emphasized the need for urban infrastructure testing and signal degradation studies, expressing particular interest in utilizing the facility’s motion capture capabilities within urban canyons and requesting reconfigurable infrastructure. The need for portable cameras was also mentioned. A significant focus was placed on studying various forms of signal degradation and their impacts on UAV performance. The responses indicated a strong desire to test urban-specific scenarios and validate UAV behavior in complex urban environments.
Earhart Airfield and 15-mile corridor: The survey revealed diverse requirements for the airfield, with a strong emphasis on data collection and validation. Researchers showed particular interest in ground truth data collection and in different sensing and communication standards. The responses indicated a clear need for comprehensive testing environments that can support various types of operations and data validation scenarios.
We also asked participants about their expectations with respect to the AIrTonomy fleet, which will include multi-rotor and fixed-wing vehicles of varying sizes with variable payloads. Overall, we learned that researchers are seeking highly configurable sensor stacks (79% rated this important) and consider work with small multi-rotor aircraft highly important (77% rated it highly important). Newer vehicles, such as VTOL (vertical take-off and landing) aircraft and larger fixed-wing UAVs, are not (yet) central to the researchers’ current projects or their planned extensions. This is not unexpected, as these vehicles are newer and not yet a focus for AI/ML researchers.
Importance and requirements related to cyber components of AIrTonomy
Finally, we also focused the workshop discussions on the cyber components of AIrTonomy.
The assessment of desired cyber infrastructure components in Figure 18 reveals that users see remote access to the physical infrastructure as a very important AIrTonomy feature for facilitating experimental research. As expected, access to a digital twin was mentioned as essential (80% rated its importance as high). Scholars asked for high-fidelity digital twins that represent the physical system in a temporally accurate way. Researchers expressed particular interest in using digital twins both for real-time simulation of the behavior of a real physical system and for counterfactual reasoning. The responses indicated a desire for comprehensive modeling capabilities that could represent realistic operational scenarios, requiring twins not just of vehicles but also of humans, of the facilities and potentially of the objects being studied (e.g., trees). Integration with real-world data and the ability to validate AI/ML models featured prominently in the requirements.
Potential users indicated the need for access to add AI/ML models to onboard software and ground control software systems (e.g., the ground control operations software used by remote pilots). Thus, a modular software stack and APIs (application programming interfaces) for accessing such software remotely are essential. Further, about 80% of respondents indicated that access to AI/ML workspaces is important. Data management will also be an important capability of AIrTonomy (60% of users rated it as important).
During discussions, it was emphasized that coordinated data sharing between users is most critical. Participants highlighted that the framework should comply with existing open-source standards and noted that “the larger the data silo the better” (on the order of petabytes). Time synchronization of the different data feeds (e.g., human data, external data feeds from communication and networking infrastructures, vehicle data) was also determined to be a critical requirement. Researchers also discussed the following questions: What are the minimum requirements in terms of data capture per experiment to ensure that digital twins and simulations are sufficiently accurate? Are surrogate models enough under certain conditions? Many expressed the need for high-performance computing resources and real-time data processing capabilities. The consistent demand for baseline functionality across all digital components suggests users expect reliable access to a comprehensive suite of computational tools, rather than exceptional capability in just a few areas.
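As an illustration of the time-synchronization requirement, the following is a minimal sketch of aligning heterogeneous experiment feeds with pandas.merge_asof; the stream names, sample times and tolerance are illustrative assumptions rather than AIrTonomy specifications.

```python
# A minimal sketch of time-aligning two experiment feeds with pandas.
# Stream names, timestamps and the 20 ms tolerance are illustrative assumptions.
import pandas as pd

# Vehicle telemetry and human eye-tracking, each independently timestamped.
vehicle = pd.DataFrame({
    "t": pd.to_datetime([0, 10, 20, 30], unit="ms"),
    "alt_m": [10.0, 10.2, 10.1, 10.3],
})
human = pd.DataFrame({
    "t": pd.to_datetime([2, 18, 35], unit="ms"),
    "pupil_mm": [3.1, 3.4, 3.2],
})

# For each vehicle sample, attach the most recent human measurement
# observed within the last 20 ms; older samples are left as NaN.
fused = pd.merge_asof(vehicle, human, on="t",
                      direction="backward", tolerance=pd.Timedelta("20ms"))
print(fused)
```

In a real deployment the same pattern would apply to higher-rate feeds (motion capture, RF measurements, autopilot logs), with the tolerance chosen from the accuracy requirements of the digital twin.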
Post-workshop Research Use Case Discussions
The AIrTonomy team continued interacting with potential users through webinars with researchers from other universities, one-on-one discussions and additional submissions via the research use case survey. These sessions allowed the research team to uncover additional use cases for AIrTonomy. After synthesizing the submitted research questions, we classified them into 1) perception and navigation, 2) environmental sensing, 3) human-autonomy teaming, 4) cybersecurity and 5) digital twins. For selected use cases, we created a use case diagram that shows the flexibility with which participants could use the infrastructure, moving from indoor testing to outdoor experiments.
Diagram for research use case on perception and navigation
Figure 20 shows an illustrative example of a research use case focused on perception and navigation, where scholars design and validate new data-driven ML models that learn intelligently and in real time from video, images and other sensor input gathered during flight, not only for multi-target tracking but also for in-depth scene understanding and intelligent navigation [13–15], even when radar signals degrade. However, existing models, even if pre-trained on a large amount of existing data (e.g., satellite data), may fail when exposed to new data captured via multiple cameras on board AAVs in motion or via new passive radars. Researchers on embodied AI from R1 institutions expressed a need for access to AIrTonomy’s infrastructure to design and verify multimodal mapping (e.g., video, audio, IMU, GPS) and navigation for aerial autonomy under bounded computational resources (power). For example, researchers focused on this emerging topic of embodied AI, multimodal mapping and navigation could first train an ML model using data collected in our daily/weekly experiments and stored in our data and research services (2.5). During virtual flights, they validate the algorithm and emulate constraints (e.g., signal degradation) using our digital twin (2.1). Afterward, they move from simulations to real-world experiments, following the notion of sim-to-real. Because the digital twin verified that model performance is high, they can move directly to outdoor experiments in the PUC (1.1). They remotely add their models to the onboard software (2.3), schedule a flight team and emulate RF (radio frequency) signal degradation via the Purdue XTM (1.4) and the digital twin (2.1). The results of the experiments are then accessible in real time to all AIrTonomy users. With that, AIrTonomy provides flexibility while increasing experimental data volume and creating value for the community as a whole. As we progress, we will refine this workflow further with the goal of increasing the volume, diversity and fidelity of our data, ML models and digital twins.
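To make this workflow concrete, the following is a hypothetical sketch that encodes the steps above as an ordered experiment plan; the `ExperimentPlan` structure and its methods are illustrative assumptions, not part of any published AIrTonomy API, while the component IDs (2.5, 2.1, 2.3, 1.4, 1.1) mirror the text.

```python
# A hypothetical encoding of the sim-to-real workflow in Figure 20.
# The plan structure is an illustrative assumption; only the component
# IDs and step descriptions come from the report.
from dataclasses import dataclass, field

@dataclass
class Step:
    component: str  # AIrTonomy component ID referenced in the text
    action: str

@dataclass
class ExperimentPlan:
    steps: list = field(default_factory=list)

    def add(self, component: str, action: str) -> "ExperimentPlan":
        self.steps.append(Step(component, action))
        return self

plan = (
    ExperimentPlan()
    .add("2.5", "train multimodal mapping model on archived daily/weekly flight data")
    .add("2.1", "validate in the digital twin under emulated signal degradation")
    .add("2.3", "remotely add the model to the onboard software")
    .add("1.4", "schedule a flight team; emulate RF degradation via Purdue XTM")
    .add("1.1", "run the outdoor experiment in PUC; stream results to all users")
)
for i, step in enumerate(plan.steps, 1):
    print(f"{i}. [{step.component}] {step.action}")
```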
Example use cases for digital twins developed post-workshop
Given the importance of digital twins for the field, we present three examples of additional use cases identified; for each, a similar use case diagram could be developed.
Post workshop use case 1 for digital twins
To develop robust models leveraging PINNs, the infrastructure must integrate physical laws into data-driven models, ensuring solutions remain consistent with real-world physics. For example, modeling the aerodynamics and propulsion systems of fixed-wing UAVs requires high-fidelity data collection from motion capture systems, onboard sensors and environmental sensing devices. The infrastructure should facilitate high-performance computing for solving the partial differential equations embedded in PINNs and provide synthetic data generation capabilities through digital twins [16, 17]. Real-time utilization of such models involves coupling them with reduced-order models (ROMs) for safety-critical applications like model predictive control (MPC) [18, 19]. AIrTonomy’s infrastructure, with its continuous data streaming and ground-truth acquisition, enables researchers to validate and deploy these models effectively, creating a robust feedback loop between simulation and real-world experiments.
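As a concrete illustration, below is a minimal PINN sketch in PyTorch that fits a network to sparse measurements while penalizing the residual of a simple damped-oscillator ODE; the ODE, architecture and hyperparameters are illustrative stand-ins for the UAV flight-dynamics models discussed above, not AIrTonomy models.

```python
# A minimal PINN sketch (PyTorch), assuming a 1-D damped oscillator
# x'' + c x' + k x = 0 as a stand-in for UAV flight dynamics.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, t):
        return self.net(t)

def physics_residual(model, t, c=0.1, k=1.0):
    # Residual of x'' + c x' + k x = 0, computed with autograd.
    t = t.requires_grad_(True)
    x = model(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    ddx = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    return ddx + c * dx + k * x

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t_data = torch.tensor([[0.0]])          # e.g., a motion-capture measurement time
x_data = torch.tensor([[1.0]])          # measured state at that time
t_coll = torch.rand(128, 1) * 10.0      # collocation points over the time domain

for step in range(2000):
    opt.zero_grad()
    loss_data = (model(t_data) - x_data).pow(2).mean()         # fit measured states
    loss_phys = physics_residual(model, t_coll).pow(2).mean()  # enforce the ODE
    (loss_data + loss_phys).backward()
    opt.step()
```

A production version would replace the toy ODE with the vehicle's aerodynamic and propulsion equations, feed `t_data`/`x_data` from the motion capture and onboard sensor streams, and distill the trained network into a ROM for use inside an MPC loop.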
Post workshop use case 2 for digital twins
Developing robust reinforcement learning (RL) models for adaptive path planning and decision-making requires an infrastructure that supports dynamic simulation environments and real-world experimentation, with human operators actively providing input. AIrTonomy’s digital twin engine enables researchers to create realistic, high-fidelity scenarios where UAVs interact with obstacles like buildings, weather changes, and other aerial vehicles, allowing humans to fine-tune policy objectives during training. These simulations facilitate the integration of human feedback into the RL pipeline, ensuring that the learned policies align with operational goals and safety requirements. The infrastructure must also support real-time risk prediction, combining live data streams from UAV sensors with human input to adjust navigation policies dynamically. By transitioning from simulated environments to physical test beds, researchers can evaluate how human-in-the-loop interactions enhance the system’s adaptability in real-world scenarios. This iterative collaboration between human operators and RL agents ensures that decision-making remains safe, efficient and context-aware during deployment.
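As a minimal illustration of the human-in-the-loop training described above, the sketch below runs tabular Q-learning on a toy grid with a placeholder `human_feedback` hook; the environment, hyperparameters and hook are illustrative assumptions, and in practice the feedback would stream from operators in the AAL while the environment would be the digital twin.

```python
# A minimal human-in-the-loop RL sketch: tabular Q-learning on a toy grid.
# human_feedback() is a placeholder for operator input; all values here are
# illustrative assumptions, not AIrTonomy parameters.
import numpy as np

N = 5                                          # 5x5 grid; goal at (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = np.zeros((N, N, len(ACTIONS)))

def human_feedback(state, action):
    # Placeholder: an operator could penalize actions near no-fly zones here.
    return 0.0

alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)
for episode in range(500):
    s = (0, 0)
    for _ in range(100):
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        ns = (min(max(s[0] + ACTIONS[a][0], 0), N - 1),
              min(max(s[1] + ACTIONS[a][1], 0), N - 1))
        r = 1.0 if ns == (N - 1, N - 1) else -0.01
        r += human_feedback(s, a)              # shaped reward from the operator
        Q[s][a] += alpha * (r + gamma * np.max(Q[ns]) - Q[s][a])
        s = ns
        if s == (N - 1, N - 1):
            break
```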
Post workshop use case 3 for digital twins
Online learning for digital twins requires an infrastructure capable of retraining models in real time to adapt to changing environmental conditions or operational dynamics. AIrTonomy’s infrastructure supports this by offering flexible computation options: edge-based retraining, which minimizes latency but requires high-performance onboard processors, and server-based retraining, which leverages centralized computational resources but depends on sufficient communication bandwidth. The infrastructure must ensure that digital twins maintain interoperability with real UAVs, enabling seamless transitions between the virtual and physical domains. For example, real-time updates to a digital twin may involve retraining perception or navigation models using live sensor data from UAVs to account for evolving conditions, such as weather or obstacle configurations. AIrTonomy’s integrated systems — combining edge processing capabilities, robust communication channels and high-fidelity backend twins — allow researchers to balance computational efficiency, model fidelity and real-time responsiveness. This capability ensures that digital twins remain accurate and actionable during missions, empowering adaptive operations and enhancing mission safety and efficiency.
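A minimal sketch of the edge-versus-server decision described above follows; the thresholds, deadline and compute figures are illustrative assumptions rather than AIrTonomy parameters.

```python
# A minimal sketch of choosing where to retrain a digital-twin model.
# All thresholds and figures are illustrative assumptions.
def choose_retraining_site(bandwidth_mbps: float, edge_tops: float,
                           batch_mb: float, deadline_s: float) -> str:
    """Return 'edge' or 'server' for the next online-learning update."""
    upload_s = batch_mb * 8 / max(bandwidth_mbps, 1e-6)  # time to ship the batch
    edge_capable = edge_tops >= 4.0                      # assumed onboard minimum
    if upload_s > deadline_s and edge_capable:
        return "edge"    # link too slow for the deadline: retrain onboard
    return "server"      # otherwise prefer centralized, higher-fidelity retraining

# Example: a 50 MB sensor batch, 20 Mbit/s link, 8 TOPS onboard, 10 s deadline.
print(choose_retraining_site(bandwidth_mbps=20, edge_tops=8,
                             batch_mb=50, deadline_s=10))   # -> 'edge'
```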
Easy-to-use workflow for remote users
After the workshop, we used the input from participants and webinars to design a workflow model that responds to the ease-of-use requirements voiced during the workshop.
The workflow is shown in Figure 21. We envision that users can easily transition between virtual experiments for ML training or fine-tuning and simulations using digital twins, indoor experiments and outdoor experiments in PUP facilities (i.e., PURT, PUC and AAL), all supported by real-time data management. Researchers can start at any stage of their experimental process from sim-to-real, either sequentially — starting with ML modeling and virtual experimentation, progressing to indoor and then outdoor experiments — or directly with physical experiments to collect new data or evaluate ML models trained using their own resources. The results of these experiments are made accessible in real-time to all AIrTonomy users, enhancing experimental data volume and value for the community.
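A minimal sketch of this stage-flexible entry follows, assuming hypothetical stage names and a simple callback interface; none of these identifiers come from the AIrTonomy software.

```python
# A minimal sketch of the stage-flexible workflow in Figure 21; stage names
# and the callback interface are illustrative assumptions.
from enum import Enum

class Stage(Enum):
    VIRTUAL = 1   # ML training/fine-tuning and digital-twin simulation
    INDOOR = 2    # controlled experiments in PURT/AAL
    OUTDOOR = 3   # real-world experiments in PUC and at the airfield

def run_campaign(execute, entry: Stage):
    """Run all stages from the chosen entry point onward (sim-to-real)."""
    for stage in Stage:
        if stage.value >= entry.value:
            execute(stage)   # results shared in real time with all users

# Example: a team with a pre-trained model skips straight to indoor tests.
run_campaign(lambda s: print(f"running {s.name} experiments"), Stage.INDOOR)
```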
Operations Model, Business Model, and Sustainability of AIrTonomy
Business model and sustainability
Realizing sustainability is an essential goal of AIrTonomy, given the need to quickly ramp up the number of monthly flights. The operations plan assumes that construction of PUC begins in fall 2025; by the end of 2027, the team plans to perform ~280 short flights (~30 min each) per month and to support 400 virtual experiments (also ~30 min each) in the cloud. The goal is to support a large number of users.
Table 3: Proposed fee system
Tier | Trial | Hourly Rate | Annual Rate |
---|---|---|---|
Cost | No fee (subsidized) | $500/hour | $7,000/year |
Capability | 1 week of experiment time; based on facility availability; can run experiments in PURT, PUC and limited operations at the PUP airfield; subject to limits on data storage and CPU/GPU resources | Cost based on the hourly rate of the experiment facility; can schedule experiments in PURT, PUC, AAL and at the PUP airfield; preset quota on data storage and CPU/GPU resources | Annual subscription; can schedule up to 17 experiments in PURT, PUC and AAL; up to 5 users from the same team (additional users at extra charge); preset quota on data storage and CPU/GPU resources |
During the workshop, we presented this draft fee structure (Table 3) to cover the fixed and operational costs that such an infrastructure requires.
Payments and subscriptions may come from different types of organizations with different research and development (R&D) focus areas: researchers (engineers and scientists) in academia, industry and the public sector. Researchers in academia at R1 institutions are the primary audience for future NSF funding; such users could leverage NSF research grants to pay the hourly or annual subscription fee. Over time, we also see other users playing an important role. R&D experts in industry could use our facilities for internal R&D activities in collaboration with universities, but also for testing prototypes and for translation to product development (e.g., testing before deploying unmanned systems in the real world or strength-testing those systems before seeking regulatory approval). Industry may also seek workforce development partnerships and opportunities to fine-tune new technologies. Federal and state agencies, such as NASA, the FAA, departments of transportation and economic development groups, may partner with us on research grants but also to design and validate new policies and regulatory standards (e.g., studying regulatory regimes, researching safety concerns and investigating accidents involving unmanned systems).
During the workshop we hosted a panel on the challenge of sustainability: who pays for the sustainability of the infrastructure, and how? After a presentation of the operations model and the envisioned fee structure, Brunswicker moderated a discussion among the following members:
- David Corman, Program Director leading Cyber-Physical Systems (CPS), Smart and Connected Communities (S&CC), and CIVIC Innovation Challenge Programs, National Science Foundation
- Damon Lercel, Assistant Professor, School of Aviation and Transportation Technology, Purdue University
- Ismail Guvenc, Professor of Electrical and Computer Engineering, PI of NSF AERPAW Project, North Carolina State University
- Kyle Snyder, Principal, Best Autonomous Insights
- Moderator & speaker: Sabine Brunswicker, Founding Director AIDA3, Purdue University
The discussions emphasized the importance of cross-site collaboration, as highlighted by Guvenc’s presentation on leveraging AIrTonomy’s benefits across different locations. Further, participants highlighted opportunities to reach the envisioned capacity and onboard users through competitions, as well as to test the envisioned payment scheme. In addition, the panel raised the need to clearly specify the GPU/CPU requirements of external users and to consider relationships with other NSF-sponsored sites for access to computational resources (e.g., ACCESS and others). Specifically, other mid-scale R1 infrastructures could provide support toward two objectives: increasing the user base and leveraging their computational resources. The AIrTonomy team has incorporated this suggestion and has started to build strong ties with various other sites (see Figure 2).
Long-term sustainability through industry engagement
The closing industry panel of the workshop focused on the broader impacts of AIrTonomy beyond its benefits for scientists at R1 institutions, featuring a panel of CEOs from the Purdue startups GRYFN, Aerovy, Uniform Sierra Aerospace and Pierce Aerospace. The panel was titled “Advancing aviation innovation: What it takes: Meet the CEOs from four promising Purdue startups advancing several next generation air mobility innovations to market.” The discussions addressed critical questions about infrastructure sustainability, funding models and the pathway to market for next-generation air mobility innovations, highlighting the practical challenges and opportunities in transitioning research innovations to commercial applications. Key topics that emerged during the panel discussion included:
- Standardization: There was strong agreement that current regulatory frameworks are insufficient for managing advanced autonomous systems, particularly in urban and mixed-use airspace. New standards and regulations are needed, and strong interaction with regulatory bodies (e.g., the FAA) is required in the long run. Standards are also needed for data management and data sharing, for software and interfaces, and for emerging hardware. This suggests that AIrTonomy will play an important role in handling data and software interface standards.
- Data-driven safety validation protocols: The panel agreed that “we need more data,” which will have a significant impact on the validation of autonomous systems. It was clearly stated that we do not need another “FAA corridor” but rather a test bed that offers new ways of testing and validation, moving away from single case studies toward rigorous validation.
- Training and certification processes: Panelists agreed that current training, workforce development and certification standards do not adequately address the new capabilities required of the future autonomous-vehicle workforce.
Broader Impacts
The workshop itself made broader impacts in multiple ways:
First, the workshop was designed to foster diverse perspectives, encouraging participation by underrepresented groups, including female students enrolled in Purdue’s engineering programs who supported the workshop organization or participated in the poster session as authors. For example, Li-Yu Lin, a female PhD student in Purdue’s Aeronautics and Astronautics program, took a leading role in the technical development and implementation of the indoor demonstration at PURT focused on autonomous taxiing.
The workshop also provided opportunities for career development and training for individuals in different career phases. Early-career researchers and students had opportunities to learn from experts and to grow through poster sessions, speaking opportunities, hands-on demonstrations and participation in research. Indeed, many posters presented at the event were co-authored and presented by students from various STEM disciplines, including engineering, computer science, meteorological and geological sciences, and the social sciences. Further, undergraduate and graduate students were actively involved in the preparation and implementation of the demonstrations.
Finally, the workshop reached a broader audience beyond the scientists and engineers performing R&D related to aerial autonomy. Reaching outward into the community, the workshop and its pre-workshop launch event (during which the team revealed its new AAL) were covered by media and local TV. Both WLFI, a television station based in northwest Indiana, and the Purdue media team covered and reported on the event. Stories highlighted the potential impact of this infrastructure and included comments from students involved in research within AIrTonomy.
- Purdue press release of Smart Crossways Launch Event
- WLFI Coverage of Smart Crossways Launch Event
In the digital space, LinkedIn community building efforts have thus far received 7,060 total post impressions worldwide, with 685 interactions (including clicks, reactions, comments and reposts) for a strong engagement rate of 6.16%. The page has been visited 479 times by 234 unique visitors and has garnered 126 followers, including Saab CTO Julia Filiberti Allen, Samsung VP of Public Affairs Gene Irisari, Lockheed Martin ML Engineer Alec Williams and former Founding Director of the White House’s National Artificial Intelligence Initiative Office Lynne Parker.
The page’s visitors and followers are based in countries around the world, including the United States, India and the United Kingdom; they comprise a variety of job functions, including research, operations and business development, with a majority being senior-level or higher; and they work in an array of industries, such as higher education and airlines/aviation.
How To Cite This Report
Please cite this report as follows: Brunswicker S, Goppert J, Hwang I, Lercel D, Song C, Rashidian C, Caesar A, et al. (2024) AIrTonomy Workshop Report: A Cyber-Physical Research Infrastructure for Next-Generation Autonomous Aerial Vehicles. AIDA3, Purdue University.
AIrTonomy Workshop Sponsors
References
1. AIDA3 Home. Purdue Computes, https://www.purdue.edu/computes/aida3/
2. AIDA3 (AI for Digital, Autonomous and Augmented Aviation) Purdue – LinkedIn. https://www.linkedin.com/in/aida3/
3. AIDA3 Purdue – LinkedIn. https://www.linkedin.com/company/aida3-purdue/
4. AIDA3 (2024) AIrTonomy Youtube Teaser Video. https://www.youtube.com/watch?si=lBr_7B_tsguPsV53&v=H4tXOhf01_U&feature=youtu.be
5. Brunswicker S (2024) AIrTonomy: A Cyber-Physical Infrastructure for Research on Aerial Autonomy – Workshop Report.
6. Brunswicker S (2024) AIrTonomy Workshop Report. https://www.purdue.edu/computes/aida3/airtonomy-online-report/
7. National Academies (2024) Foundational Research Gaps and Future Directions for Digital Twins. https://doi.org/10.17226/26894
8. Fortune Business Insights (2024) Unmanned Aerial Vehicle (UAV) Market Growth & Share, 2030. https://www.fortunebusinessinsights.com/industry-reports/unmanned-aerial-vehicle-uav-market-101603
9. Harel D, Marron A, Sifakis J (2020) Autonomics: In search of a foundation for next-generation autonomous systems. Proceedings of the National Academy of Sciences, 117(30):17491–17498. https://doi.org/10.1073/pnas.2003162117
10. National Academies (2021) Assessing and Improving AI Trustworthiness: Current Contexts and Concerns: Proceedings of a Workshop in Brief. https://doi.org/10.17226/26208
11. NSTC (2024) 2024 Critical and Emerging Technologies List Update.
12. DARPA (2022) Lifelong Learning of Perception and Action in Autonomous Systems. https://apps.dtic.mil/sti/citations/AD1183861
13. NATO (2020) Human-Autonomy Teaming: Supporting Dynamically Adjustable Collaboration.
14. Kanellakis C, Nikolakopoulos G (2017) Survey on Computer Vision for UAVs: Current Developments and Trends. Journal of Intelligent & Robotic Systems, 87(1):141–168. https://doi.org/10.1007/s10846-017-0483-z
15. Majumder S, Jiang H, Moulon P, Henderson E, Calamia P, Grauman K, Ithapu VK (2023) Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR):10554–10564. https://doi.org/10.1109/CVPR52729.2023.01017
16. He Y, Cisneros I, Keetha N, Patrikar J, Ye Z, Higgins I, Hu Y, Kapoor P, Scherer S (2023) FoundLoc: Vision-based Onboard Aerial Localization in the Wild. http://arxiv.org/abs/2310.16299
17. Raissi M, Perdikaris P, Karniadakis GE (2019) Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707. https://doi.org/10.1016/j.jcp.2018.10.045
18. Stachiw T, Crain A, Ricciardi J (2022) A physics-based neural network for flight dynamics modelling and simulation. Advanced Modeling and Simulation in Engineering Sciences, 9(1):13. https://doi.org/10.1186/s40323-022-00227-7
19. Eren U, Prach A, Koçer BB, Raković SV, Kayacan E, Açıkmeşe B (2017) Model Predictive Control in Aerospace Systems: Current State and Opportunities. Journal of Guidance, Control, and Dynamics, 40(7):1541–1566. https://doi.org/10.2514/1.G002507
20. Mi Y, Shao K, Liu Y, Wang X, Xu F (2024) Integration of Motion Planning and Control for High-Performance Automated Vehicles Using Tube-Based Nonlinear MPC. IEEE Transactions on Intelligent Vehicles, 9(2):3859–3875. https://doi.org/10.1109/TIV.2023.3342306
21. Westheider J, Rückin J, Popović M (2023) Multi-UAV Adaptive Path Planning Using Deep Reinforcement Learning. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS):649–656. https://doi.org/10.1109/IROS55552.2023.10342516
22. Li Y, Zhang S, Ye F, Jiang T, Li Y (2020) A UAV Path Planning Method Based on Deep Reinforcement Learning. 2020 IEEE USNC-CNC-URSI North American Radio Science Meeting (Joint with AP-S Symposium):93–94. https://doi.org/10.23919/USNC/URSI49741.2020.9321625
23. Zhang S, Li Y, Ye F, Geng X, Zhou Z, Shi T (2023) A Hybrid Human-in-the-Loop Deep Reinforcement Learning Method for UAV Motion Planning for Long Trajectories with Unpredictable Obstacles. Drones, 7(5):311. https://doi.org/10.3390/drones7050311
24. Li B, Liu W, Xie W, Zhang N, Zhang Y (2023) Adaptive Digital Twin for UAV-Assisted Integrated Sensing, Communication, and Computation Networks. IEEE Transactions on Green Communications and Networking, 7(4):1996–2009. https://doi.org/10.1109/TGCN.2023.3298039
25. ACCESS (2024) ACCESS. https://access-ci.org/