13:30
Acoustic Detection and Identification of Drones
13:30
20 mins
|
Comparative Evaluation of Models and Features for Acoustic Drone Detection in Real-World Conditions
Martin Blass, Hannes Bradl, Franz Graf
Abstract: Acoustic sensors offer a promising approach for detecting unmanned aerial vehicles (UAVs), complementing visual or radio-frequency-based methods and remaining effective in scenarios where other sensing technologies are limited. However, reliably detecting drone noise in real-world environments remains challenging due to the variety of background sounds, recording conditions, and operational constraints. This contribution investigates single-channel acoustic drone detection formulated as a binary classification task (UAV vs. non-UAV), using an extensive set of manually annotated field recordings. The study covers different classes of detection methods, ranging from classical machine learning algorithms to deep learning architectures that are commonly used in audio analysis and event detection. The models are driven by multiple acoustic representations, including manually engineered feature sets and time-frequency-based inputs, and are evaluated using a set of conventional classification metrics. The experimental design enables us to assess the influence of audio features, model types, and hyperparameters on detection behavior under realistic conditions. We also examine fine-tuning methods for neural network models pre-trained on large-scale audio datasets, adapting them to the specific characteristics of acoustic drone signatures. Beyond detection performance, we pay particular attention to practical aspects relevant to real-world deployment, such as model size, number of parameters, and computational requirements. By jointly considering detection capability and efficiency-related characteristics, this work provides insights into design trade-offs for acoustic drone detection systems and identifies effective and feasible ways to deploy real-time systems in realistic environments.
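The binary UAV-vs-non-UAV formulation described in this abstract can be made concrete with a toy sketch. Everything below is illustrative and assumed, not the authors' pipeline or dataset: synthetic clips stand in for field recordings, and a single hand-crafted tonality feature with a midpoint threshold stands in for the engineered feature sets and trained models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 16000  # sample rate in Hz (illustrative)

def make_clip(is_drone: bool, dur=0.5) -> np.ndarray:
    """Toy 0.5 s clip: drone clips get strong harmonics of a low blade-pass tone."""
    t = np.arange(int(FS * dur)) / FS
    x = rng.normal(0.0, 1.0, t.size)        # broadband background noise
    if is_drone:
        f0 = rng.uniform(150, 400)          # fundamental (blade-pass frequency)
        for k in range(1, 6):               # add a few harmonics
            x += 2.0 * np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))
    return x

def tonality(x: np.ndarray) -> float:
    """Peak-to-mean ratio of the magnitude spectrum: high for tonal, drone-like sound."""
    mag = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return float(mag.max() / mag.mean())

# Labels and features for a small synthetic set (no real train/test split; illustrative)
labels = np.array([1] * 40 + [0] * 40)
feats = np.array([tonality(make_clip(bool(y))) for y in labels])
thr = 0.5 * (feats[labels == 1].mean() + feats[labels == 0].mean())  # midpoint threshold

pred = (feats > thr).astype(int)
tp = int(np.sum((pred == 1) & (labels == 1)))
fp = int(np.sum((pred == 1) & (labels == 0)))
fn = int(np.sum((pred == 0) & (labels == 1)))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

In a real system the toy feature would be replaced by log-mel or engineered feature sets and the threshold by trained classifiers, but the evaluation loop (features, decision rule, then precision/recall-style metrics) has the same shape.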
|
13:50
20 mins
|
Performance comparison of MEMS and analogue microphones for acoustic drone localization
Nathan Itare, Jean-Hugh Thomas, Kosai Raoof
Abstract: The use of microphone arrays for source localization is a valuable tool to deal with threats caused by malicious or invasive drones. The localization performance depends not only on the signal processing techniques but also on the design of the microphone arrays. Several factors influence the design: the number of microphones, their arrangement, and their type. This study focuses on the influence of the type of microphone used in an array. Two types of microphones are investigated, yielding two different arrays. The first array is composed of MEMS microphones, which are compact and low-cost but have limitations in terms of sensitivity; the second array uses analogue microphones, which are more precise but more expensive. One constraint of the study is the number of microphones used: only 10 microphones are employed in each array to perform the localization. Two elements are used to compare the arrays: the spectral content of the measured signals and the accuracy of the direction-of-arrival estimates. The article describes the signal processing needed to enhance the localization performance with MEMS microphones, as well as the localization method used with both types of arrays. The localization method is based on delay-and-sum beamforming that takes the drone's acoustic signature into account. The comparison is performed with outdoor experimental measurements using a DJI Phantom IV.
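The delay-and-sum beamforming mentioned in this abstract can be sketched for a uniform linear array. This is a generic textbook implementation, not the authors' method: the array geometry, microphone spacing, source frequency, and noise level below are assumptions chosen for illustration (only the ten-microphone count comes from the abstract).

```python
import numpy as np

C = 343.0     # speed of sound (m/s)
FS = 48000    # sample rate (Hz), assumed
N_MICS = 10   # ten microphones per array, as in the abstract
D = 0.05      # inter-mic spacing of an assumed uniform linear array (m)

rng = np.random.default_rng(1)
mic_x = np.arange(N_MICS) * D  # mic positions along the array axis

def simulate(theta_deg: float, n=4096) -> np.ndarray:
    """Far-field tonal source at angle theta (0 deg = broadside) hitting the array."""
    t = np.arange(n) / FS
    delays = mic_x * np.sin(np.deg2rad(theta_deg)) / C   # per-mic arrival delays
    sigs = [np.sin(2 * np.pi * 700.0 * (t - d)) for d in delays]  # assumed rotor tone
    return np.stack(sigs) + 0.1 * rng.normal(size=(N_MICS, n))

def das_doa(sigs: np.ndarray) -> float:
    """Scan candidate angles; delay (steer) each channel, sum, pick max-power angle."""
    n = sigs.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spectra = np.fft.rfft(sigs, axis=1)
    angles = np.arange(-90.0, 90.5, 0.5)
    powers = []
    for a in angles:
        delays = mic_x * np.sin(np.deg2rad(a)) / C
        # apply steering delays as phase shifts in the frequency domain
        steered = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
        powers.append(np.sum(np.abs(steered.sum(axis=0)) ** 2))
    return float(angles[int(np.argmax(powers))])

est = das_doa(simulate(25.0))
print(f"estimated DOA: {est:.1f} deg")
```

The spacing is kept below half a wavelength of the assumed tone to avoid spatial aliasing; exploiting the drone's harmonic acoustic signature, as the authors do, would sum steered power over the rotor's harmonic frequencies rather than the whole spectrum.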
|
14:10
20 mins
|
Improving acoustic drone detection generalization through pretraining and data augmentation
Paul Reuter, Mattes Ohlenbusch, Christian Rollwage
Abstract: Detecting unauthorized UAV flights is critical for surveillance, security and airspace management. Acoustic drone detection—relying on the distinctive propeller and motor sounds of UAVs—provides a low-cost, passive solution requiring no line of sight. A central challenge is generalization: reliably distinguishing drone signatures from ambient noise across previously unseen recording setups, environments and UAV types (out-of-domain). Inspired by advances in large-scale audio pretraining, we develop a compact DNN-based detector and enhance its generalization through two complementary strategies: First, we pretrain the model for broad sound-event classification to learn high-level acoustic representations and then fine-tune it on a diverse combination of in-house and public drone recordings. Second, we apply on-the-fly augmentations—drone pitch shifting, background noise mixing, microphone transfer-function simulation and spectrogram augmentation—to expose the model to varied acoustic conditions and reduce overfitting. An ablation study is conducted to quantify the impact of each augmentation. For evaluation, we set target false-positive rates (FPR) aligned with real-world surveillance needs and report true-positive rates (TPR) and ROC AUC on both in-domain data (public IDMT drone dataset) and out-of-domain data (public AuDroK collection). We further validate real-world usability by measuring false positives on public non-drone corpora (IDMT-TRAFFIC and ESC-50), demonstrating equally low FPR on unfamiliar backgrounds. Finally, we analyze detection performance as a function of source–microphone distance on the IDMT dataset.
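The evaluation protocol in this abstract, fixing a target false-positive rate and reporting the true-positive rate at that operating point, can be sketched as follows. The score distributions are synthetic and the threshold-selection rule is one common convention, not necessarily the one used by the authors.

```python
import numpy as np

def tpr_at_fpr(scores, labels, target_fpr):
    """Choose the decision threshold whose false-positive rate stays <= target,
    then report the true-positive rate at that operating point."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    neg = np.sort(scores[labels == 0])[::-1]   # negative scores, high to low
    k = int(np.floor(target_fpr * neg.size))   # number of allowed false positives
    thr = neg[k] if k < neg.size else -np.inf
    # strictly-greater comparison keeps the realized FPR at or below the budget
    fpr = float(np.mean(scores[labels == 0] > thr))
    tpr = float(np.mean(scores[labels == 1] > thr))
    return tpr, fpr, thr

# Synthetic detector scores: drones score higher than background on average
rng = np.random.default_rng(2)
pos = rng.normal(2.0, 1.0, 500)    # drone clips
neg = rng.normal(0.0, 1.0, 2000)   # background clips
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(500, int), np.zeros(2000, int)])

tpr, fpr, thr = tpr_at_fpr(scores, labels, target_fpr=0.01)
print(f"TPR={tpr:.3f} at FPR={fpr:.4f} (threshold={thr:.2f})")
```

Sweeping the threshold over all score values instead of a single budget yields the full ROC curve, from which the AUC reported in the abstract is computed.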
|
14:30
20 mins
|
Implementation and Optimization of a Compact Deep Learning Architecture for Portable Acoustic Drone Detection
Hadrien Pujol, Julien Preuilh, Magali Arnaud, Thierry Mazoyer
Abstract: Counter-drone defense is becoming an increasingly critical topic within the defense industry. Acoem, through its subsidiary Metravib Defence—a long-standing industrial leader in acoustic threat detection and localization—has launched an intensive R&D program over the past few years. This program focuses on deep learning-empowered algorithms dedicated to drone detection, classification, and localization. The objective is to embed real-time processing onto miniaturized, low-power, fully passive acoustic sensors for operational use.
The work presented in this paper describes the optimization of a CNN, derived from previous BeamLearning-ID research, to fit within a specialized Neural Processing Unit (NPU). The goal is to detect drones with high false-alarm rejection and rapid reaction times. Beyond the CNN detector, the overall processing chain includes a time stabilizer and a high-resolution localization step, leveraging the system prototype's multichannel miniature acoustic array.
After a brief system overview, this paper focuses on CNN design and optimization with an emphasis on model transparency. We aim to draw parallels between the filter coefficients of trained convolutional layers and the underlying physics of signal processing. This approach helps mitigate the "black-box" effect often criticized in deep learning. Drawing on field tests involving over 20 hours of acoustic flight signals from various drone models and attack scenarios, we present a physically-guided optimization of the CNN. This ensures maximum performance while maintaining the compact network size required for real-time inference on embedded systems.
Finally, the CNN's ROC curves are analyzed to evaluate detection rates and false alarms, with comparisons to existing literature. We also demonstrate how time-stabilizing functions enhance alert robustness. The paper concludes with technical and operational perspectives on the future of AI-empowered acoustic detectors.
This work was performed using HPC resources from GENCI-IDRIS (Grant 2025-AD011013877R2).
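The time-stabilization idea mentioned in this abstract (making alerts robust against isolated frame-level false positives) is commonly realized as a k-of-n vote over recent detector outputs. The sketch below is a generic illustration of that pattern, not Metravib's actual stabilizer; the window length and vote threshold are arbitrary choices.

```python
from collections import deque

class AlertStabilizer:
    """Raise an alert only when at least k of the last n frame-level
    detections are positive, suppressing one-off detector glitches."""

    def __init__(self, n: int = 10, k: int = 7):
        self.win = deque(maxlen=n)  # rolling window of recent frame decisions
        self.k = k

    def update(self, frame_positive: bool) -> bool:
        self.win.append(bool(frame_positive))
        return sum(self.win) >= self.k

# Noisy per-frame detector output: a short burst of positives, then silence
stab = AlertStabilizer(n=5, k=4)
frames = [1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
alerts = [stab.update(f) for f in frames]
print(alerts)
# → [False, False, False, False, True, True, True, False, False, False]
```

The single spurious frame at the start never triggers an alert, and the alert persists a few frames past the burst before dropping, which is the robustness-versus-reaction-time trade-off the paper's ROC analysis quantifies.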
|
14:50
20 mins
|
AeroFeathers: Biomimetic 3D Printed Fiber Surface Treatments for Quiet Drone Propellers
William Johnston, Bhisham Sharma, Janith Godakawela, Kara Hardy
Abstract: The high-frequency tonal and broadband noise generated by small unmanned aerial vehicle (UAV) propellers remains a major obstacle to their widespread use in noise-sensitive environments. Many existing bio-inspired noise-reduction strategies rely on rigid geometric modifications that only partially capture the mechanisms underlying silent biological flight and often introduce aerodynamic penalties. In contrast, this work introduces AeroFeathers, flexible fiber-based surface treatments fabricated directly onto propeller blades that can significantly reduce noise while preserving aerodynamic performance. An advanced additive manufacturing workflow is developed that enables the direct 3D printing of thin, flexible, hair-like fibers onto propeller surfaces using consumer-grade material extrusion printers. Custom toolpath generation allows precise control over fiber density, thickness, and placement, producing multi-scale compliant surface features inspired by the fringe-like and downy structures found in silent owl feathers. Several propeller configurations incorporating isolated and combined bio-inspired features were fabricated and evaluated under controlled laboratory conditions. Aeroacoustic measurements were conducted in an anechoic chamber using calibrated microphones, while thrust and power were simultaneously measured using a precision load-cell test stand. Testing was performed across rotational speeds representative of hover and low-speed cruise operation. The results show that propellers incorporating dense, flexible surface fibers achieve reductions of up to 5 dB in overall sound pressure level compared to an unmodified baseline, corresponding to a substantial decrease in radiated acoustic energy. Directional measurements indicate that noise reduction occurs across the forward hemisphere rather than through spatial redistribution. 
Importantly, these acoustic benefits are achieved with no measurable loss in thrust and no increase in power consumption. These findings demonstrate that compliant, fiber-based surface treatments provide a scalable and manufacturable approach to reducing UAV propeller noise at the source, offering a practical pathway toward quieter drones suitable for operation in acoustically sensitive communities.
|