
Chinese title:

 基于射频与视觉多模态融合的无人机定位方法

Name:

 Xie Wenqing (谢文清)

Student ID:

 SF2216001

Confidentiality level:

 Open

Thesis language:

 Chinese (chi)

Discipline code:

 085404

Discipline name:

 Engineering - Electronic Information - Computer Technology

Student type:

 Master's student

Degree:

 Master of Engineering

Year of enrollment:

 2022

University:

 Nanjing University of Aeronautics and Astronautics

School/Department:

 College of Computer Science and Technology / College of Artificial Intelligence

Major:

 Electronic Information (professional degree)

Research direction:

 Artificial Intelligence

First supervisor:

 Wu Qihui (吴启晖)

First supervisor's affiliation:

 College of Computer Science and Technology / College of Artificial Intelligence

Completion date:

 2024-12-30

Defense date:

 2025-03-16

English title:

 An RF-Visual Fusion Method for Precise UAV Positioning

Chinese keywords:

 无人机定位; 多模态融合; 记忆增强机制; 反无人机技术; 射频视觉信息融合

English keywords:

 UAV positioning; multi-modal fusion; memory enhancement mechanism; anti-UAV technology; RF-visual fusion

Chinese abstract:

低空经济的快速发展和无人机技术的广泛应用正在深刻改变社会经济格局。然而,无人机数量的迅速增长及其复杂应用场景,为监管和安全防控带来了前所未有的压力与挑战。同时,随着无人机在民用与军事领域需求的不断提升,传统的单模态定位技术已难以应对复杂背景下高精度定位的需求,定位精度、鲁棒性及实时性均面临瓶颈。多模态融合技术通过整合射频、视觉等多源数据的优势,被视为突破这一瓶颈的关键。然而,现有多模态定位系统在复杂干扰环境中普遍缺乏高效的融合策略和鲁棒性能,难以充分发挥多源数据的互补潜力,亟需进一步研究与创新。针对上述问题,本研究基于无人机轨迹的时空关联特性,通过频域-空间域信息融合,提出了一种新的基于射频与视觉的多模态无人机定位框架。本文主要贡献如下:

1. 针对现有反无人机系统在复杂城市环境中面临的虚警率高、传感器盲区以及多径效应的问题,本文利用无人机轨迹的空间相关性,设计了一种可解释且迁移性强的无线电测向异常检测方法,以克服射频信号在多径效应和噪声干扰下的精度下降问题,并在此基础上提出了一种基于射频与视觉数据级融合的无人机定位方法。实验结果从定位精度和实时响应速度两个方面验证了本文提出方法的优越性。定位平均精度相比SOTA基线方法提升了6.9%,召回率提升了11.2%。此外,射频与视觉数据融合的准确率达到96.4%,并具备较短的响应时间,显著提升了抗干扰能力和鲁棒性。

2. 针对多模态无人机目标定位算法在复杂背景干扰、遮挡和变化的照明条件下定位精度低、鲁棒性差的问题,本文创新性地将记忆增强机制引入多模态融合框架中,通过利用历史射频数据提取无人机动态时间特征,并结合跨模态注意力机制融合视觉图像的细粒度空间特征,提出了一种射频与视觉特征级融合的记忆增强无人机定位方法。实验结果表明,该方法在遮挡场景、光线过曝及复杂背景条件下的定位性能相较基线方法表现出明显优势。

3. 针对跨模态无人机数据集稀缺,尤其缺乏高质量的射频与视觉标注数据的问题,本文搭建了实际的基于射频与视觉的跨模态数据采集平台,采集了包含视觉图像与无人机射频信号的同步数据集。同时,基于ONNX的推理引擎统一管理,本文实现了射频与视觉多模态融合定位算法的软硬件一体化设计与高效部署。通过整合射频信号处理、视觉图像分析及多模态融合等多个独立模块,克服了不同框架与模型间的兼容性问题,实现了多模态算法的高效推理与协同执行,进一步提升了系统部署的实时性能与整体推理效率,为复杂场景下的无人机精准定位提供了有力支撑。

English abstract:

The rapid development of the low-altitude economy and the widespread application of unmanned aerial vehicle (UAV) technology are profoundly reshaping the socioeconomic landscape. However, the rapid growth in the number of UAVs and the complexity of their application scenarios place unprecedented pressure on regulation, safety, and security. Meanwhile, as demand for UAVs rises in both civil and military fields, traditional single-modal positioning techniques can no longer meet the need for high-precision positioning in complex environments, facing bottlenecks in accuracy, robustness, and real-time performance. Multi-modal fusion is regarded as the key to breaking this bottleneck because it integrates the complementary strengths of multi-source data such as radio frequency (RF) and vision. However, existing multi-modal positioning systems generally lack efficient fusion strategies and robust performance in dynamic, complex environments and fail to fully exploit the complementary potential of multi-source data; further research and innovation are therefore needed. To address these problems, this thesis exploits the spatial and temporal correlation of UAV trajectories and deeply fuses frequency-domain and spatial-domain information to propose a new multi-modal UAV positioning framework based on RF and vision. The main contributions are as follows:

1. To address the high false-alarm rates, sensor blind spots, and multi-path effects faced by existing anti-UAV systems in complex urban environments, this thesis exploits the spatial correlation of UAV trajectories to design an interpretable and highly transferable RF direction-finding anomaly detection method, which mitigates the accuracy degradation of RF signals caused by multi-path effects and noise interference.
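For intuition, the trajectory-smoothness idea behind such an anomaly detector can be reduced to a minimal sketch. This is a hypothetical illustration, not the method from the thesis: the sliding-window size, the local-median rule, and the 15-degree threshold are all assumptions.

```python
# Hypothetical sketch: flag direction-of-arrival (DoA) bearings corrupted by
# multi-path or noise by exploiting the spatial smoothness of a UAV trajectory.
# A bearing that deviates strongly from the local median of its neighbors is
# treated as an anomaly. Window size and threshold are illustrative only.
import numpy as np

def detect_bearing_anomalies(bearings_deg: np.ndarray,
                             window: int = 5,
                             threshold_deg: float = 15.0) -> np.ndarray:
    """Return a boolean mask marking anomalous bearing measurements."""
    n = len(bearings_deg)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbors = np.delete(bearings_deg[lo:hi], i - lo)  # exclude self
        # Deviation from the local median, wrapped to [-180, 180). For
        # simplicity the median is taken on raw angles; a circular median
        # would be needed for tracks crossing the 0/360-degree wrap.
        dev = (bearings_deg[i] - np.median(neighbors) + 180.0) % 360.0 - 180.0
        mask[i] = abs(dev) > threshold_deg
    return mask

bearings = np.array([0, 1, 2, 3, 90, 5, 6, 7, 8, 9], dtype=float)
print(detect_bearing_anomalies(bearings))  # only the 90-degree spike is flagged
```

Flagged bearings could then be discarded or down-weighted before any downstream RF-visual fusion step.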
Building on this, a data-level RF-visual fusion UAV positioning method is proposed. Experimental results validate the superiority of the proposed method in both positioning accuracy and real-time responsiveness, achieving a 6.9% improvement in average positioning accuracy and an 11.2% increase in recall over state-of-the-art (SOTA) baselines. Furthermore, the accuracy of RF-visual data fusion reaches 96.4% with a short response time, demonstrating stronger anti-interference capability and robustness.

2. To tackle the low positioning accuracy and poor robustness of multi-modal UAV positioning algorithms under complex background interference, occlusion, and varying lighting conditions, this thesis introduces a memory-enhancement mechanism into the multi-modal fusion framework. By extracting dynamic temporal features of the UAV from historical RF data and fusing them with fine-grained spatial features of visual images through a cross-modal attention mechanism, a feature-level memory-enhanced RF-visual UAV positioning method is proposed. Experiments show that this method clearly outperforms baseline approaches under occlusion, overexposure, and complex backgrounds.

3. To address the scarcity of cross-modal UAV datasets, particularly the lack of high-quality annotated RF and visual data, this thesis builds a real-world RF-visual cross-modal data acquisition platform and collects a synchronized dataset of visual images and UAV RF signals. In addition, with model management unified under an ONNX-based inference engine, this work realizes the integrated software-hardware design and efficient deployment of the RF-visual multi-modal fusion positioning algorithms.
By integrating independent modules for RF signal processing, visual image analysis, and multi-modal fusion, the system overcomes compatibility issues between different frameworks and models, enabling efficient inference and coordinated execution of the multi-modal algorithms. This significantly improves the real-time performance and overall inference efficiency of the deployed system, providing solid support for accurate UAV positioning in complex scenarios.
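The cross-modal attention fusion described in contribution 2 can be illustrated as scaled dot-product attention in which the RF feature history (the "memory") queries the visual patch features. This is a bare-bones sketch under assumed shapes, not the thesis architecture: learned query/key/value projections, the memory-update rule, and the positioning head are all omitted.

```python
# Illustrative sketch (not the thesis implementation): fuse a temporal memory
# of RF features with spatial visual features via scaled dot-product
# cross-attention. The RF memory acts as the query; visual patch features act
# as keys and values. All shapes are assumptions for illustration.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(rf_memory: np.ndarray,    # (T, d) RF feature history
                          visual_feats: np.ndarray  # (P, d) visual patch features
                          ) -> np.ndarray:
    """Attend from RF temporal features to visual spatial features; (T, d) output."""
    d = rf_memory.shape[-1]
    scores = rf_memory @ visual_feats.T / np.sqrt(d)  # (T, P) similarities
    weights = softmax(scores, axis=-1)                # attention over patches
    return weights @ visual_feats                     # fused (T, d) features

rng = np.random.default_rng(0)
fused = cross_modal_attention(rng.normal(size=(8, 16)), rng.normal(size=(32, 16)))
print(fused.shape)  # (8, 16)
```

In a real model the projections would be learned end to end, and the fused features would feed a detection or positioning head.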
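Building the synchronized dataset in contribution 3 requires aligning two sensor clocks. A minimal, hypothetical pairing rule — match each camera frame to the nearest-in-time RF capture and reject pairs whose offset exceeds a tolerance — might look like the following; the sorted-timestamp layout and the 50 ms tolerance are assumptions, not details from the thesis.

```python
# Hypothetical sketch of RF-visual synchronization: pair each camera frame
# with the nearest-in-time RF capture, dropping pairs whose clock offset
# exceeds a tolerance. Timestamps are in seconds and must be sorted.
import bisect

def pair_rf_visual(frame_ts, rf_ts, max_offset=0.05):
    """Return (frame_index, rf_index) pairs with |t_frame - t_rf| <= max_offset."""
    pairs = []
    for i, t in enumerate(frame_ts):
        j = bisect.bisect_left(rf_ts, t)
        # Candidates: the RF captures just before and just after the frame time
        best = min((k for k in (j - 1, j) if 0 <= k < len(rf_ts)),
                   key=lambda k: abs(rf_ts[k] - t))
        if abs(rf_ts[best] - t) <= max_offset:
            pairs.append((i, best))
    return pairs

# Example: 30 fps frames vs. 100 Hz RF captures
frames = [0.000, 0.033, 0.067]
rf     = [0.000, 0.010, 0.020, 0.030, 0.040, 0.060, 0.070]
print(pair_rf_visual(frames, rf))  # [(0, 0), (1, 3), (2, 6)]
```

The binary search keeps the pairing O(n log m) even for long recordings, which matters when the two streams run at different rates.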

References:

[1]Ajakwe S O, Kim D-S, Lee J-M. Drone Transportation System: Systematic Review of Security Dynamics for Smart Mobility[J]. IEEE Internet of Things Journal, 2023, 10(16): 14462-14482.

[2]Lv Z, Li Y, Feng H, et al. Deep Learning for Security in Digital Twins of Cooperative Intelligent Transportation Systems[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(9): 16666-16675.

[3]Dong J, Ota K, Dong M. UAV-Based Real-Time Survivor Detection System in Post-Disaster Search and Rescue Operations[J]. IEEE Journal on Miniaturization for Air and Space Systems, 2021, 2(4): 209-219.

[4]Güvenç İ, Ozdemir O, Yapici Y, et al. Detection, localization, and tracking of unauthorized UAS and Jammers[C]//2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC). IEEE, 2017: 1-10.

[5]Yang Y, Yang F, Sun L, et al. Echoformer: Transformer Architecture Based on Radar Echo Characteristics for UAV Detection[J]. IEEE Sensors Journal, 2023, 23(8): 8639-8653.

[6]Farlik J, Kratky M, Casar J, et al. Radar Cross Section and Detection of Small Unmanned Aerial Vehicles[C]//2016 17th International Conference on Mechatronics - Mechatronika (ME). IEEE, 2016: 1-7.

[7]Herschfelt A, Birtcher C R, Gutierrez R M, et al. Consumer-Grade Drone Radar Cross-Section and Micro-Doppler Phenomenology[C]//2017 IEEE Radar Conference (RadarConf). IEEE, 2017: 0981-0985.

[8]Ma S, Zhang Y, Zhu D, et al. A Method for Improving Efficiency of Anti-UAV Radar Based on FMCW[C]//2021 IEEE 15th International Conference on Electronic Measurement & Instruments (ICEMI). IEEE, 2021: 109-113.

[9]Wang Y, Phelps T, Kibaroglu K, et al. 28 GHz 5G-Based Phased-Arrays for UAV Detection and Automotive Traffic-Monitoring Radars[C]//IEEE MTT-S International Microwave Symposium Digest. Philadelphia, PA, USA: IEEE, 2018: 895-898.

[10]Pieraccini M, Miccinesi L, Rojhani N. A Doppler Range Compensation for Step-Frequency Continuous-Wave Radar for Detecting Small UAV[J]. Sensors, 2019, 19(6): 1331.

[11]Wang C, Tian J, Cao J, et al. Deep Learning-Based UAV Detection in Pulse-Doppler Radar[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1-12.

[12]Huang D, Zhang Z, Fang X, et al. STIF: A Spatial–Temporal Integrated Framework for End-to-End Micro-UAV Trajectory Tracking and Prediction With 4-D MIMO Radar[J]. IEEE Internet of Things Journal, 2023, 10(21): 18821-18836.

[13]Aldowesh A, Alnuaim T, Alzogaiby A. Slow-Moving Micro-UAV Detection with a Small Scale Digital Array Radar[C]//2019 IEEE Radar Conference (RadarConf). IEEE, 2019: 1-5.

[14]许稼, 彭应宁, 夏香根, 等. 空时频检测前聚焦雷达信号处理方法[J]. 雷达学报, 2014, 3(2): 129-141. DOI: 10.3724/SP.J.1300.2014.14023.

[15]Ogawa K, Tsagaanbayar D, Nakamura R. ISAR Imaging for Drone Detection Based on Backprojection Algorithm Using Millimeter-Wave Fast Chirp Modulation MIMO Radar[J]. IEICE Communications Express, 2024, 13(7): 276-279.

[16]Gong J, Yan J, Li D, et al. Interference of Radar Detection of Drones by Birds[J]. Progress In Electromagnetics Research M, 2019, 81: 1-11.

[17]Molchanov P, Harmanny R I A, de Wit J J M, et al. Classification of Small UAVs and Birds by Micro-Doppler Signatures[J]. International Journal of Microwave and Wireless Technologies, 2014, 6(3-4): 435-444.

[18]Liu J, Xu Q Y, Chen W S. Classification of Bird and Drone Targets Based on Motion Characteristics and Random Forest Model Using Surveillance Radar Data[J]. IEEE Access, 2021, 9: 160135-160144.

[19]Molchanov P, Egiazarian K, Astola J, et al. Classification of Small UAVs and Birds by Micro-Doppler Signatures[C]//Proceedings of European Radar Conference. October 2013: 172–175.

[20]Fioranelli F, Ritchie M, Griffiths H, et al. Classification of Loaded/Unloaded Micro-Drones Using Multistatic Radar[J]. Electronics Letters, 2015, 51(22): 1813–1815.

[21]Rahman S, Robertson D A. Time-Frequency Analysis of Millimeter-Wave Radar Micro-Doppler Data from Small UAVs[C]//Proceedings of Sensor Signal Processing for Defence Conference. December 2017.

[22]Yu Q, Rao B, Luo P. Detection Performance Analysis of Small Target Under Clutter Based on LFMCW Radar[C]//2018 IEEE International Conference on Signal and Image Processing (ICSIP). Shenzhen, China: IEEE, 2018: 121-125.

[23]Shao S, Zhu W, Li Y. Radar Detection of Low-Slow-Small UAVs in Complex Environments[C]//2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). IEEE, 2022: 1153-1157.

[24]Shi X, Yang C, Xie W, et al. Anti-Drone System with Multiple Surveillance Technologies: Architecture, Implementation, and Challenges[J]. IEEE Communications Magazine, 2018, 56(4): 68-74.

[25]Liu H, Fan K, He B. Acoustic Source Localization for Anti-UAV Based on Machine Learning in Wireless Sensor Networks[C]//2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2020: 1142-1147.

[26]Huang X, Yan K, Wu H-C, et al. Unmanned Aerial Vehicle Hub Detection Using Software-Defined Radio[C]//2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). IEEE, 2019: 1-6.

[27]Klare J, Biallawons O, Cerutti-Maori D. UAV Detection With MIMO Radar[C]//IEEE International Radar Symposium (IRS). Prague, Czech Republic: IEEE, 2017: 1-8.

[28]Blanchard T, Thomas J H, Raoof K. Acoustic Localization and Tracking of a Multi-Rotor Unmanned Aerial Vehicle Using an Array With Few Microphones[J]. The Journal of the Acoustical Society of America, 2020, 148(3): 1456-1467.

[29]Chang X, Yang C, Wu J, et al. A Surveillance System for Drone Localization and Tracking Using Acoustic Arrays[C]//2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE, 2018: 573-577.

[30]Sedunov A, Sutin A, Sedunov N, et al. Passive Acoustic System for Tracking Low-Flying Aircraft[J]. IET Radar, Sonar & Navigation, 2016, 10(9): 1561-1568.

[31]Busset J, Perrodin F, Wellig P, et al. Detection and Tracking of Drones Using Advanced Acoustic Cameras[C]//Unmanned/Unattended Sensors and Sensor Networks XI; and Advanced Free-Space Optical Communication Techniques and Applications. SPIE, 2015, 9647: 53-60.

[32]Nie W, et al. UAV Detection and Localization Based on Multi-Dimensional Signal Features[J]. IEEE Sensors Journal, 2022, 22(6): 5150-5162.

[33]Nguyen P, Kim T, Miao J, et al. Towards RF-Based Localization of a Drone and Its Controller[C]//Proceedings of the 5th Workshop on Micro Aerial Vehicle Networks, Systems, and Applications. 2019: 21-26.

[34]Sousa M N, Thomä R S. Localization of UAV in Urban Scenario Using Multipath Exploiting TDoA Fingerprints[C]//2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). IEEE, 2018: 1394-1399.

[35]Li Y, Shu F, Shi B, et al. Enhanced RSS-Based UAV Localization Via Trajectory and Multi-Base Stations[J]. IEEE Communications Letters, 2021, 25(6): 1881-1885.

[36]Nie W, Han Z-C, Zhou M, et al. UAV Detection and Identification Based on WiFi Signal and RF Fingerprint[J]. IEEE Sensors Journal, 2021, 21(12): 13540-13550.

[37]Zhao R, Li T, Li Y, et al. Anchor-Free Multi-UAV Detection and Classification Using Spectrogram[J]. IEEE Internet of Things Journal, 2024, 11(3): 5259-5272.

[38]Fokin G. AOA Measurement Processing for Positioning Using Unmanned Aerial Vehicles[C]//2019 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). IEEE, 2019: 1-3.

[39]Liu T, Zhao H, Yang H, et al. Design and Implementation of a Novel Real-Time Unmanned Aerial Vehicle Localization Scheme Based on Received Signal Strength[J]. Transactions on Emerging Telecommunications Technologies, 2021, 32(11): e4350.

[40]Zhao J, Zhang J, Li D, et al. Vision-Based Anti-UAV Detection and Tracking[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(12): 25323-25334.

[41]Isaac-Medina B K S, Poyser M, Organisciak D, et al. Unmanned Aerial Vehicle Visual Detection and Tracking Using Deep Neural Networks: A Performance Benchmark[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 1223-1232.

[42]Luo X, Wan X, Gao Y, et al. JointLoc: A Real-time Visual Localization Framework for Planetary UAVs Based on Joint Relative and Absolute Pose Estimation[J]. arXiv preprint arXiv:2405.07429, 2024.

[43]丁逍, 蒋鸿宇, 郭有为, 等. 面向空对空场景的无人机识别与跟踪系统[J]. 工业控制计算机, 2024, 37(12): 6-8.

[44]夏铭江. 复杂动态场景下红外无人机目标检测方法研究[D]. 西安电子科技大学, 2022. DOI: 10.27389/d.cnki.gxadu.2022.001462.

[45]Liu H, Fan K, Ouyang Q, et al. Real-Time Small Drones Detection Based on Pruned YOLOv4[J]. Sensors, 2021, 21(10): 3374.

[46]Zhu X, Lyu S, Wang X, et al. TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 2778-2788.

[47]Han J, Cao R, Brighente A, et al. Light-YOLOv5: A Lightweight Drone Detector for Resource-Constrained Cameras[J]. IEEE Internet of Things Journal, 2024, 11(6): 11046-11057.

[48]刘观生, 谷峥. 基于改进YOLOv4-Tiny的UAV实时自动目标检测系统[J]. 贵阳学院学报(自然科学版), 2024, 19(04): 83-87+94. DOI: 10.16856/j.cnki.52-1142/n.2024.04.012.

[49]Svanström F, Alonso-Fernandez F, Englund C. Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities[J]. Drones, 2022, 6(11): 317.

[50]Wu G, Zhou F, Meng C, et al. Precise UAV MMW-Vision Positioning: A Modal-Oriented Self-Tuning Fusion Framework[J]. IEEE Journal on Selected Areas in Communications, 2023.

[51]Shi C, Lai G, Yu Y, et al. Real-Time Multi-Modal Active Vision for Object Detection on UAVs Equipped with Limited Field of View LiDAR and Camera[J]. IEEE Robotics and Automation Letters, 2023, 8(10): 6571-6578.

[52]Huang C, Petrunin I, Tsourdos A. Radar-Camera Fusion for Ground-Based Perception of Small UAV in Urban Air Mobility[C]//2023 IEEE 10th International Workshop on Metrology for AeroSpace (MetroAeroSpace). IEEE, 2023: 395-400.

[53]Ding S, Guo X, Peng T, et al. Drone Detection and Tracking System Based on Fused Acoustical and Optical Approaches[J]. Advanced Intelligent Systems, 2023, 5(10): 2300251.

[54]Yang Y, Yuan S, Yang J, et al. AV-FDTI: Audio-Visual Fusion for Drone Threat Identification[J]. Journal of Automation and Intelligence, 2024, 3(3): 144-151.

[55]Vakil A, Liu J, Zulch P, et al. A survey of multimodal sensor fusion for passive RF and EO information integration[J]. IEEE Aerospace and Electronic Systems Magazine, 2021, 36(7): 44-61.

[56]Gao W, Wu Y, Hong C, et al. RCVNet: A bird damage identification network for power towers based on fusion of RF images and visual images[J]. Advanced Engineering Informatics, 2023, 57: 102104.

[57]Yan X, Fu T, Lin H, et al. UAV Detection and Tracking in Urban Environments Using Passive Sensors: A Survey[J]. Applied Sciences, 2023, 13(20): 11320.

[58]Sun X, Ng D W K, Ding Z, et al. Physical Layer Security in UAV Systems: Challenges and Opportunities[J]. IEEE Wireless Communications, 2019, 26(5): 40-47.

[59]Schmidt R. Multiple emitter location and signal parameter estimation[J]. IEEE transactions on antennas and propagation, 1986, 34(3): 276-280.

[60]Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, Real-Time Object Detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 779-788.

[61]Liu W, Anguelov D, Erhan D, et al. Ssd: Single shot multibox detector[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14. Springer International Publishing, 2016: 21-37.

[62]Lin T Y, Goyal P, Girshick R, et al. Focal Loss for Dense Object Detection[J]. arXiv preprint arXiv:1708.02002, 2017.

[63]Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2016, 39(6): 1137-1149.

[64]Tang S, Zhang S, Fang Y. HIC-YOLOv5: Improved YOLOv5 for small object detection[C]//2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024: 6614-6619.

[65]Ji C L, Yu T, Gao P, et al. YOLO-TLA: An efficient and lightweight small object detection model based on YOLOv5[J]. arXiv preprint arXiv:2402.14309, 2024.

[66]Zheng Z, Wang P, Ren D, et al. Enhancing geometric factors in model learning and inference for object detection and instance segmentation[J]. IEEE transactions on cybernetics, 2021, 52(8): 8574-8586.

[67]Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[J]. arXiv preprint arXiv:1804.02767, 2018.

[68]Ge Z, Liu S, Wang F, et al. YOLOX: Exceeding YOLO Series in 2021[J]. arXiv preprint arXiv:2107.08430, 2021.

[69]Zhou Z, Zhu Y. KLDet: Detecting Tiny Objects in Remote Sensing Images via Kullback-Leibler Divergence[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024.

[70]Cai Z, Vasconcelos N. Cascade r-cnn: Delving into high quality object detection[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 6154-6162.

[71]Sun P, Zhang R, Jiang Y, et al. Sparse r-cnn: End-to-end object detection with learnable proposals[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 14454-14463.

[72]Lyu C, Zhang W, Huang H, et al. Rtmdet: An empirical study of designing real-time object detectors[J]. arXiv preprint arXiv:2212.07784, 2022.

[73]Zhang H, Chang H, Ma B, et al. Dynamic R-CNN: Towards high quality object detection via dynamic training[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XV 16. Springer International Publishing, 2020: 260-275.

[74]Yuan X, Cheng G, Yan K, et al. Small object detection via coarse-to-fine proposal generation and imitation learning[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2023: 6317-6327.

[75]Sun P, Zhang R, Jiang Y, et al. Sparse R-CNN: An End-to-End Framework for Object Detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(12): 15650-15664. DOI: 10.1109/TPAMI.2023.3292030.

CLC number:

 TP391

Shelf number:

 2025-016-0103

Open access date:

 2025-09-28
