To enable autonomous obstacle avoidance for unmanned aerial vehicles (UAVs) during substation inspection, this paper proposes a UAV autonomous positioning method that uses encoded signs as control points. First, the design and decoding of the encoded signs are discussed. Second, the encoded signs in the image are detected using histogram of oriented gradients (HOG) features with a support vector machine (SVM), and a tracking algorithm follows the detected signs across frames to improve runtime efficiency. Finally, from the known positions of the encoded signs on the pylon and their corresponding image coordinates, the UAV position is solved using the perspective relation between object points and their image points. Experiments show that the HOG+SVM detector finds the encoded signs in the image with a recall rate of 99%; the decoding algorithm remains robust even under extreme conditions such as blur and deformation, with a decoding error rate of only 0.05%; the UAV positioning error is within ±0.03 m, and the algorithm runs at 10 frames per second, fast enough for practical substation inspection.