
Session C5: Navigation Using Environmental Features

An Environment Recognition Method for MAVs using a Smartphone
Yaning Wang, Li Fu, Lingling Wang, Yandong Wang, School of Automation Science and Electrical Engineering, Beihang University, China
Location: Windjammer

In the field of micro aerial vehicles (MAVs), positioning and navigation depend on the environment in which the vehicle is located. MAVs often pass through several environments during a single flight task, which raises two questions: which navigation method the MAV should adopt, and where it should switch between methods. Many navigation methods for different environments have been proposed in recent research; however, research on environment recognition is still in its infancy. In general, MAV navigation and positioning rely mainly on global navigation satellite systems (GNSS) in open areas such as outdoors. An inertial measurement unit (IMU) and vision-aided inertial navigation are widely used in GNSS-denied environments such as deep indoors, while in areas such as semi-indoor spaces and deep urban canyons, multiple techniques are often combined. Because different environments call for inherently different techniques and sensors, determining whether the MAV is indoors, outdoors, or in an intermediate environment becomes a basic task of MAV navigation. High recognition accuracy protects the safety of the flight and the MAV; a recognition error puts the flight at risk. To switch navigation methods and sensors at the right moment, an effective environment recognition method is needed to provide environment information to the MAV.
In previous work, the information used to distinguish environments usually derives from the GNSS signal, whose strength indicates the reliability of GNSS. Mathematical models have been used to analyze the relationship between the GNSS signal and the environment; these perform well at separating indoor from outdoor but give inconclusive results for intermediate environments.
This paper proposes an environment recognition method that can tell the MAV where and when to change its navigation method. To achieve higher accuracy, multiple sensors provided by a smartphone, such as the GNSS receiver, magnetometer, and camera, are used and their information is combined. According to data collected in different environments, the GNSS signal is an obvious indicator for distinguishing indoor from outdoor, while the camera and other sensors can recognize intermediate environments. A two-level recognition system is established using a mathematical model and a deep convolutional neural network (CNN). The first level, built from a mathematical model of the GNSS signal, gives a baseline confidence for each of the three environments. The second level is invoked when the confidence provided by the first level is low: it fuses a CNN, trained and tested on environment images, with a mathematical model built from the other sensor data. The CNN automatically extracts abstract features from images of different environments, giving the MAV the ability to recognize its current surroundings. The paper proposes a CNN structure with convolutional, pooling, and fully connected layers, together with optimizations such as dropout and different activation functions to improve performance. A fusion of multi-angle images taken by the smartphone is used as the network input. The second level outputs the final confidence probability, which decides whether the MAV needs to change its navigation method.
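The two-level decision flow described above can be sketched in a few lines of Python. Everything here is illustrative: the confidence threshold, the piecewise GNSS signal model, the weighted-average fusion rule, and the function names are assumptions for the sketch, not the authors' actual models.

```python
# Hypothetical sketch of a two-level environment recognition pipeline.
# All thresholds, weights, and the piecewise GNSS model are illustrative
# assumptions, not the system described in the abstract.

ENVIRONMENTS = ("indoor", "intermediate", "outdoor")
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for falling back to level two


def level_one_gnss(snr_values):
    """Level one: map GNSS carrier-to-noise readings (dB-Hz) to a coarse
    confidence over the three environment classes (toy piecewise model)."""
    mean_snr = sum(snr_values) / len(snr_values)
    if mean_snr > 35:  # strong signal suggests open sky
        return {"indoor": 0.05, "intermediate": 0.10, "outdoor": 0.85}
    if mean_snr < 15:  # very weak signal suggests deep indoor
        return {"indoor": 0.85, "intermediate": 0.10, "outdoor": 0.05}
    # Ambiguous band: low confidence, defer to level two.
    return {"indoor": 0.30, "intermediate": 0.40, "outdoor": 0.30}


def level_two_fusion(cnn_probs, sensor_probs, w_cnn=0.7):
    """Level two: fuse CNN image probabilities with a sensor-based model.
    A weighted average is one plausible fusion rule."""
    return {env: w_cnn * cnn_probs[env] + (1 - w_cnn) * sensor_probs[env]
            for env in ENVIRONMENTS}


def recognize(snr_values, cnn_probs, sensor_probs):
    """Run level one; invoke level two only when its confidence is low."""
    first = level_one_gnss(snr_values)
    best = max(first, key=first.get)
    if first[best] >= CONFIDENCE_THRESHOLD:
        return best, first[best]
    fused = level_two_fusion(cnn_probs, sensor_probs)
    best = max(fused, key=fused.get)
    return best, fused[best]
```

With strong GNSS readings, `recognize` returns "outdoor" from level one alone; in the ambiguous band it falls through to the fused camera-plus-sensor estimate, mirroring the "level two works when level one's confidence is low" behavior described above.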
To validate the effectiveness of the system, an experimental platform was established. Environment data were collected with a Huawei Mate 9 Pro smartphone, which contains multiple sensors including an accelerometer, gyroscope, light-intensity sensor, and magnetometer. The smartphone is fixed on a gimbal driven by a STorM32 BGC controller; the gimbal is mounted on a DJI S1000+ airframe controlled by a Pixhawk autopilot so that a wide field of view of the environment can be obtained. The smartphone collects real-time sensor data and takes pictures from different angles. A model is then constructed and the network is trained and tested. The two-level multi-sensor system is compared with a single-level system that uses only the GNSS signal; the test accuracy of the two-level system is clearly higher. The proposed system can therefore meet the practical needs of MAVs in adaptive navigation.
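A three-class CNN of the kind described above (convolutional, pooling, and fully connected layers with dropout) could look like the following PyTorch sketch. The layer sizes, input resolution, and class count are assumptions chosen for illustration; the abstract does not specify the actual architecture.

```python
# Hypothetical three-class environment classifier: conv -> pool -> conv
# -> pool -> fully connected with dropout. Layer widths and the 64x64
# RGB input size are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn


class EnvNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x64x64 -> 16x64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                 # -> 32*16*16 = 8192
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Dropout(0.5),                              # regularization, as in the abstract
            nn.Linear(128, num_classes),                  # indoor / intermediate / outdoor
        )

    def forward(self, x):
        return self.classifier(self.features(x))


net = EnvNet()
logits = net(torch.zeros(1, 3, 64, 64))  # one dummy 64x64 RGB image
```

The output is one logit per environment class; a softmax over these logits would give the confidence probabilities that the second level fuses with the other sensor model.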


