Session A2: Small Size or Low Cost Inertial Sensor Technologies

Using a Mobile Range-Camera Motion Capture System to Evaluate the Performance of Integration of Multiple Low-Cost Wearable Sensors and Gait Kinematics for Pedestrian Navigation in Realistic Environments
Chandra Tjhai and Kyle O’Keefe, Position, Location, and Navigation (PLAN) Group, University of Calgary, Canada
Location: Big Sur

This paper investigates multiple low-cost wearable inertial sensors as a viable solution for indoor pedestrian navigation. Many algorithms have been developed to solve pedestrian navigation problems. Since navigation satellite signals are not available in indoor environments, researchers have focused on inertial sensors. However, self-contained sensors often produce navigation solutions that drift quickly due to the accumulation of sensor errors. Aiding systems such as wireless local area networks or beacons have been used to improve performance, but they require infrastructure and add computational expense.
The inertial pedestrian dead-reckoning algorithms developed to date can be divided into several categories based on sensor location: foot-mounted, waist-mounted, in-pocket, and hand-held. However, it is very difficult to determine which algorithm is best suited to a given environment, because such a determination requires that measurements from all sensor locations be collected on a unified platform.
Technological advancement in micro-electromechanical systems (MEMS) has resulted in a wide range of inertial and magnetic sensors in terms of size, quality, and cost. Many prior works have focused on pedestrian navigation using higher-end consumer-grade MEMS sensors (costing on the order of thousands of dollars). Recent developments in MEMS have enabled the mass production of very low-cost off-the-shelf inertial and magnetic sensors. Using these off-the-shelf sensors, a system of multiple wearable sensors can be developed to realize an infrastructure-free navigation system.
Pedestrian motion can also be estimated through knowledge of the gait kinematics that describe the motion of each lower limb segment during walking. With a multi-sensor wearable system, it is possible to measure the motion of the lower limbs by mounting a wearable sensor on each segment. However, evaluating the performance of each sensor module is difficult because a reference solution is required.
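As a minimal sketch of how gait kinematics yields displacement (not the authors' algorithm), the thigh and shank of each leg can be treated as rigid links in the sagittal plane; step length then follows from assumed segment lengths and pitch angles:

```python
import numpy as np

def leg_reach(l_thigh, l_shank, thigh_pitch, shank_pitch):
    """Forward offset of the ankle relative to the hip for one leg,
    from sagittal-plane pitch angles (radians from vertical).
    Illustrative two-link model; segment lengths in metres."""
    return l_thigh * np.sin(thigh_pitch) + l_shank * np.sin(shank_pitch)

# Hypothetical heel-strike posture: leading leg flexed forward,
# trailing leg extended backward (negative pitch).
lead  = leg_reach(0.45, 0.43, np.deg2rad(25.0),  np.deg2rad(10.0))
trail = leg_reach(0.45, 0.43, np.deg2rad(-15.0), np.deg2rad(-30.0))
print(f"approx. step length: {lead - trail:.2f} m")   # ~0.60 m
```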
Gait kinematics can be measured using a motion capture system. Traditionally, motion capture is performed using optical methods that place optical markers on the test subject. These systems are very expensive but are useful when high-accuracy results are required. Recently, motion capture has been deployed in the computer gaming industry to use the human body instead of handheld game controllers for user/game interaction. These systems operate either by projecting an infrared pattern onto the user or by employing range cameras to measure the 3-D location of each pixel, and in both cases they do not require placing markers on the test subject. A limitation of both optical and range-camera systems is that they are typically confined to a laboratory or living-room environment, such as a treadmill walk or a similar controlled setting. Inertial sensors have also been used in motion capture applications such as clinical assessments of patients, ambulatory measurements, and performance monitoring for athletes. In these cases, the inertial sensor is only used to estimate gait parameters from acceleration and angular rate data.
The objective of this paper is to compare the performance of a pedestrian navigation solution obtained from a set of low-cost wearable sensors coupled with a gait model against a higher-cost reference solution. Additionally, this paper aims to demonstrate the feasibility of a mobile range-camera motion capture system as a source of the reference trajectory. To realize these objectives, a sensor data logger and a mobile range-camera system are developed and tested.
The sensor data logger developed in this paper records inertial and magnetic field measurements from a set of seven wearable sensors. The selected electronics include an STMicroelectronics Nucleo-F746ZG microcontroller, InvenSense MPU-6050 six-degree-of-freedom IMUs, and InvenSense MPU-9250 nine-degree-of-freedom motion sensors. Raw data from all sensors are read over two-wire (I2C) connections and sent to a Raspberry Pi computer for recording. The entire system can be powered by a USB power bank.
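For illustration, the following is a minimal sketch of a register-level I2C read from an MPU-6050, here issued directly from a Raspberry Pi with the smbus2 library; in the paper's logger the Nucleo microcontroller acts as the bus master, so this is an assumed stand-in rather than the actual firmware:

```python
from smbus2 import SMBus
import struct, time

MPU_ADDR = 0x68        # MPU-6050 default I2C address
PWR_MGMT_1 = 0x6B      # power management register
ACCEL_XOUT_H = 0x3B    # first of 14 data registers (accel, temp, gyro)

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0x00)   # wake from sleep
    time.sleep(0.1)
    raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 14)
    ax, ay, az, temp, gx, gy, gz = struct.unpack(">7h", bytes(raw))
    # default full-scale ranges: +/-2 g (16384 LSB/g), +/-250 deg/s (131 LSB/(deg/s))
    accel_g = [v / 16384.0 for v in (ax, ay, az)]
    gyro_dps = [v / 131.0 for v in (gx, gy, gz)]
    print(accel_g, gyro_dps)
```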
To generate reference solutions for each wearable sensor, a mobile range-camera motion capture system consisting of two SwissRanger 4000 time-of-flight range cameras is developed. The cameras are mounted on a cart that is pushed along behind a test subject in order to capture the walking motion. The position of the mobile motion capture system is tracked by a tactical-grade INS/GNSS device. This motion capture system provides joint angle and step length measurements.
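Conceptually, each camera-frame joint measurement must be mapped into the navigation frame using the cart pose reported by the INS/GNSS unit. A minimal rigid-body transform sketch, with hypothetical pose values and ignoring lever arms and time synchronization, might look like:

```python
import numpy as np

def camera_to_nav(p_cam, R_nav_cam, t_nav_cam):
    """Map a joint position measured in the range-camera frame into the
    navigation frame using the cart pose from the INS/GNSS unit.
    R_nav_cam: 3x3 rotation (camera -> nav); t_nav_cam: camera origin
    in the nav frame. Illustrative rigid-body transform only."""
    return R_nav_cam @ np.asarray(p_cam) + np.asarray(t_nav_cam)

# Hypothetical values: ankle joint 2 m ahead of the camera, cart yawed 30 deg.
yaw = np.deg2rad(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
print(camera_to_nav([2.0, 0.0, -0.8], R, [10.0, 5.0, 1.2]))
```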
Experiments are conducted in which a person wearing the sensors is asked to walk in different environments. The human lower limbs can be modelled as rigid links connected by joints. Since there are seven lower limb segments, the wearable system requires seven sensors: one on the foot, shank, and thigh of each leg, and one on the pelvis, which connects the two legs. One experiment is conducted in a laboratory setting where the subject walks on a treadmill while also being monitored by a Vicon motion capture system. This experiment tests the feasibility of the wearable multi-sensor system for estimating joint angles and step sizes. The second experiment involves walking in a realistic environment, where the experimental mobile range-camera motion capture system is used to generate the reference solution.
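Under this link-joint model, a joint angle can be read as the difference between the orientations of the two adjacent segments. A simplified sagittal-plane sketch (not the paper's full estimator) is:

```python
import numpy as np

def knee_flexion(thigh_pitch, shank_pitch):
    """Sagittal-plane knee angle as the difference between adjacent
    segment pitch angles (radians). Simplified link-joint relation."""
    return thigh_pitch - shank_pitch

# Hypothetical segment pitches: thigh 25 deg forward, shank 30 deg back.
print(np.rad2deg(knee_flexion(np.deg2rad(25.0), np.deg2rad(-30.0))))  # 55 deg
```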
The presentation of results focuses on the performance of the sensor fusion algorithms. Results of the treadmill walk experiment are used to test the sensor fusion algorithms' estimates of joint angles and step sizes, while the results from the second experiment are used to evaluate the sensor fusion algorithms as a pedestrian navigation system. Different sensor configurations and different sensor fusion algorithms are compared.
Initial step-size estimation, using four wearable sensors and a simple orientation estimator, shows an average step length error of about 5 cm. The anticipated results include the performance of the improved step-size estimation algorithm as well as a comparison of different pedestrian dead-reckoning and sensor fusion algorithms against the mobile range-camera motion capture reference.
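As an example of what a "simple orientation estimation" might look like (the paper does not specify its filter, so this is an assumed stand-in), a complementary filter blends integrated gyro pitch with the accelerometer's gravity-referenced pitch:

```python
import numpy as np

def complementary_pitch(pitch_prev, gyro_pitch_rate, accel, dt, alpha=0.98):
    """One update of a complementary filter for segment pitch (radians):
    integrate the gyro, then correct drift with the accelerometer's
    gravity direction. Axis convention assumed: x forward, z up.
    Gain alpha is a hypothetical value, not taken from the paper."""
    pitch_gyro = pitch_prev + gyro_pitch_rate * dt
    ax, ay, az = accel  # specific force in g
    pitch_acc = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

# e.g. one 100 Hz update with a slightly tilted, nearly static sensor
pitch = complementary_pitch(0.0, np.deg2rad(0.5), (-0.17, 0.0, 0.98), 0.01)
print(np.rad2deg(pitch))
```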


