Abstract: | In this paper, we aim to enhance the first-person indoor navigation and scene understanding experience by fusing inertial data collected from a smartphone carried by the user with the vision information obtained through the phone’s camera. We employ the concept of vanishing directions together with the orthogonality constraints of man-made environments in an expectation-maximization framework to estimate the person’s orientation with respect to the known indoor coordinates from video frames. This framework allows the inclusion of prior information about the camera rotation axis for better estimation, as well as the selection of candidate edge-lines for estimating hallway depth and width from monocular video frames and for 3D modeling of the scene. Our proposed algorithm concurrently combines the vision-based orientation estimates with the inertial data using a Kalman filter in order to refine the estimates and remove the substantial measurement drift of the inertial sensors. We evaluated the performance of our vision-inertial data fusion method on an IMU-augmented video recorded in a rotary hallway in which a participant completed a full lap. We demonstrated that this fusion provides virtually drift-free instantaneous information about the person’s relative orientation. We were also able to estimate hallway depth and width and generate a closed-path map of the rotary hallway over a roughly 60-meter lap. |
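To illustrate the fusion idea described in the abstract, the following is a minimal sketch of a 1-D Kalman filter that integrates gyroscope yaw rate in the prediction step and corrects it with vision-based heading estimates (e.g., from vanishing directions) in the update step. The function name, noise parameters, and initialization below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: fuse gyro yaw rate with vision-based heading in a 1-D Kalman filter.
# State: person's heading (rad). Process/measurement noise values are placeholders.
import numpy as np

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def fuse_heading(gyro_rates, vision_headings, dt, q=1e-4, r=1e-2):
    """gyro_rates: yaw rates (rad/s); vision_headings: headings (rad) or NaN when
    no vision estimate is available for that frame; dt: sample period (s)."""
    theta, p = 0.0, 1.0          # initial heading and variance (assumed)
    fused = np.empty(len(gyro_rates))
    for k, (omega, z) in enumerate(zip(gyro_rates, vision_headings)):
        # Predict: integrate the gyro; q models accumulating gyro drift.
        theta = wrap(theta + omega * dt)
        p += q
        # Update: correct with the vision-based heading when a frame is available.
        if not np.isnan(z):
            k_gain = p / (p + r)
            theta = wrap(theta + k_gain * wrap(z - theta))
            p *= (1.0 - k_gain)
        fused[k] = theta
    return fused
```

In this sketch, the gyro keeps the heading updated between video frames, while each vision measurement pulls the estimate back toward the building-aligned orientation, which is what suppresses the long-term drift mentioned in the abstract.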
Published in: |
2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), April 23 - 26, 2018, Hyatt Regency Hotel, Monterey, CA |
Pages: | 1213 - 1222 |
Cite this article: | Farnoosh, Amirreza, Nabian, Mohsen, Closas, Pau, Ostadabbas, Sarah, "First-Person Indoor Navigation via Vision-Inertial Data Fusion," 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, April 2018, pp. 1213-1222. https://doi.org/10.1109/PLANS.2018.8373507 |