We develop a method for ego-positioning with a low-cost monocular camera. To reduce the computational and memory requirements as well as the communication load, we formulate model compression as a weighted k-cover problem, which better preserves the critical scene structures. For real-world vision-based positioning applications, we address the issue of large scene changes by introducing a model update algorithm. Experimental results show that sub-meter (~30 cm) accuracy can be achieved in real scenes.
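The weighted k-cover formulation can be illustrated with a standard greedy selection: choose up to k 3D points so that the images they are visible in are covered with maximum total weight. The sketch below is a minimal illustration under assumed data structures (point-to-image visibility sets and per-image importance weights); it is not the paper's exact algorithm.

```python
def greedy_weighted_k_cover(coverage, weights, k):
    """Greedy approximation of weighted k-cover (hypothetical sketch).

    coverage: dict mapping point_id -> set of image ids the point is visible in
    weights:  dict mapping image_id -> importance weight
    k:        maximum number of points to keep after compression
    Returns a list of up to k point ids maximizing weighted image coverage.
    """
    selected, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for pid, imgs in coverage.items():
            if pid in selected:
                continue
            # Marginal gain: total weight of images newly covered by this point.
            gain = sum(weights[i] for i in imgs - covered)
            if gain > best_gain:
                best, best_gain = pid, gain
        if best is None:  # no remaining point adds coverage
            break
        selected.append(best)
        covered |= coverage[best]
    return selected
```

The greedy rule (always take the point with the largest marginal weighted coverage) gives the classic (1 - 1/e) approximation guarantee for this class of coverage objectives.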
(a) Setup for acquiring image sequences. (b) Example results of the video evaluation. (Bottom-right) Image from the smartphone. (Bottom-left) Image from the camera on the third floor. (Upper-left) Positioning results.