GPS localization services are ubiquitous nowadays – well, at least in places where the technology actually works. GPS-enabled smartphones and dedicated electronic navigation assistants guide us in our cars; we play computer games with geo-tagging functions, check our friends’ locations on social networks, and so on. But, as mentioned, not all places are equally ‘GPS-friendly’.
Let’s consider a densely populated urban area. Most of you have probably encountered a situation where your GPS receiver’s start-up time was irritatingly long. The reason is that reflections from buildings, together with a partially or entirely obstructed line of sight to the satellites, weaken the positioning signals and thus decrease the precision of location estimation. GPS equipment performs very well in vehicles, where it is used most of the time, but it is a different story for people who depend more heavily on localization technologies – for example, blind people and those with severe visual impairment.
Scientists and engineers are well aware of this problem and are constantly working to improve localization technologies. Most promising ideas combine GPS with other technologies that provide additional information about the exact location of the navigation device. Some methods have been around for a while – for example, using additional cellular network data or estimating your location from your movement speed. Yet another novel approach uses images collected by the Google Street View service together with the camera of your smartphone, say scientists from the University of Illinois at Chicago.
In their article, published on arXiv.org this week, the authors describe a method for improved localization based on image retrieval and image recognition algorithms. The idea itself is not new: the smartphone camera captures an image of the user’s surroundings, which is then compared to GPS-tagged images stored in a database. “In fact those approaches are based on searching for the best match for a query image in a database of Geo-reference images with accurate GPS coordinates. One of these references is Google Street View”, explain the authors of the current work.
According to the scientists, a number of scientifically successful implementations of this idea are already available. Most of them are based on scene recognition using so-called feature extraction and matching algorithms. Although such algorithms are quite efficient, the datasets involved are usually enormous, since dataset size is what ensures accurate position estimation. For this reason, additional information about the user’s position is used to limit the computational workload. Often a subset of the full image database is selected based on very approximate location data; in some cases the databases are customized to include only application-specific images limited to the scope of a particular study.
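To make the retrieval idea concrete, here is a minimal toy sketch of matching a query image against a database of GPS-tagged references. Real systems extract hundreds of binary descriptors per image (e.g. with ORB or SIFT); here each “image” is just a few integer descriptors compared by Hamming distance, and all names, coordinates, and data are illustrative rather than taken from the paper.

```python
# Toy sketch of GPS-tagged image retrieval by descriptor matching.
# Each "image" is a list of 8-bit binary descriptors (illustrative).

def hamming(a, b):
    """Number of differing bits between two integer descriptors."""
    return bin(a ^ b).count("1")

def match_count(query_desc, ref_desc, max_dist=2):
    """Count query descriptors that have a close match in the reference set."""
    return sum(
        1 for q in query_desc
        if min(hamming(q, r) for r in ref_desc) <= max_dist
    )

def best_match(query_desc, database):
    """Return the GPS tag of the database image with the most matches."""
    return max(database, key=lambda tag: match_count(query_desc, database[tag]))

# Hypothetical database: GPS tag -> descriptors of a street-level image.
database = {
    (41.8719, -87.6479): [0b10110010, 0b01100110, 0b11110000],
    (41.8781, -87.6298): [0b00001111, 0b10101010, 0b11001100],
}
query = [0b10110010, 0b01100111]  # descriptors from the phone camera
print(best_match(query, database))  # -> (41.8719, -87.6479)
```

Even in this toy form, the core cost is visible: every query descriptor is compared against every reference descriptor, which is why the authors want to shrink the candidate set before matching.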
The authors of the current study opted to download Google Street View images directly and use them as their database. In addition to selecting the best-matching image with existing algorithms, the scientists used information about the camera’s orientation at the moment of image capture. For this purpose, data from the phone’s inertial measurement unit was collected in order to limit the range of candidate reference images in the database.
“We tried to utilize this fact that nowadays most of mobile devices such as smartphones are equipped by differential inertial sensors. For accurate localization instead of searching through a city scale dataset it would be better to limit search space”, say the authors. They note that the system they developed proves the feasibility of this concept and could make navigation more convenient for people with visual disabilities.
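The pruning step described above can be sketched as a simple heading filter, assuming each Street View reference stores the compass heading (in degrees) at which it was captured. The field names and the tolerance value are illustrative assumptions, not details from the paper.

```python
# Sketch: prune candidate reference images by compass heading from the
# phone's IMU, so descriptor matching only runs on a small subset.

def angular_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def prune_by_heading(candidates, phone_heading, tolerance=30.0):
    """Keep only references whose capture heading is near the phone's."""
    return [c for c in candidates
            if angular_diff(c["heading"], phone_heading) <= tolerance]

# Hypothetical candidates with the heading each image was captured at.
candidates = [
    {"id": "sv_001", "heading": 10.0},
    {"id": "sv_002", "heading": 185.0},
    {"id": "sv_003", "heading": 350.0},
]
# The phone's IMU/compass reports the camera pointing roughly north.
print([c["id"] for c in prune_by_heading(candidates, 0.0)])
# -> ['sv_001', 'sv_003']; sv_002 faces the opposite direction
```

Note the wrap-around handling: headings of 350° and 10° are only 20° apart, so a naive absolute difference would wrongly discard near-north candidates.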
However, some work remains before the development is ready for the market. “Although it provides reasonable results but it fails in some of cases especially when the quality of images is not good in the dataset”, the researchers say. To address this issue, the team plans to implement the most recent feature extraction and best-match detection algorithms.
Written by Alius Noreika