Project Abstract

As mobile technology develops at a rapid rate, with annual cell phone sales in the millions, so does the push to run more complex computational algorithms on these devices.  With camera phones becoming standard, and an estimated 50% of all phones expected to have this capability by the end of 2006, there is growing demand to use the camera for more than simple photo capture and mail.  In this project, we sought to combine the camera of a mobile device with the complex computation and algorithms of computer vision to address the problem of location recognition.

The project involved two stages, each with unique computational and hardware challenges.  In the first stage, the problem of object and location recognition was addressed from a computer vision perspective.  The goal was to develop a system that extracts feature descriptors from an image and compares them against an established database to recognize important landmarks on the University of Pennsylvania campus.  Similarity was determined by a combination of SIFT descriptor comparison, using a k-means clustering technique, and the intersection of color histograms.  The second stage addressed the hardware components: designing a system to transmit data from a mobile device to a server, where the computation developed in the first stage is performed.  Applications of this technology include museum, amusement park, and city guides; location recognition for lost tourists; automatic tagging of photos by location on web sites; and real-time direction/navigation systems.
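The two similarity measures described above can be sketched in a few lines of NumPy.  This is a minimal, illustrative sketch, not the project's actual code: the function names are hypothetical, the cluster centers are assumed to have been computed offline by k-means over a training set of SIFT descriptors, and the color histogram is a simple per-channel version.

```python
import numpy as np

def bow_histogram(descriptors, centers):
    """Quantize feature descriptors (n x d) against k-means cluster
    centers (k x d) and return a normalized visual-word histogram."""
    # squared distance from every descriptor to every center
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)                     # nearest visual word
    counts = np.bincount(words, minlength=len(centers)).astype(float)
    return counts / counts.sum()

def color_histogram(image, bins=8):
    """Normalized per-channel color histogram for an RGB image
    (H x W x 3 uint8 array)."""
    parts = [np.histogram(image[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    hist = np.concatenate(parts).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of bin-wise minima of two
    normalized histograms (1.0 for identical histograms)."""
    return float(np.minimum(h1, h2).sum())
```

A query image's visual-word histogram and color histogram would each be intersected with the corresponding database entries, and the landmark with the highest combined score reported as the match.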