Seeing Eye Phone - Navigation Guidance for the Visually Impaired
To help the visually impaired navigate unfamiliar environments such as public buildings, this paper presents a novel smart phone, vision-based indoor guidance system called Seeing Eye Phone. The system consists of a smart phone and a server. The smart phone captures images of the user’s surroundings and transmits them to the server. The server processes the phone images to detect and describe 2D features with SURF, then matches them to the 2D features of stored map images, each of which carries corresponding 3D information about the building. After features are matched, the Direct Linear Transform is run on a subset of correspondences to find a rough initial pose estimate, which the Levenberg-Marquardt algorithm then refines toward an optimal solution. With the estimated pose and the camera’s intrinsic parameters, the user’s location and orientation are calculated from the 3D location data stored for the features of each map image. The positional information is transmitted back to the smart phone and communicated to the user via text-to-speech. This indoor guidance system combines efficient techniques such as SURF, homography, multi-view geometry, and 3D-to-2D reprojection to solve a unique problem that benefits the visually impaired. The experimental results demonstrate the feasibility of accomplishing a complex task with a simple machine vision system design and the potential of building a commercial product on this design.
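The pose-estimation core of the pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the SURF matching and Levenberg-Marquardt refinement stages are omitted, and the function names and synthetic camera values are made up for the example. Given six or more matched 3D map points and their 2D image observations, the Direct Linear Transform recovers a rough projection matrix via a homogeneous least-squares solve:

```python
import numpy as np

def dlt_pose(points_3d, points_2d):
    """Estimate a 3x4 projection matrix from >= 6 non-coplanar 3D-2D
    correspondences using the Direct Linear Transform (illustrative sketch;
    a real system would refine this estimate with Levenberg-Marquardt)."""
    assert len(points_3d) >= 6 and len(points_3d) == len(points_2d)
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        # Each correspondence contributes two linear constraints on the
        # 12 entries of P, derived from u = (p1.X)/(p3.X), v = (p2.X)/(p3.X).
        rows.append([*Xh, 0.0, 0.0, 0.0, 0.0, *(-u * Xh)])
        rows.append([0.0, 0.0, 0.0, 0.0, *Xh, *(-v * Xh)])
    A = np.asarray(rows)
    # P (up to scale) is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def reproject(P, points_3d):
    """Project 3D points through P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic check with an assumed intrinsic matrix and camera 5 m from the scene.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 1, 1], [0.5, 0.2, 0.7]], dtype=float)
pts2d = reproject(P_true, pts3d)
P_est = dlt_pose(pts3d, pts2d)
max_err = np.abs(reproject(P_est, pts3d) - pts2d).max()
```

On exact synthetic correspondences the recovered matrix reprojects the points essentially perfectly; with noisy real matches, this rough estimate is what the Levenberg-Marquardt step would then iteratively refine.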
Collaborators: Dr. Dong Zhang, Sun Yat-sen University, Guangzhou, China

Graduate Students: Brandon Taylor, Jonathan Anderson
Publications:
- D. Zhang, D.J. Lee, and B. Taylor, “Seeing Eye Phone: A Smart Phone-based Indoor Guidance System for the Visually Impaired,” Machine Vision and Applications Journal, vol. 25, no. 3, pp. 811-822, April 2014.
- B. Taylor, D.J. Lee, D. Zhang, and G.M. Xiong, “Smart Phone-based Indoor Guidance System for the Visually Impaired,” IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 871-876, Guangzhou, China, December 5-7, 2012.