
Robotic Vision Lab

Research Lab

Department of Electrical and Computer Engineering
Room 430 CB
Provo, Utah 84602
Tel: +1 801-422-4119

Robotic Vision
Pixel Light
The BYU Robotic Vision Lab (RVL) has implemented a real-time vision algorithm that simulates headlight detection and localization for next-generation automotive pixel-lighting control. The algorithm detects the headlights of oncoming vehicles and dims the corresponding pixels of the headlamp so that oncoming drivers are not blinded. A demo video is available.
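The lab's real-time algorithm is not spelled out above, so the sketch below stands in with a common approach: at night, headlights saturate the image sensor, so thresholding for near-saturated pixels and taking the centroids of the resulting blobs yields candidate lamp locations. The function name, threshold, and blob-size cutoff are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_headlights(frame_bgr, intensity_thresh=230, min_area=20):
    """Locate bright blobs that are likely headlights in a night-time frame.

    A minimal sketch, not the lab's published algorithm: threshold for
    near-saturated pixels and return the centroids of large blobs.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Headlights saturate the sensor at night, so a high fixed threshold works
    _, mask = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    # Merge the halo around each lamp into a single blob
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (the background); keep blobs large enough to be lamps
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```

Each detected centroid would then be mapped to the segment of the pixel-light grid that illuminates that direction, and those pixels dimmed.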
Seeing Eye Phone
We have developed an indoor navigation system to assist visually impaired persons in navigating unfamiliar environments such as public buildings. The Seeing Eye Phone system consists primarily of a server and a smartphone. The smartphone takes pictures at regular intervals as the user moves and sends them to the server along with a time stamp and its most recently known position. The user can also speak a simple predefined voice command, such as asking for the current location or for directions to a desired destination. The command is interpreted by voice-recognition software and passed on to the server. The server matches the input images to the map images near the most recently known location in its database. Once a match is found, the server computes the camera pose in the 3D world coordinate system and then uses the floor plan to find a route to the desired destination. It sends the location and directions back to the phone, which uses a text-to-speech function to guide the user to the destination.
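A minimal sketch of the server-side matching step, assuming an ORB-feature matcher with Lowe's ratio test; the system's actual matcher and thresholds are not documented above, and match_to_map, candidate_maps, and min_good are hypothetical names:

```python
import cv2

def match_to_map(query_img, candidate_maps, min_good=25):
    """Match a phone snapshot against map images near the last known position.

    candidate_maps: dict mapping a map-image id to a grayscale image drawn
    from the database around the user's most recently known location.
    Returns the id of the best-matching map image, or None if no match.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    best_id, best_count = None, 0
    for map_id, map_img in candidate_maps.items():
        kp_m, des_m = orb.detectAndCompute(map_img, None)
        if des_q is None or des_m is None:
            continue
        # Lowe's ratio test rejects ambiguous descriptor matches
        pairs = matcher.knnMatch(des_q, des_m, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_count:
            best_id, best_count = map_id, len(good)
    return best_id if best_count >= min_good else None
```

Given a confident match, the known 3D coordinates of the map image's features and a routine such as cv2.solvePnP would recover the camera pose used for routing on the floor plan.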
Real-time Stereo Vision
We have developed a simple but efficient stereo vision algorithm based on matching the shapes of intensity profiles from the left and right cameras. It is well suited to hardware implementation and to running on resource-limited systems. The algorithm has been used for obstacle avoidance and gesture recognition in various robotic vision applications.
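A simplified software sketch of the idea, assuming a rectified stereo pair so that corresponding intensity profiles lie on the same scanline; the window size and disparity range are illustrative, and the lab's hardware algorithm differs in detail:

```python
import numpy as np

def row_disparity(left_row, right_row, win=9, max_disp=64):
    """Per-pixel disparity for one scanline by matching intensity profiles.

    For each window in the left row, find the horizontal shift of the
    best-matching (lowest sum-of-absolute-differences) window in the
    right row. SAD stands in here for the profile-shape comparison.
    """
    half = win // 2
    cols = len(left_row)
    disp = np.zeros(cols, dtype=np.int32)
    for x in range(half + max_disp, cols - half):
        ref = left_row[x - half:x + half + 1].astype(np.int32)
        costs = [np.abs(ref - right_row[x - d - half:x - d + half + 1]
                        .astype(np.int32)).sum() for d in range(max_disp)]
        disp[x] = int(np.argmin(costs))  # shift with the most similar shape
    return disp
```

On rectified images, the shift that minimizes the cost is the disparity, from which depth follows as baseline times focal length divided by disparity.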
Real-time Embedded Vision Sensor
We have developed a custom FPGA board for small unmanned vehicles (UVs). The FPGA chip is the sole computational resource on the vehicle, so it performs all processing associated with sensing, communication, and control. A user provides directives to the UV via a base station: a laptop or desktop computer that communicates wirelessly with the UV. By selecting an area in the image displayed on the base station, the user can cause the UV to autonomously track and follow another vehicle or object. To support this functionality, visual feedback must be provided to the user at the base station: each digital image captured by the FPGA vision sensor is converted to an analog representation and transmitted as standard NTSC video, which is received, digitized, and displayed on the base station.

The payload constraints imposed by small UVs can be severe. Image sensors are small and lightweight, but it is difficult to provide enough on-board computational power to process video in real time at frame rate. Our system supports other sensors, but the principal source of information about the environment in this work is a camera mounted on the vehicle. Our platform uses a Virtex-4 FX60 FPGA, which includes two PowerPC CPUs on chip in addition to configurable logic resources. Our application thus has two forms of computational support: conventional processors running compiled C code, and custom hardware written in VHDL and implemented in the FPGA fabric.
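As a conceptual illustration of the "select an area to track" directive, the sketch below follows a user-chosen region across frames with normalized cross-correlation. On the real system the tracking runs in the FPGA fabric on the vehicle, not in software on the base station; the function and its parameters are assumptions for illustration only.

```python
import cv2

def track_selection(frames, first_frame, roi):
    """Follow a user-selected region across frames by template matching.

    roi is (x, y, w, h): the region the user selected on the base-station
    display. Yields the matched top-left corner and its score per frame.
    """
    x, y, w, h = roi
    template = first_frame[y:y + h, x:x + w]
    for frame in frames:
        # Normalized cross-correlation locates the best template match
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(res)
        yield top_left, score  # would feed vehicle steering commands
```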
Quad-rotor Helicopter
To empirically test the completed vision sensor, a hovering micro-UAV platform was designed and built. Design specifications required a total payload capacity of 5 lb at the testing location's elevation of 4,500 ft and a flight time of 30 min. A quad-rotor design was selected and built mostly from off-the-shelf components, resulting in the platform shown on the left.

The goal is to use the sensors readily available in a smartphone, such as the GPS, accelerometer, rate gyros, and camera, to support vision-related tasks such as flight stabilization, estimation of height above ground, target tracking, obstacle detection, and surveillance. An Android smartphone is connected through its USB port to external hardware containing a microprocessor and circuitry that generate the pulse-width-modulation signals controlling the quad-rotor's brushless servomotors. The smartphone's high-resolution camera is used to detect and track features in order to maintain a desired altitude.
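One way to close the loop from a vision-based height estimate to motor commands is a PID controller running at a fixed rate. The sketch below is generic, with illustrative gains, and is not the lab's flight-control code:

```python
def pid_altitude(target, measured, state, kp=0.8, ki=0.1, kd=0.3, dt=0.02):
    """One step of a PID loop mapping altitude error to a throttle command.

    The phone would supply `measured` (vision-based height above ground);
    the external microcontroller turns the returned command into PWM for
    the four motors. All gains here are illustrative assumptions.
    """
    err = target - measured
    state["i"] += err * dt                      # integral term
    d = (err - state["e"]) / dt                 # derivative term
    state["e"] = err
    throttle = kp * err + ki * state["i"] + kd * d
    return max(0.0, min(1.0, 0.5 + throttle))   # clamp to a valid duty cycle

# Usage: state = {"i": 0.0, "e": 0.0}; call at 50 Hz (dt = 0.02 s)
# with the latest vision-based altitude estimate.
```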
Maintained by the ECEn Web Team.
Copyright © 1994-2005 Brigham Young University. All Rights Reserved.