BYU - Electrical and Computer Engineering

Robotic Vision Lab

Research Lab

Department of Electrical and Computer Engineering
Room 430 CB
Provo, Utah 84602
Tel: +1 801-422-4119

Robotic Vision Lab
What there is to see
The Robotic Vision Lab endeavors to reach new frontiers in the field of robotic vision by building a strong foundation of essential fundamentals for both graduate and undergraduate students, while encouraging the creativity and dedication needed to expand on those fundamentals.

As computing technology has advanced over the last two decades, computer vision for many industrial, military, and surveillance applications has become a reality. Vision computation plays a critical role in many robot-related applications such as manufacturing automation, obstacle detection, and autonomous navigation. Most researchers focus on developing sophisticated mathematical models and algorithms to solve vision-related problems, which require extensive computational power. This approach, although successful, is not suitable for many embedded vision applications such as micro unmanned air and ground vehicles (UAVs and UGVs), which have strict size, power, and processing-speed requirements. For navigation and obstacle avoidance on these unmanned vehicles, a rough, quickly calculated estimate is arguably more useful than a more accurate but slowly calculated one.

Many machine vision applications could benefit from real-time three-dimensional information, especially if it is provided by lightweight, low-power, passive sensors such as vision sensors. Basic low-quality image sensors can be found on systems ranging from cell phones and entertainment game consoles to security systems and high-tech micro unmanned aerial vehicles. Each of these systems may have limited computational resources for many possible reasons: constraints on weight, size, power, and cost, or the dedication of the bulk of its computing resources to a different primary task.
Tasks such as obstacle avoidance, pose identification, and landscape mapping could be implemented on resource-constrained systems using three-dimensional information, enabling applications like device-less human control, autonomous vehicle control, augmented video surveillance, threat analysis, handicap assistance, and many others. The development of new technologies aids the advancement of these applications, but even after they are realized, most will continue to benefit from algorithms that allow tradeoffs between accuracy and required resources.
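As a toy illustration of the accuracy-versus-speed tradeoff described above, the sketch below estimates stereo disparity with coarse sum-of-absolute-differences block matching over a deliberately small search range. Shrinking the block size and search range trades accuracy for speed, which is the kind of compromise an embedded vision system might make. All names (`block_disparity`, the synthetic image rows) are illustrative assumptions, not part of any lab codebase.

```python
# Illustrative sketch: coarse 1-D stereo block matching via sum of
# absolute differences (SAD). In a rectified stereo pair, a feature at
# column x in the left view appears at column x - d in the right view,
# where d is the disparity (inversely related to depth).

def sad(a, b):
    """Sum of absolute differences between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def block_disparity(left_row, right_row, x, block=4, max_disp=8):
    """Estimate the disparity of the block starting at column x of
    left_row by searching right_row over disparities 0..max_disp."""
    ref = left_row[x:x + block]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d < 0:          # candidate block would fall off the image
            break
        cost = sad(ref, right_row[x - d:x - d + block])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: the right row is the left row shifted left
# by 3 pixels, so the true disparity of the bright feature is 3.
left = [0, 0, 0, 10, 20, 30, 40, 0, 0, 0, 0, 0]
right = left[3:] + [0, 0, 0]

print(block_disparity(left, right, x=3))  # → 3
```

A dense implementation would repeat this per pixel over 2-D blocks; running it on downsampled images with a short search range is one simple way to get the "rough but fast" estimate the paragraph above argues for.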
Maintained by The ECEn Web Team. Based on v. 3.8 of the ECEn web templates.
Copyright 1994-2005. Brigham Young University. All Rights Reserved.