Display XML

This page is currently set to use the 3.8 version of the BYU ECEn Department XML/XSLT web templates. For more information about the ECEn Templates, see the documentation or contact the web team.

What is This Page?

This is an HTML-ified version of the final assembled XML for the page, after all includes and some escapes have been processed. To download the actual raw assembled XML, or the individual files from which this XML was originally composed, go to the Other Views/Formats page.

You can also go back to the normal page view.

The XML...

<page>
<title>
<text>BYU ECEn Department: Robotic Vision Lab</text>
</title>
<name>Robotic Vision Lab</name>
<now-showing-tab>Robotic Vision</now-showing-tab>
<style>main page</style>
<tabs>
<tab>
<name>Main</name>
<related-page>index.phtml</related-page>
</tab>
<tab>
<name>Robotic Vision</name>
<related-page>robotic_vision.phtml</related-page>
</tab>
<tab>
<name>Machine Vision</name>
<related-page>machine_vision.phtml</related-page>
</tab>
<tab>
<name>Medical Imaging</name>
<related-page>bio_medical.phtml</related-page>
</tab>
<tab>
<name>Class Projects</name>
<related-page>class_projects.phtml</related-page>
</tab>
</tabs>
<features>
<feature>
<highlight-picture-file>Support_Info/images/rvlab2c.JPG</highlight-picture-file>
<content>
<picture>
<file>Support_Info/images/ecen_logo_halfsize.jpg</file>
<alternate-text>BYU - Electrical and Computer Engineering</alternate-text>
<style>positioned left</style>
<style>text wraps</style>
</picture>
<line-break />
<special-text>
<style>feature title</style>
<text>Robotic Vision Lab</text>
</special-text>
<line-break />
<line-break />
<special-text>
<style>feature sub-title</style>
<text>Research Lab</text>
</special-text>
<line-break />
<line-break />
<text>Department of Electrical and Computer Engineering</text>
<line-break />
<text>Room 250 B34</text>
<line-break />
<text>Provo, Utah 84602</text>
<line-break />
<text>Tel: +1 801-422-4119</text>
</content>
</feature>
</features>
<menu>
<style>main menu</style>
<sub-menu>
<name>SYBA Descriptor</name>
<item>
<name>Algorithms</name>
<related-page>Robotic_Vision/SYBA/SYBAFeature.html</related-page>
</item>
<item>
<name>Visual Odometry</name>
<related-page>Robotic_Vision/SYBA/VisualOdometry.html</related-page>
</item>
<item>
<name>UAV Target Tracking</name>
<related-page>Robotic_Vision/SYBA/UAVTargetTracking.html</related-page>
</item>
<item>
<name>Broadcast Video Analysis</name>
<related-page>Robotic_Vision/SYBA/BroadcastVideo.html</related-page>
</item>
<item>
<name>Advanced Driver Assistance Systems</name>
<related-page>Robotic_Vision/SYBA/ADAS.html</related-page>
</item>
<item>
<name>BYU Feature Matching Dataset</name>
<related-page>Robotic_Vision/SYBA/BYUFeatureMatching.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>BASIS Descriptor</name>
<item>
<name>Algorithms</name>
<related-page>Robotic_Vision/Feature/Feature.html</related-page>
</item>
<item>
<name>FPGA Implementation</name>
<related-page>Robotic_Vision/Feature/FeatureImp.html</related-page>
</item>
<item>
<name>UAV Applications</name>
<related-page>Robotic_Vision/Feature/FeatureApp.html</related-page>
</item>
<item>
<name>Idaho Dataset</name>
<related-page>Robotic_Vision/Feature/IdahoDataSet.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>Stereo Vision</name>
<item>
<name>Algorithms</name>
<related-page>Robotic_Vision/StereoVision/StereoVision.html</related-page>
</item>
<item>
<name>FPGA Implementation</name>
<related-page>Robotic_Vision/StereoVision/StereoVisionImp.html</related-page>
</item>
<item>
<name>Applications</name>
<related-page>Robotic_Vision/StereoVision/StereoVisionApp.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>Optical Flow</name>
<item>
<name>Algorithms</name>
<related-page>Robotic_Vision/OpticalFlow/OpticalFlow.html</related-page>
</item>
<item>
<name>FPGA Implementation</name>
<related-page>Robotic_Vision/OpticalFlow/OpticalFlowImp.html</related-page>
</item>
<item>
<name>Applications</name>
<related-page>Robotic_Vision/OpticalFlow/OpticalFlowApp.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>3D Vision Applications</name>
<item>
<name>Seeing-Eye-Phone</name>
<related-page>Robotic_Vision/SeeingEyePhone/SeeingEyePhone.html</related-page>
</item>
<item>
<name>Obstacle Avoidance</name>
<related-page>Robotic_Vision/ObstacleAvoidance/ObstacleAvoidance.html</related-page>
</item>
<item>
<name>Structure from Motion (SFM)</name>
<related-page>Robotic_Vision/StructureFM/StructureFM.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>Unmanned Ground Vehicle</name>
<item>
<name>Real-time Target Tracking</name>
<related-page>Robotic_Vision/TargetTracking/TargetTracking.html</related-page>
</item>
<item>
<name>Semi-Autonomous Vehicle Intelligence (SAVI)</name>
<related-page>Robotic_Vision/SAVI/SAVI.html</related-page>
</item>
<item>
<name>Intelligent Ground Vehicle (IGV)</name>
<related-page>Robotic_Vision/UGV/UGV.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>Unmanned Quad-rotor Helicopter</name>
<item>
<name>Platform</name>
<related-page>Robotic_Vision/HelicopterPlatform/HelicopterPlatform.html</related-page>
</item>
<item>
<name>Flight Stabilization</name>
<related-page>Robotic_Vision/StablizeFlight/StablizeFlight.html</related-page>
</item>
</sub-menu>
<sub-menu>
<name>Helios Robotic Vision Platform (HRVP)</name>
<item>
<name>Helios RVP Information</name>
<related-page>Robotic_Vision/Helios/index.phtml</related-page>
</item>
</sub-menu>
</menu>
<news>
<title>
<text>Robotic Vision</text>
</title>
<article>
<title>
<text>Seeing Eye Phone</text>
</title>
<highlight-picture-file>Robotic_Vision/SeeingEyePhone/pic1small.jpg</highlight-picture-file>
<content>
<text>We have developed an indoor navigation system to help visually impaired users navigate unfamiliar environments such as public buildings. The Seeing Eye Phone system consists primarily of a server and a smartphone. The smartphone takes pictures at regular intervals as the user moves and sends them to the server along with a time stamp and its most recently known position. The user can also speak a simple predefined voice command, such as asking for their current location or how to get to their desired destination. The command is interpreted by voice recognition software and passed on to the server. The server matches the input images to the map images near the most recently known location in the database. Once a match is found, the server calculates the camera pose in the 3D real-world coordinate system and then uses the floor plan to find a route to the desired destination. It sends the location and directions back to the phone, which uses a text-to-speech function to direct the user to the desired destination.</text>
</content>
</article>
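<!--
A minimal Python sketch of the phone/server exchange described in the article
above. The helper names (camera.capture, server.send/receive, speak, best_match,
estimate_camera_pose, floor_plan.route) are illustrative assumptions, not the
lab's actual API.

import time

CAPTURE_INTERVAL_S = 2.0   # "regular intervals"; the exact period is assumed

def phone_loop(camera, server, speak):
    """Phone side: capture, tag, send; speak the directions that come back."""
    last_position = None
    while True:
        frame = camera.capture()
        server.send({"image": frame,
                     "timestamp": time.time(),
                     "last_position": last_position})
        reply = server.receive()            # {"location": ..., "directions": ...}
        last_position = reply["location"]
        speak(reply["directions"])          # delivered via text-to-speech
        time.sleep(CAPTURE_INTERVAL_S)

def server_step(request, map_images, floor_plan, goal):
    """Server side: match near the last known position, localize, plan a route."""
    candidates = map_images.near(request["last_position"])
    match = best_match(request["image"], candidates)        # image matching
    pose = estimate_camera_pose(match, request["image"])    # 3D world coordinates
    directions = floor_plan.route(pose, goal)
    return {"location": pose, "directions": directions}
-->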
<article>
<title>
<text>Real-time Stereo Vision</text>
</title>
<highlight-picture-file>Robotic_Vision/StereoVision/pic9small.jpg</highlight-picture-file>
<content>
<text>We have developed a simple but efficient stereo vision algorithm based on matching the shapes of intensity profiles from the left and right cameras. It is suitable for hardware implementation or for running on resource-limited systems. This algorithm has been used for obstacle avoidance and gesture recognition in various robotic vision applications.</text>
</content>
</article>
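<!--
A small Python/NumPy sketch of scan-line intensity-profile matching in the
spirit of the article above. The lab's actual profile-shape measure is not
given here, so the sum of absolute differences over a 1-D window stands in
for it; the window size and disparity range are assumed.

import numpy as np

def row_disparity(left_row, right_row, window=9, max_disp=64):
    """For each pixel on one scan line, slide the left intensity profile
    along the right scan line and keep the shift whose shape matches best."""
    half = window // 2
    n = len(left_row)
    disp = np.zeros(n, dtype=int)
    for x in range(half + max_disp, n - half):
        profile = left_row[x - half : x + half + 1].astype(float)
        best_d, best_err = 0, np.inf
        for d in range(max_disp):
            cand = right_row[x - d - half : x - d + half + 1].astype(float)
            err = np.abs(profile - cand).sum()
            if err < best_err:
                best_d, best_err = d, err
        disp[x] = best_d
    return disp
-->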
<article>
<title>
<text>Real-time Embedded Vision Sensor</text>
</title>
<highlight-picture-file>Robotic_Vision/OpticalFlow/pic4small.JPG</highlight-picture-file>
<content>
<text>We have developed a custom FPGA board for small unmanned vehicles (UVs). The FPGA chip is the sole computational support on the vehicle, so it performs all processing associated with sensing, communication, and control. A user can provide directives to the UV via a base station (a laptop or desktop computer that communicates wirelessly with the UV). The user can cause the UV to autonomously track and follow another vehicle or object by selecting an area in the image displayed on the base station. To support this functionality, visual feedback must be provided to the user at the base station. This is accomplished by converting each digital image captured by the FPGA vision sensor to an analog representation and transmitting it as standard NTSC video that is received, digitized, and displayed at the base station. The payload constraints imposed by small UVs can be severe: image sensors are small and lightweight, but it is difficult to provide the computational power needed on the vehicle to process video in real time at frame rate. Our system supports other sensors, but the principal source of information from the environment in this work is a camera mounted on the vehicle. Our platform uses a Virtex-4 FX60 FPGA that includes two PowerPC CPUs on chip, in addition to configurable logic resources. Thus, our application has two forms of computational support: conventional processors running compiled C code, and custom hardware implemented in the FPGA fabric and written in VHDL.</text>
</content>
</article>
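<!--
A minimal Python sketch of the base-station side of the interaction described
above: the user selects an image region and the selection is sent to the UV,
whose on-board firmware then tracks that target. The address, port, and
message format are assumptions; the actual wireless protocol is not specified
in the article.

import json
import socket

UV_ADDRESS = ("192.168.1.50", 5005)   # assumed address of the UV's radio link

def send_track_command(x, y, w, h):
    """Tell the UV which region of the displayed frame to track and follow."""
    msg = json.dumps({"cmd": "track", "region": [x, y, w, h]}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, UV_ADDRESS)
-->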
<article>
<title>
<text>Quad-rotor Helicopter</text>
</title>
<highlight-picture-file>Robotic_Vision/HelicopterPlatform/pic1small.JPG</highlight-picture-file>
<content>
<text>To empirically test the completed vision sensor, a hovering micro-UAV platform was designed and built. Design specifications required the platform to carry a total payload of 5 lb at the testing location's elevation of 4,500 ft and to achieve a flight time of 30 min. A quad-rotor design was selected and built mostly from off-the-shelf components, resulting in the platform shown on the left. The goal is to use the sensors readily available in a smartphone, such as the GPS, the accelerometer, the rate gyros, and the camera, to support vision-related tasks such as flight stabilization, estimation of the height above ground, target tracking, obstacle detection, and surveillance. An Android smartphone is connected through the USB port to external hardware with a microprocessor and circuitry that generates pulse-width-modulation signals to control the brushless servomotors on the quad-rotor. The high-resolution camera on the smartphone is used to detect and track features to maintain a desired altitude.</text>
</content>
</article>
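<!--
A minimal Python sketch of the altitude-hold idea in the article above: a
vision-based height estimate drives a throttle command that the phone writes
over USB to the external PWM-generating microcontroller. The gain, hover
throttle, and USB frame format are assumptions for illustration.

PWM_MIN, PWM_MAX = 1000, 2000   # typical RC pulse widths in microseconds
HOVER_PWM = 1500                # assumed throttle that holds a hover
KP = 40.0                       # assumed proportional gain, PWM counts per meter

def altitude_hold_step(target_height_m, estimate_height_m, usb_link):
    """One control step: compare target and estimated height, send throttle."""
    error = target_height_m - estimate_height_m()   # estimate from tracked features
    pwm = int(max(PWM_MIN, min(PWM_MAX, HOVER_PWM + KP * error)))
    usb_link.write(bytes([0xFF, (pwm >> 8) & 0xFF, pwm & 0xFF]))  # assumed framing
    return pwm
-->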
</news>
</page>