Application: Vision Sensor Fusion
Introduction

Vision systems have become popular for remote vision sensing in geographically distributed environments due to the vast amount of information they provide. Mobile agent technology is well suited to vision sensor fusion: it improves power efficiency by reducing communication requirements and enhances fusion processing by allowing in-situ integration of on-demand visual processing and analysis algorithms. Mobile agents can dynamically migrate between multiple vision sensors and combine the necessary sensor data in a manner specific to the requesting system.

Required Packages for Executing Example Code:

In order to run the examples below, the Ch Robot, Ch OpenCV, and Ch GAUL packages must first be installed on the controlling computer.

Example 1: Part Localization in Assembly Automation
Description

An automation agent containing the desired control functions that make up the first automation task is sent to the automation cell. Once the agent has been received, it begins its execution on the automation cell computer. The mobile agent code simulates the operations of an automation assembly. When the automation work cell is started, the two robots and the conveyor system are moved to their ready positions after initialization and calibration. The IBM 7575 robot is then used to place the camera over the area where the parts to be picked up are expected to be. The automation agent generates a new agent containing the required object recognition algorithm and sends it to the vision system. The vision agent accesses the camera hardware and acquires the position of the part relative to the IBM 7575 body reference frame. The vision agent then migrates back to the assembly cell and uses FIPA communication to relay the part positions to the assembly agent.

The IBM 7575 then picks up the first part from the acquired location and moves the camera to the desired drop-off location. Once again, a new vision agent is deployed to the vision system with a drop-off location recognition algorithm. After the vision agent returns, the drop-off location is relayed back to the assembly agent and the part is lowered into the drop-off container on the conveyor system. Once a part has been placed, the conveyor system rotates. As the conveyor rotates into its preset position, the IBM 7575 moves back to the next acquired pickup location while the Puma 560 moves to pick up a part from the conveyor. After the Puma 560 has picked up a part, it moves to a drop-off location and positions the piece.

The assembly cycle simulates the assembly operations of a part. Once a part has gone through its initial stages of manufacturing, it is placed on a conveyor system and brought to either the next stage in fabrication or packaging. The motions of the robots are synchronized to ensure that each robot is fully stopped before proceeding with the next control command.
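For reference, the following sketch illustrates the kind of part-localization step a vision agent might perform. It is a minimal sketch written against the classic OpenCV C API that Ch OpenCV wraps; the file name part_view.png, the threshold value, and the mm-per-pixel scale are illustrative assumptions and are not taken from the agent code linked below.

/* Sketch: segment the camera image, keep the largest blob as the candidate
 * part, and report its centroid.  Image name, threshold, and scale are
 * illustrative assumptions. */
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>

int main(void) {
    IplImage *frame, *binary;
    CvMemStorage *storage;
    CvSeq *contours = NULL, *c, *best = NULL;
    CvMoments m;
    double area, best_area = 0.0;
    double scale = 0.5;        /* assumed mm per pixel from camera calibration */
    double u, v;

    frame = cvLoadImage("part_view.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (frame == NULL) {
        fprintf(stderr, "cannot load camera image\n");
        return 1;
    }
    binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    storage = cvCreateMemStorage(0);

    /* segment bright regions and extract their outer contours */
    cvThreshold(frame, binary, 128, 255, CV_THRESH_BINARY);
    cvFindContours(binary, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    /* keep the largest blob as the candidate part */
    for (c = contours; c != NULL; c = c->h_next) {
        cvMoments(c, &m, 0);
        area = fabs(m.m00);
        if (area > best_area) {
            best_area = area;
            best = c;
        }
    }

    if (best != NULL) {
        cvMoments(best, &m, 0);
        u = m.m10 / m.m00;     /* centroid in pixel coordinates */
        v = m.m01 / m.m00;
        printf("part centroid: (%.1f, %.1f) px -> (%.1f, %.1f) mm in the camera plane\n",
               u, v, u * scale, v * scale);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&binary);
    cvReleaseImage(&frame);
    return 0;
}

In the actual system, a result of this kind would be expressed in the IBM 7575 body reference frame and relayed to the assembly agent over FIPA communication, as described above.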

Agent Code
Mobile Assembly Agent: View
Mobile Code in the Assembly Agent: View Download

Mobile Vision Agent: View
Mobile Code in the Vision Agent: View Download

Running the code requires the installation of the Ch Robot package, available here.
Example 2: Tier-Scalable Planetary Reconnaissance
Description

There has been a fundamental shift in remote extraterrestrial planetary reconnaissance from segregated tier reconnaissance methods to an integrated multi-tier and multi-agent hierarchical paradigm. The use of a cooperative multi-tier paradigm requires a flexible architecture that provides not only a mechanism for hardware access but also an agile vision fusion mechanism for vertical and horizontal integration of all vision sensor components.

Experiment

This experiment simulates a tier-scalable planetary reconnaissance mission. The main experimental objective is to have a mobile robot with specialized equipment locate and take mineral samples of desirable rocks. However, the mobile robot is sensor limited and incapable of locating a desirable target on its own. The mobile robot therefore utilizes the visual system of a manipulator robot exploring the same area and the visual system of an aerial robot taking topological images in order to choose and localize acceptable rocks for sampling. The main purpose of this case study is not the actual algorithms used to implement object detection or path planning, but rather to show how mobile agents can be utilized to integrate information obtained from remote vision systems.

The experiment begins with the Khepera III mobile robot approaching the designated rock-sampling area. The Khepera III is running a Mobile-C agency and is under the control of a mobile agent whose objectives are to search for and sample proper rocks. The mobile agent migrates to the mobile manipulator robot, where it utilizes the available Ch OpenCV commands and runs the field and target object detection algorithms. From there, it migrates to the overhead aerial robot with the field and object data obtained from the mobile manipulator robot, runs the field object detection algorithm on the aerial view, and synchronizes the images to locate the target. The mobile agent then populates a binary map of the area in which cells containing objects are set to 1 and all other cells to 0. Afterward, the mobile agent runs the genetic path planning algorithm using the Ch GAUL package, producing waypoints for the Khepera III mobile robot. Finally, the mobile agent migrates back to the Khepera III mobile robot and displays the waypoints.
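To make the last two steps concrete, the sketch below shows in plain C how a binary occupancy map might be populated from detected object coordinates and how a candidate waypoint path could be scored. The grid size, object coordinates, candidate path, and collision penalty are all illustrative assumptions; the sketch does not use the actual Ch GAUL API or the algorithms in the agent code linked below.

/* Sketch: build a binary occupancy map (1 = object, 0 = free) and score a
 * candidate waypoint path as a genetic planner's cost function might.
 * Grid size, detections, path, and penalty weight are assumptions. */
#include <stdio.h>
#include <math.h>

#define MAP_W 20
#define MAP_H 20

typedef struct { int x, y; } Waypoint;

/* mark detected objects in the map from their grid coordinates */
static void populate_map(int map[MAP_H][MAP_W], const Waypoint *objects, int n) {
    int i;
    for (i = 0; i < n; i++)
        map[objects[i].y][objects[i].x] = 1;
}

/* lower cost is better: path length plus a heavy penalty per occupied cell */
static double path_cost(int map[MAP_H][MAP_W], const Waypoint *path, int n) {
    double cost = 0.0;
    int i;
    for (i = 1; i < n; i++) {
        cost += hypot((double)(path[i].x - path[i-1].x),
                      (double)(path[i].y - path[i-1].y));
        if (map[path[i].y][path[i].x])
            cost += 100.0;                  /* assumed collision penalty */
    }
    return cost;
}

int main(void) {
    int map[MAP_H][MAP_W] = {{0}};
    Waypoint rocks[] = { {5, 7}, {12, 3}, {15, 15} };    /* assumed detections */
    Waypoint path[]  = { {0, 0}, {6, 6}, {12, 12}, {15, 14} };

    populate_map(map, rocks, 3);
    printf("candidate path cost: %.2f\n", path_cost(map, path, 4));
    return 0;
}

In a genetic planner, a cost function of this form would typically serve as the fitness evaluation over chromosomes that encode the waypoint sequence, with the lowest-cost individual providing the waypoints reported back to the Khepera III.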

Agent and Code
Mobile Robot Agent: View
Required Images: sideview.png topview.png

Running the code requires the installation of the Ch GAUL and Ch OpenCV packages.
http://iel.ucdavis.edu/projects/chgaul
http://www.softintegration.com/products/thirdparty/opencv