Robot Design

Design

The overall design concept focuses on ease of control, a neat appearance, and the flexibility to adapt to the surrounding environment.

Hardware

The robot's structure follows an easy-to-disassemble concept so that the height and position of its devices can be adjusted.
The robot's dimensions suit general indoor scenes. Its height is around 1-2 feet above table level and can be modified manually through its mechanics. We use a PIONEER P3-DX as the robot's base, with its 44 cm x 38 cm footprint, and extend the base with an aluminum structure that holds all the devices.

According to the P3-DX specifications, the total weight including all devices is 40 kg, with 16.5 cm diameter drive wheels. The two motors use 38.3:1 gear ratios and contain 500-tick encoders. This differential-drive platform is highly holonomic: it can rotate in place by moving both wheels, or it can swing around a stationary wheel in a circle of 32 cm radius. A rear caster balances the robot. The robot can move at speeds of up to 1.6 m/s.
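These figures also give the odometry resolution. A minimal sketch of the arithmetic, assuming the 500 encoder ticks are counted per motor-shaft revolution (so one wheel revolution is 500 x 38.3 ticks):

    #include <cstdio>

    int main() {
        const double kPi = 3.14159265358979;
        const double wheel_diameter_m    = 0.165;  // 16.5 cm drive wheels
        const double gear_ratio          = 38.3;   // motor-to-wheel reduction
        const double ticks_per_motor_rev = 500.0;  // 500-tick encoders

        // Assumption: ticks are counted on the motor shaft, before the gearbox.
        const double ticks_per_wheel_rev = ticks_per_motor_rev * gear_ratio;
        const double metres_per_tick = kPi * wheel_diameter_m / ticks_per_wheel_rev;

        std::printf("ticks per wheel revolution: %.0f\n", ticks_per_wheel_rev);
        std::printf("wheel travel per tick: %.4f mm\n", metres_per_tick * 1000.0);
        return 0;
    }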

Device

Notebook : Lenovo T400 Core 2 Duo P8600 2.40GHz

We mount one or two notebooks on the robot to receive environment input and to signal output responses through the LCD screen and the robot's mechanics.

Laser range finder : HOKUYO URG-04LX

[Figure: HOKUYO URG-04LX laser range finder]
The HOKUYO URG-04LX has a 20 to 4000 mm range and a 240-degree scanning area. The laser, located near the wheels close to the floor, provides information about the room's obstacles, such as walls, tables, and chairs. This information is used by the Navigator module.
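To use a scan, the per-beam ranges have to be turned into obstacle positions around the robot. A minimal sketch, assuming the beams are spread evenly over the 240-degree arc and centered on the robot's heading (the exact beam layout comes from the URG-04LX driver):

    #include <cmath>
    #include <vector>

    struct Point2D { double x, y; };

    // Convert one URG-04LX scan (range readings in mm over a 240-degree arc)
    // into Cartesian obstacle points, in metres, in the robot frame.
    std::vector<Point2D> scanToPoints(const std::vector<double>& ranges_mm) {
        const double kPi  = 3.14159265358979;
        const double fov  = 240.0 * kPi / 180.0;   // 240-degree scanning area
        const double a0   = -fov / 2.0;            // assumed: centered on heading
        const double step = fov / (ranges_mm.size() - 1);

        std::vector<Point2D> points;
        for (std::size_t i = 0; i < ranges_mm.size(); ++i) {
            const double r = ranges_mm[i];
            if (r < 20.0 || r > 4000.0) continue;  // outside the sensor's range
            const double a = a0 + step * i;
            points.push_back({ r * std::cos(a) / 1000.0,
                               r * std::sin(a) / 1000.0 });
        }
        return points;
    }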

Pan-Tilt-Zoom camera : Canon VC-C4

We use the Canon VC-C4 pan-tilt-zoom communication camera, with 16x zoom, center-mounted for smoother pan/tilt and reduced vibration; it offers 460 TVL resolution, a height of 3.5", a weight of 1 lb, a pan range of 340 degrees, and a tilt range of +10/-90 degrees. The camera is located at the head of the robot and acts as its eyes on the surrounding environment. Images from the PTZ camera are used by the Human and Object Tracker.

Robotic arm : Katana

The Katana arm will be used, with 5-6 degrees of freedom and a 517 mm operating range. We are currently ordering this arm.

Microphone & Speaker : wireless headset

The robot uses a wireless headset to receive commands from the user's voice. A speaker is placed on the robot to speak back to the user.

Software

Our software is divided into three layers: the Input layer, the Strategy layer, and the Output layer. Each layer consists of several modules, as shown in figure 1. We use a couple of notebooks: one for the Input layer and the other for the Strategy and Output layers. Each module in the Input layer runs in its own thread because of its heavy processor consumption. Our software also includes a GUI, which gives us an easier way to test our robot; we implement the GUI with OpenGL as a Win32-based application.
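As a sketch of that structure (class names are illustrative, not our actual code), each module can share a small base class that owns its thread:

    #include <atomic>
    #include <thread>

    // Each module runs its own loop in its own thread, as described above.
    class Module {
    public:
        virtual ~Module() {}  // derived classes call stop() in their destructors
        void start() {
            running = true;
            worker = std::thread([this] { while (running) step(); });
        }
        void stop() {
            running = false;
            if (worker.joinable()) worker.join();
        }
    protected:
        virtual void step() = 0;  // one read/process/publish iteration
    private:
        std::atomic<bool> running{false};
        std::thread worker;
    };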

Input layer

Our robot has the following sensors: a microphone, a laser range finder, an arm, motor encoders (Pioneer P3-DX), and a PTZ camera. In each Input-layer module, raw data is read from the sensor, which triggers a new loop of execution, and the outputs are stored in the module itself for the Strategy layer. The frame rates of the other two layers depend on the Navigator module, which has the slowest frame rate, at 10 fps.
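A minimal sketch of one such module (building on the Module base class above; the driver and processing calls are stubbed placeholders):

    #include <mutex>
    #include <vector>

    struct Obstacle { double x, y; };

    class LaserInputModule : public Module {
    public:
        // The Strategy layer polls the most recent output at its own rate.
        std::vector<Obstacle> latest() {
            std::lock_guard<std::mutex> lock(m);
            return obstacles;
        }
    protected:
        void step() override {
            std::vector<double> scan = readLaser();              // raw sensor data
            std::vector<Obstacle> out = extractObstacles(scan);  // one execution loop
            std::lock_guard<std::mutex> lock(m);
            obstacles = out;                                     // stored in the module
        }
    private:
        // Stubs standing in for the real driver and processing code.
        std::vector<double> readLaser() { return {}; }
        std::vector<Obstacle> extractObstacles(const std::vector<double>&) { return {}; }

        std::mutex m;
        std::vector<Obstacle> obstacles;
    };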

The Input layer consists of three modules: Speech Recognizer, Navigator, and Human and Object Tracker. The Speech Recognizer module is divided into two parts, speech recognition and voice recognition: speech recognition extracts the meaning of the utterance and provides a speech command id, while voice recognition identifies the speaker. The Navigator module is implemented using polar matching and a laser-based EKF SLAM approach; it provides the map and the position and direction of the robot. The Human and Object Tracker module uses the SIFT and Optical Flow algorithms for tracking, plus several techniques to recognize humans and objects. We mainly use OpenCV in this module because its ready-made image processing algorithms help us implement our own algorithms faster. The Human and Object Tracker module provides a human id and an object id.
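As an illustration of the Optical Flow part with OpenCV (a minimal sketch; the feature count and thresholds are arbitrary choices, not our tuned values):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Track corner features from the previous grayscale frame into the
    // current one with pyramidal Lucas-Kanade optical flow.
    std::vector<cv::Point2f> trackFeatures(const cv::Mat& prevGray,
                                           const cv::Mat& currGray) {
        std::vector<cv::Point2f> prevPts, currPts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);
        if (prevPts.empty()) return currPts;

        std::vector<unsigned char> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts,
                                 status, err);

        std::vector<cv::Point2f> tracked;  // keep only successfully tracked points
        for (std::size_t i = 0; i < currPts.size(); ++i)
            if (status[i]) tracked.push_back(currPts[i]);
        return tracked;
    }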

Strategy layer

There are three modules in this layer: Manager, Task, and Skill. All modules work together. The Manager, the top level of the strategy, manages a task queue. A Task is implemented as a state machine that controls the behavior of our robot. A Skill generates the appropriate commands for our robot via the Output layer.
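A minimal sketch of that arrangement (the GoToPoint task and its states are illustrative examples, not our actual task set):

    #include <memory>
    #include <queue>

    // A Task is a state machine; update() advances it and returns true when done.
    class Task {
    public:
        virtual ~Task() {}
        virtual bool update() = 0;
    };

    class GoToPointTask : public Task {
        enum State { Turn, Drive, Done };
        State state = Turn;
    public:
        bool update() override {
            switch (state) {
                case Turn:  /* invoke turning skill */ state = Drive; break;
                case Drive: /* invoke driving skill */ state = Done;  break;
                case Done:  return true;
            }
            return false;
        }
    };

    // The Manager runs the task at the front of its queue until it finishes.
    class Manager {
        std::queue<std::unique_ptr<Task>> tasks;
    public:
        void add(std::unique_ptr<Task> t) { tasks.push(std::move(t)); }
        void update() {
            if (!tasks.empty() && tasks.front()->update()) tasks.pop();
        }
    };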

Output layer

The Output layer consists of two modules: the Robot Controller module and the Emotion module (display and speech). All constraints are applied to the control data from each module. The Robot Controller module controls the mobility of the arm, the robot, and the PTZ camera; it generates paths, trajectories, and control commands for the robot by analyzing the map data. The Emotion module generates and synchronizes the friendly face and speech for our robot.
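As a sketch of the constraint step (the 1.6 m/s cap comes from the platform's top speed above; the turn-rate limit is an assumed value for illustration):

    #include <algorithm>

    struct DriveCommand { double linear_mps; double angular_rps; };

    // Clamp outgoing drive commands before they reach the base.
    DriveCommand applyConstraints(DriveCommand cmd) {
        const double kMaxLinear  = 1.6;  // m/s, P3-DX top speed
        const double kMaxAngular = 1.0;  // rad/s, assumed limit
        cmd.linear_mps  = std::max(-kMaxLinear,
                                   std::min(kMaxLinear,  cmd.linear_mps));
        cmd.angular_rps = std::max(-kMaxAngular,
                                   std::min(kMaxAngular, cmd.angular_rps));
        return cmd;
    }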
