Research Movies 2012

Detection of a Person Needing Medical Aid by a Mobile Robot Equipped with Thermography and a Three-Dimensional SOKUIKI Sensor

OGATA, Kazuki

To determine whether a person needs medical aid, a robot has to recognize that person's vital signs. We propose a method for detecting a fallen person using thermography and the point cloud data of a three-dimensional SOKUIKI sensor, together with a method for detecting the breathing of a fallen person using the same three-dimensional SOKUIKI sensor. In the fallen-person detection, the 3D point cloud of a person is extracted based on the detection of body temperature, and the position and orientation of the body are estimated from that point cloud. For the breath detection, we conducted a preliminary experiment to confirm that breathing can be measured from the cross-sectional shape of the torso; in this method, whether the person is breathing is detected by calculating the power spectrum of the SOKUIKI sensor data. To realize the whole process of fallen-person detection and breath detection, we implemented motion planning for a mobile robot: the robot moves to the position of the body using the proposed fallen-person detection, and then detects the person's breathing.
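The power-spectrum idea for breath detection can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sampling rate, breathing band, and threshold are assumed values, and the chest-distance time series here is synthetic.

```python
import numpy as np

def detect_breathing(chest_distances, sample_rate_hz,
                     band=(0.1, 0.7), threshold=3.0):
    """Detect periodic breathing in a time series of chest-surface
    distances (e.g. range readings aimed at the torso).

    Breathing appears as a peak in the power spectrum within the
    typical respiration band (roughly 6-40 breaths per minute); we
    compare mean power inside the band to mean power above it.
    """
    x = np.asarray(chest_distances, dtype=float)
    x = x - x.mean()                      # remove the static torso distance
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out_band = freqs > band[1]
    if not in_band.any() or not out_band.any():
        return False
    return bool(spectrum[in_band].mean() > threshold * spectrum[out_band].mean())

# Synthetic example: 5 mm chest motion at 0.25 Hz (15 breaths/min) plus noise.
t = np.arange(0, 60, 0.1)                 # 60 s sampled at 10 Hz
signal = 1.0 + 0.005 * np.sin(2 * np.pi * 0.25 * t)
noisy = signal + np.random.default_rng(0).normal(0, 0.001, t.size)
breathing = detect_breathing(noisy, sample_rate_hz=10.0)
```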

Moving-Obstacle Avoidance for a Robot Moving on a Planned Path

Yeow Li Sa

A multi-robot environment with a decentralized system requires moving-obstacle avoidance during navigation to prevent collisions between robots. Many obstacle avoidance methods have been developed, but most of them focus on reaching the goal via the shortest route. In reality, it is necessary to define the accessible area of the robot in a complex indoor environment, which may contain door entrances, descending stairways, and so on. It is undesirable for a robot to move into areas where collisions are hard to detect or where there is no actual foothold. A method is described that closely follows the globally designated path by setting subgoals and a path boundary manually. The local path algorithm is based on Tsubouchi et al.'s method, in which only collisions in front of the robot are avoided and the robot moves at a single velocity. This is enhanced by letting the robot move within a range of velocities, so that it can avoid obstacles while staying close to the designated path. To allow the robot to detect obstacles not only in front but also behind and to the sides, an obstacle detection method using two laser range sensors is presented.
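The idea of selecting a motion from a range of velocities while respecting a manually set path boundary can be sketched as below. This is an illustrative toy, not Tsubouchi et al.'s formulation or the thesis implementation: the candidate speeds, heading offsets, boundary half-width, and clearance are all assumed values.

```python
import math

def choose_motion(robot_xy, subgoal_xy, obstacles,
                  speeds=(0.8, 0.5, 0.2),
                  heading_offsets_deg=(0, 15, -15, 30, -30),
                  boundary_halfwidth=0.6, dt=1.0, clearance=0.5):
    """Pick the fastest candidate motion whose predicted position
    (a) stays inside the boundary around the segment to the subgoal and
    (b) keeps clearance from the predicted positions of moving obstacles.

    obstacles: list of (x, y, vx, vy).  Returns (speed, heading_rad);
    (0.0, direct heading) means stop because nothing is safe.
    """
    rx, ry = robot_xy
    gx, gy = subgoal_xy
    direct = math.atan2(gy - ry, gx - rx)
    seg_len = math.hypot(gx - rx, gy - ry)

    for v in speeds:                       # prefer faster motion
        for off in heading_offsets_deg:    # prefer small path deviation
            h = direct + math.radians(off)
            px = rx + v * dt * math.cos(h)
            py = ry + v * dt * math.sin(h)

            # Lateral deviation from the designated segment (cross product).
            lateral = abs((gx - rx) * (py - ry) - (gy - ry) * (px - rx)) / seg_len
            if lateral > boundary_halfwidth:
                continue

            # Clearance from each obstacle at its predicted position.
            if all(math.hypot(px - (ox + ovx * dt),
                              py - (oy + ovy * dt)) >= clearance
                   for ox, oy, ovx, ovy in obstacles):
                return v, h
    return 0.0, direct                     # no safe candidate: stop and wait
```

With a stationary obstacle directly ahead, the planner falls back to a slower speed or a heading offset that stays inside the boundary rather than leaving the designated path entirely.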

Winning Entries of Demo Program Contest 2012

1st place: Avoid Overloading by Cooperative Work!


Intelligent home appliances in your house talk to each other by methods unnoticeable to humans. This may cause distrust and a sense of alienation in some users. Communication by audible sound might increase a sense of affinity, like R2-D2. In this demonstration, two robots turn on electrical equipment according to the user's directions. The total power supply is protected by a circuit breaker. Each robot asks the other, by audible sound, about its power consumption in order to know the amount of available power capacity. When necessary, the robots turn off some equipment before turning on the desired device, to prevent tripping the breaker.
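The decision the robots make before switching on a device can be sketched as a small capacity check. The breaker capacity, device wattages, and priority field here are illustrative assumptions; the demo's actual values and audible-sound protocol are not shown.

```python
BREAKER_CAPACITY_W = 1500   # assumed total supply protected by the breaker

def plan_turn_on(desired_watts, my_devices, peer_devices):
    """Decide which running devices to turn off (lowest priority first)
    so that turning on a new device keeps total consumption under the
    breaker limit.  Device dicts: {"name", "watts", "on", "priority"}
    (higher priority = keep running longer).

    Returns (ok, names_to_turn_off).  Peer devices stand in for the
    consumption reported by the other robot over audible sound.
    """
    running = [d for d in my_devices + peer_devices if d["on"]]
    total = sum(d["watts"] for d in running)
    to_off = []
    for d in sorted(running, key=lambda d: d["priority"]):
        if total + desired_watts <= BREAKER_CAPACITY_W:
            break
        total -= d["watts"]            # shed this load first
        to_off.append(d["name"])
    ok = total + desired_watts <= BREAKER_CAPACITY_W
    return ok, to_off

# Example: 800 W + 600 W already running; requesting a 500 W device
# forces the lowest-priority load (the heater) off first.
mine = [{"name": "heater", "watts": 800, "on": True, "priority": 1}]
peer = [{"name": "kettle", "watts": 600, "on": True, "priority": 2}]
ok, shed = plan_turn_on(500, mine, peer)
```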

2nd place: Akacyanpion

Gavin Paul

A baby is crying for her milk. You're busy and your hands are full. A robot system is needed that can prepare the bottle (shake it) and hand it over to a smaller mobile worker robot, which delivers it to the baby. The system is called AKACYANPION. AKACYANPION consists of a lightweight Speego Yamabico mobile robot, an Xtion RGBD camera, a pan-and-tilt unit for versatile maneuvering of the camera, and an Exact Dynamics iArm manipulator. When the bottle is required, the AKACYANPION Coordinator computer commands the mobile robot to go and fetch it. The Object Detector program finds the bottle on the table; the iArm then picks up the bottle and hands it over to the mobile robot, which delivers it to the crying baby.

3rd place: Say Cheese!

OGATA, Kazuki

Taking a picture of yourself is difficult, so it is handy to have a robot that automatically takes a picture of you. In this demonstration, the robot automatically takes a picture of a person's face with a digital camera. From the LRF data, the robot detects a part similar to a human figure, and then moves back and forth repeatedly, making adjustments until the figure fits completely within the angle of view. The LRF sensor is then rotated in the tilt direction to obtain 3D data of the figure. From this, the person's height can be detected, which determines the angle of elevation for the digital camera to capture a full-length picture and a close-up picture, respectively. Because the robot scans a person's height and then calculates the position of the face, it can take a picture of anyone regardless of his or her height.
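The step that converts a measured height into a camera elevation angle is simple trigonometry, sketched below. The camera mounting height and the face offset below the top of the figure are illustrative assumptions, not values from the demo.

```python
import math

def camera_tilt_deg(person_height_m, distance_m,
                    camera_height_m=0.9, face_offset_m=0.12):
    """Elevation angle (degrees, positive = upward) for the camera to
    aim at the person's face, given the height measured by the tilted
    LRF scan and the horizontal distance to the person.

    The face is approximated as a fixed offset below the top of the
    detected figure.
    """
    face_height = person_height_m - face_offset_m
    return math.degrees(math.atan2(face_height - camera_height_m, distance_m))

# A 1.7 m person standing 2 m away from a camera mounted at 0.9 m:
tilt = camera_tilt_deg(1.7, 2.0)   # roughly 19 degrees upward
```

Because the angle is derived from the measured height, the same formula covers both tall and short subjects, including faces below the camera (negative tilt).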

4th place: Autonomous Navigation Using Speech Recognition on a Mobile Phone

HARA, Yoshitaka

In 2006, I developed a remote control system for a mobile robot via a Web browser on a mobile phone. This time I improved that system and developed an autonomous navigation system for a mobile robot, in which the destination is determined by recognizing the user's speech on a mobile phone. The processing proceeds as follows. First, a smartphone (Google Android) recognizes the user's speech to determine the robot's destination. The destination information is then sent to the PHP system on a Web server. The robot receives the information from the server and plans a path to the destination using the A* algorithm on a map generated in advance by Rao-Blackwellized particle filter SLAM. During autonomous navigation, Adaptive Monte Carlo Localization is performed, and the robot avoids obstacles using the Dynamic Window Approach.
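The global planning step, A* on the pre-built grid map, can be sketched as below. This is a generic 4-connected grid A* with a Manhattan heuristic, not the system's actual planner, and the occupancy-grid encoding (0 = free, 1 = occupied) is an assumption.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid
    (0 = free, 1 = occupied).  Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):                           # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]      # (f = g + h, g, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:                    # reconstruct by walking back
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue                       # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# A wall forces the path around the right side of a 3x3 map.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```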

5th place: Box Collecting Robot


In this demonstration, the robot collects cereal boxes placed in a 1.5 m x 3 m area. The robot estimates the position and orientation of the cereal boxes from laser range sensor data and links itself to a box using S-hooks. After that, the robot carries the box outside the area and uncouples it. To link itself to a box, the robot has to know the box's position and orientation accurately, so I spent much of my time achieving this.
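One simple way to estimate a box's position and orientation from the laser points on its visible face is to take the centroid and the principal direction of the point cluster; a minimal 2D-PCA sketch is below. This is an illustration of the general technique, not the demo's actual code.

```python
import math

def box_face_pose(points):
    """Estimate the pose of a box face from 2D laser points lying on it.

    The centroid gives the face center; the dominant eigenvector of the
    2x2 covariance matrix gives the face direction.  Returns
    (cx, cy, angle_rad), with the angle measured from the x-axis.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n

    # Covariance of the centered points.
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n

    # Orientation of the dominant eigenvector of [[sxx, sxy], [sxy, syy]].
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return cx, cy, angle

# Points sampled along a face tilted 30 degrees from the x-axis.
face = [(0.1 * i * math.cos(math.pi / 6), 0.1 * i * math.sin(math.pi / 6))
        for i in range(11)]
cx, cy, angle = box_face_pose(face)
```

Given the face pose, the approach direction for the S-hook coupling follows as the normal to the estimated face angle.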