Name: 劉 詠梅 (489634079)

Title: Mobile Robot Navigation Based on Signboard Information in Scenes


Thesis Abstract

To navigate effectively in a complex workspace, a mobile robot must know where it is. As a mobile robot moves through its environment, its actual position and orientation always differ from the position and orientation it is commanded to hold, and wheel slippage is a major source of this error. Therefore, sensory feedback is needed to locate the robot in its environment. In our research, we deal with visual information obtained by the robot.

A human traveler who uses a map to navigate through an environment containing landmarks often takes the following steps to identify his or her location in the environment:
1. identify surrounding landmarks in the environment;
2. measure the relative position and orientation between the landmarks and himself or herself;
3. find the corresponding landmarks on the map and determine his or her location.

In real life, landmarks are often marked on maps in the form of characters, and we often identify these landmarks by means of signboard recognition. Character information can therefore be considered a valuable and reliable clue in a robot navigation system. Here, we develop a method for navigating a mobile robot in its environment using character information in scenes as landmarks. We realize this in the following three stages.

1. Recognition of characters in a scene image
Recognition of characters in a scene is much more difficult than recognition of characters in document images, due to the complexity of the background, poor illumination conditions, and the variety of characters. Here, we present a method to recognize Japanese characters written on signboards and traffic (road) signs in grey-scale scene images. Compared with other objects in scene images, Japanese characters (Japanese kana and Chinese characters) written on signboards and traffic (road) signs have the following features: high spatial frequency, high grey-level contrast, constant aspect ratio, and geometrical regularity of character arrangement. We use these features to extract character components from scene images. First, we extract subregions with high spatial frequency and large grey-level variance as candidate character components from an input scene image. Then, we select characters by using several heuristics, such as constraints on size and shape, bimodality of the intensity histogram, and alignment and proximity of characters. Extracted character lines are recognized by template matching.
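
As a rough illustration of this pipeline, the following Python/OpenCV sketch extracts high-variance, edge-dense blocks as character-component candidates, filters them by size and aspect ratio, and classifies a candidate patch by template matching. The block size, thresholds, and the templates dictionary are assumptions for illustration only; the thesis method additionally uses histogram bimodality and alignment/proximity checks.

    import cv2
    import numpy as np

    def find_character_candidates(gray, block=16, var_thresh=400.0, edge_thresh=0.15):
        # Mark blocks with large grey-level variance and high edge density as
        # character-component candidates (a proxy for "high spatial frequency,
        # high grey-level contrast"). Thresholds here are illustrative.
        edges = cv2.Canny(gray, 100, 200)
        h, w = gray.shape
        mask = np.zeros((h, w), np.uint8)
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = gray[y:y + block, x:x + block]
                density = np.count_nonzero(edges[y:y + block, x:x + block]) / float(block * block)
                if patch.var() > var_thresh and density > edge_thresh:
                    mask[y:y + block, x:x + block] = 255
        # Group adjacent candidate blocks and keep components whose bounding box
        # satisfies a size and aspect-ratio constraint, standing in for the
        # size-and-shape heuristics.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        boxes = []
        for i in range(1, n):
            x, y, bw, bh, area = stats[i]
            if 0.3 < bw / float(bh) < 3.0 and area >= block * block:
                boxes.append((x, y, bw, bh))
        return boxes

    def match_character(candidate, templates):
        # Classify a candidate patch by normalized cross-correlation against a
        # dictionary of character templates {label: grey-scale template image}.
        best_label, best_score = None, -1.0
        for label, tmpl in templates.items():
            patch = cv2.resize(candidate, (tmpl.shape[1], tmpl.shape[0]))
            score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > best_score:
                best_label, best_score = label, score
        return best_label, best_score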

We conducted experiments on 40 outdoor and 20 indoor real scene images. As a result, character lines were detected at a high rate of over 80%.

2. Estimating the camera orientation and rectifying distorted signboard images
Information about the camera viewing direction helps us determine the relative orientation between the robot and a signboard, and rectify a distorted image of the signboard. We estimate the relative orientation of the camera and a signboard from a single view of the signboard using two methods.
1) Method based on vanishing points. The convergence of three-dimensional parallel lines under perspective projection is a clue for inferring information about a three-dimensional scene. We calculate the vanishing points of the signboard's edges and use them to estimate the relative orientation of the signboard plane and the camera viewing direction from a single view.
2) Method based on the coordinates of the four vertices of a signboard. On the assumption that the signboard is rectangular, we calculate the relative coordinates of its four vertices in the camera coordinate system; from these, we obtain the normal vector of the signboard plane.
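
As a concrete illustration of the vanishing-point method, the sketch below computes the two vanishing points from the signboard's edge lines and derives the plane normal in the camera frame. It assumes a known 3x3 intrinsic matrix K and that the two edge directions of the rectangular signboard are orthogonal in 3-D; the function names and inputs are illustrative, not taken from the thesis.

    import numpy as np

    def vanishing_point(edge_a, edge_b):
        # Each edge is a pair of image points ((x1, y1), (x2, y2)).
        # Lines and their intersection are computed in homogeneous coordinates.
        def line(p, q):
            return np.cross(np.append(p, 1.0), np.append(q, 1.0))
        return np.cross(line(*edge_a), line(*edge_b))

    def signboard_normal(vp_horizontal, vp_vertical, K):
        # Back-project each vanishing point to a 3-D direction through K^-1;
        # these are the directions of the signboard's horizontal and vertical
        # edges, so their cross product is the signboard-plane normal.
        K_inv = np.linalg.inv(K)
        d1 = K_inv @ np.asarray(vp_horizontal, dtype=float)
        d2 = K_inv @ np.asarray(vp_vertical, dtype=float)
        d1 /= np.linalg.norm(d1)
        d2 /= np.linalg.norm(d2)
        n = np.cross(d1, d2)
        return n / np.linalg.norm(n)

Here the top and bottom edges of the signboard give one vanishing point and the left and right edges give the other; the resulting normal encodes the relative orientation of the signboard plane and the camera viewing direction.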

Once the viewing direction of the camera is known, it is possible to rectify a distorted signboard image. We develop two corresponding approaches. One back-projects the distorted signboard image onto a plane that is parallel to the signboard in the 3-D world. The other finds the transformation matrix from the camera coordinate system to the object coordinate system; an undistorted signboard image is then obtained by calculating the coordinates of each point in the object coordinate system.
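
A minimal sketch of the rectification step is shown below. It uses the plane-to-plane homography induced by the four detected vertices, which has the same effect as back-projecting onto a plane parallel to the signboard, rather than the thesis's explicit back-projection or coordinate-transformation formulation; the output size is an assumption.

    import cv2
    import numpy as np

    def rectify_signboard(image, corners, out_w=400, out_h=200):
        # corners: the four signboard vertices in the image, ordered
        # top-left, top-right, bottom-right, bottom-left.
        # Warping with the induced homography maps the distorted quadrilateral
        # to a fronto-parallel rectangle, i.e. the view seen from a plane
        # parallel to the signboard.
        src = np.float32(corners)
        dst = np.float32([[0, 0], [out_w - 1, 0],
                          [out_w - 1, out_h - 1], [0, out_h - 1]])
        H = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(image, H, (out_w, out_h))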

Experimental results show the effectiveness of both the orientation estimation and the rectification methods.

3. Using signboard information in a robot navigation system
We conducted robot navigation experiments in an indoor environment using signboards as landmarks. Given an environment map that contains signboard information, the task for the robot is to move autonomously from the start to the destination in a hallway. Through signboard recognition, the robot can determine its actual position in the environment and adjust its moving direction accordingly. This shows that signboard information can be used as an effective clue for giving a robot self-localization capability.
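
The self-localization step can be sketched as follows: given the signboard's pose recorded in the map and its pose measured relative to the robot (from the vertex coordinates and the estimated normal), the robot's position and heading follow from a 2-D rigid transformation, and the heading error toward the next waypoint gives the direction correction. All names and the planar 2-D simplification are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def localize_from_signboard(sign_pos_map, sign_yaw_map, rel_pos, rel_yaw):
        # sign_pos_map, sign_yaw_map: signboard position (x, y) and facing
        # direction stored in the environment map.
        # rel_pos, rel_yaw: the same quantities measured in the robot frame
        # from signboard recognition and orientation estimation.
        robot_yaw = sign_yaw_map - rel_yaw
        c, s = np.cos(robot_yaw), np.sin(robot_yaw)
        R = np.array([[c, -s], [s, c]])
        robot_pos = np.asarray(sign_pos_map, dtype=float) - R @ np.asarray(rel_pos, dtype=float)
        return robot_pos, robot_yaw

    def heading_correction(robot_pos, robot_yaw, waypoint):
        # Angle the robot should turn to head toward the next waypoint
        # on the planned route, wrapped to [-pi, pi).
        dx, dy = np.asarray(waypoint, dtype=float) - robot_pos
        desired = np.arctan2(dy, dx)
        return (desired - robot_yaw + np.pi) % (2 * np.pi) - np.pi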



