Cart-O-matic

Contact: Philippe Lucidarme

LISA (Laboratoire des Systèmes Automatisés)
EA 4094 - University of Angers
62 avenue notre Dame du Lac
49000 Angers, France
LORIA (laboratoire lorrain de recherche en informatique et ses applications)
UMR 7503 – Université Henri Poincaré, Nancy 1
Campus scientifique - BP 239
54506 Vandoeuvre-lès-Nancy Cedex, France
INRIA Maia team (contacts: F. Charpillet, O. Simonin)


Description

This project is part of the French robotics contest Défi CAROTTE, organized by the General Delegation for Armaments (DGA) and the French National Research Agency (ANR). The aim of the Cart-O-matic project is to design and build a multi-robot system able to autonomously map an unknown building and to recognize various objects inside it. The scientific issues of this project deal with Simultaneous Localization And Mapping (SLAM), multi-robot collaboration, and object recognition and classification. The research teams involved in this project have developed innovative approaches to each of these fields.



Gallery

2010 ::: Pimprenelle



2011 ::: Mini Pekee



2012 ::: MiniRex



2012 ::: Test area



2012 ::: Bourges



2012 ::: Défi CAROTTE



Robot specifications

The robots used for this project, named MiniRex (MINI Robot d'EXploration, i.e. miniature exploration robot), have been designed and built at the University of Angers. Each robot is controlled by a pITX PC with a 1.6 GHz Atom processor. Lithium-polymer batteries allow more than 6 hours of operation. Several sensors (inclinometer, ultrasonic range sensors, etc.) allow safe navigation.

Simultaneous Localization And Mapping (SLAM)

The first scientific issue of the project deals with localization and mapping. Several algorithms already exist, mainly based on Kalman Filtering (KF) or the Iterative Closest Point (ICP) method. These methods were not well suited to the framework of this project. A novel approach has been developed, based on the computation of local minima, in order to minimize the distance between the LIDAR pattern and the known map. This new algorithm, named Slam-O-matic, provides high-quality results and can be applied to multi-robot systems, as well as to relocalization and the kidnapping problem. A patent has been filed concerning Slam-O-matic; more information about licensing can be supplied on demand.
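To illustrate the general idea of scan-to-map matching by local minimization (this is only a sketch of the principle, not the patented Slam-O-matic implementation; `match_scan` and its coordinate-descent search are hypothetical names and choices):

```python
import math

def scan_to_map_cost(scan, occupied, pose):
    """Sum of squared distances from the transformed scan points to the
    nearest known-occupied map point (brute-force nearest neighbour)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    cost = 0.0
    for px, py in scan:
        wx, wy = x + c * px - s * py, y + s * px + c * py
        cost += min((wx - ox) ** 2 + (wy - oy) ** 2 for ox, oy in occupied)
    return cost

def match_scan(scan, occupied, pose0, step=0.5, min_step=1e-3):
    """Coordinate-descent search for a local minimum of the scan-to-map
    distance over (x, y, theta), shrinking the step until convergence."""
    pose = list(pose0)
    best = scan_to_map_cost(scan, occupied, pose)
    while step > min_step:
        improved = False
        for i in range(3):                      # x, y, then theta
            for delta in (+step, -step):
                cand = list(pose)
                cand[i] += delta
                cost = scan_to_map_cost(scan, occupied, cand)
                if cost < best:
                    pose, best, improved = cand, cost, True
        if not improved:
            step *= 0.5                         # refine the search
    return tuple(pose), best
```

Starting from an odometry estimate, the search slides and rotates the scan until it settles into a local minimum of the distance to the map, which yields the pose correction.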

100 m long building
80 m long map without loop closure

Multi-robot collaboration

The main characteristic of the Cart-O-matic project is to propose a multi-robot approach for the CAROTTE challenge. Our motivation is the definition of a robust and cooperative solution. Indeed, a fleet of autonomous mobile robots can explore the environment more quickly than a single robot, and the robots can help each other avoid dangerous areas. For this purpose we explored a decentralized approach in which each robot considers the location of the others and the current co-built map to decide which part of the environment to explore. The objective is to ensure an efficient spatial distribution of the robots while they cooperate to build the whole map. Our approach relies on a frontier allocation algorithm which enforces the distribution of the robots among the frontiers.

Communication and collaboration for building the map
While moving, each robot marks its map with dangerous/obstacle areas (perceived by ultrasonic sensors, Kinect, etc.).

The multi-robot exploration algorithm
Each robot is allocated to a frontier following a new criterion that determines the interest of a frontier for a robot. We define the position (or rank) of a robot with respect to a frontier as the number of robots closer than it to this frontier. A robot is then allocated to the frontier for which it has the lowest rank. As a consequence, the robots are distributed towards different directions rather than all heading towards the closest frontiers. The figure below illustrates the allocations produced by two standard algorithms, i.e. Closest Frontier (A) and the Greedy approach (B), and by our MinPos algorithm (C).

(A) Closest Frontier
(B) Greedy approach
(C) MinPos algorithm

Computing this criterion requires evaluating the distance from each robot to each frontier. For this purpose we use a wavefront computation (Barraquand and Latombe, 1991) from each frontier, illustrated by the color gradients in the pictures.
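A wavefront from a frontier is a breadth-first distance transform over the occupancy grid; a minimal sketch (grid encoding and function name are assumptions, not the project's code):

```python
from collections import deque

def wavefront(grid, frontier_cells):
    """Breadth-first wavefront propagated from a set of frontier cells
    over a grid (0 = free, 1 = obstacle). Returns a map of cell
    distances to the nearest frontier cell; -1 marks unreachable cells."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    queue = deque()
    for (r, c) in frontier_cells:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] == -1):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```

Reading this distance map at each robot's cell gives, in one pass per frontier, every robot-to-frontier distance needed to compute the ranks.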
The following pictures illustrate the trajectories performed by four robots in the LORIA arena and the resulting map. The colors of the trajectories show that there is little redundancy in the visited areas.



Map built by 4 robots in the LORIA arena

Performance study
The following figures present comparative results between our approach and other decentralized ones. The first figure (A) shows the time needed to fully map a maze as the number of robots varies. The second figure (B) shows the results for the stable environment and an illustration of the potential fields computed with MinPos (showing areas of "first position").

A. B.

Comparison of our MinPos approach with the Closest Frontier algorithm and the Greedy approach (Burgard et al.) on the maze and stable environments

Trajectory planning
To compute robot trajectories we defined a 4D planning technique relying on the A* algorithm. The four dimensions (x, y, theta, s) are the robot's position, orientation, and speed. To speed up the search, we pre-calculate a distance heuristic. The following picture illustrates the type of trajectories obtained.
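A much-simplified sketch of such a planner, assuming a coarse discretization (four headings, two speed levels) and an on-the-fly admissible heuristic standing in for the pre-computed one (`plan_4d` and its motion model are illustrative assumptions):

```python
import heapq
import itertools

# Discrete headings: 0 = +x, 1 = -y, 2 = -x, 3 = +y; speed s in {0, 1}.
MOVES = [(1, 0), (0, -1), (-1, 0), (0, 1)]

def plan_4d(grid, start, goal_xy, heuristic):
    """A* over states (x, y, heading, speed): the robot turns in place
    only when stopped, and advances one cell per step at speed 1.
    `heuristic(x, y)` must lower-bound the remaining cost (the project
    pre-computes a distance heuristic; Manhattan distance also works)."""
    counter = itertools.count()                 # heap tie-breaker
    open_set = [(heuristic(*start[:2]), 0, next(counter), start, None)]
    came = {}
    while open_set:
        _, g, _, state, parent = heapq.heappop(open_set)
        if state in came:
            continue
        came[state] = parent
        x, y, h, s = state
        if (x, y) == goal_xy and s == 0:        # goal reached at rest
            path = []
            while state is not None:
                path.append(state)
                state = came[state]
            return path[::-1]
        succs = []
        if s == 0:
            succs += [(x, y, (h + 1) % 4, 0), (x, y, (h - 1) % 4, 0)]  # turn
            succs.append((x, y, h, 1))                                 # accelerate
        else:
            succs.append((x, y, h, 0))                                 # brake
            dx, dy = MOVES[h]
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                succs.append((nx, ny, h, 1))                           # keep rolling
        for nxt in succs:
            if nxt not in came:
                f = g + 1 + heuristic(nxt[0], nxt[1])
                heapq.heappush(open_set, (f, g + 1, next(counter), nxt, state))
    return None
```

Because turning and accelerating each cost a step, the planner naturally prefers smooth trajectories with few stops, which is the point of planning in (x, y, theta, s) rather than (x, y) alone.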



Object recognition

The last (but not least!) scientific issue is object recognition: the robots have to locate and identify objects in the environment (fan, bottles, chairs, plants, etc.). A Microsoft Kinect sensor is mounted on each robot. While navigating and exploring the environment, the robots perform 3D captures, which are analysed to locate objects. The first step of object recognition is region growing on the depth map (from the Kinect). Each candidate region is then processed and feature parameters are computed. These parameters are compared with a pre-processed object database, and a classifier selects the closest object in the search space. A new classifier, named Class-O-matic, has been proposed; it isolates and keeps only the essential points from the training set. Illustrations of the classifier's performance can be seen in the following figures.
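A minimal sketch of the depth-based region growing, assuming the depth map is a plain 2D array of depth values (the name `segment_depth` and the threshold value are illustrative, not the project's actual parameters):

```python
def segment_depth(depth, threshold=0.05):
    """Region growing on a depth map: neighbouring pixels whose depth
    values differ by less than `threshold` are merged into the same
    region. Returns a label image (labels start at 1)."""
    rows, cols = len(depth), len(depth[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0][c0]:
                continue                       # already part of a region
            stack = [(r0, c0)]                 # grow a new region (DFS)
            labels[r0][c0] = next_label
            while stack:
                r, c = stack.pop()
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not labels[nr][nc]
                            and abs(depth[nr][nc] - depth[r][c]) < threshold):
                        labels[nr][nc] = next_label
                        stack.append((nr, nc))
            next_label += 1
    return labels
```

Each resulting region is a depth-continuous surface patch; these patches are the object candidates on which the feature parameters are then computed.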

RGB image from the Kinect
3D view of the scene
Depth map
Object segmentation

Class-O-matic example ::: two spiral benchmark

Search space approximation with 100 points
Search space approximation with 1000 points
Search space approximation with 10000 points
Search space (benchmark)

Class-O-matic example ::: checker benchmark

Search space approximation with 1000 points
Search space approximation with 10000 points
Search space approximation with 100000 points
Search space (benchmark)

Class-O-matic result : orange cylinder
Class-O-matic result : white box

Mission result example

At the end of the mission, the data from each robot is centralized and a single map is created. Rooms of the environment are extracted and the results are gathered into a single file as illustrated in the following figure. All the captures are collated and integrated to create a single 3D map.

Mission results : map, room and object recognition


3D map

Publications and reports

P. Lucidarme and S. Lagrange (2011) Slam-O-matic: SLAM algorithm based on global search of local minima, patent application FR1155625, filed on June 24, 2011.
R. Guyonneau, S. Lagrange, L. Hardouin and P. Lucidarme (2011) Interval analysis for kidnapping problem using range sensors, SWIM 2011.
R. Guyonneau, S. Lagrange and L. Hardouin (2012) Mobile robots pose tracking: a set-membership approach using a visibility information, ICINCO 2012.
R. Guyonneau, S. Lagrange, L. Hardouin and P. Lucidarme (2012) The kidnapping problem of mobile robots: a set-membership approach, CAR 2012.
A. Bautin, O. Simonin and F. Charpillet (2011) Towards a communication free coordination for multi-robot exploration, CAR 2011.
A. Bautin, O. Simonin and F. Charpillet (2011) Stratégie d'exploration multi-robot fondée sur les champs de potentiels artificiels , JFSMA 2011, Extended version pre-selected for RIA 2012 (Revue d'Intelligence Artificielle).