Learning of a Control Task for a LEGO Mindstorms Mobile Robot
 
Benjamin Haibe-Kains
 
Thesis supervised by
Gianluca Bontempi, Computer Science Department, Université Libre de Bruxelles. Email: gbonte@ulb.ac.be
 
 

In robotics, the most common robots are not autonomous but specialized: they are used, for example, on assembly lines or for painting. Their control task is very specific and their environment is structured. In autonomous robotics, by contrast, robots have to operate in the real world with its inherent complexity: both the sensory information and the hardware management are complex. Research has produced specific techniques to obtain interesting behaviors. Learning sensorimotor relations, i.e. the coordination between perceptions and movements, is the basis of complex behaviors.
 
This thesis is made up of two parts: first, the development of a specific robotic platform (LEGO Mindstorms) and, second, its use to carry out a simple task.
 
 
In the first part, a mobile robot was built with LEGO Mindstorms. Tools such as a dedicated kinematics management and a new communication protocol made it possible to exploit the platform.
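 
As a rough illustration of the kind of kinematics management a differential-drive platform needs, here is a minimal sketch (in Python) of a dead-reckoning pose update; the axle dimension and the function name are illustrative assumptions, not the code developed for the thesis.

    import math

    AXLE_LENGTH_M = 0.11   # assumed distance between the two drive wheels (m)

    def update_pose(x, y, theta, d_left, d_right):
        """Dead-reckoning pose update for a differential-drive robot.

        d_left, d_right: distances travelled by the left/right wheel (m).
        Returns the new pose (x, y, theta).
        """
        d_center = (d_left + d_right) / 2.0           # forward displacement
        d_theta = (d_right - d_left) / AXLE_LENGTH_M  # heading change
        x += d_center * math.cos(theta + d_theta / 2.0)
        y += d_center * math.sin(theta + d_theta / 2.0)
        theta = (theta + d_theta) % (2.0 * math.pi)
        return x, y, theta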
 
In the second part, we studied the use of a local learning method in the control policy for carrying out a task. Because of the poverty of the sensory information and the inaccuracy of the movements, a simple control task was chosen and studied by comparing several control policies. The task assigned to the robot consists in finding a fixed light source, using a differential drive system for the kinematics and two LEGO Mindstorms light sensors for the sensory information. This control policy can be seen as a single behavior in the behavior-based control architecture proposed by R. Brooks in 1986.
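 
To make the task concrete, here is a minimal sketch of a hand-written light-seeking step of the kind referred to below as policy (i); the sensor and motor interfaces (read_light_sensors, set_motor_speeds) and the gains are hypothetical placeholders, not the robot's actual API.

    def light_seeking_step(read_light_sensors, set_motor_speeds,
                           base_speed=40, gain=0.5):
        """One control step of a crude light-seeking behavior.

        read_light_sensors() -> (left, right) raw light readings.
        set_motor_speeds(left_speed, right_speed) drives the two wheels.
        Driving the wheel on the darker side faster turns the robot
        toward the brighter side, hence toward the light source.
        """
        left, right = read_light_sensors()
        error = right - left                          # > 0: light is to the right
        set_motor_speeds(base_speed + gain * error,   # left wheel
                         base_speed - gain * error)   # right wheel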
 
Three control policies are studied: (i) a simple hand-written controller, (ii) a parametric controller based on a statistical treatment of the sensory information, and (iii) a controller based on learning the sensorimotor relation. The first two policies were developed mainly to put the learning-based policy into perspective. Lazy Learning is a local learning method developed by my supervisor, G. Bontempi. The control scheme using Lazy Learning is shown below:
 
[Figure: control scheme using Lazy Learning]
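 
As a rough illustration of the idea behind such a local method (a sketch under assumptions, not G. Bontempi's actual Lazy Learning implementation), a prediction can be obtained by fitting a linear model only on the k examples of the collected sensorimotor data closest to the current query:

    import numpy as np

    def local_linear_predict(X, y, x_query, k=15):
        """Lazy, local prediction: fit a model on the neighbourhood of
        the query instead of a single global model.

        X: (n, d) array of observed inputs (e.g. light-sensor readings).
        y: (n,) array of observed outputs (e.g. a motor command).
        x_query: (d,) current query point.
        """
        # 1. Select the k examples closest to the query.
        dists = np.linalg.norm(X - x_query, axis=1)
        idx = np.argsort(dists)[:k]

        # 2. Fit a least-squares linear model on that neighbourhood only.
        A = np.hstack([X[idx], np.ones((len(idx), 1))])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)

        # 3. Evaluate the local model at the query point.
        return np.append(x_query, 1.0) @ coef

In the actual method, the size of the neighbourhood and the local model are typically selected automatically, for example with leave-one-out criteria, rather than fixed in advance as in this sketch.
 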
The two learning methods compared in this thesis are, on the one hand, Lazy Learning and, on the other hand, global linear modeling. According to experiments in a real environment, the control policy using Lazy Learning appears to be the most efficient one. A model validation and several experiments highlighted the good performance of the Lazy Learning method compared to the global linear model. Lazy Learning therefore seems to be a very good candidate for control in autonomous robotics.
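 
As an illustration of how such a comparison can be carried out on collected data (again a sketch under assumptions, not the validation protocol of the thesis), a leave-one-out error can be computed for a global linear model and for the local predictor sketched above:

    import numpy as np

    def loo_comparison(X, y, k=15):
        """Mean leave-one-out squared error of a global linear model and
        of the local predictor local_linear_predict defined above."""
        n = len(y)
        err_global, err_local = [], []
        for i in range(n):
            mask = np.arange(n) != i                     # hold out example i
            A = np.hstack([X[mask], np.ones((n - 1, 1))])
            coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            pred_g = np.append(X[i], 1.0) @ coef         # global prediction
            pred_l = local_linear_predict(X[mask], y[mask], X[i], k=k)
            err_global.append((pred_g - y[i]) ** 2)
            err_local.append((pred_l - y[i]) ** 2)
        return np.mean(err_global), np.mean(err_local)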
 
The thesis is written in French and can be downloaded here. The thesis defense can also be downloaded here.