Introduction
SLAM was implemented for an indoor environment by fusing IMU orientation and odometry from a walking humanoid robot with LIDAR scans. A 2D occupancy grid map was constructed, and robot poses were predicted and updated with Monte Carlo Localization (MCL), i.e. particle filter localization. Given the estimated poses, a 2D texture map of the floor was then built from RGB and depth imagery.
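The predict/update cycle described above can be sketched as follows. This is a minimal illustration of particle filter localization against a log-odds occupancy grid, not the project's actual code; the function names, noise parameters, and square-map assumption are all mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, d_pose, noise=(0.01, 0.01, 0.005)):
    """Propagate each (x, y, theta) particle by the relative odometry
    motion d_pose, plus Gaussian noise (noise scales are hypothetical)."""
    n = len(particles)
    dx, dy, dtheta = d_pose
    theta = particles[:, 2]
    particles[:, 0] += dx * np.cos(theta) - dy * np.sin(theta) + rng.normal(0, noise[0], n)
    particles[:, 1] += dx * np.sin(theta) + dy * np.cos(theta) + rng.normal(0, noise[1], n)
    particles[:, 2] += dtheta + rng.normal(0, noise[2], n)
    return particles

def update(particles, weights, scan_xy, log_odds_map, res, origin):
    """Reweight each particle by how many lidar scan endpoints, transformed
    into its pose, land on occupied cells (log-odds > 0). Square map assumed."""
    corr = np.zeros(len(particles))
    for i, (x, y, th) in enumerate(particles):
        R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        pts = scan_xy @ R.T + np.array([x, y])
        cells = ((pts - origin) / res).astype(int)
        valid = ((cells >= 0) & (cells < log_odds_map.shape[0])).all(axis=1)
        corr[i] = (log_odds_map[cells[valid, 0], cells[valid, 1]] > 0).sum()
    w = weights * np.exp(corr - corr.max())  # subtract max for numerical stability
    return w / w.sum()

def resample(particles, weights, thresh=0.5):
    """Resample when the effective sample size drops below a threshold."""
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < thresh * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.ones(len(particles)) / len(particles)
    return particles, weights
```

In this scheme the occupancy grid is updated with the best particle's pose after each scan, which is the usual structure of grid-based MCL SLAM.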
My SLAM implementation produces satisfactory results at a competitive computation speed. Please refer to my GitHub repository for more details.
Videos of SLAM
The videos below show the SLAM process, with the estimated trajectory in red and the raw odometry in blue.
Videos of Texture Mapping
The following videos show the texture maps being generated.
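Texture mapping of the kind shown here typically back-projects each RGB-D pixel to 3D, transforms it into the world frame with the estimated robot pose, keeps points near the floor plane, and paints their colors into a 2D grid. The sketch below illustrates that idea under stated assumptions (pinhole intrinsics `K`, a 4x4 camera-to-world pose `T_world_cam`, and a square texture grid); all names are hypothetical, not the project's API.

```python
import numpy as np

def paint_texture(tex_map, depth, rgb, K, T_world_cam, res, origin, z_max=0.05):
    """Back-project each valid depth pixel to 3D, keep points near the floor
    plane (|z| < z_max in the world frame), and paint their RGB colors into
    a 2D texture grid with cell size `res` and lower corner `origin`."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    ok = z > 0                                         # keep pixels with valid depth
    pix = np.stack([u.ravel()[ok] * z[ok], v.ravel()[ok] * z[ok], z[ok]])
    cam = np.linalg.inv(K) @ pix                       # 3D points in the camera frame
    world = (T_world_cam @ np.vstack([cam, np.ones(cam.shape[1])]))[:3]
    floor = np.abs(world[2]) < z_max                   # near-floor points only
    cells = ((world[:2, floor].T - origin) / res).astype(int)
    colors = rgb.reshape(-1, 3)[ok][floor]
    inb = ((cells >= 0) & (cells < tex_map.shape[0])).all(axis=1)  # square map assumed
    tex_map[cells[inb, 0], cells[inb, 1]] = colors[inb]
    return tex_map
```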
Videos of Ground Detection
The following videos show the ground detection results in the RGB images.
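The write-up does not state how the ground is detected; one common approach for RGB-D data is RANSAC plane fitting on the depth point cloud, sketched below. The function name and parameters are my own illustration, not the project's method.

```python
import numpy as np

def ransac_ground(points, n_iter=200, dist_thresh=0.02, seed=0):
    """Fit the dominant plane (assumed to be the ground) to an Nx3 point
    cloud with RANSAC. Returns the unit plane normal n, the offset d
    (points p on the plane satisfy n.p + d = 0), and the inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

The inlier mask can then be projected back into the RGB image to highlight ground pixels, which matches what the videos display.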