Visual-Inertial SLAM

Summary

Enabling an autonomous vehicle to perceive its environment is a core technology in autonomous driving. In this project, I used features extracted from stereo-camera images together with IMU measurements of linear and angular velocity to implement visual-inertial simultaneous localization and mapping (VI-SLAM). Following the project instructions, I first performed IMU localization via an EKF prediction step to obtain the trajectory, and then performed landmark mapping via an EKF update step to refine the landmark positions on the map. Finally, I combined these two steps with an IMU update step based on the stereo-camera observation model to obtain a complete visual-inertial SLAM algorithm.
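
As a rough illustration of the prediction step, the sketch below propagates the IMU pose with the SE(3) exponential of the measured twist and propagates the 6x6 pose covariance with the corresponding adjoint. This is a minimal sketch, not the project code: the function names, the use of scipy.linalg.expm (SciPy is not in the requirements list below), and the conventions chosen here (T as the IMU-to-world pose, W as the motion-noise covariance) are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm  # assumption: SciPy used for the matrix exponentials


def hat(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])


def twist_hat(u):
    """4x4 twist matrix of u = [v, w] (linear, angular velocity)."""
    v, w = u[:3], u[3:]
    X = np.zeros((4, 4))
    X[:3, :3] = hat(w)
    X[:3, 3] = v
    return X


def twist_adj(u):
    """6x6 adjoint ('curly hat') matrix of u = [v, w]."""
    v, w = u[:3], u[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = hat(w)
    A[:3, 3:] = hat(v)
    A[3:, 3:] = hat(w)
    return A


def ekf_predict(T, Sigma, u, dt, W):
    """One EKF prediction step for the IMU pose.

    T:     4x4 pose estimate (taken here as IMU-to-world)
    Sigma: 6x6 pose covariance
    u:     6-vector [linear velocity, angular velocity] from the IMU
    dt:    time step between IMU measurements
    W:     6x6 motion-noise covariance
    """
    T_next = T @ expm(dt * twist_hat(u))          # propagate the mean pose
    F = expm(-dt * twist_adj(u))                  # linearized motion model on the error state
    Sigma_next = F @ Sigma @ F.T + W              # propagate the covariance
    return T_next, Sigma_next
```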

Data

The data is located under the code folder.

Requirements

numpy == 1.18.5

matplotlib == 3.2.1

transforms3d == 0.3.1

Visual-Inertial SLAM

To run the code, execute the following scripts (a sketch of the landmark update step follows the list):

IMU_localization_EKF_prediction.py

Landmark_Mapping_via_EKF_Update.py

Visual_Inertial_SLAM.py
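
For reference, the sketch below shows one landmark update using the stereo-camera observation model, in the spirit of Landmark_Mapping_via_EKF_Update.py but not taken from it. The symbols are assumptions made for illustration: Ks is the 4x4 stereo calibration matrix (with the baseline folded into its last column), o_T_i the IMU-to-camera extrinsic transform, T the IMU pose in the world frame, and V the 4x4 measurement-noise covariance.

```python
import numpy as np


def pi(q):
    """Projection of a homogeneous point: divide by its depth coordinate."""
    return q / q[2]


def dpi_dq(q):
    """4x4 Jacobian of the projection pi evaluated at q."""
    J = np.array([[1.0, 0.0, -q[0] / q[2], 0.0],
                  [0.0, 1.0, -q[1] / q[2], 0.0],
                  [0.0, 0.0, 0.0,          0.0],
                  [0.0, 0.0, -q[3] / q[2], 1.0]])
    return J / q[2]


def landmark_update(m, Sigma, z, T, o_T_i, Ks, V):
    """EKF update of one landmark position from a stereo observation.

    m:      3-vector landmark position in the world frame
    Sigma:  3x3 landmark covariance
    z:      4-vector stereo observation [uL, vL, uR, vR]
    T:      4x4 IMU pose in the world frame
    o_T_i:  4x4 IMU-to-camera extrinsic transform
    Ks:     4x4 stereo calibration matrix
    V:      4x4 measurement-noise covariance
    """
    P = np.hstack([np.eye(3), np.zeros((3, 1))])           # drops the homogeneous coordinate
    m_h = np.append(m, 1.0)                                 # landmark in homogeneous coordinates
    q = o_T_i @ np.linalg.inv(T) @ m_h                      # landmark in the camera frame
    z_hat = Ks @ pi(q)                                      # predicted stereo pixel coordinates
    H = Ks @ dpi_dq(q) @ o_T_i @ np.linalg.inv(T) @ P.T     # 4x3 measurement Jacobian
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + V)    # Kalman gain
    m_new = m + K @ (z - z_hat)                             # correct the landmark position
    Sigma_new = (np.eye(3) - K @ H) @ Sigma                 # correct the covariance
    return m_new, Sigma_new
```

The full SLAM script additionally applies an analogous update to the IMU pose using the same observation model, as described in the summary above.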

Results

IMU_localization_EKF_prediction

Landmark_Mapping_via_EKF_Update

Visual_Inertial_SLAM (screenshot 2022-03-16 00.28.12)
