
Udacity Computer Vision Nanodegree

Project: Landmark Detection and Tracking

In this project, I implemented SLAM (Simultaneous Localization and Mapping) for a 2-dimensional world. I combined what I learned about robot sensor measurements and movement to build a map of an environment from only the sensor and motion data gathered by a robot over time. SLAM provides a way to track a robot's location in the world in real time and to identify the locations of landmarks such as buildings, trees, rocks, and other world features. This is an active area of research in robotics and autonomous systems.

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o'), located using only the sensor and motion data collected by that robot. This is just one example for a 50x50 grid world.

The project was broken up into four Python notebooks. The first two explore the provided code and review SLAM architectures; only Notebook 3 and the robot_class.py file contain the code I wrote. The core Omega/Xi idea is sketched after the list below.

Notebook 1: Robot Moving and Sensing

Notebook 2: Omega and Xi, Constraints

Notebook 3: Landmark Detection and Tracking
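
As a rough illustration of the Omega/Xi formulation explored in Notebook 2, here is a minimal Graph SLAM sketch for a 1-D world (not the project's 2-D implementation). Every motion and every landmark measurement adds a constraint to an information matrix `omega` and vector `xi`, and the best estimate of all poses and landmarks is recovered as `mu = omega^-1 * xi`. The numbers and variable names below are illustrative only and are not taken from this repository.

```python
import numpy as np

# Hypothetical 1-D example: the robot starts at x = 0, moves twice, and
# senses a single landmark from every pose (values assumed noise-free).
initial_pos = 0.0
moves = [5.0, 3.0]               # dx applied at each time step
measurements = [10.0, 5.0, 2.0]  # distance to the landmark from each pose

n_poses = len(moves) + 1
n = n_poses + 1                  # poses + 1 landmark variable

# Information matrix (omega) and information vector (xi)
omega = np.zeros((n, n))
xi = np.zeros(n)

# Constraint for the known initial position
omega[0, 0] += 1.0
xi[0] += initial_pos

# Motion constraints: x_{t+1} - x_t = dx
for t, dx in enumerate(moves):
    omega[t, t] += 1.0
    omega[t + 1, t + 1] += 1.0
    omega[t, t + 1] -= 1.0
    omega[t + 1, t] -= 1.0
    xi[t] -= dx
    xi[t + 1] += dx

# Measurement constraints: L - x_t = z_t, where L is the landmark (last index)
L = n - 1
for t, z in enumerate(measurements):
    omega[t, t] += 1.0
    omega[L, L] += 1.0
    omega[t, L] -= 1.0
    omega[L, t] -= 1.0
    xi[t] -= z
    xi[L] += z

# Best estimate of all poses and the landmark: mu = omega^-1 * xi
mu = np.linalg.inv(omega) @ xi
print(mu)  # -> poses approximately [0, 5, 8], landmark approximately 10
```

The 2-D version in Notebook 3 follows the same pattern, with separate x and y components tracked for every pose and landmark.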
