
Udacity Sensor Fusion Engineer Nanodegree

My Udacity Sensor Fusion Engineer Nanodegree projects, in C++.

Certificate of Completion

Core Projects

Environment

  • Ubuntu 20.04.5 LTS Focal Fossa running in a UTM Virtual Machine on MacBook Pro M1 Max (aarch64)
  • Point Cloud Library 1.11 (Built from Source)
  • OpenCV 4.2.0 (Built from Source)
  • MATLAB R2023b Home License (on Apple Silicon), Signal Processing Toolbox

Acquired familiarity with: Point Cloud Library (PCL), the Eigen library.

Overview

Filter, segment, and cluster raw LiDAR data to detect vehicles and obstacles on the road.

In this assignment I learn how to process point clouds from LiDAR scans in order to identify vehicles and other obstacles in a driving environment. I first reduce cloud size using voxel (volumetric pixel) grid and region-of-interest techniques, then separate the road from the obstacles via RANSAC (RANdom SAmple Consensus), group points belonging to the same object using Euclidean clustering with 3-dimensional KD-Trees, and finally enclose the resulting clusters in either regular or minimum (PCA-based) bounding boxes.
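
The sketch below condenses this pipeline using PCL's stock filtering, segmentation, and clustering classes. It is not the project code: every leaf size, threshold, and box extent is an illustrative placeholder.

#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <vector>

using PointT = pcl::PointXYZI;

// Downsample with a voxel grid, then keep only a region of interest around the ego car.
pcl::PointCloud<PointT>::Ptr filterCloud(pcl::PointCloud<PointT>::Ptr cloud)
{
    pcl::PointCloud<PointT>::Ptr downsampled(new pcl::PointCloud<PointT>);
    pcl::VoxelGrid<PointT> vg;
    vg.setInputCloud(cloud);
    vg.setLeafSize(0.2f, 0.2f, 0.2f);                     // placeholder leaf size [m]
    vg.filter(*downsampled);

    pcl::PointCloud<PointT>::Ptr roi(new pcl::PointCloud<PointT>);
    pcl::CropBox<PointT> box(true);
    box.setMin(Eigen::Vector4f(-10.f, -5.f, -2.f, 1.f));  // placeholder ROI extents
    box.setMax(Eigen::Vector4f( 30.f,  7.f,  1.f, 1.f));
    box.setInputCloud(downsampled);
    box.filter(*roi);
    return roi;
}

// Separate the road plane from obstacles with RANSAC, then cluster the obstacle points.
std::vector<pcl::PointIndices> segmentAndCluster(pcl::PointCloud<PointT>::Ptr cloud)
{
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::SACSegmentation<PointT> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.2);                        // placeholder plane tolerance [m]
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);

    pcl::PointCloud<PointT>::Ptr obstacles(new pcl::PointCloud<PointT>);
    pcl::ExtractIndices<PointT> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(inliers);
    extract.setNegative(true);                            // keep everything that is NOT road
    extract.filter(*obstacles);

    pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
    tree->setInputCloud(obstacles);

    std::vector<pcl::PointIndices> clusterIndices;
    pcl::EuclideanClusterExtraction<PointT> ec;
    ec.setClusterTolerance(0.5);                          // placeholder cluster tolerance [m]
    ec.setMinClusterSize(10);
    ec.setMaxClusterSize(500);
    ec.setSearchMethod(tree);
    ec.setInputCloud(obstacles);
    ec.extract(clusterIndices);                           // one PointIndices per detected obstacle
    return clusterIndices;
}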

Link to code | Starter Code from Udacity

How to Build and Run the Project

Clone the repository locally, for example inside /home/$whoami/workspace (with $whoami the username of the current user). Ensure PCL and the associated viewer are installed correctly, then build and run the main project with the commands below. To build and run the quizzes instead, see the project's README file.

cd /home/$whoami/workspace/udacity-sfend/projects/p1
mkdir build && cd build
cmake ..
make
./environment

Output

A stream of incoming obstacles, encapsulated in PCA bounding boxes, is rendered in the city block scene below. Udacity's self-driving vehicle Carla is the purple block at the center of the screen, with LiDAR mounted on top.

PCA Bounding Boxes

Acquired familiarity with: OpenCV 4.x, Gnumeric.

Overview

Learn to detect, describe, and match features in 2D camera images.

In this computer vision application, I implement a two-dimensional feature tracking algorithm to monitor objects in a sequence of images using OpenCV. After progressively loading the images into a data ring buffer, I use classic and modern techniques to detect keypoints, calculate their descriptors, match the features between consecutive frames, and finally evaluate the performance of each detector-descriptor combination in terms of speed and accuracy.
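
Each detector-descriptor pair follows the same basic OpenCV workflow. The fragment below sketches it for a single image pair using ORB and a brute-force Hamming matcher; file names are placeholders, and the project itself cycles through many detector and descriptor types rather than this one fixed choice.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

// Minimal two-frame matching example: ORB keypoints and descriptors,
// brute-force Hamming matching with cross-check enabled.
int main()
{
    cv::Mat imgPrev = cv::imread("frame_0000.png", cv::IMREAD_GRAYSCALE);  // placeholder file names
    cv::Mat imgCurr = cv::imread("frame_0001.png", cv::IMREAD_GRAYSCALE);
    if (imgPrev.empty() || imgCurr.empty())
        return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create();

    std::vector<cv::KeyPoint> kptsPrev, kptsCurr;
    cv::Mat descPrev, descCurr;
    orb->detectAndCompute(imgPrev, cv::noArray(), kptsPrev, descPrev);
    orb->detectAndCompute(imgCurr, cv::noArray(), kptsCurr, descCurr);

    // Binary descriptors are compared with the Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(descPrev, descCurr, matches);

    return 0;
}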

Link to code | Starter Code from Udacity

SIFT Keypoint Detection

How to Build and Run the Project

As a prerequisite, build OpenCV 4.2.0 from source to enable the patented SIFT and SURF algorithms. Then build and run as follows:

cd /home/$whoami/workspace/udacity-sfend/projects/p2
mkdir build && cd build
cmake ..
make
./2D_feature_tracking

Output

Set both bCompareDetectors and bCompareDescriptors to false to visualize matched keypoints among image pairs for the chosen detector and descriptor. Set either (or both) to true to output tabulated statistics on the distribution of keypoints' neighborhood size (MP.7) and/or comparisons among detector-descriptor combinations (MP.8-9).

Keypoint Matching FAST-BRIEF

Overview

Fuse LiDAR and camera data to compute a robust time-to-collision estimate.

In part 2 of the camera course, I integrate data from LiDAR and camera sensors to provide an estimate of time-to-collision (TTC) with a vehicle in front, in the context of a Collision Detection System (CDS). After classifying the objects on the road using the YOLOv3 deep learning object detector, I identify the LiDAR points and keypoint matches that fall within the region of interest (ROI) of the bounding box enclosing the preceding car, and use these to compute reliable estimates of TTC for both sensors.
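
Both estimates rely on a constant-velocity model. The sketch below shows the two formulas in isolation: LiDAR TTC from the closest in-lane point in consecutive frames, and camera TTC from the median ratio of distances between matched keypoints. Function names and signatures are illustrative, not the project's interfaces.

#include <algorithm>
#include <vector>

// LiDAR TTC under a constant-velocity model: xPrev and xCurr are the distances
// to the closest in-lane point in the previous and current frame. The estimate
// is only meaningful while the gap is shrinking (xPrev > xCurr).
double lidarTTC(double xPrev, double xCurr, double frameRate)
{
    double dT = 1.0 / frameRate;              // time between frames [s]
    return xCurr * dT / (xPrev - xCurr);
}

// Camera TTC from the ratios of distances between matched keypoints inside the
// preceding vehicle's bounding box; taking the median makes the estimate robust
// to outlier matches. Assumes distRatios is non-empty.
double cameraTTC(std::vector<double> distRatios, double frameRate)
{
    double dT = 1.0 / frameRate;
    std::sort(distRatios.begin(), distRatios.end());
    double medianRatio = distRatios[distRatios.size() / 2];
    return -dT / (1.0 - medianRatio);
}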

Link to code | Starter Code from Udacity

SIFT Time-to-Collision

How to Build and Run the Project

For the prerequisites, see project 2. Download the YOLOv3 weights into the dat/yolo folder:

wget https://pjreddie.com/media/files/yolov3.weights

Then build and run as follows:

cd /home/$whoami/workspace/udacity-sfend/projects/p3
mkdir build && cd build
cmake ..
make
./3D_object_tracking

Overview

Analyze radar signatures to detect and track objects.

In this MATLAB project, I generate and propagate a radar signal, simulate its reflection off a target, process the received echo to estimate the target's range and velocity (Doppler) via Fast Fourier Transform (FFT), and finally suppress unwanted noise in the output using 2D Cell-Averaging Constant False Alarm Rate (CA-CFAR).
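
The noise-suppression step is easiest to see in one dimension. The sketch below is written in C++ for illustration only (the project itself is in MATLAB) and applies cell-averaging CFAR along a single signal slice; the training, guard, and offset parameters are placeholders, and the project extends the same sliding-window idea to both dimensions of the range-Doppler map.

#include <cmath>
#include <vector>

// One-dimensional cell-averaging CFAR on a power signal given in dB. For each
// cell under test (CUT), the noise floor is estimated by averaging the training
// cells on both sides in linear scale (guard cells and the CUT itself are
// skipped), and a detection is declared when the CUT exceeds that floor plus an offset.
std::vector<int> caCfar1D(const std::vector<double>& signalDb,
                          int train, int guard, double offsetDb)
{
    std::vector<int> detections(signalDb.size(), 0);

    for (std::size_t cut = train + guard; cut + train + guard < signalDb.size(); ++cut)
    {
        double noiseSum = 0.0;
        int count = 0;

        for (std::size_t i = cut - train - guard; i <= cut + train + guard; ++i)
        {
            if (i >= cut - guard && i <= cut + guard)
                continue;                                   // skip guard cells and the CUT
            noiseSum += std::pow(10.0, signalDb[i] / 10.0); // dB -> linear power
            ++count;
        }

        double thresholdDb = 10.0 * std::log10(noiseSum / count) + offsetDb;
        if (signalDb[cut] > thresholdDb)
            detections[cut] = 1;
    }
    return detections;
}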

Link to code

How to Run the Project

If run locally, this project requires a valid MATLAB license and the Signal Processing Toolbox.

Overview

Track non-linear vehicle motion by blending data from multiple sensors via an Unscented Kálmán Filter.

In this capstone assignment, I implement an Unscented Kálmán Filter to estimate the state of multiple cars on a simulated highway, fusing noisy measurements from LiDAR and radar. To cover a wider range of possible state values and capture the uncertainty and variability of the state estimation more accurately, the UKF's sigma points (representative points drawn from a Gaussian distribution) are propagated through the Constant Turn Rate and Velocity Magnitude (CTRV) motion model. This choice shapes the prediction step of the algorithm, and is reflected in the green orbs displayed on top of each target vehicle. LiDAR and radar measurements are also shown as red spheres and magenta arrows, respectively.
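
The sketch below illustrates the two CTRV-related steps in isolation: drawing 2n+1 sigma points from the current state estimate, and pushing one of them through the (noise-free) CTRV process model. Dimensions and the spreading parameter follow textbook conventions and are not necessarily the project's exact settings.

#include <cmath>
#include <Eigen/Dense>

// Draw 2n+1 sigma points from the current state mean x and covariance P.
// lambda is the usual spreading parameter.
Eigen::MatrixXd generateSigmaPoints(const Eigen::VectorXd& x, const Eigen::MatrixXd& P)
{
    int n = x.size();
    double lambda = 3.0 - n;

    Eigen::MatrixXd Xsig(n, 2 * n + 1);
    Eigen::MatrixXd A = P.llt().matrixL();        // matrix square root via Cholesky

    Xsig.col(0) = x;
    for (int i = 0; i < n; ++i)
    {
        Xsig.col(i + 1)     = x + std::sqrt(lambda + n) * A.col(i);
        Xsig.col(i + 1 + n) = x - std::sqrt(lambda + n) * A.col(i);
    }
    return Xsig;
}

// Noise-free CTRV prediction for a single sigma point over a time step dt.
// The state is [px, py, v, yaw, yaw rate].
Eigen::VectorXd ctrvPredict(const Eigen::VectorXd& x, double dt)
{
    double px = x(0), py = x(1), v = x(2), yaw = x(3), yawd = x(4);
    Eigen::VectorXd xp(5);

    if (std::fabs(yawd) > 1e-6)    // turning: integrate along a circular arc
    {
        xp(0) = px + v / yawd * (std::sin(yaw + yawd * dt) - std::sin(yaw));
        xp(1) = py + v / yawd * (std::cos(yaw) - std::cos(yaw + yawd * dt));
    }
    else                           // driving straight
    {
        xp(0) = px + v * std::cos(yaw) * dt;
        xp(1) = py + v * std::sin(yaw) * dt;
    }
    xp(2) = v;                     // constant velocity magnitude
    xp(3) = yaw + yawd * dt;       // constant turn rate
    xp(4) = yawd;
    return xp;
}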

Link to code | Starter Code from Udacity

UKF Default Camera View

How to Build and Run the Project

cd /home/$whoami/workspace/udacity-sfend/projects/p5
mkdir build && cd build
cmake ..
make
./ukf_highway

Output

An alternative outcome, in which the stylised car shapes are replaced by box-bound point cloud clusters, is presented below. Because LiDAR is at times unable to capture the full shape of the vehicles, especially when a target is in front of the ego car and moving away from it, the RMSE threshold for X is frequently crossed (the XY-midpoints of the bounding boxes, used to calculate the position of the red orbs, tend to align poorly with the ground truth). Using minimum (PCA-based) boxes does not seem to improve the results. Point cloud processing considerably slows down the simulation, and further reducing cloud size through voxel grid sampling would significantly worsen the accuracy of the LiDAR estimates, so it is not recommended.

UKF XY Bounding Boxes
