ROS2 package for official YOLOv7

This repo contains a ROS2 Galactic package for the official YOLOv7. It wraps the official implementation into a ROS2 node (so most credit goes to the YOLOv7 creators).

Credit also goes to lucazso for starting this repo.

Note

There are currently two YOLOv7 variants out there. This repo contains the implementation from the paper YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors.

Requirements & Getting Started

The following ROS packages are required:

sudo apt-get install ros-galactic-vision-msgs
sudo apt-get install ros-galactic-geometry-msgs
sudo apt-get install ros-galactic-shape-msgs
sudo apt-get install ros-galactic-message-generation
sudo apt-get install ros-galactic-actionlib-msgs

First, clone the repo into your colcon workspace and build the package:

git clone https://github.com/robertokcanale/yolov7-ros.git ~/ros2_ws/src/yolov7-ros
cd ~/ros2_ws
colcon build

The Python requirements are listed in requirements.txt. You can install them with:

pip install -r requirements.txt

Download the YOLOv7 weights from the official repository.

The package has been tested under Ubuntu 20.04 and Python 3.8.10.

Usage

COCO Object Detection

Before you launch the node, adjust the parameters in the launch file. For example, you need to set the path to your YOLOv7 weights and the image topic this node should listen to. The launch file also contains a description for each parameter.

ros2 launch yolov7_ros yolov7.launch.py
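
The executable and parameter names are defined in the package's launch file, so the following is only a rough sketch of what such a ROS2 Python launch file typically looks like. The executable name and the parameter keys (weights, img_topic, out_topic, visualize) are assumptions; check the launch file shipped with this package for the real ones.

# Sketch of a ROS2 Python launch file for a node like this.
# Executable name and parameter keys are assumptions, not the package's actual values.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package="yolov7_ros",
            executable="detect",  # assumed executable name
            name="yolov7",
            parameters=[{
                "weights": "/path/to/yolov7.pt",   # path to the downloaded weights
                "img_topic": "/camera/image_raw",  # image topic to subscribe to
                "out_topic": "yolov7",             # output topic namespace
                "visualize": True,                 # also publish an annotated image
            }],
        ),
    ])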

YOLOv7 Human Pose Estimation

Before you launch the node, adjust the parameters in the launch file. For example, you need to set the path to your YOLOv7 weights and the image topic this node should listen to. The launch file also contains a description for each parameter. You can download the weights from the official repo or here: https://drive.google.com/file/d/1Khl44NDNp2bpQMWWN-hvfc258SGx_QtV/view?usp=sharing

ros2 launch yolov7_ros yolov7_hpe.launch.py

Each time a new image is received, it is fed into YOLOv7.
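
In other words, the node follows the usual rclpy pattern of subscribing to an image topic, running inference in the callback, and publishing the detections. The sketch below only illustrates that pattern; the topic names are placeholders and the actual inference call is left as a comment rather than the package's real code.

# Rough sketch of the subscribe -> infer -> publish flow described above
# (not the package's actual node; topic names are placeholders).
import rclpy
from rclpy.node import Node
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray


class Yolov7Sketch(Node):
    def __init__(self):
        super().__init__("yolov7_sketch")
        self.bridge = CvBridge()
        self.pub = self.create_publisher(Detection2DArray, "/yolov7/out_topic", 10)
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image) -> None:
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # ... run YOLOv7 on the frame here and fill the message with its boxes ...
        out = Detection2DArray()
        out.header = msg.header
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(Yolov7Sketch())
    rclpy.shutdown()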

Notes

  • The detections will be published under /yolov7/out_topic (see the subscriber sketch after this list).
  • If you set the visualize parameter to true, the detections will be drawn into the image, which is then published under /yolov7/out_topic/visualization.
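
A minimal consumer of the detection output could look like the sketch below, assuming the detections are published as vision_msgs/Detection2DArray messages (the message type is inferred from the vision_msgs dependency above; adjust the topic name to match your out_topic setting).

# Hypothetical listener for the detection topic; message type and topic name
# are assumptions based on the vision_msgs dependency listed above.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionListener(Node):
    def __init__(self):
        super().__init__("detection_listener")
        self.create_subscription(Detection2DArray, "/yolov7/out_topic", self.on_detections, 10)

    def on_detections(self, msg: Detection2DArray) -> None:
        for det in msg.detections:
            box = det.bbox
            self.get_logger().info(
                "box at (%.1f, %.1f), size %.0fx%.0f"
                % (box.center.x, box.center.y, box.size_x, box.size_y)
            )


def main():
    rclpy.init()
    rclpy.spin(DetectionListener())
    rclpy.shutdown()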
