A project developing automated post-processing for additive manufacturing

VCU Additive Manufacturing Post-Processing

This project is the graduate research of Logan Schorr. The goal of this research is to determine a path to automating the post-processing of additively manufactured parts: processes such as heat treatment, support removal, and polishing.

The current implementation is on a Universal Robots UR5e with a Robotiq 2F-140 end effector with custom grippers, utilizing a Vzense DCAM560CPro 3D camera for vision capabilities. It is primarily programmed in Python3, using ROS2 Rolling. This repository is divided into four packages.

  • AM Vision
  • UR Path Planning
  • VCU UR Driver
  • AM Post-Processing Interfaces

Package Descriptions

AM Vision

The AM Vision package handles all interfaces concerning the camera. The main purpose of the camera is to identify the surface of the workspace; for our testing purposes, this is a table. The time-of-flight (ToF) camera captures a depth image, converts it into a point cloud, identifies the table, and determines the table's vertices.
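The vertex-extraction step can be sketched as follows. This is not the package's actual code, just an illustrative NumPy example of fitting the dominant plane to a point cloud and taking the axis-aligned extremes of its inliers as approximate table corners; the function names and the distance threshold are assumptions.

```python
# Hypothetical sketch (not the repository's implementation): estimating the
# four table vertices from a depth-camera point cloud.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def table_vertices(points, dist_thresh=0.01):
    """Keep points within dist_thresh of the dominant plane, then take the
    axis-aligned extremes of the inliers as approximate table corners."""
    centroid, normal = fit_plane(points)
    dist = np.abs((points - centroid) @ normal)
    inliers = points[dist < dist_thresh]
    x0, y0 = inliers[:, 0].min(), inliers[:, 1].min()
    x1, y1 = inliers[:, 0].max(), inliers[:, 1].max()
    z = inliers[:, 2].mean()
    return np.array([[x0, y0, z], [x1, y0, z], [x1, y1, z], [x0, y1, z]])
```

A real pipeline would first segment the table from other geometry (e.g. with RANSAC and clustering); this sketch only shows the plane-fit idea.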

Modules Used

UR Path Planning

The UR Path Planning package handles path planning for the UR robot. Its initial use is creating a path for automated powder processing: it takes the table vertices generated by the AM Vision package and computes an ideal path for the robot to follow.
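A minimal sketch of such a path generator, assuming a back-and-forth ("lawnmower") sweep over the rectangle spanned by the table vertices. The function name and `spacing` parameter are illustrative, not the package's API.

```python
# Hypothetical sketch: a back-and-forth raster path over the rectangle
# spanned by the four table vertices.
import numpy as np

def raster_path(vertices, spacing=0.05):
    """vertices: (4, 3) array of table corners in the robot base frame.
    Returns an (N, 3) array of waypoints sweeping the surface row by row."""
    v = np.asarray(vertices, dtype=float)
    x0, x1 = v[:, 0].min(), v[:, 0].max()
    y0, y1 = v[:, 1].min(), v[:, 1].max()
    z = v[:, 2].mean()
    waypoints = []
    for i, y in enumerate(np.arange(y0, y1 + spacing / 2, spacing)):
        # Alternate sweep direction on each row to avoid retracing.
        xs = (x0, x1) if i % 2 == 0 else (x1, x0)
        waypoints.append([xs[0], y, z])
        waypoints.append([xs[1], y, z])
    return np.array(waypoints)
```

The waypoints would then be handed to the trajectory controller; tool orientation and approach/retreat moves are omitted here.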

Modules Used

VCU UR Driver

The VCU UR Driver package is a working copy of the ROS2 Universal Robots driver, used for testing and developing modules related to controlling the robot.

AM Post-Processing Interfaces

The AM Post-Processing Interfaces package contains all custom actions, messages, and services used by the other packages, easing the flow of information with custom data types.
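As an illustration, the PointcloudCapture action invoked in the Usage section might be defined along these lines. Only the `execute` goal field is confirmed by the command in this README; the result and feedback fields shown here are hypothetical.

```
# PointcloudCapture.action (result and feedback fields are hypothetical)
# Goal
bool execute
---
# Result
bool success
---
# Feedback
string status
```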

Usage

The following commands work only after installing all relevant packages, connecting to the network, and initializing the robot and camera.

  • Initialize the node for communication:

```shell
ros2 launch ur_robot_driver ur5e.launch.py robot_ip:=192.168.1.102 launch_rviz:=true
```

  • Play the program to enable ROS control of the UR:

```shell
ros2 service call /dashboard_client/play std_srvs/srv/Trigger
```

  • Start moving the robot through the pre-programmed positions in the configuration file:

```shell
ros2 launch ur_robot_driver test_scaled_joint_trajectory_controller.launch.py
```

  • Transform the position of the tool0 (end effector) frame to the robot base frame using tf2:

```shell
ros2 run tf2_ros tf2_echo tool0 base
```

  • Read the current position of the camera relative to the robot base frame. First execute transform_server.py, located in the AM Vision package, then run:

```shell
ros2 action send_goal pointcloudcapture am_pp_interfaces/action/PointcloudCapture "{execute: True}"
```

  • Transform the position of the TCP (as defined in PolyScope) to the robot base frame, and push the current position of the camera to the pointcloud processing node. First execute pointcloud_node.py, located in the AM Vision package, then run:

```shell
ros2 launch am_vision ur5e_tf_reader.launch.py
```
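The tf2 lookups above boil down to applying a homogeneous transform. A minimal NumPy sketch (not part of this repository) of expressing a tool0-frame point in the base frame, given a rotation and translation such as tf2_echo reports:

```python
# Hypothetical sketch: applying a 4x4 homogeneous transform, the operation
# underlying the tf2 frame lookups used above.
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    and a 3-element translation vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_point(T, p):
    """Express 3D point p, given in the source frame, in the target frame."""
    return (T @ np.append(p, 1.0))[:3]
```

In practice the rotation arrives as a quaternion from tf2 and would be converted to a matrix first; that conversion is omitted here.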
