Author: Rahman, Md Jamiur
Supervisor: Prof. Gudrun Klinker
Advisor: Jadid, Adnane (@ne23kah)
Submission Date: [created]
Abstract
Robotics has drawn substantial attention in recent years in the field of logistics automation. A central difficulty at the intersection of computer vision and robotics is object grasping: the robot must use a suction cup, parallel gripper, or another end-effector to grasp recognizable objects presented in arbitrary poses, relying on its cameras and sensors. A fully integrated logistics robotics system therefore requires robust vision that reliably identifies and localizes objects despite the wide variety of object types, sensor noise, cluttered scenes, and self-occlusions. In this work, we built a dataset using LabelFusion and introduced a DenseFusion-based neural network for robotic object grasping that takes RGB-D images as input and predicts each object's position and orientation about every axis with respect to the camera.
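As a brief illustrative sketch (not the thesis implementation itself), the network's per-object pose output can be assembled into a single camera-frame transform. DenseFusion parameterizes rotation as a unit quaternion alongside a translation vector, so the sketch below assumes that convention; the function name `pose_matrix` is a hypothetical helper for illustration:

```python
import numpy as np

def pose_matrix(quat, trans):
    """Build a 4x4 homogeneous camera-frame pose from a quaternion
    (w, x, y, z) and a translation (x, y, z) in meters."""
    w, x, y, z = quat / np.linalg.norm(quat)  # normalize to a unit quaternion
    # Standard quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R      # orientation about each axis
    T[:3, 3] = trans   # position with respect to the camera
    return T

# Example: identity rotation, object 0.5 m in front of the camera.
T = pose_matrix(np.array([1.0, 0.0, 0.0, 0.0]),
                np.array([0.0, 0.0, 0.5]))
```

The resulting matrix maps points from the object frame into the camera frame, which is the form a grasp planner typically consumes.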
Results/Implementation/Project Description
Conclusion
[ PDF (optional) ]
[ Slides Kickoff/Final (optional) ]