Electronics, Communication & Automation Technology

Research on Autonomous Grasping of a Humanoid Robot Based on Vision

  • ZHANG Xuchong,
  • YANG Jun
  • School of Design, South China University of Technology, Guangzhou 510006, Guangdong, China
ZHANG Xuchong (1989-), male, Ph.D., associate professor, whose research focuses on humanoid robot design and motion planning. E-mail: sdxczhang@scut.edu.cn

Received date: 2024-02-06

  Online published: 2024-02-06

Supported by

the Natural Science Foundation of Guangdong Province (2020A1515010397)

Abstract

With the development of robotics technology, humanoid robots have shown application potential and value in many fields. Research on vision-based autonomous grasping of humanoid robots aims to improve their grasping adaptability and the human-likeness of their actions in natural environments. For machine vision, a RealSense D435 depth camera was adopted, and the YOLO (You Only Look Once) object detection model was used to achieve target object recognition, spatial positioning, depth map cropping, and target point cloud generation. The object's pose was obtained by registering the target point cloud against a standard point cloud with the Iterative Closest Point (ICP) algorithm. The robot head was modeled using the D-H (Denavit-Hartenberg) method, and the position and posture of the object were transformed from the camera coordinate system to the robot coordinate system. For motion planning, following the grasping pattern of the human arm, the grasping process was divided into 9 basic actions: holding the initial position, moving to the pre-grasping position, grasping the object, lifting the object, moving the object, moving to the placement position, placing the object, retreating, and returning to the initial position. Corresponding grasping postures were determined for different objects to improve the success rate of grasping. Based on the grasping and placing points obtained visually, the remaining key points were calculated automatically, and a spatial arc was used as the grasping trajectory. MATLAB simulation verified the rationality of the end-effector trajectory and joint trajectories of the robotic arm during the grasping process. Finally, an object-grasping experiment was conducted; the results showed that the humanoid robot can quickly and accurately recognize and locate different objects in a natural environment, and can successfully grasp and transport them with a success rate of over 80%, while keeping the motion human-like, verifying the effectiveness of the proposed solution. This study can promote the application and popularization of humanoid robots in daily human life.
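The vision side of the pipeline crops the depth map to the detected bounding box and deprojects it into a target point cloud. Below is a minimal NumPy sketch of that deprojection step, assuming a pinhole camera model with known intrinsics; the function name, box format, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def bbox_to_point_cloud(depth, bbox, fx, fy, cx, cy):
    """Deproject the depth pixels inside a detection box (x0, y0, x1, y1)
    into an N x 3 point cloud using the pinhole camera model.
    Depth is in metres; zero-depth pixels are treated as invalid."""
    x0, y0, x1, y1 = bbox
    v, u = np.mgrid[y0:y1, x0:x1]        # pixel rows (v) and columns (u)
    z = depth[y0:y1, x0:x1]
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx  # back-project along the image x axis
    y = (v[valid] - cy) * z[valid] / fy  # back-project along the image y axis
    return np.stack([x, y, z[valid]], axis=1)
```

In practice the RealSense SDK supplies the camera intrinsics and an equivalent deprojection routine (`rs2_deproject_pixel_to_point`); the sketch above only illustrates the geometry.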
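The object pose is then recovered by ICP registration between the target point cloud and the standard point cloud. The core of each ICP iteration is a closed-form rigid alignment of matched point pairs; the following NumPy sketch shows that SVD-based (Kabsch) step on synthetic data rather than the paper's clouds.

```python
import numpy as np

def rigid_align(source, target):
    """Find R, t minimizing ||R @ source_i + t - target_i|| for point sets
    with known correspondences (Kabsch / SVD method)."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
src = np.random.default_rng(0).random((50, 3))
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

A full ICP loop alternates this alignment with nearest-neighbour re-matching of correspondences until convergence; libraries such as Open3D (reference [20]) package the complete loop.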
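With the head modeled by the D-H method, a point measured in the camera frame is mapped into the robot frame by chaining the link transforms. The sketch below uses a hypothetical two-joint (pan/tilt) head; the D-H parameter values are placeholders, not the robot's actual geometry.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H homogeneous transform between consecutive links."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def camera_to_robot(p_cam, joint_angles, dh_params):
    """Chain the link transforms, then map a camera-frame point to the robot base."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return (T @ np.append(p_cam, 1.0))[:3]

# Illustrative pan/tilt chain with both joints at zero
p_robot = camera_to_robot(np.array([0.0, 0.0, 0.5]),
                          joint_angles=[0.0, 0.0],
                          dh_params=[(0.3, 0.0, -np.pi / 2),
                                     (0.0, 0.05, 0.0)])
```

The same chaining converts the full object pose (rotation and translation) by multiplying the ICP-estimated pose matrix on the right of the accumulated transform.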
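Finally, the end-effector follows a spatial arc through the visually obtained grasping and placing points and a computed intermediate key point. One way to sample such an arc through three waypoints is sketched below (an illustrative helper, not the paper's planner).

```python
import numpy as np

def arc_through(p1, p2, p3, n=50):
    """Sample n points on the circular arc from p1 to p3 passing through p2,
    for three non-collinear 3D waypoints."""
    a, b = p2 - p1, p3 - p1
    ab_cross = np.cross(a, b)
    denom = 2.0 * np.dot(ab_cross, ab_cross)
    # Circumcenter of the triangle (p1, p2, p3) in its own plane
    c = p1 + (np.dot(b, b) * np.cross(ab_cross, a)
              + np.dot(a, a) * np.cross(b, ab_cross)) / denom
    r = np.linalg.norm(p1 - c)
    u = (p1 - c) / r                       # in-plane axis through p1
    w = ab_cross / np.linalg.norm(ab_cross)  # plane normal, oriented p1->p2->p3
    v = np.cross(w, u)
    # Sweep angle from p1 to p3, measured counter-clockwise about w
    ang = np.arctan2(np.dot(p3 - c, v), np.dot(p3 - c, u)) % (2 * np.pi)
    ts = np.linspace(0.0, ang, n)
    return c + r * (np.outer(np.cos(ts), u) + np.outer(np.sin(ts), v))
```

Sampling the arc densely and feeding the samples through inverse kinematics yields the joint trajectories that the MATLAB simulation checks for smoothness.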

Cite this article

ZHANG Xuchong, YANG Jun. Research on Autonomous Grasping of a Humanoid Robot Based on Vision[J]. Journal of South China University of Technology (Natural Science), 2024, 52(7): 53-61. DOI: 10.12141/j.issn.1000-565X.230528
