Human-Robot Collaboration

Project Page on Human-Robot Collaboration

Hong (Herbert) Cai and Yasamin Mostofi, UCSB

[Figures: Predicting human visual performance for optimal query · Co-optimization of sensing tour and human collaboration · Optimal path planning for robotic field sensing with human assistance]

Research Summary

Robots are becoming increasingly capable of accomplishing complicated tasks. However, many tasks still cannot be performed autonomously by robots to a satisfactory level. It is therefore important to properly include humans in certain robotic operations, since robots can benefit tremendously from asking humans for help. At the same time, human performance can also be far from perfect, depending on the given task.

In this project, we design human-robot collaboration techniques by 1) predicting human task performance and response, and 2) incorporating this prediction into the optimization of robot field decision-making, sensing, and path planning. For instance, humans may not perform a given task satisfactorily if it is too challenging (e.g., a visual search task where the captured image is extremely dark). Thus, in order to properly query humans, the robot needs to predict human performance.
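To make this concrete, here is a minimal, hypothetical sketch (not the formulation from our papers) of how such a prediction could drive the query decision: the robot asks the human only when the predicted human success probability, weighed against the cost of a query, beats its own confidence. The function name, cost model, and numbers below are all illustrative assumptions.

    def should_query_human(p_human_correct, p_robot_correct, query_cost, reward=1.0):
        """Toy decision rule (illustrative only): query the human when the
        expected accuracy gain outweighs the cost of the query.

        p_human_correct : predicted probability the human answers correctly
                          (output of a learned human-performance model)
        p_robot_correct : robot's confidence in its own autonomous answer
        query_cost      : cost of one human query, in the same units as reward
        """
        expected_gain = reward * (p_human_correct - p_robot_correct)
        return expected_gain > query_cost

    # Example: a very dark image where the human model predicts only 55%
    # success while the robot's classifier is 70% confident -- do not query.
    print(should_query_human(0.55, 0.70, query_cost=0.05))  # False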

In this project, we show how to equip robots with the capability to predict human performance. More specifically, for visual tasks (e.g., visual search), we propose a machine learning-based pipeline that can probabilistically predict human visual performance for any visual input (RSS16 paper, project page, data/code). We have released the code and data for this learning pipeline, which is trained on data collected via Amazon Mechanical Turk (MTurk) (RSS16 project website). Such predictions can then be utilized to optimize human-robot collaboration in a number of applications, as we have demonstrated with experiments on our campus (RSS16 paper, ACC15 paper). In other parts of the project, we have focused on the joint optimization of human collaboration and robotic field decision-making (e.g., sensing, path planning, communication) under resource constraints. For instance, we have shown how robotic site visits and human collaboration can be co-optimized, and have mathematically characterized the optimal solution ([TRO 2019], [Book Chapter]).
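As a toy illustration of the resource-constrained collaboration problem (a simplified stand-in, not the actual optimization in [TRO 2019]), the following sketch allocates a fixed budget of human queries across field sites: the robot queries the sites where the predicted human success probability improves most over its own. All names and numbers are assumptions for illustration.

    import numpy as np

    def allocate_queries(p_robot, p_human, budget):
        """Toy version of co-optimizing human collaboration under a resource
        constraint: with at most `budget` human queries, ask about the sites
        where a human helps the most.

        p_robot : array of the robot's predicted success probability per site
        p_human : array of the predicted human success probability per site
        """
        p_robot = np.asarray(p_robot)
        p_human = np.asarray(p_human)
        gain = np.clip(p_human - p_robot, 0.0, None)  # improvement from querying
        order = np.argsort(gain)[::-1]                # largest improvement first
        chosen = order[:budget]
        return chosen[gain[chosen] > 0]               # query only where it helps

    # Example: 5 sites, 2 queries allowed -- query sites 3 and 1,
    # where the human improves most over the robot.
    robot = [0.9, 0.4, 0.6, 0.3, 0.8]
    human = [0.95, 0.9, 0.65, 0.85, 0.7]
    print(allocate_queries(robot, human, budget=2))   # [3 1]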

In [TRO 2020], we have shown how the robot can jointly classify objects, even under poor sensing quality, by deducing object similarity and utilizing it in its sensing, path planning, and classification. More specifically, even when the robot cannot classify individual objects due to poor sensing, we have shown that the correlation coefficient of their DNN feature vectors carries vital information on object similarity, which the robot can exploit to deduce similarity and jointly classify objects. Our campus experiments further validate the proposed framework. Please see the papers below for more details.
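The following minimal sketch illustrates the idea of correlation-based similarity (a simplified illustration, not the [TRO 2020] framework itself; the threshold and fusion rule are our assumptions): feature vectors with a high correlation coefficient are grouped as similar, and their noisy per-object class posteriors are fused before deciding.

    import numpy as np

    def joint_classify(features, probs, sim_threshold=0.9):
        """Toy similarity-aided classification: objects whose DNN feature
        vectors are highly correlated are treated as likely same-class,
        and their individual (possibly noisy) class posteriors are fused.

        features : (n_objects, d) array of DNN feature vectors
        probs    : (n_objects, n_classes) per-object class probabilities
        """
        n = len(features)
        corr = np.corrcoef(features)  # pairwise correlation coefficients
        labels = np.empty(n, dtype=int)
        for i in range(n):
            group = np.where(corr[i] >= sim_threshold)[0]  # objects similar to i
            fused = probs[group].mean(axis=0)              # average the posteriors
            labels[i] = int(np.argmax(fused))
        return labels

    # Two highly correlated feature vectors (objects 0 and 1) get their
    # posteriors fused, flipping object 1's noisy decision to the shared class.
    feats = np.array([[0.2, 1.0, 0.1], [0.25, 0.95, 0.12], [1.0, 0.1, 0.9]])
    p = np.array([[0.7, 0.3], [0.45, 0.55], [0.2, 0.8]])
    print(joint_classify(feats, p))  # [0 0 1]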


Publications



Acknowledgements

We thank Arjun Muralidharan, Chitra Karanam, and Saandeep Depatla for helping with the experiments.