FURI | Summer 2020

Multi-modal Communication Between Humans and Robots through Virtual and Augmented Reality


The goal of this project is to enable multi-modal communication within human-robot teams. This study will demonstrate a way to incorporate spatial and temporal cues to enhance human-robot communication with virtual reality (VR) and augmented reality (AR) support. The process is divided into three steps: setting up the environment so that the TeSIS (temporal spatial inverse semantics) code can run, adapting the TeSIS code so that it is compatible with VR/AR technology, and incorporating the code into machine learning and language processing models. Progress has been made on the first step, setting up the environment.

Student researcher

Michael Chung

Computer science

Hometown: Chandler, Arizona, United States

Graduation date: Spring 2023