Professor Michael Lewis to discuss robot research, June 8


The iSchool is pleased to host a special colloquium on Friday, June 8 to showcase the research efforts of Professor Michael Lewis and his usability research lab. The colloquium will take place at 1:30 pm in Room 405 of the Information Sciences building.  Dr. Lewis plans to discuss developing scalability for control of multi-robot systems, scheduling human attention to improve human-robot interaction (HRI) performance, and developing displays for robot teams.

Michael Lewis is a professor in the School of Information Sciences at the University of Pittsburgh.  His research focuses on human interaction with intelligent automation.  Since his early work at Georgia Tech modeling human process-control operators, he has studied visualization-based control interfaces, human-agent teamwork, and, most recently, human-robot interaction.  He is the author of more than 180 scientific papers in the area of Human Factors.  His group at the iSchool has developed a number of widely used research tools, including USARSim, a robotic simulator adopted for RoboCup competition and downloaded from SourceForge more than 70,000 times.


Developing Scalability for Control of Multi-Robot Systems
Many applications, such as interplanetary construction, search and rescue in dangerous environments, and cooperating uninhabited aerial vehicles, have been proposed for multi-robot systems (MrS). Controlling these robot teams becomes especially difficult as the number of robots grows large.  Drawing an analogy from computational complexity, we have developed a classification of human-multirobot tasks by considering how they scale with the number of robots, N.  Highly autonomous collectives such as swarms may be O(1), with control complexity independent of N.  Other tasks, such as specifying waypoints to be followed, are O(n) because each additional robot requires the same additional effort.  Tasks that require robots to take coordinated action, O(>n), can be shown to rapidly grow too demanding for a human to control.  Two lines of research motivated by this model will be presented.
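The classification above can be illustrated with a small sketch. The function below is not the authors' model; it is a hypothetical per-robot effort calculation, with O(>n) stood in for by pairwise (quadratic) coordination cost, chosen only to show how quickly the classes diverge as N grows.

```python
def control_effort(n_robots, task_class, unit_effort=1.0):
    """Hypothetical total operator effort for a team of n_robots.

    task_class: "O(1)"  -- swarm-like, effort independent of N
                "O(n)"  -- independent per-robot tasks (e.g. waypoints)
                "O(>n)" -- coordinated action, modeled here as O(n^2)
    unit_effort is an assumed per-task cost in arbitrary units.
    """
    if task_class == "O(1)":
        return unit_effort
    if task_class == "O(n)":
        return unit_effort * n_robots
    if task_class == "O(>n)":
        # pairwise coordination as a simple super-linear stand-in
        return unit_effort * n_robots * (n_robots - 1)
    raise ValueError(f"unknown task class: {task_class}")

for n in (2, 8, 32):
    print(n, [control_effort(n, c) for c in ("O(1)", "O(n)", "O(>n)")])
```

Even in this toy model, the O(>n) column dominates by the time the team reaches a few dozen robots, which is the intuition behind shifting coordination onto the robots themselves.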

Scheduling Human Attention to Improve Human-Robot Interaction (HRI) Performance
Because O(n) tasks are independent, they can be performed in a sequence chosen by the operator.  If the operator is treated as a server and robots presenting themselves for service as jobs, the human-robot team can be modeled as a queuing system, and scheduling results can be used to improve performance.  For this to work, HRI tasks need to be re-designed to approximate a queuing system, and we must be able to direct human attention to the desired robot.  At Pitt we have conducted a series of experiments using robot self-reflection and alarms to approximate queuing systems, and we have studied techniques for directing operator attention to the robot to be serviced.  In conjunction with CMU researchers, we have developed a variety of methods for scheduling operator attention with performance guarantees.
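A minimal sketch of the queuing view, not the methods developed at Pitt and CMU: treat the operator as a single server, robot service requests as jobs, and compare two classic scheduling policies. All robots are assumed to be waiting at time zero, and service times are assumed known, both simplifications.

```python
import random

def mean_wait(jobs, policy):
    """Single-server queue: the operator services robot requests one at a time.

    jobs:   list of service times (all robots assumed waiting at t = 0)
    policy: "fifo" (order of arrival) or "sjf" (shortest job first)
    Returns the mean time a robot waits before its service begins.
    """
    order = sorted(jobs) if policy == "sjf" else list(jobs)
    clock, waits = 0.0, []
    for service_time in order:
        waits.append(clock)      # this robot waited until now
        clock += service_time    # operator is busy for the service time
    return sum(waits) / len(waits)

random.seed(0)
jobs = [random.expovariate(1.0) for _ in range(20)]
print("FIFO mean wait:", mean_wait(jobs, "fifo"))
print("SJF  mean wait:", mean_wait(jobs, "sjf"))
```

Shortest-job-first is known to minimize mean waiting time in this setting, which is the kind of guarantee scheduling theory can transfer to HRI once the interaction is shaped to resemble a queue.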

Developing O(1) Displays for Robot Teams
While our efforts to increase span of control over unmanned vehicle (UV) teams are making progress, the asymmetry between what we can command and what we can comprehend is growing.  As the number of cameras returning streaming video grows, operators have increasing difficulty monitoring windows in parallel and reconciling diverse viewpoints of potentially the same regions and objects.  Over the past four years we have been investigating asynchronous displays that can convert this high-workload, forced-pace job into a sequential, self-paced inspection of archival imagery.  Early studies of control via manual panoramas will be described, culminating in our current Image Queue system, which imposes no additional demand on the operator and improves HRI system performance as the number of robots increases.
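The self-paced idea can be sketched as a priority queue of archived frames. The details below are assumptions for illustration (the real Image Queue system is not described here): frames from many robot cameras are archived with some utility score, and the operator pulls the next most useful frame whenever they are ready, instead of watching N live streams in parallel.

```python
import heapq

class ImageQueue:
    """Sketch of an asynchronous display: archived frames are ranked by
    a utility score (how it is computed is left open here), and the
    operator inspects them sequentially at their own pace."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # insertion counter breaks ties deterministically

    def push(self, frame_id, utility):
        # heapq is a min-heap, so negate utility to pop highest first.
        heapq.heappush(self._heap, (-utility, self._counter, frame_id))
        self._counter += 1

    def next_frame(self):
        """Self-paced: called only when the operator is ready for more."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

q = ImageQueue()
q.push("robot3/frame_101", utility=0.9)
q.push("robot1/frame_007", utility=0.4)
print(q.next_frame())  # highest-utility frame is served first
```

Because frames accumulate in the archive regardless of how fast the operator works, adding robots adds candidate imagery without adding parallel monitoring load, which matches the scaling claim in the abstract.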