ICAPS 2012 Invited Speakers
- Robert O. Ambrose: Controlling robots across intermediate time delays
- Anthony G. Cohn: Building qualitative models of spatio-temporal behaviour
- George J. Pappas: Temporal logic motion planning for mobile robots
Robert O. Ambrose
Controlling Robots Across Intermediate Time Delays
Abstract: NASA has worked with three levels of time delay in commanding robots, with control modes adapted to the speed-of-light limits of communication across the vast distances of space. During human missions, astronauts have teleoperated robotic functions with essentially no time delay (< 50 ms), as when they operated the Remote Manipulator System onboard the Shuttle. A longer delay was encountered when commanding the pan-tilt camera on the Lunar Roving Vehicle: ground operators in Houston used a look-up table to send commands across a multi-second time delay to point the camera and capture video of the Apollo astronauts as they ascended from the lunar surface. Still longer delays have been handled when commanding remote spacecraft using scripts that have evolved with ever-increasing sophistication, allowing intelligent control of actions across tens of minutes to hours of time delay.
Of the three modes, we have the least experience in the intermediate realm. Time delays longer than 50 ms have been described as incompatible with direct teleoperation. Sending 24 hours of scripted activity inhibits interaction between humans on Earth and our surrogate robot explorers. How can robot autonomy allow for a more interactive command approach, more akin to supervision of a subordinate? A capable subordinate stays productive by re-tasking and adapting, but is smart enough to ask for help. High orbit, lunar, and asteroid missions will have intermediate time delays on the order of seconds to minutes: too long for direct control, but too short to command once a day. NASA’s recent experiments commanding rovers and manipulators will be described. New approaches are sought.
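For a sense of scale (an illustrative calculation, not part of the abstract), the round-trip speed-of-light delay to the Moon alone already falls far outside the 50 ms regime of direct teleoperation:

```latex
t_{\text{round trip}}
  = \frac{2\, d_{\text{Earth--Moon}}}{c}
  \approx \frac{2 \times 3.84 \times 10^{8}\,\mathrm{m}}{3.0 \times 10^{8}\,\mathrm{m/s}}
  \approx 2.6\,\mathrm{s}
```

Total command latency is larger still once ground processing and relay hops are included, which is exactly the intermediate regime the talk addresses.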
Biography: Dr. Robert O. Ambrose is Chief of the Software, Robotics and Simulation Division at NASA’s Johnson Space Center in Houston, Texas, USA.
Robert Ambrose received his Ph.D. in Mechanical Engineering from the University of Texas at Austin, and his M.S. and B.S. degrees from Washington University in St. Louis. The SR&S Division is responsible for flight spacecraft software, space robotics, and human-system simulations for human spaceflight missions. Its five branches are responsible for managing on-orbit robotic systems for the Shuttle and the International Space Station; developing software for the Crew Exploration Vehicle and future human spaceflight systems; developing simulations for engineering development and training; providing hardware and software GFE and hardware-in-the-loop facilities for anomaly resolution and crew training; and, in the technology branch, developing new robotic systems. Dr. Ambrose leads NASA’s agency-wide Human-Robotics Systems technology project, leads the robotics elements supporting the human exploration architecture study teams, and is NASA’s point of contact for the National Robotics Initiative. He is married to Dr. Catherine G. Ambrose and lives in Houston, Texas.
Anthony G. Cohn
European Coordinating Committee for Artificial Intelligence (ECCAI) Invited Speaker
Building Qualitative Models of Spatio-Temporal Behaviour
Abstract: In this talk I will present ongoing work at Leeds on building models of activity from video and other sensors, using both supervised and unsupervised techniques. Activities may occur in parallel, while actors and objects may participate in multiple activities simultaneously. The representation exploits qualitative spatio-temporal relations to provide symbolic models at a relatively high level of abstraction. A novel method for robustly transforming noisy sensor data to qualitative relations will be presented. For supervised learning, I will show how the supervisory burden can be reduced by using what we term "deictic supervision," whilst in the unsupervised case I will present a method for learning the most likely interpretation of the training data. I will also show how objects can be "functionally categorised" according to their spatio-temporal behaviour and how the use of type information can help in the learning process, especially in the presence of noise. I will present results from several domains including a kitchen scenario and an aircraft apron.
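As a rough illustration of what mapping sensor data to qualitative spatial relations can look like (a minimal sketch, not the method from the talk; the bounding-box representation, relation names, and exact-touch test are assumptions made here), the following maps two axis-aligned boxes, such as noisy tracker detections, to a coarse RCC-style topological relation:

```python
# Illustrative sketch only: map two axis-aligned bounding boxes (e.g. noisy
# detections from a tracker) to a coarse RCC-style qualitative relation.
# A robust method would replace the exact zero-overlap test with tolerances
# tuned to the sensor's noise characteristics.

from dataclasses import dataclass


@dataclass
class Box:
    x1: float  # left
    y1: float  # bottom
    x2: float  # right (x2 > x1)
    y2: float  # top   (y2 > y1)


def qualitative_relation(a: Box, b: Box) -> str:
    """Return a coarse topological relation between boxes a and b."""
    # Overlap extent along each axis; negative means separated on that axis.
    ox = min(a.x2, b.x2) - max(a.x1, b.x1)
    oy = min(a.y2, b.y2) - max(a.y1, b.y1)
    if ox < 0 or oy < 0:
        return "DC"  # disconnected
    a_in_b = b.x1 <= a.x1 and a.x2 <= b.x2 and b.y1 <= a.y1 and a.y2 <= b.y2
    b_in_a = a.x1 <= b.x1 and b.x2 <= a.x2 and a.y1 <= b.y1 and b.y2 <= a.y2
    if a_in_b and b_in_a:
        return "EQ"  # equal
    if a_in_b or b_in_a:
        return "PP"  # proper part: one box contained in the other
    if ox == 0 or oy == 0:
        return "EC"  # externally connected: boundaries touch, interiors disjoint
    return "PO"      # partially overlapping


if __name__ == "__main__":
    print(qualitative_relation(Box(0, 0, 2, 2), Box(1, 1, 3, 3)))  # PO
    print(qualitative_relation(Box(0, 0, 1, 1), Box(2, 2, 3, 3)))  # DC
    print(qualitative_relation(Box(0, 0, 4, 4), Box(1, 1, 2, 2)))  # PP
```

The Leeds work goes considerably further, in particular in handling noise robustly and in learning over the resulting symbolic sequences; this sketch only shows the flavour of the abstraction from metric sensor data to qualitative relations.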
Biography: Tony Cohn holds a Personal Chair at the University of Leeds, where he is Professor of Automated Reasoning and served as Head of the School of Computing from August 1999 to July 2004. He is presently Director of the Institute for Artificial Intelligence and Biological Systems. He holds BSc and PhD degrees from the University of Essex, where he studied under Pat Hayes, and spent ten years at the University of Warwick before moving to Leeds in 1990. He leads a research group working on knowledge representation and reasoning, with a particular focus on qualitative spatial and spatio-temporal reasoning; the best known of the group’s calculi is the widely cited Region Connection Calculus (RCC). His current research interests range from theoretical work on spatial calculi and spatial ontologies to cognitive vision, modelling spatial information in the hippocampus, and integrating utility data recording the location of underground assets. He has received substantial funding from a variety of sources, including EPSRC, the DTI, DARPA, the European Union, and various industrial sources. Work from the CogVis project won the British Computer Society Machine Intelligence Prize in 2004.
George J. Pappas
Temporal Logic Motion Planning for Mobile Robots
Abstract: Motion planning and task planning are two fundamental problems in robotics that have been addressed from different perspectives. Bottom-up motion-planning techniques concentrate on creating control inputs or closed-loop controllers that steer a robot from one configuration to another while taking into account different dynamics and motion constraints. On the other hand, top-down task-planning approaches usually focus on finding coarse, typically discrete, robot actions in order to achieve more complex missions. In this talk we will present a framework that, given a robot model, a class of admissible environments, and a high-level task or behavior for the robot, automatically generates a hybrid controller guaranteeing that the robot can achieve its task. The desired task specifications, which are expressed in a fragment of linear temporal logic (LTL), can capture complex robot behaviors such as search and rescue, coverage, and collision avoidance. In addition, our framework explicitly captures sensor specifications that depend on the environment with which the robot is interacting, which results in a novel paradigm for sensor-based temporal-logic motion planning. As one robot is part of the environment of another robot, our sensor-based framework very naturally captures multi-robot specifications in a decentralized manner. Our computational approach is based on first creating discrete controllers that satisfy specific LTL formulas. If feasible, the discrete controller is then used to guide the sensor-based composition of continuous controllers, resulting in a hybrid controller that satisfies the high-level specification, provided the environment is admissible.
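To make the kind of specification concrete (an illustrative formula of the standard coverage-plus-safety form, not one taken from the talk), a task requiring a robot to repeatedly visit regions r1, …, rn while never colliding could be written in LTL as:

```latex
\varphi \;=\;
  \underbrace{\square\,\lnot\,\mathit{obstacle}}_{\text{always avoid collisions}}
  \;\wedge\;
  \underbrace{\textstyle\bigwedge_{i=1}^{n} \square\lozenge\, r_i}_{\text{visit every region infinitely often}}
```

Here the box operator reads "always" and the diamond "eventually"; the synthesis step described above would first produce a discrete controller satisfying such a formula, and the composed continuous controllers would then realize it on the actual robot dynamics.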
Biography: George J. Pappas is the Joseph Moore Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center, and currently serves as Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, on hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received awards including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, and the National Science Foundation PECASE award.