High Fidelity Simulations: The Critical Path to Real-World Robotics
Ashish Kapoor
Microsoft Research, USA
ABSTRACT Developing and testing algorithms for robots is an expensive and time-consuming process. Machine learning is one of the key components that enable robotic systems to operate under uncertainty. However, in order to utilize recent advances in machine intelligence and deep learning, we need to collect large amounts of annotated training data in a variety of conditions and environments. Moreover, such data-driven systems are far from perfect and can fail in ways that jeopardize safety. In this talk we will explore how high fidelity simulations can help alleviate some of these problems. We will discuss how such near-realistic simulations can help not only with gathering training data but can also be embedded in imitation learning or reinforcement learning loops in order to improve sample complexity. Our discussion will center around AirSim, a new open-source simulator built on Unreal Engine that offers physically and visually realistic simulations.
BIOSKETCH Ashish Kapoor is a Principal Researcher at Microsoft Research, Redmond. His research focuses on machine learning and robotics with an emphasis on building near-realistic simulation systems (see: https://github.com/Microsoft/AirSim). Ashish received his PhD and Master’s degrees from MIT Media Laboratory and a Bachelor’s degree in computer science and engineering from Indian Institute of Technology, Delhi.
Brian Gerkey
Open Source Robotics Foundation, USA
BIOSKETCH Brian Gerkey is CEO and Founder of OSRF. Prior to joining OSRF, Brian was Director of Open Source Development at Willow Garage. Previously, Brian was a Computer Scientist in the Artificial Intelligence Center at SRI, and before that, a postdoctoral research fellow in the Artificial Intelligence Lab at Stanford University. Brian received his Ph.D. in Computer Science from the University of Southern California (USC) in 2003, his M.S. in Computer Science from USC in 2000, and his B.S.E. in Computer Engineering, with a secondary major in Mathematics and a minor in Robotics and Automation, from Tulane University in 1998.
Brian is a strong believer in, frequent contributor to, and constant beneficiary of open source software. Since 2008, Brian has worked on the ROS Project, which develops and releases one of the most widely used robot software platforms in robotics research and education (and soon industry). He is a founding and former lead developer of the open source Player Project, which continues to maintain widely used robot simulation and development tools. For his work on Player and ROS, Brian was recognized by MIT Technology Review with the TR35 award in 2011 and by the Silicon Valley Business Journal with its 40 Under 40 award in 2016.
Efficient Coding, Prediction and Mental Imagery for Intelligent, Cognitive Behavior in Robots
Jeffrey L. Krichmar
University of California, Irvine, USA
ABSTRACT Inspired by the ability of the nervous system to efficiently predict, encode, and respond appropriately to environmental features and events, we have developed robots capable of cognitive behavior. The nervous system is under tight metabolic constraints, which leads to incredibly efficient representations of important environmental features. These representations are sparse and reduced, leading to energy-efficient processing and less computation. Being able to predict outcomes through mental simulation can increase environmental fitness and reduce uncertainty. Such encodings and predictions reduce surprise and fit with thermodynamically driven theories of brain function by attempting to reduce entropy. I will discuss recent developments of brain-inspired models along these lines and how these ideas can be embodied in physical robots. This work highlights the importance of taking efficient coding, mental imagery, and embodiment into consideration when constructing artificial cognitive systems.
BIOSKETCH Jeffrey L. Krichmar received a B.S. in Computer Science in 1983 from the University of Massachusetts at Amherst, an M.S. in Computer Science from The George Washington University in 1991, and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1997. He spent 15 years as a software engineer on projects ranging from the PATRIOT Missile System at the Raytheon Corporation to Air Traffic Control for the Federal Systems Division of IBM. In 1997, he became an assistant professor at The Krasnow Institute for Advanced Study at George Mason University. From 1999 to 2007, he was a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute. He is currently a professor in the Department of Cognitive Sciences and the Department of Computer Science at the University of California, Irvine. His research interests include neurorobotics, embodied cognition, biologically plausible models of learning and memory, and the effect of neural architecture on neural function.
Automation vs. Augmentation: Socially Assistive Robotics and the Future of Work
Maja Matarić
University of Southern California, USA
ABSTRACT Robotics is booming all around us. A field that was originally driven by the desire to automate physical work is now raising concerns about the future of work. Less discussed but no less important are the implications for human health, as the science on longevity and resilience indicates that having the drive to work is key to health and wellness. However, robots, machines originally invented to automate work, are also becoming helpful by doing no physical work at all, instead motivating and coaching us to do our own work, based on evidence from neuroscience and behavioral science demonstrating that human behavior is most strongly influenced by physically embodied social agents, including robots. The field of socially assistive robotics (SAR) focuses on developing intelligent, socially interactive machines that provide assistance through social rather than physical means. The robot’s physical embodiment is at the heart of SAR’s effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily human-like or otherwise biomimetic) agents. People readily ascribe intention, personality, and emotion to robots; SAR leverages this engagement to develop robots capable of monitoring, motivating, and sustaining user activities and improving human learning, training, performance, and health outcomes. Human-robot interaction (HRI) for SAR is a growing, multifaceted research field at the intersection of engineering, health sciences, neuroscience, and the social and cognitive sciences, with rapidly growing commercial spinouts. This talk will describe research into embodiment, modeling and steering social dynamics, and long-term adaptation and learning for SAR, grounded in projects involving multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months, among others.
SAR systems have been validated with a variety of user populations, including stroke patients, children with autism spectrum disorders, and elderly individuals with Alzheimer’s and other forms of dementia; this talk will cover the short-, middle-, and long-term commercial applications of SAR, as well as the frontiers of SAR research.
BIOSKETCH Maja Matarić is a professor and the Chan Soon-Shiong Chair in the Computer Science Department, the Neuroscience Program, and the Department of Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and Vice Dean for Research in the USC Viterbi School of Engineering. She received her PhD in Computer Science and Artificial Intelligence from MIT in 1994, her MS in Computer Science from MIT in 1990, and her BS in Computer Science from the University of Kansas in 1987.
She is a Fellow of the American Association for the Advancement of Science (AAAS), Fellow of the IEEE and AAAI, and recipient of the Presidential Awards for Excellence in Science, Mathematics & Engineering Mentoring (PAESMEM), the Anita Borg Institute Women of Vision Award for Innovation, Okawa Foundation Award, NSF Career Award, the MIT TR35 Innovation Award, and the IEEE Robotics and Automation Society Early Career Award. She served as the elected president of the USC faculty and the Academic Senate. At USC she has been awarded the Viterbi School of Engineering Service Award and Junior Research Award, the Provost’s Mentoring Award and Center for Interdisciplinary Research Fellowship, the Mellon Mentoring Award, the Academic Senate Distinguished Faculty Service Award, and a Remarkable Woman Award. She is featured in the science documentary movie “Me & Isaac Newton”, in The New Yorker (“Robots that Care” by Jerome Groopman, 2009), Popular Science (“The New Face of Autism Therapy”, 2010), the IEEE Spectrum (“Caregiver Robots”, 2010), and is one of the LA Times Magazine 2010 Visionaries.
Prof. Matarić is the author of a popular introductory robotics textbook, “The Robotics Primer” (MIT Press 2007), an associate editor of three major journals and has published extensively. She serves or has recently served on a number of advisory boards, including the National Science Foundation Computing and Information Sciences and Engineering (CISE) Division Advisory Committee, and the Willow Garage and Evolution Robotics Scientific Advisory Boards. Prof. Matarić is actively involved in K-12 educational outreach, having obtained federal and corporate grants to develop free open-source curricular materials for elementary and middle-school robotics courses in order to engage student interest in science, technology, engineering, and math (STEM) topics.
Her Interaction Lab’s research into socially assistive robotics is aimed at endowing robots with the ability to help people through individual non-contact assistance in convalescence, rehabilitation, training, and education. Her research is currently developing robot-assisted therapies for children with autism spectrum disorders, stroke and traumatic brain injury survivors, and individuals with Alzheimer’s Disease and other forms of dementia.
Planetary Robotic Exploration: Mobility and Autonomy
Issa A.D. Nesnas
Jet Propulsion Laboratory, USA
ABSTRACT The success of the Mars rovers has provided a wealth of information leading to major scientific discoveries. Planetary mobility has proved to be an invaluable tool for surface exploration, complementing orbital observations. In this talk, Dr. Nesnas will provide an overview of advances in robotic mobility and autonomy that have contributed to the success of the Spirit, Opportunity, and Curiosity rovers and that are planned for use on the Mars 2020 rover. He will also highlight current and needed advances in mobility, sampling, and autonomy that would allow future access to extreme terrains such as crater walls, icy crevasses, gullies, canyons, and skylights. He will share advances and field-test results of recent work in wheeled, rappelling, legged, microgravity, and aerial mobility.
BIOSKETCH Issa Nesnas is a principal technologist and the supervisor of the Robotic Mobility group at the Jet Propulsion Laboratory. He conducts research in mobility and autonomy with a focus on extreme-terrain access and microgravity mobility. He contributed to the development of autonomous rover navigation and visual target tracking and participated in the development of the Curiosity and Mars 2020 rovers. Dr. Nesnas led a multi-institutional robotic autonomy software development effort, served on NASA’s Capability Leadership Team for Autonomy, and co-chaired NASA’s Technology Roadmaps for Robotics and Autonomous Systems. He has authored or co-authored over fifty publications and holds several patents. Early in his career, he worked with national leaders in industrial robotic automation at a Silicon Valley firm. He received a B.E. degree in Electrical Engineering from Manhattan College in 1991, and earned M.S. and Ph.D. degrees in robotics from the Mechanical Engineering Department at the University of Notre Dame in 1993 and 1995, respectively.