Keynote Speech I

AI Empowered Social Robotics for Better Human Care

Prof. Li-Chen Fu, National Taiwan University, Taiwan, R.O.C.

ABSTRACT Given the rapid advances in robot technology, increasing research attention has been paid to the area of Social Robotics, where a social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following the social behaviors and rules attached to its role. However, it remains a great challenge for such an entity to stay alongside humans long and well enough to provide substantial care in daily life, due to its lack of contextual intelligence and, hence, of the essential autonomy. In other words, robots are unable to adapt what they know, or the skills they possess, to real-world scenarios and situations. Fortunately, enlightened by recent breakthroughs in artificial intelligence (AI), a variety of powerful perceptual and reasoning abilities have been established such that, by leveraging versatile big data from open sources, social robots can understand contexts autonomously much better than before. Under these circumstances, social robots can interact with humans more naturally and warmly, and an immediate benefit is that humans are more likely to receive better caring services from the robot group as a whole, which is particularly meaningful and crucial as society ages. In this talk, we will show a few examples of social robots empowered by AI, such as robots good at providing companionship, revitalizing cognitive ability, and supporting fading memory.

BIOSKETCH Li-Chen Fu received his Ph.D. degree from the University of California, Berkeley, U.S.A. in 1987, and is currently a Distinguished Professor in both the Dept. of Electrical Engineering and the Dept. of Computer Science and Information Engineering at National Taiwan University (NTU), Taiwan, R.O.C. His main research interests are robotics, smart homes, computer vision and its applications, and control systems. He has received numerous academic recognitions, including Distinguished Research Awards from the National Science Council, Taiwan, R.O.C., the Academic Award from the Ministry of Education, Taiwan, R.O.C., the Irving T. Ho Chair Professorship, and the Macronix Chair Professorship, and he is an IEEE Fellow (2004) and an IFAC Fellow (2017). He is now the Editor-in-Chief of the Asian Journal of Control, published by Wiley, and has served since 2018 as Director of the Center for Artificial Intelligence and Advanced Robotics at National Taiwan University as well as Director of the MOST (Ministry of Science and Technology) All Vista Healthcare Center.

Keynote Speech II

Robot-Human Adaptation and Mutual Adaptation

Prof. David Hsu, National University of Singapore (NUS), Singapore

ABSTRACT Early robots occupied tightly controlled environments, e.g., factory floors, designed to segregate robots and humans for safety. In the near future, robots will “live” with humans, providing a variety of services at home, in workplaces, or on the road. To become effective and trustworthy collaborators, robots must adapt to human behaviors and, more importantly, adapt to changing human behaviors, as humans adapt as well. I will present some ideas for achieving robot-human adaptation by modeling key conceptual elements such as human intention, trust, … and by exploiting these elements in new reasoning and learning algorithms. The discussion, I hope, will spur greater interest in principled approaches that integrate robot perception, reasoning, and learning for fluid human-robot collaboration.

BIOSKETCH David Hsu is a professor of computer science at the National University of Singapore (NUS) and a member of the NUS Graduate School for Integrative Sciences & Engineering. He received his PhD in computer science from Stanford University. At NUS, he co-founded the NUS Advanced Robotics Center. He is an IEEE Fellow.

His research spans robotics and AI. In recent years, he has been working on robot planning and learning under uncertainty and human-robot collaboration. Together with colleagues and students, he won the Humanitarian Robotics and Automation Technology Challenge Award at the International Conference on Robotics & Automation (ICRA) 2015, the RoboCup Best Paper Award at the International Conference on Intelligent Robots & Systems (IROS) 2015, and the Best Systems Paper Award at Robotics: Science & Systems (RSS) 2017.

He has chaired or co-chaired several major international robotics conferences, including the International Workshop on the Algorithmic Foundations of Robotics (WAFR) 2004 and 2010, Robotics: Science & Systems (RSS) 2015, and ICRA 2016. He was an associate editor of the IEEE Transactions on Robotics and is currently serving on the editorial boards of the Journal of Artificial Intelligence Research and the International Journal of Robotics Research.

Keynote Speech III

Mark R. Cutkosky, Stanford University, USA

BIOSKETCH Cutkosky applies analyses, simulations, and experiments to the design and control of robotic hands, tactile sensors, and devices for human/computer interaction. In manufacturing, his work focuses on design tools for rapid prototyping.

Keynote Speech IV

Wearable Haptic Devices for Ubiquitous Communication

Allison Okamura, Stanford University, USA

ABSTRACT Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large part by the location, distribution, and sensitivity of human mechanoreceptors. Not surprisingly, many haptic devices are designed to be held or worn at the highly sensitive fingertips, yet stimulation using a device attached to the fingertips precludes natural use of the hands. Thus, we explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.

BIOSKETCH Allison M. Okamura received the BS degree from the University of California at Berkeley and the MS and PhD degrees from Stanford University, all in mechanical engineering. She is currently a Professor in the Department of Mechanical Engineering at Stanford University, with a courtesy appointment in computer science. She is an IEEE Fellow and Editor-in-Chief of the journal IEEE Robotics and Automation Letters. Her awards include the 2016 Duca Family University Fellow in Undergraduate Education, the 2009 IEEE Technical Committee on Haptics Early Career Award, the 2005 IEEE Robotics and Automation Society Early Academic Career Award, and the 2004 NSF CAREER Award. Her academic interests include haptics, teleoperation, virtual environments and simulators, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey. For more information about her research, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website.

Keynote Speech V

Automated Decision Making for Safety Critical Applications

Mykel Kochenderfer, Stanford University, USA

ABSTRACT Building robust decision-making systems is challenging, especially for safety-critical systems such as unmanned aircraft and driverless cars. Decisions must be made based on imperfect information about the environment and under uncertainty about how the environment will evolve. In addition, these systems must carefully balance safety with other considerations, such as operational efficiency. Typically, the space of edge cases is vast, placing a large burden on human designers to anticipate problem scenarios and develop ways to resolve them. This talk discusses major challenges associated with ensuring computational tractability and establishing trust that our systems will behave correctly when deployed in the real world. We will outline some methodologies for addressing these challenges.

BIOSKETCH Mykel Kochenderfer is an assistant professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision-making systems. In addition, he is the director of the SAIL-Toyota Center for AI Research at Stanford and a co-director of the Center for AI Safety. He received a Ph.D. in informatics from the University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University. Prof. Kochenderfer is an author of the textbooks “Decision Making under Uncertainty: Theory and Application” and “Algorithms for Optimization”, both published by MIT Press.