Dr. Anca Dragan is an Assistant Professor in the Electrical Engineering and Computer Sciences Department at UC Berkeley. She runs the InterACT (Interactive Autonomy and Collaborative Technologies) Lab, which focuses on the (re)design of robotics algorithms for interaction and coordination with people. Anca got her PhD from the Robotics Institute at Carnegie Mellon in 2015 for her work on legible (goal-expressive) robot motion planning.

Since starting on the faculty at Berkeley in 2015, Anca's research has focused on accounting for the influence that robot actions will have on what people do, in order to enhance coordination in autonomous driving and manipulation applications; on enabling robot transparency beyond motion goals; and on value alignment: enabling robots to interactively learn the right objectives. Anca helped found and serves on the steering committee for the Berkeley AI Research (BAIR) Lab, and is a co-PI of the Center for Human-Compatible AI.

She has won an NSF CAREER award and the Okawa Foundation award, and was named to MIT Technology Review's 35 Innovators Under 35 list. Her research has been featured in The Atlantic, New Scientist, WIRED, IEEE Spectrum, and NPR.

Keynote: Perceiving Human Internal State

Robots perceive and act so that they can optimize their objective functions. They assume that, somehow, their objective is exogenously specified. But in fact, objectives come from people: someone sits down and figures out what objective to write down in order to incentivize the right behavior. People are unfortunately imperfect, and this process often leads to misspecification. What is more, the robot usually gets shipped off to help not its designer, but an end-user. Ultimately, what the robot should optimize is not exactly the objective that its designer intended, but rather the objective that its end-user intends. In this talk, we will look at algorithms that enable robots to work together with designers and end-users to estimate what their true objective functions ought to be.
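One common framing of this estimation problem, which the abstract's idea can be sketched with, is Bayesian inference over candidate objectives from observed human choices, assuming the human picks options roughly rationally under the true objective. The following toy example is an illustrative sketch under those assumptions, not the algorithm presented in the talk; all weights, options, and parameter values are hypothetical.

```python
import math

# Hypothetical candidate weights on "comfort" vs. "speed"; the robot is
# uncertain which weight reflects the end-user's true objective.
CANDIDATE_WEIGHTS = [0.0, 0.5, 1.0]

def choice_likelihood(weight, chosen, options, beta=5.0):
    """P(human picks `chosen` from `options`) under a Boltzmann-rational model."""
    def utility(option, w):
        speed, comfort = option
        return (1 - w) * speed + w * comfort
    total = sum(math.exp(beta * utility(o, weight)) for o in options)
    return math.exp(beta * utility(chosen, weight)) / total

def update_belief(belief, chosen, options):
    """One Bayesian update of the belief over candidate weights."""
    posterior = {w: p * choice_likelihood(w, chosen, options)
                 for w, p in belief.items()}
    z = sum(posterior.values())
    return {w: p / z for w, p in posterior.items()}

# Uniform prior over the candidate weights.
belief = {w: 1 / len(CANDIDATE_WEIGHTS) for w in CANDIDATE_WEIGHTS}

# The end-user repeatedly prefers the comfortable option over the fast one,
# so the belief should concentrate on the comfort-heavy weight.
options = [(1.0, 0.0), (0.2, 1.0)]  # (speed, comfort) for each option
for _ in range(5):
    belief = update_belief(belief, chosen=options[1], options=options)

best = max(belief, key=belief.get)
print(best)
```

The same update rule applies whether the evidence comes from a designer's proxy specification or from end-user demonstrations; only the likelihood model changes.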


Anca Dragan

University of California, Berkeley



Richard Szeliski

Facebook Research

Richard Szeliski is a Research Scientist in the Computational Photography group at Facebook, which he founded in 2015. He is also an Affiliate Professor at the University of Washington, a member of the NAE, and a Fellow of the ACM and IEEE. Dr. Szeliski has done pioneering research in Bayesian methods for computer vision, image-based modeling, image-based rendering, and computational photography, fields that lie at the intersection of computer vision and computer graphics. His work on Photo Tourism, Photosynth, and Hyperlapse provides exciting examples of the promise of large-scale image- and video-based rendering.

Dr. Szeliski received his Ph.D. degree in Computer Science from Carnegie Mellon University, Pittsburgh, in 1988 and joined Facebook as founding Director of the Computational Photography group in 2015. Prior to Facebook, he worked at Microsoft Research for twenty years, at the Cambridge Research Lab of Digital Equipment Corporation for six years, and at several other industrial research labs. He has published over 150 research papers in computer vision, computer graphics, neural nets, and numerical analysis, as well as the books Computer Vision: Algorithms and Applications and Bayesian Modeling of Uncertainty in Low-Level Vision. He was a Program Committee Chair for CVPR 2013 and ICCV 2003, served as an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and on the Editorial Board of the International Journal of Computer Vision, and was Founding Editor of Foundations and Trends in Computer Graphics and Vision.

Keynote: Visual Reconstruction and Image-Based Rendering

The reconstruction of 3D scenes and their appearance from imagery is one of the longest-standing problems in computer vision. Originally developed to support robotics and artificial intelligence applications, it has found some of its most widespread use in the support of interactive 3D scene visualization.

One of the keys to this success has been the melding of 3D geometric and photometric reconstruction with a heavy re-use of the original imagery, which produces more realistic rendering than a pure 3D model-driven approach. In this talk, I give a retrospective of two decades of research in this area, touching on topics such as sparse and dense 3D reconstruction, the fundamental concepts in image-based rendering and computational photography, applications to virtual reality, as well as ongoing research in the areas of layered decompositions and 3D-enabled video stabilization.

CRV 2018