CRV 2009

Joint conference information

Invited Speakers

Vision for Robots at Home and at Work
Jim Little
University of British Columbia

Increasingly we want computers and robots to observe us and know who we are and what we are doing, and to understand the objects and tasks in our world, both at work and at home. I will describe how we've built systems for mobile robots to find objects using visual cues and how a range of visual capabilities permits the robot to work for and with humans.

Learning to Learn: Closing the Loop Between Data Analysis and Acquisition
Rui Castro
Columbia University

In this talk I present a discussion of active learning, or learning using sequential experimental designs. In many practical scenarios it is possible to adjust the data collection process based on information gleaned from previous observations, in the spirit of the "twenty-questions" game. These techniques, generically referred to as active learning or adaptive sampling, have the potential to dramatically improve learning performance. Although appealing, the analysis of such procedures is difficult, due to the complicated dependencies in the data created by the closed-loop observation process. These difficulties are further exacerbated by the presence of measurement uncertainty or noise.
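
As a toy illustration of this "twenty-questions" flavor of adaptive sampling (not code from the talk), the sketch below contrasts non-adaptive uniform sampling with adaptive bisection for locating the boundary of a one-dimensional step function from noiseless point queries; the query budget and sample-location rules are assumptions chosen for illustration.

```python
import numpy as np

def step(x, boundary=0.6123):
    """Noiseless step function: 0 left of the boundary, 1 to the right."""
    return float(x >= boundary)

def passive_estimate(f, n):
    """Non-adaptive: query n evenly spaced points, return the midpoint of
    the interval where the 0 -> 1 transition is observed."""
    xs = np.linspace(0.0, 1.0, n)
    ys = np.array([f(x) for x in xs])
    i = max(int(np.argmax(ys > 0)), 1)   # first index where f jumps to 1
    return 0.5 * (xs[i - 1] + xs[i])

def active_estimate(f, n):
    """Adaptive: each query bisects the interval still known to contain
    the boundary, in the spirit of twenty questions."""
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(mid):                        # boundary is to the left of mid
            hi = mid
        else:                             # boundary is to the right of mid
            lo = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    n = 20
    print("passive error:", abs(passive_estimate(step, n) - 0.6123))
    print("active  error:", abs(active_estimate(step, n) - 0.6123))
```

With the same budget of n queries, the passive error shrinks like 1/n while the adaptive error shrinks like 2^-n; the talk quantifies how much of this kind of gap survives in noisy, nonparametric settings.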

In this talk I present a quantitative analysis of active learning in a variety of scenarios; in particular, I present results characterizing the fundamental limits of active learning for set estimation in nonparametric settings. I will also present a novel active sensing procedure, Distilled Sensing, that is effective for the detection and estimation of high-dimensional sparse signals in noise. Large-sample analysis shows that the proposed procedure provably outperforms the best possible detection methods based on non-adaptive sensing, allowing for the detection and estimation of extremely weak signals.
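
The following is a simplified numerical sketch of the distilled-sensing idea: the measurement budget is split across a few stages and, after each stage, coordinates whose noisy observations fall at or below zero are discarded, so the remaining budget concentrates on the survivors. The signal model, the uniform budget split, and the final threshold are illustrative assumptions, not the exact procedure or analysis from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse-signal model (assumption): n coordinates, k of them
# carry a weak positive amplitude mu, the rest are zero.
n, k, mu = 100_000, 100, 3.0
signal = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
signal[support] = mu

def observe(indices, precision):
    """Noisy observation of the selected coordinates: more sensing energy
    per coordinate (precision) means proportionally less noise variance."""
    return signal[indices] + rng.standard_normal(indices.size) / np.sqrt(precision)

def distilled_sensing(total_budget=float(n), n_stages=5):
    """Split the budget over stages; after each stage keep only the
    coordinates whose observation is positive (the 'distillation')."""
    retained = np.arange(n)
    stage_budget = total_budget / n_stages
    for _ in range(n_stages - 1):
        y = observe(retained, precision=stage_budget / retained.size)
        retained = retained[y > 0]   # null coordinates are dropped w.p. ~1/2 per stage
    # Final stage: spend the last chunk of budget and threshold.
    y = observe(retained, precision=stage_budget / retained.size)
    return retained[y > 2.0]         # illustrative threshold

detected = distilled_sensing()
true_pos = np.intersect1d(detected, support).size
print(f"detected {detected.size} coordinates, {true_pos} of them true signal")
```

Because roughly half of the null coordinates are eliminated at every stage, the per-coordinate precision available in the final stage is several times larger than what a one-shot, non-adaptive design could afford, which is the mechanism behind the gains the talk makes precise.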

Shapes and Shock Graphs: From Segmented Shapes to Shapes Embedded in Images
Ben Kimia
Brown University

The recognition of shapes in figure-ground segmented images is challenging due to the tremendous range of variation caused by changes in viewpoint, object pose, illumination, articulation, occlusion, and, most significantly, within-category variation. The key to successful recognition is a representation whose topology captures this variation, so that small changes in shape generally cause small changes in the representation, together with explicit handling of the cases where this property is violated.

This talk will show how the shock graph representation of shape serves as a suitable intermediate representation, mediating between pixel-bound image intensities and coordinate-free object models. We present two approaches, one for bottom-up perceptual grouping and object recognition, and one for model-based object recognition and segmentation, both using a notion of object fragments induced by the shock graph.
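
As a rough illustration of the medial-axis skeleton that shock graphs build on (not the shock-graph machinery of the talk itself), the sketch below extracts a skeleton and its radius function from a binary shape using scikit-image; the toy shape and the library choice are assumptions made purely for illustration.

```python
import numpy as np
from skimage.morphology import medial_axis

# A simple binary shape (assumption: a rectangle with a notch), standing in
# for a figure-ground segmented object.
shape = np.zeros((120, 200), dtype=bool)
shape[20:100, 20:180] = True
shape[20:60, 90:110] = False            # notch that perturbs the skeleton

# The medial axis is the locus of centers of maximal inscribed disks;
# 'dist' gives the disk radius at each point. Shock graphs organize this
# skeleton into a graph of branches ordered by the behavior of the radius
# function along the axis.
skeleton, dist = medial_axis(shape, return_distance=True)
radius = dist * skeleton

print("skeleton pixels:", int(skeleton.sum()))
print("max inscribed-disk radius:", float(radius.max()))
```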