ECCV 2010 Tutorials, September 5th, 2010

Full day
  • Variational Methods for Computer Vision
    Daniel Cremers (Technische Universität München), Thomas Pock (Graz University of Technology) and Bastian Goldlücke (Technische Universität München)

    Variational methods are among the most classical and established approaches to a multitude of problems in computer vision and image processing. Over recent years they have evolved substantially, now giving rise to some of the most powerful methods for problems such as optic flow estimation, image segmentation, and 3D reconstruction. The aim of this tutorial is to discuss important aspects of variational methods for computer vision and image processing applications. In the first part, we give a gentle introduction to the basic concepts of variational methods. In the second part, we apply variational methods to a variety of classical computer vision and image processing problems such as image restoration, image segmentation, optical flow, stereo, and multiview reconstruction. In the third part, we present recent developments in the area of convex relaxation techniques and numerical optimization.
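
    As a toy illustration of the variational approach (not part of the tutorial materials), the sketch below minimizes a smoothed version of the classical ROF energy, E(u) = TV(u) + (λ/2)‖u − f‖², by plain gradient descent. The parameter values are illustrative assumptions; the tutorial's convex relaxation and primal-dual schemes are far more efficient.

```python
import numpy as np

def tv_denoise(f, lam=1.0, step=0.02, iters=500, eps=0.1):
    """Minimize a smoothed ROF energy  E(u) = sum sqrt(|grad u|^2 + eps^2)
    + (lam/2) * ||u - f||^2  by plain gradient descent (illustrative only;
    modern primal-dual solvers are much faster)."""
    u = f.copy()
    for _ in range(iters):
        # forward differences with replicated (Neumann) boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        # descent on the energy gradient: -div(grad u / |grad u|) + lam (u - f)
        u -= step * (-div + lam * (u - f))
    return u
```

    The total variation term preserves sharp edges while suppressing noise, which is exactly the behavior that makes variational methods attractive for restoration and segmentation.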

  • Computer Vision and 3D Perception for Robotics
    Radu Bogdan Rusu, Gary Bradski, Caroline Pantofaru (Willow Garage), Stefan Hinterstoisser, Stefan Holzer (Technische Universität München), Kurt Konolige (Willow Garage) and Andrea Vedaldi (Oxford University)

    This tutorial is meant as an introduction to the design and implementation of vision algorithms that are useful for robots that act around people. The tutorial concentrates on four areas: mapping and navigation, object recognition, object tracking, and people detection. We will discuss vision algorithms that have been successful on personal robots, as well as tools (such as datasets and software architectures) for future algorithm development. We will address multiple levels of computer vision, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment modeling. The software tools presented include OpenCV, PCL, VLFeat, and ROS.

  • Computational Symmetry: Past, Current, Future
    Yanxi Liu (Pennsylvania State University)

    Humans, animals and insects have an innate ability to perceive and take advantage of symmetry, which is a pervasive phenomenon presenting itself in all forms and scales in natural and man-made environments. Though our understanding of repeated patterns is generalized by the mathematical concept of symmetry and group theory, and seeking symmetry from digital data has been attempted for over four decades, few effective computational tools are available today. The perception and recognition of symmetry have yet to be fully explored in machine intelligence, in particular computer vision.

    Motivated by a resurging interest in computational symmetry in the computer vision and computer graphics communities, we organize this timely and unique course (associated with a competition) to investigate this potentially powerful intermediate-level computer vision tool. The event has three main components:

    • a multidisciplinary perspective on the importance and lasting impact of symmetry detection, presented by a worldwide group of distinguished speakers;
    • a detailed summary of relevant mathematical theory (symmetry group theory), state of the art algorithms and a diverse set of applications (successes and failures), presented by the organizer, and
    • the algorithms and the outcome of the first "symmetry detection algorithm competition", presented by the top three winners of the benchmarked symmetry detection algorithms.
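
    As a minimal illustration of what a symmetry detector scores (a toy stand-in, not one of the benchmarked competition algorithms), the sketch below measures vertical-axis mirror symmetry as the normalized correlation between an image and its left-right flip:

```python
import numpy as np

def reflection_symmetry_score(img):
    """Score vertical-axis mirror symmetry as the normalized correlation
    between the image and its left-right flip: 1.0 for a perfectly
    mirror-symmetric image, near 0 for unstructured content."""
    a = img - img.mean()
    b = a[:, ::-1]                      # reflect about the vertical axis
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 1.0
```

    Real detectors must additionally search over the axis position and orientation, and handle rotation and translation (wallpaper) symmetries, which is what makes the problem hard.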

Half day, morning
  • Diffusion Geometry in Shape Analysis
    Michael Bronstein (Technion) and Radu Horaud (INRIA Rhône-Alpes)

    Over the last decade, 3D shape analysis has become a topic of increasing interest in the computer vision community. Nevertheless, when attempting to apply current image analysis methods to 3D shapes (feature-based description, registration, recognition, indexing, etc.), one has to face fundamental differences between images and geometric objects. Shape analysis poses new challenges that are non-existent in image analysis. The purpose of the tutorial is to overview the foundations of shape analysis and to formulate state-of-the-art theoretical and computational methods for shape description based on their intrinsic geometric properties. The emerging field of diffusion geometry provides a generic framework for many methods in the analysis of geometric shapes and objects. The tutorial will present the problems of shape analysis in a new light, based on diffusion geometric constructions such as manifold embeddings using the Laplace-Beltrami and heat operators, heat kernel local descriptors, and diffusion and commute-time metrics.
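
    To make the heat-kernel construction concrete, the sketch below computes the heat kernel signature HKS(x, t) = Σ_k exp(−λ_k t) φ_k(x)² from the eigendecomposition of a graph Laplacian, used here as a simple discrete stand-in for the Laplace-Beltrami operator of a mesh (the graph and time values are illustrative assumptions):

```python
import numpy as np

def heat_kernel_signature(W, times):
    """Heat kernel signature HKS(x, t) = sum_k exp(-lambda_k * t) phi_k(x)^2,
    computed from the unnormalized graph Laplacian L = D - W of a shape
    graph with adjacency matrix W. Returns one row per vertex, one column
    per diffusion time."""
    L = np.diag(W.sum(axis=1)) - W
    lam, phi = np.linalg.eigh(L)        # eigenvalues and eigenvectors of L
    return (phi**2) @ np.exp(-np.outer(lam, times))
```

    Because the signature depends only on the Laplacian spectrum, it is intrinsic: isometric (and, on graphs, symmetric) vertices receive identical descriptors, which is what makes it useful for deformation-invariant matching.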

  • Feature Learning for Image Classification
    Kai Yu (NEC Labs) and Andrew Ng (Stanford University)

    Image classification has traditionally relied on hand-crafting features to try to capture the essence of different visual patterns. Fundamentally, a long-term goal in AI research is to build intelligent systems that can automatically learn meaningful feature representations from a massive amount of image data. The primary objective of this tutorial is to introduce a paradigm of feature learning from unlabeled images, with an emphasis on applications to supervised image classification. We describe a class of algorithms for learning powerful sparse nonlinear features, and showcase their superior performance on a number of challenging image classification benchmarks. Furthermore, we describe deep learning and a variety of deep learning algorithms, which learn rich feature hierarchies from unlabeled data and can capture complex invariances in visual patterns.
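
    As a small sketch of the sparse coding step behind such feature learning (the dictionary here is assumed given; in practice it is learned, and all parameter values are illustrative), the snippet below encodes signals by iterative shrinkage-thresholding (ISTA), minimizing ½‖X − DA‖² + λ‖A‖₁:

```python
import numpy as np

def sparse_code(X, D, lam=0.1, iters=100):
    """Encode the columns of X as sparse codes over dictionary D by ISTA,
    minimizing 0.5 * ||X - D A||_F^2 + lam * ||A||_1. A toy sketch of the
    sparse nonlinear feature encoding discussed in the tutorial."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        grad = D.T @ (D @ A - X)              # gradient of the smooth term
        A = A - step * grad
        # soft-thresholding: proximal operator of the L1 penalty
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)
    return A
```

    The resulting sparse codes, rather than raw pixels or hand-crafted descriptors, then serve as the feature representation fed to a supervised classifier.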

  • Statistical and Structural Recognition of Human Actions
    Ivan Laptev (INRIA/ENS) and Greg Mori (Simon Fraser University)

    Automatic recognition of human actions and gestures is an important topic in computer vision. Solving this problem is essential for a number of emerging industries including indexing of professional and user-generated video archives, automatic video surveillance, and human-computer interaction. Moreover, understanding the function and the meaning of many object and scene classes is intertwined with understanding human actions, which highlights the importance of action recognition in solving other computer vision problems.

    The field of human action recognition has evolved considerably over recent years. Local video representations are now used extensively in combination with statistical recognition methods. At the same time, powerful new structural methods have emerged, presenting solutions to action recognition based on recent advances in structured learning. This course will give an introduction to recent trends in statistical and structural action recognition and will illustrate ideas with examples of successful methods from the recent literature. In particular, we will cover bag-of-features action recognition and will discuss alternative local feature representations and their extensions. We will consider current issues in human action datasets and will address weakly supervised and unsupervised approaches for human actions. We will next present advances in structural modeling of human poses and cover recent structured learning methods for action recognition. While this course will mostly cover action recognition in video, we will also discuss action recognition from still images, such as in the Action Classification Taster Competition of PASCAL VOC 2010.
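
    The quantization step at the heart of bag-of-features representations can be sketched as follows (a minimal illustration, assuming local descriptors, e.g. around space-time interest points, and a visual codebook are already available):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Quantize local descriptors (rows) against a visual codebook (rows of
    codewords) and return an L1-normalized bag-of-features histogram --
    the video-level representation used in bag-of-features recognition."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    The resulting histogram discards the spatio-temporal layout of the features, which is precisely the limitation that the structural methods covered later in the course aim to address.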

Half day, afternoon
  • Distance Functions and Metric Learning
    Michael Werman, Ofir Pele (Hebrew University of Jerusalem), Brian Kulis (University of California at Berkeley)

    Distance functions are at the core of numerous computer vision tasks. This tutorial provides an in-depth introduction to existing distances that can be used in computer vision and to methods for learning new distances via metric learning. The tutorial has two parts. In the first part, we cover several existing distances, discuss their properties, and show how and when to apply them. We place special emphasis on Earth Mover's Distance variants and approximations, and on efficient algorithms for their computation. In the second part, we cover metric learning methods; these methods aim to learn an appropriate distance/similarity function for a given task. We focus on learning Mahalanobis distance functions (a.k.a. quadratic forms), and also cover non-linear and non-Mahalanobis metric learning methods. Throughout this part, we will focus on a variety of existing vision applications of metric learning. For this tutorial, we only assume that attendees know some basic mathematics and computer science.
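
    The Mahalanobis form mentioned above is simply d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) for a positive semidefinite matrix M; metric learning methods fit M (equivalently a linear map L with M = LᵀL) from labeled pairs or triplets. A minimal sketch:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) for a
    positive semidefinite matrix M. With M = I this reduces to the
    Euclidean distance; metric learning chooses M to fit the task."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))
```

    Learning a diagonal M, for example, amounts to reweighting feature dimensions; a full M additionally rotates the space, which is why the quadratic-form parameterization is both expressive and easy to optimize.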

  • Nonrigid Structure from Motion
    Yaser Sheikh (Carnegie Mellon University) and Sohaib Khan (Lahore University of Management Sciences)

    As most cameras available today are monocular and most scenes of interest deform over time, the case where the stereoscopic constraint does not apply, i.e. where the 3D points move between the capture of two images, has wide application. In this tutorial, we will review the existing corpus of work in nonrigid structure from motion, including shape and trajectory representations, estimation paradigms, the fundamental ambiguity in NRSFM solutions, and the additional constraints proposed by different researchers to obtain numerically stable solutions. The tutorial will give a complete overview of the area and identify the key challenges and open problems that remain to be solved.
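
    The low-rank structure underlying NRSFM factorization can be verified numerically. Assuming the classical shape-basis model (each frame's shape is a linear combination of K basis shapes, viewed by an orthographic camera; all sizes below are illustrative), the centered 2F×P measurement matrix has rank at most 3K:

```python
import numpy as np

rng = np.random.default_rng(1)
K, P, F = 2, 20, 10                        # basis shapes, points, frames
B = rng.standard_normal((K, 3, P))         # shape basis (K shapes of 3 x P)

rows = []
for f in range(F):
    c = rng.standard_normal(K)             # per-frame shape coefficients
    S = np.tensordot(c, B, axes=1)         # deforming 3D shape, 3 x P
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    R = Q[:2]                              # orthographic camera (2 rows of a rotation)
    rows.append(R @ S)                     # 2 x P image projections
W = np.vstack(rows)                        # 2F x P measurement matrix

rank = np.linalg.matrix_rank(W, tol=1e-9)  # generically equals 3 * K
```

    This rank-3K factorization generalizes the rank-3 constraint of rigid structure from motion, but the factorization alone is ambiguous, which is exactly the fundamental NRSFM ambiguity the tutorial discusses.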




