We plan to maintain performance tables (similar to those in the overview slides). A link to the tables will be posted on this website later.
SDHA 2010 will be held on Sunday, August 22nd, at 9 am.
SDHA 2010 was held successfully at ICPR 2010 on August 22nd. A summary of the contest results is posted below. The winner of the aerial-view challenge is the team BU Action Covariance Manifolds (Kai Guo, Prakash Ishwar, and Janusz Konrad). There is no winner for the interaction challenge or the wide-area challenge.
Overview: An Overview of Contest on Semantic Description of Human Activities (SDHA) 2010 (paper, slides)
- Finalist 1: HMM Based Action Recognition with Projection Histogram Features (paper, slides)
- Finalist 2: Action Recognition in Video by Sparse Representation on Covariance Manifolds of Silhouette Tunnels (paper, slides)
- Finalist 3: Variations of a Hough-Voting Action Recognition System (paper, slides)
The Contest on Semantic Description of Human Activities is a research competition on recognizing human activities in realistic scenarios. Three challenges have been designed to encourage the development of activity recognition methodologies applicable to real-world environments (e.g. surveillance systems). In each challenge, a set of videos is provided to the contestants for training and testing their systems. The goal is to label all ongoing activities in a video.
SDHA 2010 was held in conjunction with the 20th International Conference on Pattern Recognition (ICPR 2010) in Istanbul, Turkey.
The contest is composed of three activity recognition challenges: the High-level Human Interaction Recognition Challenge, the Aerial View Activity Classification Challenge, and the Wide-Area Activity Search and Recognition Challenge. The general idea behind the three challenges is to test methodologies on realistic surveillance-type videos containing multiple actors and pedestrians. The objective of the first challenge is to recognize high-level interactions between two humans, such as a hand-shake or a push. The goal of the second challenge is to recognize relatively simple one-person actions (e.g. bending and digging) captured by a low-resolution, far-away camera. The third challenge is to monitor human activities with multiple cameras observing a wide area. The challenges encourage researchers to test their state-of-the-art recognition systems on three datasets with different characteristics, and motivate them to develop methodologies designed for complex scenarios in realistic environments.
A separate surveillance-type video dataset is provided for each challenge: UT-Interaction, UT-Tower, and UCR-Videoweb, respectively. The full datasets are now available on each challenge's website.
Interaction Challenge Sample
Aerial View Challenge Sample
Wide-Area Challenge Sample