

Research

Research Highlights

In a research career spanning over 40 years, Professor Aggarwal has made seminal contributions in diverse research areas, including digital signal processing, image processing, pattern recognition, and computer vision. He has served as Director of the Computer and Vision Research Center at the University of Texas at Austin since 1985. He has graduated 38 Ph.D. and 53 Master's students. He has edited or co-edited and authored or co-authored seven books, and has published over 175 papers in refereed archival journals and more than 200 papers in refereed conference proceedings.

His current research focuses on understanding human motion and interactions using computer vision, and on content-based image and video retrieval. One goal of his present research is to build a bridge between human motion understanding and content-based video retrieval and summarization for the automatic analysis and understanding of video.

In addition, he has contributed to the development of products, including software for seismic data processing and for modeling real objects by estimating structure from multiple views, and to the evolution of new research areas: dynamic scene analysis and multisensor fusion.


Digital Signal Processing

Prof. Aggarwal's contributions to digital signal processing and system theory were recognized by his election as IEEE Fellow in 1976. His work on linear shift-variant digital filters documented the relationship between the classes of systems described by linear time-variant difference equations and rational generalized transfer functions, and led to a better understanding of the characteristics of time-varying filters, as well as their synthesis and implementation as digital filters.

Prof. Aggarwal applied digital signal processing expertise to the solution of important problems in seismic data processing, including the processing of time-variant signals (signals that change in frequency as a function of time) and the deconvolution of seismic data, namely, the analysis of seismic data to isolate favorable geologic structures that may contain oil or gas. Many of these algorithms are still in use in the oil industry.


Pattern Recognition

In ground-breaking work on object recognition, Prof. Aggarwal developed an algorithm to determine the edges of curved or planar 3D objects, leading to the identification of object boundaries (1), work that received the Pattern Recognition Society's 1975 Best Paper Award. The segmentation and analysis of the scene based on the curvature of object boundaries enabled recognition of objects from partial views. This algorithm has since been employed in the recognition of industrial parts.


Computer Vision

Image Sequence Analysis for Structure and Motion

Professor Aggarwal introduced the seminal concept of analyzing image sequences to obtain information on the tracking and structure of moving objects. He was among the first to use motion in computer vision to recognize and track objects (2). He later used image sequence analysis to compute object descriptions and motion parameters (3). The technique of computing structure from multiple views, first proposed in this paper using three views of six points, was later implemented in Eos Systems, Inc.'s PhotoModeler software for constructing 3D models from photographs. Prof. Aggarwal was also the first to compute the structure of nonrigid objects from image sequences (4).
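
To illustrate in concrete terms what recovering structure from multiple views involves, the following is a minimal sketch of textbook linear (DLT) triangulation of a single point from two calibrated views, written in Python with synthetic cameras and points; it is a generic illustration of the idea, not the three-view, six-point formulation of (3) nor the PhotoModeler implementation.

    # A minimal, generic sketch of recovering a 3D point from two views with
    # known camera matrices (linear DLT triangulation). The cameras and the
    # point below are synthetic and for illustration only.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Estimate one 3D point from its projections x1, x2 in two views."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)          # homogeneous least squares
        X = vt[-1]
        return X[:3] / X[3]                  # back to Euclidean coordinates

    # Two cameras: one at the origin, one translated along the x-axis.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 4.0, 1.0])  # ground-truth homogeneous point
    x1 = P1 @ X_true; x1 = x1[:2] / x1[2]     # project into each view
    x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))        # approximately [0.3, -0.2, 4.0]

Full structure-from-motion methods, such as those surveyed in (6-9), additionally estimate the unknown camera or object motion from the image correspondences themselves, rather than assuming known camera matrices as this sketch does.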

From these pioneering research contributions arose a new research area known as dynamic scene analysis. He received the 1996 IEEE Computer Society Technical Achievement Award for pioneering contributions toward establishing the fundamentals of the extraction of structure and computation of motion from image sequences. In addition, the work presented at the 1992 IEEE International Conference on Robotics and Automation (5), which applied his experience in the analysis of video sequences to the navigation of an indoor mobile robot, earned his student, Xavier Lebegue, the Philips Award for the Best Paper. Professor Aggarwal has also co-authored several review papers on motion research (6-9).




Human Motion and Interaction

More recently, Professor Aggarwal and his students have focused on tracking humans and on recognizing interactions between them.

With his student Koichi Sato, he has developed a transform, the Temporal Spatio-Velocity Transform, that enables the tracking of persons and the recognition of 'blob'-level interactions between them. This work was presented at the 2001 IEEE Workshop on Multi-Object Tracking (10) and at the 2nd IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS'2001) (11). A journal paper based on this work is due to appear in Computer Vision and Image Understanding.

With his student Sangho Park, he has developed a methodology for tracking the individual body parts of interacting persons in order to recognize their interactions at a more detailed level. This work was presented at the 2002 IEEE Workshop on Motion and Video Computing (12) and the 2003 ACM International Workshop on Video Surveillance (13).

Professor Aggarwal continues to make contributions to the analysis and understanding of human motion and human-human interactions using computer vision. His current research addresses the important problems of monitoring and evaluating human activities, and modeling and recognizing human-human interactions. He was the invited speaker at the IEEE Computer Society 2003 Workshop on Computer Vision and Pattern Recognition for Human-Computer Interface.



Content-Based Image Retrieval

Modern data systems—in areas ranging from surveillance to medical imaging—accrue and store massive numbers of images for future use. The accumulated images, however significant, are of little value if they cannot be quickly retrieved. Efficient query systems are needed to quickly locate images with particular properties within large collections. Content-based image retrieval systems analyze image features to identify image content. Color and texture are two of the features that have traditionally been used to approach this challenging problem.
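
As a concrete illustration of feature-based retrieval, the following is a minimal, hypothetical sketch in Python that ranks a small set of synthetic images against a query by comparing normalized color histograms; the image data, bin counts, and intersection measure are illustrative assumptions, and the sketch uses only color, omitting the texture and structure cues discussed below.

    # A minimal, hypothetical sketch of color-histogram-based image retrieval.
    # Images are synthetic NumPy arrays; a real system would load stored images
    # and combine color with texture and structure features.
    import numpy as np

    def color_histogram(image, bins=8):
        """Normalized joint RGB histogram, flattened to a feature vector."""
        hist, _ = np.histogramdd(
            image.reshape(-1, 3),
            bins=(bins, bins, bins),
            range=((0, 256), (0, 256), (0, 256)),
        )
        hist = hist.ravel()
        return hist / hist.sum()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1 means identical normalized histograms."""
        return np.minimum(h1, h2).sum()

    rng = np.random.default_rng(0)
    database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(5)]
    query = np.clip(database[2] + rng.integers(-5, 6, size=(64, 64, 3)), 0, 255)

    query_hist = color_histogram(query)
    scores = [histogram_intersection(query_hist, color_histogram(img))
              for img in database]
    best_first = np.argsort(scores)[::-1]      # indices of best matches first
    print("ranking:", best_first)              # image 2 should rank first

The CIRES system described next goes well beyond such a sketch, adding texture, structure derived from perceptual grouping, and user relevance feedback.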

Professor Aggarwal and his student, Qasim Iqbal, have found that structure derived by perceptual grouping is a valuable tool in the quest for more efficient content-based image retrieval. Using structure derived via perceptual grouping for image classification and retrieval does not require image segmentation. Retrieval experiments on images containing both natural and manmade objects demonstrate that structure, color, and texture together form an excellent feature set for image retrieval. The CIRES system, a robust content-based image retrieval system (http://amazon.ece.utexas.edu/~qasim/research.htm), incorporates relevance feedback from the user to further refine the search. This research has been published in two papers in Pattern Recognition (14-15), the second of which (15) received the journal editors' honorable mention for Best Paper of 2003.




An Application to Biomedical Engineering

Another significant accomplishment was the development, with Prof. Ken Diller, of a computer vision system to analyze microscope images of human tissue cells during freezing and thawing. This system was adopted in private and public laboratories around the world as a tool to extend the shelf life of human tissue and organs. It was cited as one of the Top 100 Innovations of 1985 by Science Digest magazine.




Multisensor Fusion

As one of the first computer vision researchers to recognize the usefulness of obtaining data from multiple sensors to provide additional information for object recognition, Professor Aggarwal coined the term multisensor fusion, now a fast-growing area of computer vision research. Prof. Aggarwal's groundbreaking work in multisensor fusion, driven by the synergistic integration of information from multiple sensors, includes the integration of visual and thermal images to classify outdoor scenes (presented at the First International Conference on Computer Vision (16) and subsequently in IEEE Trans. PAMI (17)); laser radar and thermal images for image interpretation (18-19); and structured light and visual sensing for computing 3D structure (20). These ideas are also detailed in the Encyclopedia of Artificial Intelligence (21). He organized and directed the 1989 NATO Advanced Research Workshop on Multisensor Fusion (Grenoble, France) and later edited a book arising from this workshop, Multisensor Fusion for Computer Vision (22). His contribution on laser radar and thermal images (18) received the IEEE Computer Society Outstanding Paper Award at the 1991 Conference on Artificial Intelligence Applications.


Community Activities

Throughout his career, Prof. Aggarwal has organized many workshops and conferences, including the first workshop on Computer Analysis of Time-Varying Imagery (Philadelphia, 1979). More recently, he has organized a series of IEEE Computer Society workshops on motion: Motion of Nonrigid and Articulated Objects (Austin, 1994), Nonrigid and Articulated Motion (Puerto Rico, 1997), Human Motion (Austin, 2000), and Motion and Video Computing (Orlando, 2002).

Through research, supervision of graduate research, and organization of conferences and workshops, Professor Aggarwal has started and nurtured the area of motion in computer vision. Today's computer vision researchers are extending this research and developing products based on the image sequence analysis methods initiated by Professor Aggarwal in 1975.


Selected Papers

1     “Finding the Edges of the Surfaces of Three-Dimensional Curved Objects by Computer,” Pattern Recognition, vol. 7, pp. 25-52, 1975, with J.W. McKee.

2     “Computer Analysis of Moving Polygonal Images,” IEEE Trans. on Computers, vol. C-24, no. 10, pp. 966-976, 1975, with R. O. Duda.

3     “Determining the Movement of Objects from a Sequence of Images,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 2, no. 6, pp. 554-562, November 1980, with J.W. Roach.

4     “Structure from Motion of Rigid and Jointed Objects,” Artificial Intelligence, vol. 19, pp. 107-130, 1982, with J.A. Webb.

5     “Extraction and Interpretation of Semantically Significant Line Segments for a Mobile Robot,” Proc. IEEE International Conference on Robotics and Automation, Nice, France, 1992, with Xavier Lebegue.

6     “Dynamic Scene Analysis: A Survey,” Computer Graphics and Image Processing, vol. 7, no. 3, June 1978, with W. N. Martin.

7     “On the Computation of Motion from Sequences of Images – A Review,” Proceedings of the IEEE, vol. 76, no. 8, August 1988, with N. Nandhakumar.

8     “Nonrigid Motion Analysis: Articulated and Elastic Motion,” Computer Vision and Image Understanding, vol. 70, no. 3, May 1998, with Q. Cai, W. Liao and B. Sabata.

9     “Human Motion Analysis: A Review,” Computer Vision and Image Understanding, vol. 73, no. 3, March 1999, with Q. Cai.

10   “Recognizing and tracking two-person interactions in outdoor image sequences,” 2001 IEEE Workshop on Multi-Object Tracking, Vancouver, Canada, July 2001, with Koichi Sato.

11   “Tracking Persons and Vehicles in Outdoor Image Sequences using Temporal Spatio-Velocity Transform,” IEEE Computer Society International Workshop on Performance Evaluation of Tracking and Surveillance (PETS'2001), Kauai, Hawaii, December 9, 2001, with Koichi Sato.

12   “Segmentation and Tracking of Interacting Human Body Parts Under Occlusion and Shadowing,” Proc. IEEE Workshop on Motion and Video Computing, Orlando, Florida, Dec. 5-6, 2002, with Sangho Park.

13   “Recognition of Two-person Interactions Using a Hierarchical Bayesian Network,” Proc. First ACM SIGMM Workshop on Video Surveillance, Berkeley, CA, November 2-8, 2003, with Sangho Park.

14   “Retrieval by Classification of Images Containing Large Manmade Objects Using Perceptual Grouping,” Pattern Recognition, vol. 35, no. 7, pp. 1463-1479, July 2002, with Qasim Iqbal.

15   “Image Retrieval via Isotropic and Anisotropic Mappings,” Pattern Recognition, vol. 35, no. 12, pp. 2673-2686, December 2002, with Qasim Iqbal.

16   “Multisensor Integration – Experiments in Integrating Thermal and Visual Sensors,” Proc. IEEE-IAPR First International Conference on Computer Vision, London, England, June 1987, pp. 83-92.

17   “Integrated Analysis of Thermal and Visual Images for Scene Interpretation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 469-481, 1988, with N. Nandhakumar.

18   “Multi-Sensor Image Interpretation Using Laser Radar and Thermal Images,” Proc. IEEE Computer Society 1991 Conference on Artificial Intelligence Applications, with C.C. Chu.

19   “The Integration of Image Segmentation Maps Using Region and Edge Information,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 12, pp. 1241-1252, December 1993, with Chen-Chau Chu.

20   “Integration of Active and Passive Sensing Techniques for Representing Three-Dimensional Objects,” IEEE Trans. on Robotics and Automation, vol. 5, no. 4, pp. 460-471, 1989, with Y. F. Wang.

21   Encyclopedia of Artificial Intelligence, 2nd edition, Vol. 2, pp. 1511-1526, John Wiley & Sons, 1992, ISBN: 047150307X.

22   Multisensor Fusion for Computer Vision , J. K. Aggarwal, Editor. Springer Verlag, Berlin, 1993, 456 pp.
