Title: Intelligent Video Surveillance
Stan Z. Li
Institute of Automation, Chinese Academy of Sciences, PR China
Abstract: Intelligent Video Surveillance (IVS) incorporates image/video analysis and computer vision techniques into video surveillance and is becoming an important part of anti-crime and security surveillance systems. In this talk, I will describe the key technologies inside IVS and its important applications. The talk includes the following parts: basic IVS technologies, including object detection, tracking, and classification; high-level vision analysis for behavior analysis and anomaly detection; the latest progress on multi-camera object tracking; and IVS applications. Technologies and applications will be illustrated by numerous real video demos.
Stan Z. Li received his B.Eng from Hunan University, China, his M.Eng from National University of Defense Technology, China, and his PhD degree from Surrey University, UK. He is currently a professor and the director of the Center for Biometrics and Security Research (CBSR), Institute of Automation, Chinese Academy of Sciences (CASIA). His research interests include pattern recognition and machine learning, image and vision processing, face recognition, biometrics, and intelligent video surveillance. He has published over 200 papers in international journals and conferences, and has authored and edited 8 books. He is currently an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence and served as editor-in-chief of the Encyclopedia of Biometrics (Springer Reference Work, 2009). He was elevated to IEEE Fellow for his contributions to the fields of face recognition, pattern recognition, and computer vision.
Title: Locally Adaptive Regression Kernels (LARK) for Visual Signal Processing and Recognition
Peyman Milanfar
Department of Electrical Engineering, University of California, Santa Cruz, USA
Abstract: I will present a non-parametric framework based on Locally Adaptive Regression Kernels (LARKs), which are visual descriptors that adapt to the local characteristics of the given data, capturing both the spatial density of the data samples ("the geometry") and the actual values of those samples ("the radiometry"). These descriptors are exceedingly robust in expressing the underlying structure of multidimensional signals, even in the presence of significant noise, missing data, and other disturbances. As the framework does not rely upon strong assumptions about noise or signal models, it is applicable to a wide variety of problems. Of particular interest in two and three dimensions are state-of-the-art denoising and upscaling of images and video, and novel applications in computer vision such as "visual search".
Peyman Milanfar received a B.S. degree in Electrical Engineering/Mathematics from the University of California, Berkeley, and the Ph.D. degree in Electrical Engineering from the Massachusetts Institute of Technology. Prior to coming to UCSC, he was at SRI (formerly Stanford Research Institute) and was a Consulting Professor of computer science at Stanford. He is a recipient of the Career Award from the US National Science Foundation. In 2005 he founded MotionDSP Inc. to bring state-of-the-art video enhancement technology to the consumer and forensic markets. He has served as an associate editor for IEEE Signal Processing Letters, Transactions on Image Processing, and Image and Vision Computing. He is a member of the Signal Processing Society's Image, Video, and Multidimensional Signal Processing Technical Committee. His interests are in statistical signal, image, and video processing, and computational vision. He is a Fellow of the IEEE.
Title: 3DTV and Realistic Broadcasting Services
Yo-Sung Ho
Department of Information and Communications, Gwangju Institute of Science and Technology, Korea
Abstract: In recent years, various multimedia services have become available, and the demand for three-dimensional television (3DTV) is growing rapidly. Since 3DTV is considered the next-generation broadcasting service that can deliver a realistic and immersive experience by supporting user-friendly interactions, a number of advanced three-dimensional video technologies have been studied. In this talk, we will cover current research activities on 3DTV. After defining the basic requirements for realistic 3D broadcasting services, we will explain various multi-modal immersive media processing techniques.
Dr. Yo-Sung Ho received the B.S. and M.S. degrees in electronic engineering from Seoul National University, Seoul, Korea, in 1981 and 1983, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara, in 1990. He joined ETRI (Electronics and Telecommunications Research Institute), Daejeon, Korea, in 1983. From 1990 to 1993, he was with Philips Laboratories, Briarcliff Manor, New York, where he was involved in the development of the Advanced Digital High-Definition Television (AD-HDTV) system. In 1993, he rejoined the technical staff of ETRI and was involved in the development of the Korean DBS Digital Television and High-Definition Television systems. Since September 1995, he has been with the Gwangju Institute of Science and Technology (GIST), where he is currently a Professor in the Department of Information and Communications. Since August 2003, he has been Director of the Realistic Broadcasting Research Center (RBRC) at GIST in Korea. He is presently serving as an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT). His research interests include digital image and video coding, three-dimensional image modeling and representation, advanced source coding techniques, and three-dimensional television (3DTV).