Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB

Video Content Analysis

Group Description

The interest in automated video analysis solutions has increased constantly over the past years. Video analysis methods applied under mainly controlled conditions, such as industrial environments, are an established technology today. Despite great progress in this field, the application of video analysis techniques under uncontrolled conditions remains a largely unsolved problem. The major challenges stem mainly from the complexity and variability of unstructured outdoor environments; additional challenges arise, for example, from objects showing strong shape variation. Taking these challenges into account, we mainly focus on the following research questions:

  • Robustness. The focus lies on the analysis of video data originating from multi-spectral sensor platforms that can move freely through unstructured environments. This includes the detection and tracking of objects, as well as the development of efficient preprocessing mechanisms capable of supporting automated scene understanding.
      
  • Adaptivity. Video analysis systems applied in unknown environments must be able to adapt themselves automatically, to a certain degree, to the specific conditions they encounter. Adaptation can be done “offline” by applying statistical approaches, as well as through methods that automatically learn from the environment.
       
  • Scalability. Current video analytics is mainly restricted to a limited number of object categories or events that can be detected robustly. In general, there is strong interest in methods with more advanced discriminative capabilities. To this end, the question of how to scale up video analysis systems is of significant importance.

Projects

  • Appearance Based Person-Detection and Re-Identification
    Person detection, especially pedestrian detection, is an established challenge in computer vision with a broad range of applications in automated video analysis, robotics, and user assistance systems. The problem of person detection is addressed by combining boosting methods trained offline, which solve the initial detection task, with online boosting, which learns person-specific models on the fly. The combination of these two components makes it possible to detect persons in unconstrained environments and to keep track of them under significant pose and scale changes. Furthermore, short-term person re-identification and transmodality are supported. Transmodality means that the classifiers can be applied in different spectral ranges without the need to generate large-scale multi-spectral data sets. A minimal sketch of the offline/online split is given after the project list.

  • Online Visualization of Person Tracks from Single View Points
    Following and assessing dynamic scenes from single or multiple video streams is a challenging task for human observers. The aim of the project is to support human observers by automatically generating static views of individual persons, which summarize the behavior and presence of individuals. Summarization currently includes a best-shot analysis of a single person and a visualization of a person-specific manifold generated from a single view point. A toy best-shot criterion is sketched after the project list.

  • Automated Indoor Activity Monitoring
    Automated monitoring of indoor activities in mid- or large-scale video networks poses significant challenges with respect to legal aspects, scalability, and interoperability with other cyber-physical security techniques. The aim of the project is to develop video analysis components that support the automated generation of log messages summarizing everyday routines in indoor environments, as observed by video systems, on a semantically abstract level. An illustrative log-message format follows the project list.

  • Auto-Calibration of Master-Slave Systems
    Observing large areas from a single view point poses the problem of finding a trade-off between the field of view of a camera system and the minimum resolution required by many video analysis methods. Multi-focal systems offer a potential solution to this problem. A multi-focal system can be based on a master-slave design consisting of multiple cameras that need to be referenced to each other. The aim of this project is to provide a simple auto-calibration method that allows master-slave components to be set up easily from affordable consumer cameras and that imposes only minimal quality requirements on the mounting. The proposed setup may also serve as a basis for semi-stationary sensor networks, where individual nodes can easily be mounted or dismounted. A simplified correspondence-based calibration sketch is given below.
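
The following Python sketch illustrates the offline/online split described in the person detection project. It is only a minimal stand-in, not the group's implementation: OpenCV's stock HOG/SVM pedestrian detector replaces the offline-trained boosting stage, a simple colour-histogram template replaces the online-boosted, person-specific model, and the input file name is a placeholder.

    import cv2
    import numpy as np

    class OnlinePersonModel:
        """Toy stand-in for an online-learned, person-specific appearance model:
        a running average of colour histograms taken from confirmed detections."""

        def __init__(self):
            self.template = None

        def _histogram(self, patch):
            hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256]).flatten()
            return hist / (hist.sum() + 1e-6)

        def update(self, patch):
            h = self._histogram(patch)
            self.template = h if self.template is None else 0.9 * self.template + 0.1 * h

        def score(self, patch):
            if self.template is None:
                return 0.0
            return cv2.compareHist(self.template.astype(np.float32),
                                   self._histogram(patch).astype(np.float32),
                                   cv2.HISTCMP_CORREL)

    # Offline part: a generic, pre-trained pedestrian detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    model = OnlinePersonModel()
    frame = cv2.imread("frame.png")                    # placeholder input frame
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        model.update(frame[y:y + h, x:x + w])          # online part: refine the person model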
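
For the best-shot analysis mentioned in the visualization project, a deliberately trivial criterion is image sharpness, measured here as the variance of the Laplacian; the project's actual best-shot analysis is more elaborate, so this serves as illustration only.

    import cv2

    def sharpness(image):
        """Variance of the Laplacian as a simple focus/sharpness measure."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def best_shot(person_crops):
        """Pick the sharpest crop of a tracked person as its 'best shot'."""
        return max(person_crops, key=sharpness)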
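
For the indoor activity monitoring project, the sketch below shows one conceivable format for semantically abstract log messages; the event names, zone labels, and JSON layout are invented for this example and are not the project's actual message schema.

    import json
    from datetime import datetime

    def make_log_entry(camera_id, event, zone, timestamp=None):
        """Serialise a single abstract observation as one JSON log line."""
        return json.dumps({
            "time": (timestamp or datetime.now()).isoformat(timespec="seconds"),
            "camera": camera_id,
            "event": event,     # e.g. "person_entered" instead of raw pixel data
            "zone": zone,       # semantic zone label, e.g. "corridor_west"
        })

    print(make_log_entry("cam_03", "person_entered", "corridor_west"))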
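
For the master-slave project, one simple way to express a calibration result is a mapping from master-image pixels to the slave camera's pan/tilt angles, fitted from a few correspondences. The sketch below uses a plane-to-plane homography as that mapping; this is only an approximation and not necessarily the method developed in the project, and all numeric values are invented.

    import cv2
    import numpy as np

    # (u, v) pixels in the master view and the (pan, tilt) angles, in degrees,
    # at which the slave camera centred the same scene point (made-up values).
    master_px = np.array([[120, 80], [600, 90], [610, 420], [130, 430], [360, 250]],
                         dtype=np.float32)
    slave_pt = np.array([[-18.0, 6.0], [14.5, 5.5], [15.0, -9.0], [-17.5, -8.5], [-1.0, -1.5]],
                        dtype=np.float32)

    # Least-squares homography fit (method=0) over all correspondences.
    H, _ = cv2.findHomography(master_px, slave_pt, method=0)

    def pixel_to_pan_tilt(u, v):
        """Map a master pixel to approximate slave pan/tilt angles."""
        p = H @ np.array([u, v, 1.0])
        return p[0] / p[2], p[1] / p[2]

    print(pixel_to_pan_tilt(360, 250))   # approximate pan/tilt for the master-image centre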

Team

Dr. Wolfgang Hübner
Stefan Becker
Ann-Kristin Grosselfinger
Ronny Hug
Hilke Kieritz
David Münch


Selected Publications

  • Becker S., Kieritz H., Hübner W., Arens M.: „On the Benefit of State Separation for Tracking in Image Space with an Interacting Multiple Model Filter“, Proc. 7th Int. Conf. on Image and Signal Processing (ICISP), 2016 [Best Paper Award]
  • Münch D., Grosselfinger A., Hübner W., Arens M.: „Automatic unconstrained online configuration of a master-slave camera system“, In Proc. of the Int. Conf. on Computer Vision Systems (ICVS), LNCS, Springer, 2013 [Best Paper Award]
  • Kieritz H., Becker S., Hübner W., Arens M.: „Online multi-person tracking using integral channel features“, In Proc. of the 13th IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS), 2016
  • Hilsenbeck B., Münch D., Kieritz H., Hübner W., Arens M.: „Hierarchical Hough forests for view-independent action recognition“, In Proc. 23rd Int. Conf. on Pattern Recognition (ICPR), 2016
  • IJsselmuiden J., Münch D., Grosselfinger A., Arens M., Stiefelhagen R.: „Automatic understanding of group behavior using fuzzy temporal logic“, In: Journal of Ambient Intelligence and Smart Environments, pp. 623-649, 2014

Recent Publications [ More ]

  • Becker S., Kieritz H., Hübner W., Arens M.: „On the Benefit of State Separation for Tracking in Image Space with an Interacting Multiple Model Filter“, Proc. 7th Int. Conf. on Image and Signal Processing (ICISP), 2016
  • Kieritz H., Becker S., Hübner W., Arens M.: „Online multi-person tracking using integral channel features“, In Proc. of the 13th IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS), 2016
  • Hilsenbeck B., Münch D., Kieritz H., Hübner W., Arens M.: „Hierarchical Hough forests for view-independent action recognition“, In Proc. 23rd Int. Conf. on Pattern Recognition (ICPR), 2016
  • Hilsenbeck B., Münch D., Grosselfinger A., Hübner W., Arens M.: „Action recognition in the longwave infrared and the visible spectrum using Hough forests“, In Proc. IEEE Int. Symposium on Multimedia (ISM), 2016
  • Becker S., Scherer-Negenborn N., Thakkar P., Hübner W., Arens M.: „The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams“, Proc. of SPIE, 2016
  • Becker S., Krah S., Hübner W., Arens M.: „MAD for visual tracker fusion“, Proc. of SPIE, 2016
  • Münch D., Hilsenbeck B., Kieritz H., Becker S., Grosselfinger A., Hübner W., Arens M.: „Detection of infrastructure manipulation with knowledge-based video surveillance“, Proc. of SPIE, 2016
  • Münch D., Becker S., Kieritz H., Hübner W., Arens M.: „Video-based log generation for security systems in indoor surveillance scenarios“, Security Research Conference „Future Security“, 2016
  • Becker S., Hübner W., Arens M.: „Annotation driven MAP search space estimation for sliding-window based person detection“, Proc. of 14th IAPR Int. Conf. on Machine Vision Applications (MVA), 2015
  • Becker S., Münch D., Kieritz H., Hübner W., Arens M.: „Detecting abandoned objects using interacting multiple models“, Proc. of SPIE, 2015
  • Becker S., Scherer-Negenborn N., Thakkar P., Hübner W., Arens M.: „An evaluation of background subtraction algorithms on fused infrared-visible video streams“, Proc. of SPIE, 2015
  • Münch D., Becker S., Kieritz H., Hübner W., Arens M.: „Knowledge-based situational analysis of unusual events in public places“, Security Research Conference „Future Security“, 2015

Recent Theses

  • P. Abel, „Robust Object Tracking with Interleaved Detection and Segmentation“, Master's thesis, Institut für Theoretische Elektrotechnik und Systemoptimierung (ITE), KIT, 2017
  • F. Finkenbein, „Poseninvariante Erkennung von Fußgängern mittels nicht linearer Merkmalsintegration“, Master's thesis, Karlsruher Institut für Technologie (KIT), 2015 (in German)
  • J. Brauer, „Human Pose Estimation with Implicit Shape Models“, Doctoral thesis, Karlsruher Institut für Technologie (KIT), 2014
  • S. Krah, „Erscheinungsbasiertes Tracking unter Ausnutzung von 3D Modellwissen“, Diploma thesis, Institut für Anthropomatik (IFA), KIT, 2014 (in German)
  • H. Caesar, „Image-based Activity Recognition using UAVs“, Diploma thesis, Institut für Anthropomatik (IFA), KIT, 2014
  • M. Cassola, „Using Online Multiple Instance Learning for Person Re-Identification in Video Data“, Diploma thesis, Institut für Anthropomatik (IFA), KIT, 2014

Datasets

  • The IOSB Multispectral Action Dataset contains video sequences showing violent and non-violent behavior, recorded in the visible and the infrared spectrum. The dataset is freely available on request; a minimal reading example follows below. [ Download ]
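
Once obtained, the sequences can be processed with standard tools; the short Python sketch below reads a paired visible/infrared sequence with OpenCV. The file names and the assumption that the two streams are frame-synchronized are illustrative only; the actual file layout is documented with the dataset itself.

    import cv2

    visible = cv2.VideoCapture("sequence_01_visible.avi")    # hypothetical file names
    infrared = cv2.VideoCapture("sequence_01_infrared.avi")

    while True:
        ok_v, frame_v = visible.read()
        ok_i, frame_i = infrared.read()
        if not (ok_v and ok_i):
            break
        # Process the synchronised frame pair here, e.g. display both spectra.
        cv2.imshow("visible", frame_v)
        cv2.imshow("infrared", frame_i)
        if cv2.waitKey(30) & 0xFF == 27:                      # Esc stops playback
            break

    visible.release()
    infrared.release()
    cv2.destroyAllWindows()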