Detecting semantic concepts from video using temporal gradients and audio classification

Mika Rautiainen, Tapio Seppänen, Jani Penttilä, Johannes Peltola

    Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

    8 Citations (Scopus)

    Abstract

    In this paper we describe new methods for detecting semantic concepts in digital video based on audible and visual content. The Temporal Gradient Correlogram captures temporal correlations of gradient edge directions across frames sampled from a shot. Power-related physical features are extracted from short audio samples in video shots. Video shots containing people, cityscape, landscape, speech, or instrumental sound are detected using trained self-organizing maps and kNN classification of the audio samples. Test runs and evaluations in the TREC 2002 Video Track show consistent performance for the Temporal Gradient Correlogram and state-of-the-art precision in audio-based instrumental sound detection.
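    As a rough conceptual illustration of the temporal-correlation idea in the abstract (not the authors' implementation; all function names, parameters, and bin counts below are hypothetical), the sketch quantizes per-pixel gradient edge directions in two frames sampled from a shot and accumulates a correlogram-style co-occurrence matrix of direction bins across time:

    ```python
    import numpy as np

    def gradient_directions(frame, n_bins=8):
        """Quantize per-pixel gradient directions into n_bins (illustrative helper)."""
        gy, gx = np.gradient(frame.astype(float))
        angles = np.arctan2(gy, gx)  # in (-pi, pi]
        bins = np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int)
        return np.clip(bins, 0, n_bins - 1)  # guard the angle == pi edge case

    def temporal_direction_cooccurrence(frame_a, frame_b, n_bins=8):
        """Joint histogram of quantized edge directions at the same pixel in two
        frames sampled from a shot: a correlogram-style temporal feature."""
        da = gradient_directions(frame_a, n_bins).ravel()
        db = gradient_directions(frame_b, n_bins).ravel()
        hist = np.zeros((n_bins, n_bins))
        np.add.at(hist, (da, db), 1)     # count direction co-occurrences
        return hist / hist.sum()         # normalize to a joint distribution

    # Usage with two toy 32x32 "frames"
    rng = np.random.default_rng(0)
    f1 = rng.random((32, 32))
    f2 = rng.random((32, 32))
    feat = temporal_direction_cooccurrence(f1, f2)
    print(feat.shape)
    ```

    A static shot concentrates mass on the diagonal of this matrix (directions persist over time), while motion or cuts spread it off-diagonal, which is the kind of temporal structure such a feature can expose.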
    Original language: English
    Title of host publication: Image and Video Retrieval
    Subtitle of host publication: CIVR 2003
    Publisher: Springer
    Pages: 260-270
    ISBN (Electronic): 978-3-540-45113-6
    ISBN (Print): 978-3-540-40634-1
    DOIs
    Publication status: Published - 2003
    MoE publication type: A4 Article in a conference publication
    Event: Image and Video Retrieval, CIVR 2003 - Urbana-Champaign, United States
    Duration: 24 Jul 2003 – 25 Jul 2003

    Publication series

    Series: Lecture Notes in Computer Science
    Volume: 2728

    Conference

    Conference: Image and Video Retrieval, CIVR 2003
    Abbreviated title: CIVR 2003
    Country: United States
    City: Urbana-Champaign
    Period: 24/07/03 – 25/07/03

