Posted on 2013-11-28, 13:07, authored by Liang Bai, Songyang Lao, Alan F. Smeaton, Noel E. O'Connor, David Sadlier, David Sinclair
The most common approach to automatic summarisation and highlight detection
in sports video is to train an automatic classifier to detect semantic highlights
based on occurrences of low-level features such as action replays, excited
commentators or changes in a scoreboard. We propose an alternative approach
based on the detection of perception concepts (PCs) and the construction of
Petri-Nets which can be used for both semantic description and event detection
within sports videos. Low-level algorithms for the detection of perception
concepts using visual, aural and motion characteristics are proposed, and a series
of Petri-Nets composed of perception concepts is formally defined to describe
video content. We call this a Perception Concept Network-Petri Net (PCN-PN)
model. Using PCN-PNs, personalized high-level semantic descriptions of video
highlights can be facilitated and queries on high-level semantics can be answered. A
particular strength of this framework is that we can easily build semantic detectors
based on PCN-PNs to search within sports videos and locate interesting events.
Experimental results based on recorded sports video data across three types of
sports games (soccer, basketball and rugby), and each from multiple broadcasters,
are used to illustrate the potential of this framework.
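
The full PCN-PN formalism is defined in the article rather than in this record; as a rough illustration of the idea only, the sketch below shows a minimal Petri-net in which transitions fire once hypothetical perception concepts (e.g. crowd excitement, a scoreboard change, an action replay) have been detected, and a token reaching the final place flags a candidate highlight. The concept names, net structure and firing scheme here are assumptions for illustration, not the authors' actual PCN-PN definitions or detectors.

# Minimal Petri-net sketch (illustrative only): places hold tokens for
# detected perception concepts; a transition fires when all of its input
# places are marked, and a token in the final place flags a candidate
# highlight. All names and structure below are hypothetical.

class PetriNet:
    def __init__(self):
        self.marking = {}        # place name -> token count
        self.transitions = []    # (name, input places, output places)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions.append((name, inputs, outputs))

    def mark(self, place):
        """Add a token when a low-level detector reports a perception concept."""
        self.marking[place] = self.marking.get(place, 0) + 1

    def step(self):
        """Fire every enabled transition once; return the names of fired transitions."""
        fired = []
        for name, inputs, outputs in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                fired.append(name)
        return fired


# Hypothetical net for a soccer "goal" highlight: excited commentary plus a
# scoreboard change enable an intermediate place, which together with an
# action replay marks the highlight place.
net = PetriNet()
for p in ["crowd_excitement", "scoreboard_change", "action_replay",
          "possible_goal", "goal_highlight"]:
    net.add_place(p)
net.add_transition("t1", ["crowd_excitement", "scoreboard_change"], ["possible_goal"])
net.add_transition("t2", ["possible_goal", "action_replay"], ["goal_highlight"])

# Simulated detector outputs for one video segment.
for concept in ["crowd_excitement", "scoreboard_change", "action_replay"]:
    net.mark(concept)
while net.step():
    pass
if net.marking["goal_highlight"] > 0:
    print("candidate goal highlight detected in this segment")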
History
Publication
The Computer Journal;52(7), pp. 808-823
Publisher
Oxford University Press
Note
peer-reviewed
Other Funding information
National High Technology Development 863 Program of China, National Natural Science Foundation of China, SFI
Rights
This is a pre-copyedited, author-produced PDF of an article accepted for publication in The Computer Journal following peer review. The definitive publisher-authenticated version, "Semantic analysis of field sports video using a Petri-net of audio-visual concepts", 52(7), pp. 808-823, is available online at: http://dx.doi.org/10.1093/comjnl/bxn058