File: osu1071863899.pdf (3.05 MB)
Hierarchical video semantic annotation – the vision and techniques
Author
Li, Honglin
Permalink: http://rave.ohiolink.edu/etdc/view?acc_num=osu1071863899
Year and Degree
2003, Doctor of Philosophy, Ohio State University, Electrical Engineering.
Abstract
The ubiquity of multimedia data calls for efficient and flexible methodologies to annotate, organize, store, and access video resources. Video annotation data, or video metadata, will play an important role in future annotation-driven video systems. Although the importance of video annotation data is widely recognized and a considerable amount of research has been conducted on its various aspects, there is no consistent framework on which to structure it. In this dissertation, we propose a hierarchical structure for video semantic annotation. Not only do users think in terms of semantic concepts, they also think about and operate video systems in a hierarchical fashion. Moreover, hierarchical structures are already used to store and transmit video production data. Consequently, a hierarchical structure for video annotation data is needed. This structure is so fundamental that it is likely to affect almost every aspect of multimedia computing, and we anticipate that numerous research activities in various aspects of video will be tailored toward it.
Second, we investigate techniques for hierarchically extracting video annotations, from the low level through the mid level to the high level. The lower the level of an annotation in the hierarchy, the more applicable automatic approaches are likely to be, and different semantic levels call for different extraction techniques. For example, high-level annotations tend to describe high-level events present in the video data. Such events are highly structural, and traditional statistical pattern analysis is insufficient; structural methods such as the syntactic approach are needed to extract high-level annotations.
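The syntactic approach described above treats a high-level event as a production over mid-level symbols. A minimal sketch of the idea, using an entirely hypothetical grammar and symbol alphabet (the dissertation's actual grammar is not given in this abstract), might look like:

```python
import re

# Toy event grammar, illustrative only. Mid-level annotations are
# encoded as single symbols:
#   a = attack build-up, s = shot on goal, c = crowd cheering, r = replay
# A "goal" event is then a regular production over these symbols.
GOAL_EVENT = re.compile(r"a+sc+r?")

def is_goal_event(symbols: str) -> bool:
    """Return True if the mid-level symbol string parses as a goal event."""
    return GOAL_EVENT.fullmatch(symbols) is not None

print(is_goal_event("aascc"))  # build-up, shot, cheering -> goal event
print(is_goal_event("acs"))    # cheering before the shot -> not a goal event
```

A regular grammar is the simplest case; richer event structures would call for context-free productions and a chart parser, but the principle of parsing symbol sequences rather than classifying raw features is the same.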
In particular, one key contribution of this research is to propose predictive models for deducing mid-level annotations and to apply syntactic pattern analysis to high-level video events. Finally, we have developed an XML-based video markup language, VideoML, built upon the video part of MPEG-7 with special consideration of the hierarchical structure of video annotation data.
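Since VideoML is XML-based and hierarchical, an annotation document naturally nests low-level descriptors inside shots inside scenes. The element names below are illustrative placeholders, not the actual VideoML schema (which the abstract does not reproduce); the sketch only shows how the hierarchy might be serialized:

```python
import xml.etree.ElementTree as ET

# Hypothetical hierarchy: video -> scene -> shot -> keyframe, with an
# annotation at each level tagged by its semantic level.
video = ET.Element("video", id="demo")
scene = ET.SubElement(video, "scene", id="s1")
ET.SubElement(scene, "annotation", level="high").text = "goal event"
shot = ET.SubElement(scene, "shot", id="s1-sh1")
ET.SubElement(shot, "annotation", level="mid").text = "crowd cheering"
frame = ET.SubElement(shot, "keyframe", t="00:01:23")
ET.SubElement(frame, "annotation", level="low").text = "dominant color: green"

xml_str = ET.tostring(video, encoding="unicode")
print(xml_str)
```

One consequence of this nesting is that queries can be scoped to any level of the hierarchy: a search for low-level color descriptors can be confined to the shots of a single high-level event.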
Committee
Stanley Ahalt (Advisor)
Pages
146 p.
Keywords
Video Annotation; Hierarchical Structure; Video Query; Syntactic Pattern Analysis; Hidden Markov Model; Video Markup Language
Recommended Citations
APA Style (7th edition)
Li, H. (2003). Hierarchical video semantic annotation – the vision and techniques [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1071863899

MLA Style (8th edition)
Li, Honglin. Hierarchical video semantic annotation – the vision and techniques. 2003. Ohio State University, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1071863899.

Chicago Manual of Style (17th edition)
Li, Honglin. "Hierarchical video semantic annotation – the vision and techniques." Doctoral dissertation, Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1071863899
Document number: osu1071863899
Download count: 1,371
Copyright Info
© 2003, all rights reserved.
This open access ETD is published by The Ohio State University and OhioLINK.