Efficient Video Annotations by an Image Groups

Authors

  • Balan KM, UG Student, Department of CSE, Bharath University, Tamilnadu, India
  • Kumar JS, UG Student, Department of CSE, Bharath University, Tamilnadu, India
  • Rajakumari K, Asst. Professor, Department of CSE, Bharath University, Tamilnadu, India

Keywords:

Video annotation, Domain adaptation, A-KML, ASVM

Abstract

Searching for desirable events in uncontrolled videos is a challenging task, so research has mainly focused on learning event concepts from large numbers of labelled videos. However, collecting the large amount of labelled video required to train event models under various conditions is time consuming and labour intensive. To avoid this problem, we propose to leverage abundant Web images for video annotation, since Web images are a rich source of information, with many events roughly annotated and captured under various conditions. However, information from the Web is difficult to use directly, so brute-force knowledge transfer of images may hurt video annotation performance. We therefore propose a novel Group-based Domain Adaptation learning framework that leverages different groups of knowledge (source domain) queried from a Web image search engine for consumer videos (target domain). Unlike earlier methods that use multiple source domains of images, our method groups the Web images according to their intrinsic semantic relationships rather than their sources. Specifically, two different types of groups, event-specific groups and concept-specific groups, are exploited to describe the event-level and concept-level semantic meanings of target-domain videos, respectively.
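
As a rough illustration of this grouping idea (a minimal sketch, not the authors' implementation), one source classifier can be trained per Web-image group and each target-domain video scored by a relevance-weighted combination of the group classifiers. Feature extraction, the construction of the event-specific and concept-specific groups, the group-relevance weights, and the A-KML/ASVM adaptation step are all assumed to be provided elsewhere; the function and variable names below are purely illustrative.

import numpy as np
from sklearn.svm import LinearSVC

def train_group_classifiers(image_groups):
    # image_groups: dict mapping group name -> (X_images, y_images), where each
    # group is an event-specific or concept-specific set of Web images with
    # binary relevance labels (assumed inputs, not part of the paper's code).
    classifiers = {}
    for name, (X, y) in image_groups.items():
        classifiers[name] = LinearSVC(C=1.0).fit(X, y)
    return classifiers

def annotate_videos(classifiers, group_weights, X_videos):
    # Score target-domain videos as a weighted sum of the decision values of
    # the image-group classifiers; a higher score means the video is more
    # likely to contain the queried event.
    scores = np.zeros(X_videos.shape[0])
    for name, clf in classifiers.items():
        scores += group_weights.get(name, 0.0) * clf.decision_function(X_videos)
    return scores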

References

Y.-G. Jiang, G. Ye, S.-F. Chang, D. Ellis, and A. C. Loui, “Consumer video understanding: A benchmark database and an evaluation of human and machine performance,” in ICMR, 2011, p. 29.

M. R. Naphade and J. R. Smith, “On the detection of semantic concepts at TRECVID,” in Proceedings of the 12th Annual ACM International Conference on Multimedia, 2004, pp. 660–667.

A. Loui, J. Luo, S.-F. Chang, D. Ellis, W. Jiang, L. Kennedy, K. Lee, and A. Yanagawa, “Kodak’s consumer video benchmark data set: concept definition and annotation,” in Workshop on Multimedia Information Retrieval, 2007, pp. 245–254.

X. Wu, D. Xu, L. Duan, J. Luo, and Y. Jia, “Action recognition using multi-level features and latent structural SVM,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 8, pp. 1422–1431, 2013.

Y.-G. Jiang, S. Bhattacharya, S.-F. Chang, and M. Shah, “High-level event recognition in unconstrained videos,” International Journal of Multimedia Information Retrieval, pp. 1–29, 2012.

L. Duan, D. Xu, I. Tsang, and J. Luo, “Visual event recognition in videos by learning from web data,” in CVPR, 2010, pp. 1959–1966.

Z. Ma, A. G. Hauptmann, Y. Yang, and N. Sebe, “Classifier-specific intermediate representation for multimedia tasks,” in Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, 2012, p. 50.

K. Rajakumari and C. Nalini, “Face recognition from sequence videos in pose, illumination invariant,” in National Conference on Advance Trends in Information Computing (NCATIC ’14), 20 August 2014.

N. Ikizler-Cinbis and S. Sclaroff, “Object, scene and actions: Combining multiple features for human action recognition,” in ECCV, 2010, pp. 494–507.

N. Ikizler-Cinbis, R. Cinbis, and S. Sclaroff, “Learning actions from the web,” in CVPR, 2009, pp. 995–1002.

H. Wang, X. Wu, and Y. Jia, “Annotating video events from the web images,” in ICPR, 2012.

K. Rajakumari and C. Nalini, “Improvement of image quality based on fractal image compression,” Middle-East Journal of Scientific Research, vol. 20, no. 10, pp. 1213–1217, 2014.

Published

2024-02-26

How to Cite

Balan, K. Mahi, Kumar, J. S., & Rajakumari, K. (2024). Efficient Video Annotations by an Image Groups. COMPUSOFT: An International Journal of Advanced Computer Technology, 4(04), 1650–1653. Retrieved from https://ijact.in/index.php/j/article/view/290

Section

Original Research Article
