• Title/Summary/Keyword: Video art


Performance Analysis of Future Video Coding (FVC) Standard Technology

  • Choi, Young-Ju;Kim, Ji-Hae;Lee, Jong-Hyeok;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.2 / pp.73-78 / 2017
  • Future Video Coding (FVC) is a new state-of-the-art video compression standard that is being standardized as the successor to the High Efficiency Video Coding (HEVC) standard. The FVC standard applies a newly designed block structure, called quadtree plus binary tree (QTBT), to improve coding efficiency. The intra and inter prediction parts were also redesigned to improve coding performance compared to previous standards such as HEVC and H.264/AVC. Experimental results show an average BD-rate reduction of 25.46%, 38.00% and 35.78% for Y, U and V, respectively. In terms of complexity, the FVC encoder takes about 14 times longer than the HEVC encoder.
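The BD-rate figures quoted above follow the standard Bjøntegaard delta-rate metric. As a rough illustration (this is not code from the paper, and the function name is our own), BD-rate can be computed by fitting log-bitrate as a cubic polynomial of PSNR for each codec and integrating the difference over the overlapping quality range:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: average bitrate difference (%) of the
    test codec vs. the anchor at equal quality (PSNR).
    Negative values mean the test codec saves bitrate."""
    lr_a = np.log(np.asarray(rate_anchor, dtype=float))
    lr_t = np.log(np.asarray(rate_test, dtype=float))
    # Fit log-rate as a cubic polynomial of PSNR for each codec.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0
```

A sanity check: if the test codec needs exactly half the bitrate at every quality point, the function returns about -50%.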

Sign Language Recognition Using ART2 Algorithm (ART2 알고리즘을 이용한 수화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.5 / pp.937-941 / 2008
  • People with hearing difficulties use sign language as their most important communication method; through sign language they can broaden personal relations and manage their everyday lives without inconvenience. However, with the recent growth of internet communication and the increase in video chatting and video communication services, they suffer from the absence of interpretation between hearing people and people with hearing difficulties. In this paper, we propose a sign language recognition method to address this problem. In the proposed method, the regions of the two hands are extracted by tracking them using RGB, YUV and HSI color information in a sign language image acquired from a video camera, and by removing noise from the segmented images. The extracted hand regions are then learned and recognized by the ART2 algorithm, which is robust to noise and damage. In experiments with the proposed method on images of finger numbers from 1 to 10, we verified that the proposed method recognizes the numbers efficiently.
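The abstract's first step, extracting hand regions from color information, is commonly done with a skin-color rule. A minimal sketch in the RGB space only (the paper also uses YUV and HSI; the thresholds below are a widely used heuristic, not the paper's values):

```python
import numpy as np

def extract_hand_mask(rgb):
    """Rough hand-region mask from an RGB frame (H x W x 3, uint8)
    using a simple skin-color rule. Thresholds are illustrative only;
    real systems would also clean the mask with morphological filtering."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Classic heuristic: skin is reddish, bright enough, and saturated enough.
    mask = ((r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
            & ((rgb.max(axis=-1) - rgb.min(axis=-1)) > 15))
    return mask
```

On a frame containing a skin-toned patch and a blue background pixel, only the skin-toned pixel survives the rule.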

A Study on the Audiovisual Art of Nam June Paik with Focus on Musical Synesthesia (백남준의 오디오비주얼아트 연구 : 음악적 공감각을 중심으로)

  • Yoon, Ji Won
    • Journal of Korea Multimedia Society / v.23 no.4 / pp.603-614 / 2020
  • Analyzing the various works of Nam June Paik, who pioneered the world of video art, we find a number of audiovisual art pieces - in a broad sense - with the characteristics of "intermedia" that are not clearly identified with any one genre. In particular, his career as a musician is an important clue for discussing the unique originality found in Paik's works. Nevertheless, research that specifically explores how Paik's musicality is reflected in his artistry, and how it can be deeply understood, remains rare. This article examines Paik's audiovisual art world and reviews, from the perspective of musical synesthesia, how his musicality is expressed through the characteristics of media and form and how it contributed to the technical and aesthetic realization of his pieces. The fact that Paik expanded the concept of music through technology and fused different media to visualize music is an important achievement in the context of aesthetic, synesthetic music in the field of audiovisual art.

A Proposed Curriculum for the Basic Education of Video Image Design (영상 기초 교육 방법론에 관한 연구 - 단계별 프로젝트 중심의 영상 기초 교육과정 제시 -)

  • 원경아
    • Archives of Design Research / v.11 no.1 / pp.269-278 / 1998
  • Along with the development of the video and film industries, the need for basic education in video image design is now growing rapidly in a variety of video art institutes. Video image design is therefore being further classified and systematized to maximize its effectiveness and facilitate creativity. This paper suggests a basic curriculum and a project-based class schedule for video image design that can be utilized in class activities.


Development of Combined Architecture of Multiple Deep Convolutional Neural Networks for Improving Video Face Identification (비디오 얼굴 식별 성능개선을 위한 다중 심층합성곱신경망 결합 구조 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.655-664 / 2019
  • In this paper, we propose a novel way of combining multiple deep convolutional neural network (DCNN) architectures for accurate video face identification by adopting a serial combination of 3D and 2D DCNNs. The proposed method first divides an input video sequence (to be recognized) into a number of sub-video sequences. The resulting sub-video sequences are used as input to the 3D DCNN to obtain class-confidence scores that account for both the temporal and spatial facial feature characteristics of the input video. The class-confidence scores obtained from the sub-video sequences are combined into the proposed class-confidence matrix, which is then used as input for training a 2D DCNN serially linked to the 3D DCNN. Finally, the fine-tuned, serially combined DCNN framework is applied to recognize the identity present in a given test video sequence. To verify the effectiveness of the proposed method, extensive comparative experiments were conducted on the COX face database with its standard face identification protocols. Experimental results show that our method achieves a better or comparable identification rate compared to other state-of-the-art video face recognition methods.
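The data flow described above, splitting a video into sub-sequences, scoring each, and stacking the scores into a class-confidence matrix, can be sketched as follows. This is only an illustration of the matrix construction; `score_fn` is a stand-in for the paper's 3D DCNN, and all names are ours:

```python
import numpy as np

def confidence_matrix(video, seg_len, score_fn):
    """Split a video (a sequence of frames) into non-overlapping
    sub-sequences of length `seg_len`, score each one, and stack the
    per-segment class-confidence vectors into a matrix.
    `score_fn` maps a sub-sequence to a 1-D array of class scores.
    A trailing partial segment shorter than `seg_len` is dropped."""
    segments = [video[i:i + seg_len]
                for i in range(0, len(video) - seg_len + 1, seg_len)]
    # Rows = sub-sequences, columns = classes: the matrix a 2D network
    # could then consume as a single input.
    return np.stack([score_fn(seg) for seg in segments])
```

For a 12-frame video with segment length 4, the result has one row per sub-sequence (3 rows here) and one column per class score.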

ROI-Based 3D Video Stabilization Using Warping (관심영역 기반 와핑을 이용한 3D 동영상 안정화 기법)

  • Lee, Tae-Hwan;Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.76-82 / 2012
  • As portable camcorders have become popular, various video stabilization algorithms for removing camera shake have been developed. In the past, most video stabilization algorithms were based on 2-dimensional camera motion, but recent algorithms achieve much better performance by considering 3-dimensional camera motion. Among previous algorithms, 3D video stabilization using content-preserving warps is regarded as the state of the art owing to its superior performance, but its major drawback is high computational complexity. We therefore present a computationally light full-frame warping algorithm based on a region of interest (ROI) that provides visual quality comparable to the state of the art within the ROI. First, a proper ROI with a target depth is chosen for each frame, and full-frame warping based on the selected ROI is applied.
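For context on what stabilization algorithms compute, the classic 2D baseline the abstract contrasts against can be sketched as: accumulate per-frame motion into a camera path, low-pass the path, and warp each frame by the difference. This is the simple 2D scheme, not the paper's ROI-based 3D warping, and the function is purely illustrative:

```python
import numpy as np

def smooth_trajectory(dx, dy, radius=2):
    """2-D stabilization baseline: integrate per-frame motion (dx, dy)
    into a camera path, smooth the path with a moving average of window
    2*radius+1, and return per-frame (x, y) correction offsets that a
    warping step would apply to each frame."""
    path = np.cumsum(np.stack([dx, dy], axis=1), axis=0)  # (N, 2) path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    # Edge-pad so the smoothed path has the same length as the input.
    pad = np.pad(path, ((radius, radius), (0, 0)), mode='edge')
    smooth = np.stack([np.convolve(pad[:, k], kernel, mode='valid')
                       for k in range(2)], axis=1)
    return smooth - path  # correction to move each frame onto the smooth path
```

For a perfectly steady pan (constant motion) the middle frames need no correction, which is the expected behavior of any trajectory smoother.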

An Analysis of Exhibition Video Contents in International Trade Show-Focused on the ISE 2013 (국제 무역박람회의 전시영상콘텐츠 분석 -ISE 2013을 중심으로)

  • Yang, Y.E.
    • Journal of Korea Multimedia Society / v.17 no.7 / pp.895-905 / 2014
  • The trade show has recently become a massive marketing effort for a particular industry: it introduces new products to potential customers, elevates brand image and corporate identity, and is growing in importance as a venue of communication. Given these changes, exhibitions require an emotional approach that enhances the dynamic and creative image of companies, so the various visual contents of an exhibition, not only the product itself, are increasingly important for their positive effect on image and marketing. In this sense, this study analyzes the video contents exhibited at 'ISE (Integrated Systems Europe)' 2013, Europe's largest display show, held in Amsterdam, the Netherlands, focusing on the domestic companies Samsung and LG Electronics and four foreign companies. The analysis is based on Herbert Zettl's applied media aesthetics as it relates to video contents, through which we can find ways to develop exhibition video contents for trade shows.

Media Literacy Education in the Australian Curriculum: Media Art (호주 국가교육과정 예술과목 'Media Art' 에 나타난 미디어 리터러시 교육)

  • Park, Yoo-Shin
    • Cartoon and Animation Studies / s.48 / pp.271-310 / 2017
  • This paper examines the composition and content of Media Art, an arts subject in the national curriculum of Australia, and discusses its implications for Korean curriculums. The media covered by the Media Art subject in Australia are general media of many types, including TV, film, video, newspapers, radio, video games, the internet and mobile media, together with their contents. The purpose of ACARA's media art curriculum is to improve creative use, knowledge, understanding and skill in communication techniques for multiple purposes and audiences. Through the Media Art subject, both students and the community can participate in actual communication with the rich culture surrounding them and develop knowledge and understanding of the five core concepts of language, technology, system, audience and re-creation while examining that culture. The implications of this study are as follows. ACARA's media art curriculum has been developed as an independent educational program and has special significance within the Australian curriculum. Although it is an independent subject, the curriculum suggests teaching it in close connection with other subjects. Its organization and elaborateness in curriculum composition are very effective for the teacher's teaching-learning design as well as for evaluation, and it seems to offer a good model of a leading media literacy curriculum. ACARA's media art curriculum can be a valuable reference for introducing media literacy into Korean national curriculums.

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the lofty performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (the VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a region-based convolutional neural network, a state-of-the-art detector for image classification). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Our method reduces data-labeling time compared to supervised learning methods on the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
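The labeling-time saving the abstract claims comes from the active semi-supervised loop: confident predictions become pseudo-labels for free, while uncertain samples are routed to a human annotator. A minimal, model-agnostic sketch of one such pass (all names and the threshold are ours, not from the paper):

```python
def active_semi_supervised_pass(pool, predict, oracle, threshold=0.9):
    """One pass of an active/semi-supervised labeling scheme.
    `predict(x)` returns (label, confidence) from the current model;
    `oracle(x)` returns a human-provided label. Confident predictions
    are kept as pseudo-labels; uncertain ones trigger an oracle query.
    Returns the labeled set and the number of human queries made."""
    labeled, queries = [], 0
    for x in pool:
        label, conf = predict(x)
        if conf < threshold:
            label = oracle(x)   # active learning: ask the human
            queries += 1
        labeled.append((x, label))
    return labeled, queries
```

The design point is that annotation cost scales with the number of low-confidence samples, not with the pool size, which is where the reported data-labeling time reduction would come from.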

A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers (영상콘텐츠 시청자의 몰입상황 분석을 위한 몰입감정상태 연구)

  • Kim, Seunghwan;Kim, Cheolki
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.400-414 / 2018
  • Today's video contents are required to interact with the viewer in order to provide a more personalized experience than before. To provide a friendly experience from the video content system's perspective, understanding and analyzing the viewer's situation must be considered first. For this purpose, it is effective to analyze the viewer's situation by understanding the viewer's state based on his or her behavior while watching the video content, and by classifying that behavior into the viewer's emotion and state during flow. The term 'flow-emotion-state' presented in this study denotes the state of the viewer inferred from the emotions that subsequently occur in relation to the target video content while the viewer is already engaged in viewing. This flow-emotion-state can be utilized to identify the characteristics of the viewer's flow situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modality to the video content.