• Title/Summary/Keyword: Video Image Analysis


A Practical Digital Video Database based on Language and Image Analysis

  • Liang, Yiqing
    • Proceedings of the Korea Database Society Conference
    • /
    • 1997.10a
    • /
    • pp.24-48
    • /
    • 1997
  • Supported by: DARPA's Image Understanding (IU) program, under the "Video Retrieval Based on Language and Image Analysis" project, and DARPA's Computer Assisted Education and Training Initiative (CAETI) program. Objective: develop practical systems for the automatic understanding and indexing of video sequences using both the audio and video tracks. (omitted)


Implementation of Video Surveillance System with Motion Detection based on Network Camera Facilities (움직임 감지를 이용한 네트워크 카메라 기반 영상보안 시스템 구현)

  • Lee, Kyu-Woong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.169-177
    • /
    • 2014
  • Image and video analysis techniques such as motion detection have become essential since DVR and NVR storage was adopted in real-time visual surveillance systems. The network camera in particular has become a popular video input device, and traditional CCTV cameras that deliver analog video are being replaced by network cameras. In this paper, we present the design and implementation of a video surveillance system in which the video storage server provides real-time motion detection. A mobile application has also been implemented to provide retrieval of the image analysis results. We develop the video analysis server with the open-source library OpenCV and implement a daemon process for video input handling and real-time image analysis in our video surveillance system.
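
The abstract names OpenCV but not a specific detection algorithm. Purely as an illustration, a minimal sketch of a motion-detection loop such an analysis daemon might run, using OpenCV background subtraction; the stream URL, morphology step, and area threshold are assumptions, not details from the paper:

```python
import cv2

# Illustrative motion-detection loop (background subtraction with MOG2).
# The stream URL and thresholds below are assumptions, not values from the paper.
cap = cv2.VideoCapture("rtsp://camera.example/stream")   # hypothetical network camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # foreground mask
    mask = cv2.morphologyEx(                              # suppress sensor noise
        mask, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 800 for c in contours):   # area gate = motion event
        print("motion detected")

cap.release()
```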

DPICM subprojectile counting technique using image analysis of infrared camera (적외선 영상해석을 이용한 이중목적탄 자탄계수 계측기법연구)

  • Park, Won-Woo;Choi, Ju-Ho;Lyou, Joon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.11-16
    • /
    • 1997
  • This paper describes a grenade counting system developed for DPICM submunition analysis from infrared video streams, along with its video stream processing techniques. The processing procedure consists of four stages: analog infrared video recording, video stream capture, pre-processing, and analysis including the grenade counting itself. Application of the algorithms to real bursting tests has shown that submunition counting can be automated.
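
The abstract does not spell out the counting step itself; a minimal sketch of one plausible approach, thresholding hot submunition signatures in a grayscale infrared frame and counting connected components (the intensity and area thresholds are assumptions, not the paper's values):

```python
import cv2

def count_hot_blobs(ir_frame_gray, intensity_thresh=200, min_area=20):
    """Count bright blobs in one grayscale infrared frame.

    intensity_thresh and min_area are illustrative values, not taken from the paper.
    """
    _, binary = cv2.threshold(ir_frame_gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; keep components above the area threshold.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)
```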


Proposal for AI Video Interview Using Image Data Analysis

  • Park, Jong-Youel;Ko, Chang-Bae
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.212-218
    • /
    • 2022
  • The need for AI video interviews arises when recruiting talented candidates in non-face-to-face situations such as those created by Covid-19. A shortcoming of typical AI interviews is that reliability and qualitative factors are difficult to evaluate; in addition, the AI interview proceeds as a one-sided rather than a two-way Q&A. This paper therefore seeks to fuse the advantages of existing AI interviews and video interviews. When an interview is conducted with AI image analysis technology, the subjective judgments of interview management are supplemented with quantitative analysis data and HR expert data. Image-based multi-modal AI image analysis, bio-analysis-based HR analysis, and WebRTC-based P2P video communication technologies are applied. The goal is to propose a method in which the biological analysis results (gaze, posture, voice, gesture, landmarks) and HR information (opinions or features based on candidate propensity) are presented on a single screen to help select the right person for the job.
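
The paper does not define how the modalities are combined on that single screen; purely as an illustration, a sketch of merging per-modality bio-analysis scores with HR notes into one summary (the field names and weights are assumptions):

```python
from dataclasses import dataclass

# Illustrative structure only: the fields and weights below are assumptions,
# not the paper's scoring scheme.

@dataclass
class ModalityScore:
    gaze: float      # 0..1, steadiness of eye contact
    posture: float   # 0..1, upright/stable posture
    voice: float     # 0..1, speech clarity
    gesture: float   # 0..1, appropriateness of gestures
    landmark: float  # 0..1, facial-landmark-based expression score

def single_screen_summary(scores: ModalityScore, hr_notes: str) -> dict:
    """Merge bio-analysis scores and HR expert notes into one view."""
    weights = {"gaze": 0.2, "posture": 0.2, "voice": 0.25, "gesture": 0.15, "landmark": 0.2}
    overall = sum(getattr(scores, k) * w for k, w in weights.items())
    return {"overall": round(overall, 3), "detail": vars(scores), "hr_notes": hr_notes}

print(single_screen_summary(ModalityScore(0.8, 0.7, 0.9, 0.6, 0.75), "clear communicator"))
```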

Video Image Analysis in Accordance with Power Density of Arcing for Current Collection System in Electric Railway (전기철도 집전장치의 아크량에 따른 비디오 이미지 분석)

  • Park, Young;Lee, Kiwon;Park, Chulmin;Kim, Jae-Kwang;Jeon, Ahram;Kwon, Sam-Young;Cho, Yong Hyun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.9
    • /
    • pp.1343-1347
    • /
    • 2013
  • This paper presents an analysis method for current collection quality in a catenary system by means of a video-image-based monitoring system. Arcing is the sparking at the contact point between the pantograph and the contact wire when an electric train draws traction current at speed. The percentage of arcing at maximum line speed is a measurable parameter for compliance with the requirements on the dynamic behaviour of the pantograph-contact wire interface defined in the IEC and EN standards. The arc detector and the video camera are installed on a train and aimed at the trailing contact strip with respect to the travel direction. The arc detector measures quantities such as the duration and power density of each arc, and the video camera captures an image whenever an arc occurs at the pantograph. In this paper we analyze the video images according to the power density of arcing reported by the arc detector, and compare the video images with the arc power density so as to derive the arcing quality from the images.
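
For reference, the "percentage of arcing" the abstract mentions is conventionally the summed arc duration over the total measuring time; a sketch assuming that commonly cited EN 50317-style definition (the 5 ms minimum arc duration is an assumed value, not taken from the paper):

```python
def arcing_percentage(arc_durations_ms, total_time_ms, min_arc_ms=5.0):
    """NQ = 100 * (sum of counted arc durations) / (total measuring time).

    Sketch of the commonly cited definition; min_arc_ms is an assumption.
    """
    counted = sum(d for d in arc_durations_ms if d >= min_arc_ms)
    return 100.0 * counted / total_time_ms

# e.g. arcs of 8, 12 and 3 ms observed over a 60 s run -> about 0.033 %
print(arcing_percentage([8.0, 12.0, 3.0], 60_000.0))
```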

A Study on the Paik-Abe Video Synthesizer in the Context of Audiovisual Art (<백-아베 비디오 신디사이저>의 오디오 비주얼아트적 고찰)

  • Yoon, Ji Won
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.4
    • /
    • pp.615-624
    • /
    • 2020
  • By enabling musicians to freely control the elements involved in sound production and tone generation across a wide variety of timbres, synthesizers revolutionized and permanently changed music from the 1960s onward. The Paik-Abe Video Synthesizer, a masterpiece by the video art maestro Nam June Paik, is a prominent example of this new musical instrument being reinterpreted in the realm of video and audio. This article examines the Paik-Abe Video Synthesizer as an innovative instrument for playing video from the perspective of audiovisual art, and establishes its aesthetic value and significance through both artistic and technical analysis. The instrument, which embodied the concepts of image sampling and real-time interactive video as an image-based multi-channel music production tool, contributed to establishing a new relationship between sound and image within audiovisual art. The fact that the video synthesizer not only adds image to sound but presents a complete fusion of image and sound, as an image instrument with musical characteristics, makes it highly meaningful in this age of synesthesia.

PCA-Based MPEG Video Retrieval in Compressed Domain (PCA에 기반한 압축영역에서의 MPEG Video 검색기법)

  • 이경화;강대성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.1
    • /
    • pp.28-33
    • /
    • 2003
  • This paper proposes a database indexing and retrieval method using PCA (Principal Component Analysis). We perform scene change detection and key frame extraction on the DC images constructed from the DCT DC coefficients of a compressed video stream, such as an MPEG bitstream. PCA is applied to the extracted key frames to build a codebook whose codewords summarize their statistics, and this codebook is stored as the database index. Images similar to a user's query image can then be retrieved from the video database. Experimental results confirm that the proposed method shows clearly superior video retrieval performance while reducing computation time and memory usage.
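
A minimal sketch of PCA-based indexing and retrieval over key-frame DC images, assuming the key frames are already available as small grayscale arrays; the number of components and the Euclidean retrieval metric are assumptions, not the paper's exact codebook construction:

```python
import numpy as np

def build_pca_index(dc_images, n_components=16):
    """dc_images: (N, H, W) array of key-frame DC images.

    Returns the mean, the PCA basis, and the projected index entries.
    Dimensions and n_components are illustrative assumptions.
    """
    X = dc_images.reshape(len(dc_images), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # Principal axes from the SVD of the mean-centered data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                 # (n_components, H*W)
    codewords = (X - mean) @ basis.T          # projected index entries
    return mean, basis, codewords

def retrieve(query_dc, mean, basis, codewords, top_k=5):
    """Return indices of the key frames closest to the query DC image."""
    q = (query_dc.reshape(-1).astype(np.float64) - mean) @ basis.T
    dists = np.linalg.norm(codewords - q, axis=1)
    return np.argsort(dists)[:top_k]
```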

Temporal Anti-aliasing of a Stereoscopic 3D Video

  • Kim, Wook-Joong;Kim, Seong-Dae;Hur, Nam-Ho;Kim, Jin-Woong
    • ETRI Journal
    • /
    • v.31 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • Frequency domain analysis is a fundamental procedure for understanding the characteristics of visual data. Several studies have been conducted with 2D videos, but analysis of stereoscopic 3D videos is rarely carried out. In this paper, we derive the Fourier transform of a simplified 3D video signal and analyze how a 3D video is influenced by disparity and motion in terms of temporal aliasing. It is already known that object motion affects temporal frequency characteristics of a time-varying image sequence. In our analysis, we show that a 3D video is influenced not only by motion but also by disparity. Based on this conclusion, we present a temporal anti-aliasing filter for a 3D video. Since the human process of depth perception mainly determines the quality of a reproduced 3D image, 2D image processing techniques are not directly applicable to 3D images. The analysis presented in this paper will be useful for reducing undesirable visual artifacts in 3D video as well as for assisting the development of relevant technologies.
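
A worked frequency-domain step of the kind this analysis builds on, stated under simplifying one-dimensional assumptions rather than as the paper's exact derivation: for a pattern translating at velocity $v$, $s(x,t) = s_0(x - vt)$, the spectrum is confined to a line,

$$S(\omega_x, \omega_t) \propto S_0(\omega_x)\,\delta(\omega_t + v\,\omega_x),$$

so sampling at frame interval $T$ aliases temporally once $|v\,\omega_x| > \pi/T$. In a stereoscopic pair the second view is additionally shifted by the disparity $d$, $s_R(x,t) = s_0(x - vt - d)$, which contributes a phase factor $e^{-j\omega_x d}$ along the same line; this is one way to see why both motion and disparity enter the temporal anti-aliasing analysis.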


Removing Shadows for the Surveillance System Using a Video Camera (비디오 카메라를 이용한 감시 장치에서 그림자의 제거)

  • Kim, Jung-Dae;Do, Yong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.176-178
    • /
    • 2005
  • In the images of a video camera employed for surveillance, detecting targets by extracting the foreground is of great importance. The detected foreground regions, however, include not only the moving targets but also their shadows. This paper presents a novel technique for detecting shadow pixels in the foreground image of a video camera. The image characteristics of the cameras employed, a webcam and a CCD camera, are first analysed in the HSV color space, and a pixel-level shadow detection technique is proposed based on that analysis. Unlike existing techniques that apply a single set of criteria to every pixel, the proposed technique detects shadow pixels using the fact that the effect of shadowing differs from pixel to pixel depending on its brightness in the background image. Such an approach can accommodate local features of an image and maintain consistent performance even in a changing environment. In experiments with pedestrians, the proposed technique showed better results than an existing technique.
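
A minimal sketch of a pixel-level HSV shadow test of the kind described, with the lower bound on the brightness ratio made dependent on the background brightness to echo the paper's idea; all thresholds are assumptions, not the paper's values:

```python
import cv2
import numpy as np

def shadow_mask_hsv(frame_bgr, background_bgr, sat_diff_max=60, hue_diff_max=30):
    """Classify pixels as shadow: V darkened, H and S nearly unchanged.

    The brightness-dependent lower bound and all thresholds are illustrative
    assumptions; hue wrap-around is ignored for brevity.
    """
    hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    hsv_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)

    v_ratio = (hsv_f[..., 2] + 1) / (hsv_b[..., 2] + 1)
    # Brighter background pixels tolerate stronger darkening before the pixel
    # stops looking like a shadow (brightness-dependent lower bound).
    alpha = np.where(hsv_b[..., 2] > 128, 0.4, 0.6)

    shadow = ((v_ratio >= alpha) & (v_ratio <= 0.95) &
              (np.abs(hsv_f[..., 1] - hsv_b[..., 1]) <= sat_diff_max) &
              (np.abs(hsv_f[..., 0] - hsv_b[..., 0]) <= hue_diff_max))
    return shadow.astype(np.uint8) * 255
```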


Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.24 no.6
    • /
    • pp.627-634
    • /
    • 2016
  • In this paper, an augmented video generation method for evaluating the performance of a lane departure warning system is proposed. The input to our system is a video of a road scene with ordinary clean lane markings, and the output video has the same content except that the lane is synthesized with a contamination image. Two approaches are used to synthesize the contaminated lane image: example-based image synthesis and background-based image synthesis. Example-based synthesis assumes a situation in which contamination has been deposited on the lane, while background-based synthesis covers the situation in which the lane marking has been worn away by aging. A new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video is generated by shifting the synthesized image according to the lane movement amount obtained empirically. Our experiments show that the similarity between the generated contaminated lane images and real ones is over 90 %. Furthermore, the reliability of the generated video is verified by analyzing the change in lane recognition rate: the recognition rate on video generated by the proposed method is very close to that on real contaminated lane video.
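
A minimal sketch of a Gaussian-function contamination pattern and one way it might be blended over a lane patch; the patch size, sigmas, and dirt gray level are assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_contamination_patch(size=64, sigma_x=12.0, sigma_y=20.0, strength=0.8):
    """2D Gaussian alpha mask used as a soft contamination pattern.

    Sketch only: the anisotropic sigmas, patch size, and blending below are
    illustrative assumptions, not the paper's parameters.
    """
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = size / 2.0
    g = np.exp(-(((x - cx) ** 2) / (2 * sigma_x ** 2) +
                 ((y - cy) ** 2) / (2 * sigma_y ** 2)))
    return strength * g                        # alpha values in [0, strength]

# Usage: alpha-blend a gray "dirt" value over a square lane-image patch.
lane_patch = np.full((64, 64), 200.0)          # bright lane marking (synthetic)
alpha = gaussian_contamination_patch()
contaminated = (1 - alpha) * lane_patch + alpha * 80.0   # 80 = assumed dirt gray
```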