• Title/Summary/Keyword: Video Image


Development of Video Image Detection System based on Tripwire and Vehicle Tracking Technologies, Focusing on Performance Analysis with Autoscope (Tripwire 및 Tracking 기반의 영상검지시스템 개발 (Autoscope와의 성능비교를 중심으로))

  • Oh, Ju-Taek;Min, Joon-Young;Kim, Seung-Woo;Hur, Byung-Do;Kim, Myung-Soeb
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.2
    • /
    • pp.177-186
    • /
    • 2008
  • Video image detection systems can be used for various traffic management tasks, including traffic operation and traffic safety. Video image detection techniques can be divided into tripwire systems and tracking systems. Autoscope, which is widely used in the market, utilizes the tripwire approach. In this study, we developed an individual vehicle tracking system that can collect microscopic traffic information, and also developed another image detection technology based on the tripwire approach. To prove the accuracy and reliability of the newly developed systems, we compared their traffic data with those generated by Autoscope. The results showed errors of 0.35% against real traffic counts, compared with 1.78% for Autoscope. Performance comparisons on speed between the two systems showed a maximum error of 1.77% relative to Autoscope, which confirms the usefulness of the newly developed systems.
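The tripwire idea the abstract contrasts with tracking can be illustrated with a minimal sketch: a vehicle is counted when the pixel intensity inside a small virtual detection zone deviates from the empty-road background, and rising edges of that occupancy signal give the count. This is an assumption-laden illustration, not the paper's implementation.

```python
# Illustrative sketch only; thresholds and the per-frame zone statistics
# are hypothetical stand-ins for a real tripwire detector.

def count_vehicles(zone_means, background, threshold=30):
    """Count rising edges of zone occupancy over a sequence of frames.

    zone_means: mean pixel intensity of the detection zone, one value per frame
    background: learned intensity of the empty road inside the zone
    """
    occupied_prev = False
    count = 0
    for mean in zone_means:
        occupied = abs(mean - background) > threshold
        if occupied and not occupied_prev:  # a vehicle enters the zone
            count += 1
        occupied_prev = occupied
    return count

# Two vehicles pass over a background of intensity 100:
signal = [100, 102, 180, 185, 101, 99, 175, 100]
print(count_vehicles(signal, background=100))  # prints 2
```

A tracking system, by contrast, would follow each vehicle across frames, which is what lets it produce the microscopic (per-vehicle) data the study targets.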

An Efficient Object Extraction Scheme for Low Depth-of-Field Images (낮은 피사계 심도 영상에서 관심 물체의 효율적인 추출 방법)

  • Park Jung-Woo;Lee Jae-Ho;Kim Chang-Ick
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.9
    • /
    • pp.1139-1149
    • /
    • 2006
  • This paper describes a novel and efficient algorithm that extracts focused objects from still images with low depth-of-field (DOF). The algorithm unfolds into four modules. In the first module, a HOS map, which represents the spatial distribution of the high-frequency components, is obtained from an input low-DOF image [1]. The second module finds an object-of-interest (OOI) candidate by using characteristics of the HOS map. Since the candidate region may contain holes, the third module detects and fills them. To obtain the final OOI, the last module removes background pixels from the OOI candidate. The experimental results show that the proposed method is highly useful in various applications, such as image indexing for content-based retrieval from huge image databases, image analysis for digital cameras, and video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
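The first two modules rest on the observation that focused regions of a low-DOF image carry far more high-frequency energy than the blurred background. A minimal pure-Python sketch of that idea, using simple local variance as a stand-in for the paper's higher-order-statistics (HOS) map:

```python
# Assumption: local variance approximates the HOS focus measure well enough
# to illustrate the candidate-extraction step; the paper's actual map differs.

def focus_map(img, win=1):
    """Per-pixel local variance: high in focused (sharp) regions,
    low in the defocused background of a low depth-of-field image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out

def candidate_mask(img, thresh=100.0):
    """Threshold the focus map to get an object-of-interest candidate."""
    return [[1 if v > thresh else 0 for v in row] for row in focus_map(img)]

# A sharp bright object on a flat background is marked 1 in the mask:
img = [[10, 10, 10, 10],
       [10, 200, 200, 10],
       [10, 200, 200, 10],
       [10, 10, 10, 10]]
mask = candidate_mask(img)
```

The hole-filling and background-removal modules would then operate on this binary mask.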


Mosaic Technique on Panning Video Images using Interpolation Search (보간 검색을 이용한 Panning 비디오 영상에서의 모자이크 기법)

  • Jang, Sung-Gab;Kim, Jae-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.63-72
    • /
    • 2005
  • This paper proposes a new method to construct a panorama image from video sequences captured by a video camcorder revolving on the center axis of a tripod. The proposed method consists of two algorithms: frame selection and image mosaicking. To select frames for the panorama, we employ interpolation search using the information in overlapped areas, which can find suitable frames quickly. We construct the image mosaic using the projective transform induced from four pairs of quasi-features. Conventional methods select feature points by using only texture information, but the method presented in this paper also uses the position of each feature point. We evaluated the proposed method on real video sequences; the results show that it outperforms the conventional method in terms of image quality.
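The frame-selection step can be sketched as follows, under the assumption that the overlap ratio between a candidate frame and the current mosaic decreases roughly linearly as the camera pans. Interpolation search then jumps close to the frame whose overlap matches a target (e.g. 50%) instead of scanning every frame. Function names and the tolerance are illustrative, not from the paper.

```python
# Hedged sketch: overlap(i) is assumed monotonically decreasing in the frame
# index i over [lo, hi], as it would be for a steady pan.

def select_frame(overlap, lo, hi, target=0.5, tol=0.02):
    """Find a frame index whose overlap ratio with the mosaic is near target."""
    while lo < hi:
        o_lo, o_hi = overlap(lo), overlap(hi)
        if abs(o_lo - o_hi) < 1e-9:
            return lo
        # Interpolate the index where the overlap should equal the target.
        i = lo + int((o_lo - target) * (hi - lo) / (o_lo - o_hi))
        i = max(lo, min(hi, i))
        o = overlap(i)
        if abs(o - target) <= tol:
            return i
        if o > target:   # still too much overlap: move forward
            lo = i + 1
        else:            # overshot: move back
            hi = i - 1
    return lo

# For a pan losing 1% overlap per frame, the 50%-overlap frame is index 50:
print(select_frame(lambda i: 1.0 - 0.01 * i, 0, 100))  # prints 50
```

The selected frames would then be stitched with a projective transform estimated from four feature-point correspondences, which fully determines a homography.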

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.257-269
    • /
    • 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models solely extract motion information from the provided video to generate animation effects on photos. However, these synthesis models encounter challenges in achieving accurate lip synchronization with the audio and maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Within this framework, upon receiving a photo, a video, and audio input, it produces an output that not only retains the unique characteristics of the individuals in the photo but also synchronizes their movements with the provided video, achieving lip synchronization with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.
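The framework's data flow (motion transfer, then lip synchronization, then super-resolution) can be sketched with the three model stages injected as callables. All names here are hypothetical; the paper does not publish this interface.

```python
# Illustrative pipeline skeleton only: motion_model, lip_sync_model, and
# sr_model are hypothetical stand-ins for the framework's trained models.

def animate(photo, driving_video, audio, motion_model, lip_sync_model, sr_model):
    """Image-animation pipeline: keep the photo's identity, transfer motion
    from the driving video, sync the mouth to the audio, then upscale."""
    frames = motion_model(photo, driving_video)   # identity from photo, motion from video
    frames = lip_sync_model(frames, audio)        # align lip movement with the audio
    return [sr_model(f) for f in frames]          # super-resolution on each frame
```

Keeping the stages separable like this is what lets a super-resolution model be bolted on to fix the output-quality problem the abstract describes.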

A Study on the Comparative Analysis of Overseas Medical Care Video and Domestic Medical Care Video (해외 의료케어 전문 영상과 국내 의료케어 영상 비교분석에 관한 연구)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.415-420
    • /
    • 2021
  • As the medical care field develops from various angles, analysis of medical promotional videos carries real weight: it matters for competitiveness, and in an era of accelerating AI systems, medical care is a leading field. Accordingly, videos for publicity, advertising, and product explanation are very important, and they are also a key means of shaping a company's image. This study compares the design characteristics and differences of professional videos of AI medical brands from two major foreign companies (Stryker and Hill-rom) and one leading domestic company (Nine Bell), with detailed part-by-part and section-by-section analysis. As a technical analysis of the video editing, the transition methods and infographic graphics were examined. In an in-depth comparison, points such as differences in image tone and color harmony across the AI medical videos were analyzed and compared. For a detailed analysis of the video imagery, we compared the differentiated elements appearing in the promotional design and specific scenes of each video's intro and product-description parts.

The Design and Implementation of Virtual Studio

  • Sul, Chang-Whan;Wohn, Kwang-Yoen
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06b
    • /
    • pp.83-87
    • /
    • 1996
  • A virtual reality system using video images is designed and implemented. A participant with 2.5 DOF can interact with computer-generated virtual objects using her/his full-body posture and gestures in the 3D virtual environment. The system extracts the necessary participant-related information by video-based sensing, and simulates realistic interaction such as collision detection in the virtual environment. The resulting scene, obtained by compositing the video image of the participant with the virtual environment, is updated in near real time.
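The collision-detection step mentioned above can be sketched by reducing the participant's video-sensed silhouette to a bounding box and testing it against the virtual object's box. Axis-aligned boxes as `(x, y, w, h)` are an illustrative simplification, not the paper's method.

```python
# Minimal axis-aligned bounding-box overlap test; boxes are (x, y, w, h).

def boxes_collide(a, b):
    """True if the two axis-aligned boxes overlap in both axes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

participant = (0, 0, 10, 10)   # box around the sensed silhouette
virtual_obj = (5, 5, 10, 10)
print(boxes_collide(participant, virtual_obj))  # prints True
```

Running such a test each frame against the sensed posture is what keeps the interaction responsive in near real time.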


A Study on the Development of Remote Diagnosis System for Nerve Conduction

  • Kim, Jong-Weon
    • Journal of Electrical Engineering and Information Science
    • /
    • v.3 no.3
    • /
    • pp.286-291
    • /
    • 1998
  • A remote measurement system for nerve conduction has been developed to aid patients with spinal cord injuries from accidents. Existing cooperation between rescuers and doctors can be supported through the introduction of multimedia desktop video conferencing. Such facilities provide several advantages over conventional video conferencing; in particular, patients may feel at ease because they can see a doctor through the video conferencing facilities. This paper describes the system implementation and evaluation. The author considers network capability and image data handling, and introduces a method to transmit image data for this system.


Automatic Jitter Evaluation Method from Video using Optical Flow (Optical Flow를 사용한 동영상의 흔들림 자동 평가 방법)

  • Baek, Sang Hyune;Hwang, WonJun
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1236-1247
    • /
    • 2017
  • In this paper, we propose a method for evaluating uncomfortable shake in video. When video is shot with a handheld device such as a smartphone, most of the footage contains unwanted shake, largely caused by hand tremor during shooting, and many methods for correcting it automatically have been proposed. Comparing these stabilization methods requires evaluating their correction performance, but since there is no standardized evaluation method, each stabilization method proposes its own, making objective comparison difficult. We therefore propose a method for objectively evaluating video shake: the video is analyzed automatically to find how much shake it contains and how strongly the shake is concentrated at specific times. To measure the shaking index, we propose a jitter model, and we apply an optical-flow-based implementation to real videos to measure shaking frequency automatically. Finally, we analyze how the shaking indices change after applying three different image stabilization methods to nine sample videos.
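A jitter index of this kind can be sketched from the per-frame global motion that optical flow provides: a smooth intentional pan has near-zero acceleration, while hand shake shows up as large frame-to-frame changes in motion. The mean magnitude of the second difference of the motion trajectory is one simple such score; this is an illustrative model, not the paper's exact one.

```python
# Illustrative jitter score: dx is the per-frame global motion along one axis,
# as would be obtained by averaging the optical flow over each frame pair.

def jitter_index(dx):
    """Mean |acceleration| of the motion trajectory: 0 for constant-velocity
    pans, large for hand shake."""
    accel = [dx[i + 1] - 2 * dx[i] + dx[i - 1] for i in range(1, len(dx) - 1)]
    return sum(abs(a) for a in accel) / len(accel)

smooth_pan = [2.0] * 10  # constant-velocity pan: no shake
shaky = [2.0, -1.5, 3.0, -2.0, 2.5, -1.0, 2.0, -2.5, 3.0, -1.0]
print(jitter_index(smooth_pan))                          # prints 0.0
print(jitter_index(shaky) > jitter_index(smooth_pan))    # prints True
```

Localizing where the score spikes along the timeline would give the "concentration at a specific time" measurement the abstract describes.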

Similar Video Detection Method with Summarized Video Image and PCA (요약 비디오 영상과 PCA를 이용한 유사비디오 검출 기법)

  • Yoo, Jae-Man;Kim, Woo-Saeng
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.8
    • /
    • pp.1134-1141
    • /
    • 2005
  • With the ever-growing popularity of video web publishing, popular content is being compressed, reformatted, and modified, resulting in excessive content duplication. Such duplication degrades search speed and retrieval accuracy, although duplicates on other sites can serve as alternatives when a specific site has problems. This paper proposes an efficient method for retrieving similar video data in a large database. In this research, we compare summarized video images instead of the raw video data, and detect similar videos by clustering low-dimensional feature vectors obtained through PCA (principal component analysis). Our experiments show that the proposed method is efficient and accurate.
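The retrieval idea can be sketched in pure Python: each video is summarized into one small feature vector (e.g. a downsampled average frame), PCA reduces the vectors, and near-duplicates land close together in the reduced space. A single principal component computed by power iteration stands in for the paper's full PCA here; this is an illustration under that assumption.

```python
# Illustrative sketch: one principal component via power iteration, then
# near-duplicate grouping by distance along it. Real systems keep more
# components and use proper clustering.

def first_pc(X, iters=50):
    """Leading principal component of the rows of X (mean-centered),
    plus each row's projection (score) onto it."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        # One power-iteration step on the covariance: w = C^T (C v).
        Cv = [sum(c * u for c, u in zip(row, v)) for row in C]
        w = [sum(C[i][j] * Cv[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    scores = [sum(c * u for c, u in zip(row, v)) for row in C]
    return v, scores

def similar_pairs(scores, tol=0.5):
    """Index pairs whose PCA scores are within tol: near-duplicate candidates."""
    return [(i, j) for i in range(len(scores)) for j in range(i + 1, len(scores))
            if abs(scores[i] - scores[j]) < tol]

# Videos 0 and 1 are near-duplicates; video 2 is distinct:
X = [[1.0, 1.0, 1.0], [1.1, 0.9, 1.0], [5.0, 5.0, 5.0]]
_, scores = first_pc(X)
print(similar_pairs(scores))  # prints [(0, 1)]
```

Comparing in this reduced space is what makes the method cheaper than matching raw video data.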


Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are used to extract temporal-domain and spatial-domain facial features from the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, and a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
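The fusion and classification steps can be sketched in isolation: the spatial features (from a static frame) and temporal features (from optical flow) are fused by elementwise multiplication, and the fused vector is scored by a linear classifier. The feature vectors, weights, and bias below are hypothetical stand-ins for the trained CNNs and SVM.

```python
# Illustrative sketch of multiplicative feature fusion followed by a linear
# decision function; all numeric values here are made up for demonstration.

def fuse(spatial, temporal):
    """Multiplicative fusion of two equal-length feature vectors."""
    return [s * t for s, t in zip(spatial, temporal)]

def linear_svm_decision(features, weights, bias):
    """Decision value of a (pre-trained) linear SVM: w . x + b."""
    return sum(f * w for f, w in zip(features, weights)) + bias

spatial_feat = [0.2, 0.8, 0.5]   # e.g. from the spatial CNN
temporal_feat = [0.9, 0.1, 0.6]  # e.g. from the temporal (optical-flow) CNN
fused = fuse(spatial_feat, temporal_feat)
score = linear_svm_decision(fused, [1.0, 1.0, 1.0], -0.5)
```

Multiplicative fusion emphasizes dimensions where both streams respond strongly, which is one rationale for choosing it over simple concatenation.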