• Title/Summary/Keyword: Image-to-Video

A Design of a Method for Determining Direction of Moving Vehicle using Image Information (영상정보를 이용한 차량 이동 방향 결정 기법의 설계)

  • Moon, Hye-Young;Kim, Jin-Deog;Yu, Yun-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.95-97 / 2010
  • Recently, CAN and MOST network technologies have been introduced into vehicles to control many electronic devices and to provide entertainment services. Many interconnected devices operate on the MOST network, which has a ring topology, such as CD-ROM (DVD) players, amplifiers, video cameras, video displays, and GPS navigation units. In this paper, the input image from a camera on the MOST network is used to determine the movement direction of the vehicle. Even when position information is received from GPS, it is difficult to directly determine the direction of a moving vehicle in certain areas, such as parallel road structures. This paper designs and implements a method to determine the vehicle's direction by real-time matching between the camera image and object images stored in an image database.
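
The matching step above can be sketched as an exhaustive normalized cross-correlation search. This is a toy illustration only: the paper does not specify its matching pipeline, so `best_match_offset` and the brute-force search are assumptions.

```python
import numpy as np

def best_match_offset(frame, template):
    """Slide `template` over `frame` and return the offset with the
    highest normalized cross-correlation score (a stand-in for matching
    a camera frame against object images from an image DB)."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_xy = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

A real-time system would use a faster matcher (e.g., FFT-based correlation or feature descriptors), but the scoring idea is the same.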

A Heuristic Approach for Simulation of time-course Visual Adaptation for High Dynamic Image Streams

  • Kelvin, Bwalya;Yang, Seung-Ji;Choi, Jong-Soo;Park, Soo-Jun;Ro, Yong-Man
    • Annual Conference of KIPS / 2007.05a / pp.285-288 / 2007
  • There is substantial evidence from earlier research that older adults have difficulty seeing under low illumination and at night, even in the absence of ocular disease. During human aging, there is a marked decrease in rod/cone-mediated adaptation, caused by delayed rhodopsin regeneration and pigment depletion. This calls for appropriate visual gadgets to effectively aid the aging generation. Our research builds on Pattanaik's model, extending it with temporal visual filtering to simulate the reduction of visual response that comes with age. Our filtering model lays a foundation for future research toward a more effective adaptation model that may be used to develop visual content adaptation aids and guidelines in the MPEG-21 environment. We demonstrate our visual model using a High Dynamic Range image, and the experimental results are consistent with psychophysical data from previous vision research.

Accurate Segmentation Algorithm of Video Dynamic Background Image Based on Improved Wavelet Transform

  • Ming, Ming
    • Journal of Information Processing Systems / v.18 no.5 / pp.711-718 / 2022
  • In this paper, an accurate segmentation algorithm for video dynamic background images (VDBI) based on an improved wavelet transform is proposed. After smoothing the VDBI, the traditional wavelet transform process is improved, and a two-layer decomposition of the dynamic image is realized using a two-dimensional wavelet transform. Based on the decomposition results and information-enhancement processing, image features are detected, feature points are extracted, and a quantum ant colony algorithm is adopted to complete accurate segmentation of the image. The maximum SNR of the proposed algorithm's output reaches 73.67 dB, the maximum segmentation time is only 7 seconds, and the segmentation accuracy first decreases and then increases, with a global maximum of 97%, indicating that the proposed algorithm meets its design expectations.
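
The two-layer decomposition step can be sketched with a plain Haar wavelet; the paper's improved transform is not specified here, so this is a minimal baseline under that assumption.

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar decomposition: returns the approximation
    subband LL and the detail subbands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def two_layer_decomposition(img):
    """Two-layer decomposition: transform the image, then transform
    the resulting LL subband again."""
    LL1, details1 = haar2d(img)
    LL2, details2 = haar2d(LL1)
    return LL2, details2, details1
```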

Enhanced Image Encryption Scheme using Context Adaptive Variable Length Coding (적응 산술 부호화를 이용한 고화질 영상 암호화 전략)

  • Shim, Gab-Yong;Lee, Malrey
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.3 / pp.119-126 / 2013
  • To achieve real-time encryption and video data transcoding, current video encryption methods usually integrate the encryption algorithm with the video compression process. This paper discusses video encryption technology, encrypting video data to prevent unauthorized access. It studies H.264 entropy coding and proposes a CAVLC video encryption scheme combined with the entropy coding process of H.264. Three encryption levels are proposed. In addition, a scrambling method is proposed that makes the encrypted frames more robust against cracking. The proposed method shows more robust video data encryption and a better compression rate.
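
The scrambling idea can be illustrated with a key-driven permutation of coefficients. This is a generic sketch only, not the paper's CAVLC-integrated scheme; `scramble` and `unscramble` are assumed names.

```python
import random

def scramble(coeffs, key):
    """Permute a list of coefficients with a key-seeded PRNG; returns
    the scrambled list and the permutation needed to invert it."""
    idx = list(range(len(coeffs)))
    random.Random(key).shuffle(idx)
    return [coeffs[i] for i in idx], idx

def unscramble(scrambled, idx):
    """Invert the permutation produced by scramble()."""
    out = [0] * len(scrambled)
    for pos, i in enumerate(idx):
        out[i] = scrambled[pos]
    return out
```

Because a permutation only reorders values, such scrambling preserves the coefficient statistics and thus has little impact on compression rate, which is why it pairs well with entropy-coding-level encryption.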

An Efficient Requantization for Transcoding of MPEG Video

  • Hwang, Hee-Chul;Kim, Duk-Gyoo
    • Proceedings of the IEEK Conference / 2002.07b / pp.1023-1026 / 2002
  • In this paper, we propose an efficient transcoding of MPEG video. Transcoding is the process of converting one compressed video format into another. We propose simple and efficient transcoding by requantization, in which MPEG video coded at a high bit rate is converted into an MPEG bitstream at a lower bit rate. To reduce image quality degradation, we exploit a property of the Human Visual System (HVS): the visibility of noise is lower in high-activity regions than in low-activity regions. Using this effect, parts of the image in high-activity regions are coarsely quantized without seriously degrading the image quality. Experimental results show that the proposed method provides good performance.
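
The HVS-guided requantization can be sketched as follows. The activity measure, threshold, and scaling factor are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def requantize_block(block, base_step):
    """Requantize an 8x8 DCT coefficient block with a step size scaled
    by local activity: busy blocks (large AC energy) tolerate a coarser
    step because HVS masking hides the noise; smooth blocks keep the
    fine base step."""
    # AC activity: sum of magnitudes excluding the DC row/column
    activity = np.abs(block[1:, :]).sum() + np.abs(block[:, 1:]).sum()
    scale = 2.0 if activity > 100 else 1.0   # threshold is illustrative
    step = base_step * scale
    return np.round(block / step) * step
```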

Study on 3 DoF Image and Video Stitching Using Sensed Data

  • Kim, Minwoo;Chun, Jonghoon;Kim, Sang-Kyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4527-4548 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data from inertial sensors to enhance the stitching results. The challenge of image stitching increases when the images are taken by two different mobile phones with no posture calibration. Using the inertial sensor data obtained by the mobile phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. The stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) of the conventional feature extraction algorithms is reported, with and without the inertial sensor data. In addition, the stitching accuracy on video data was improved using the same sensed data, with discrete calculation of the homography matrix. Experimental results for stitching accuracy and speed using the sensed data are presented.
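
For a purely rotating camera, a yaw/pitch/roll adjustment corresponds to the homography H = K R K⁻¹, which is one way the sensed angles could pre-align images before stitching. The rotation order and intrinsics below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def rotation_homography(K, yaw, pitch, roll):
    """Homography between two views of a purely rotating camera with
    intrinsic matrix K: H = K R K^(-1). Angles are in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    R = Ry @ Rx @ Rz
    return K @ R @ np.linalg.inv(K)
```

Warping one image by this H before feature matching reduces the geometric gap the matcher must bridge, which is consistent with the reported gains from using sensed data.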

Design of a High-Resolution LED Display Image Control System Using Dual Scanning (듀얼 스캐닝을 이용한 고해상 LED 전광판 영상제어장치설계)

  • Ha, Young-Jea;Kim, In-Jea;Kim, Sun-Hyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.7 / pp.1415-1422 / 2011
  • In this paper, a dual-scanning control method is proposed for the efficient expression of resolution on full-color LED billboards, converting the fixed-pixel video signal into a pixel dot pattern on the LED display. Using DICT (Dynamic Image Correction Technology), the main controller applies histogram equalization according to the video information so that the gray-scale values of the image are uniformly distributed, dynamically improving image quality. With a dual auto-scan input video switching controller board controlling the pixels in the LED module, the display physically expresses four times the pixel-dot resolution of existing LED boards. The proposed technique was verified through testing.
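
The histogram-equalization step can be sketched for an 8-bit grayscale image. This is a generic implementation, not the DICT controller's exact procedure.

```python
import numpy as np

def equalize(gray):
    """Histogram equalization for an 8-bit grayscale image: map each
    level through the normalized cumulative histogram so gray values
    spread more uniformly over 0..255."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]
```

A hardware controller would typically realize the same mapping as a per-frame lookup table applied in the video path.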

A Study on the Interactive Effect of Spoken Words and Imagery not Synchronized in Multimedia Surrogates for Video Gisting (비디오 의미 파악을 위한 멀티미디어 요약의 비동시적 오디오와 이미지 정보간의 상호 작용 효과 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science / v.45 no.2 / pp.97-118 / 2011
  • The study examines the interactive effect of spoken words and imagery not synchronized in audio/image surrogates for video gisting. To do that, we conducted an experiment with 64 participants, under the assumption that participants would better understand the content of videos when viewing audio/image surrogates rather than audio or image surrogates. The results of the experiment showed that overall audio/image surrogates were better than audio or image surrogates for video gisting, although the unsynchronized multimedia surrogates made it difficult for some participants to pay attention to both audio and image when the content they present is very different.

Development of a CMOS Sensor-Based Portable Video Scope and Its Image Processing Application (CMOS 센서를 이용한 휴대용 비디오스코프 및 영상처리 응용환경 개발)

  • 김상진;김기만;강진영;김영욱;백준기
    • Proceedings of the IEEK Conference / 2003.11a / pp.517-520 / 2003
  • Commercial video scopes use a CCD sensor and a frame grabber for image capture and A/D interfacing, but their applications are limited by input resolution and high cost. In this paper we introduce a portable video scope using a CMOS sensor, a USB pen, and a tuner card (a low-cost frame grabber) in place of the commercial CCD sensor and frame grabber. Our video scope serves as an essential link between advancing commercial technology and research, providing cost-effective solutions for educational, engineering, and medical applications across an entire spectrum of needs. The software was implemented using DirectShow in the second version, after initial trials with VFW (Video for Windows) gave a very low frame rate. Our video scope operates on Windows 98, ME, 2000, and XP. Its drawback is a crossover problem in the output images caused by interpolation, which must be rectified for more efficient performance.

A Comparative Study of Local Features in Face-based Video Retrieval

  • Zhou, Juan;Huang, Lan
    • Journal of Computing Science and Engineering / v.11 no.1 / pp.24-31 / 2017
  • Face-based video retrieval has become an active and important branch of intelligent video analysis. Face profiling and matching is a fundamental step and is crucial to the effectiveness of video retrieval. Although many algorithms have been developed for processing static face images, their effectiveness in face-based video retrieval is still unknown, simply because videos have different resolutions, faces vary in scale, and different lighting conditions and angles are used. In this paper, we combined content-based and semantic-based image analysis techniques, and systematically evaluated four mainstream local features to represent face images in the video retrieval task: Harris operators, SIFT and SURF descriptors, and eigenfaces. Results of ten independent runs of 10-fold cross-validation on datasets consisting of TED (Technology Entertainment Design) talk videos showed the effectiveness of our approach, where the SIFT descriptors achieved an average F-score of 0.725 in video retrieval and thus were the most effective, while the SURF descriptors were computed in 0.3 seconds per image on average and were the most efficient in most cases.
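
The reported F-score of 0.725 is an instance of the standard F-measure over precision and recall; a small sketch follows (the averaging over ten independent runs is not shown).

```python
def f_score(precision, recall, beta=1.0):
    """F-measure comparing retrieval effectiveness; with beta = 1 this
    is the balanced F1 score, the harmonic mean of precision and
    recall."""
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```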