• Title/Summary/Keyword: Image-to-Video

Search Results: 2,715

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2053-2067
    • /
    • 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses the IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. The LIVE image database is used to test this method, and the experiments show that it is effective, so the method is extended to video quality assessment: the quality of every frame is first predicted, and the relationships between frames are then analyzed with a hysteresis function and different window functions to improve the accuracy of the video quality score. (ii) The second method combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) network. The spatial features of the video frames are extracted by the CNN, the temporal features by the GRU, and the extracted spatio-temporal features are passed through a fully connected layer to obtain the video quality assessment score. All of the proposed methods are verified on video databases and compared with other methods.
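
As an illustration of method (ii), here is a minimal sketch of a CNN + GRU quality regressor, assuming PyTorch; the layer sizes, input resolution, and hyperparameters are hypothetical, not taken from the paper.

```python
# Hypothetical CNN + GRU video quality sketch (PyTorch assumed).
import torch
import torch.nn as nn

class CnnGruVQA(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame spatial feature extractor (illustrative, not the paper's exact CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal aggregation over the frame sequence.
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Fully connected head producing one quality score per video.
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, video):                 # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)          # (B*T, 3, H, W)
        feats = self.cnn(frames).view(b, t, -1)
        _, h_n = self.gru(feats)              # h_n: (1, B, hidden_dim)
        return self.fc(h_n.squeeze(0))        # (B, 1) predicted quality

# Example call with random frames:
score = CnnGruVQA()(torch.rand(2, 16, 3, 112, 112))
```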

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.2
    • /
    • pp.165-172
    • /
    • 2020
  • There is growing interest in vision-based video image matching owing to the rapid development of unmanned systems. The purpose of this paper is to develop a fast and effective matching technique for oblique UAV video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improve the computational efficiency of the NCC algorithm using integral images. We then develop a triangulation-based outlier removal algorithm to extract more robust matching points from the initial set. To evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrate that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for the various video-based applications that require image matching as a pre-processing step.
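
For context on the initial matching step, here is a hedged sketch of NCC window matching using OpenCV's matchTemplate; it is not the paper's integral-image implementation, and the window/search sizes are assumptions.

```python
# Illustrative NCC point matching between two grayscale frames (OpenCV assumed).
import cv2
import numpy as np

def ncc_match(img_a, img_b, pt, win=15, search=40):
    """Find the best NCC match in img_b for a window centered at pt in img_a.
    Assumes pt lies well inside both images."""
    x, y = pt
    tpl = img_a[y - win:y + win + 1, x - win:x + win + 1]
    x0, y0 = max(x - search, 0), max(y - search, 0)
    roi = img_b[y0:y0 + 2 * search + 1, x0:x0 + 2 * search + 1]
    # Zero-mean normalized cross-correlation over the search region.
    res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return (x0 + loc[0] + win, y0 + loc[1] + win), score

# Usage: match_pt, score = ncc_match(frame1, frame2, (320, 240))
```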

A VIDEO GEOGRAPHIC INFORMATION SYSTEM FOR SUPPORTING BI-DIRECTIONAL SEARCH FOR VIDEO DATA AND GEOGRAPHIC INFORMATION

  • Yoo, Jea-Jun;Joo, In-Hak;Park, Jong-Huyn;Lee, Jong-Hun
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.151-156
    • /
    • 2002
  • Recently, as geographic information systems (GIS), which search and manage geographic information, have come into wider use, there have been growing requests for systems that can search and display more actual and realistic information. In response, video geographic information systems, which connect video data captured by cameras with geographic information and display that video data, have become more popular. However, because most existing video geographic information systems treat video data as an attribute of geographic information, or use simple one-way links from geographic information to video data, they support only the display of video data reached through a search of geographic information. In this paper, we design and implement a video geographic information system that connects video data with geographic information and supports bi-directional search: searching geographic information through video data and searching video data through geographic information. To do this, we 1) propose an ER data model to represent the connection information relating video data and geographic information, 2) propose a process to extract and construct this connection information from video data and geographic information, and 3) present a component-based system architecture for the video geographic information system.
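
To make the bi-directional link idea concrete, here is an illustrative sketch (not the paper's ER model) of a link table that supports both search directions; the entity and field names are hypothetical.

```python
# Illustrative data-model sketch: video segments and geographic features joined
# by a link table, so either side can be reached from the other.
from dataclasses import dataclass

@dataclass
class VideoSegment:
    video_id: str
    start_frame: int
    end_frame: int

@dataclass
class GeoFeature:
    feature_id: str
    name: str
    lon: float
    lat: float

@dataclass
class Link:                      # connection information extracted offline
    video_id: str
    frame: int
    feature_id: str

links = [Link("v01", 120, "road_17"), Link("v01", 450, "bridge_03")]

def videos_for_feature(feature_id):
    """Geographic information -> video data."""
    return [(l.video_id, l.frame) for l in links if l.feature_id == feature_id]

def features_for_video(video_id, frame):
    """Video data -> geographic information."""
    return [l.feature_id for l in links if l.video_id == video_id and l.frame == frame]
```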


FEASIBILITY ON GENERATING STEREO MOSAIC IMAGE

  • Noh, Myoung-Jong;Lee, Sung-Hun;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.201-204
    • /
    • 2005
  • Recently, the generation of panoramic images and high-quality mosaic images from video sequences has been attempted in a variety of investigations. Among these, this paper focuses on generating left and right stereo mosaic images from airborne video sequences. The stereo mosaic is created by building left and right mosaic images from front and rear slits that have different viewing angles in consecutive video frames. The generation of a stereo mosaic image proposed in this paper consists of several processes: camera parameter estimation for each video frame, rectification, slicing, motion parallax elimination, and image mosaicking. It is, however, necessary to check the feasibility of generating a stereo mosaic image with these processes. Therefore, in this paper, we performed a feasibility test on generating stereo mosaic images from video frames: an anaglyphic image was generated from the stereo mosaic images and examined for the feasibility check.
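
As a small illustration of the final feasibility-check step only, the following sketch composes a red-cyan anaglyph from already-generated left and right mosaics; it assumes aligned, same-size RGB arrays and is not the paper's full pipeline.

```python
# Hedged sketch: red-cyan anaglyph composition from aligned left/right mosaics.
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Red channel from the left mosaic, green/blue from the right mosaic."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]   # assumes RGB channel order
    return anaglyph

# Usage: anaglyph = make_anaglyph(left_mosaic, right_mosaic)
```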


Stereoscopic Conversion of Object-based MPEG-4 Video (객체 기반 MPEG-4 동영상의 입체 변환)

  • 박상훈;김만배;손현식
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2407-2410
    • /
    • 2003
  • In this paper, we propose a new stereoscopic video conversion methodology that converts two-dimensional (2-D) MPEG-4 video to stereoscopic video. In MPEG-4, each image is composed of a background object and a primary object. In the first step of the conversion methodology, the camera motion type is determined for stereo image generation. In the second step, object-based stereo image generation is carried out. The background object uses a current image and a delayed image for its stereo image generation, whereas the primary object uses a current image and its horizontally shifted version to avoid the vertical parallax that could otherwise occur. Furthermore, an uncovered region filling algorithm (URFA) is applied to the uncovered region that may be created after the stereo image generation of the primary object. In our experiment, we show an MPEG-4 test video and its stereoscopic version produced by our proposed methodology, and analyze the results.
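
The object-wise stereo pair construction can be sketched as follows; this is only an interpretation of the described steps, assuming a binary mask for the primary object, a buffered previous frame, and an arbitrary disparity value, with the URFA filling step left as a placeholder.

```python
# Hedged sketch of the object-wise stereo pair idea (illustrative only).
import numpy as np

def stereo_pair(curr, prev, primary_mask, shift=8):
    """curr/prev: HxWx3 frames; primary_mask: HxW bool; shift: horizontal disparity in px."""
    left = curr
    right = prev.copy()                       # background: delayed frame
    shifted = np.roll(curr, -shift, axis=1)   # primary object: horizontal shift only
    shifted_mask = np.roll(primary_mask, -shift, axis=1)
    right[shifted_mask] = shifted[shifted_mask]
    # Uncovered pixels vacated by the shift would be filled by URFA in the paper;
    # here they simply keep the delayed-frame background as a placeholder.
    return left, right
```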


A Study on <Paik-Abe Video Synthesizer> in the Context of Audiovisual Art (<백-아베 비디오 신디사이저>의 오디오 비주얼아트적 고찰)

  • Yoon, Ji Won
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.4
    • /
    • pp.615-624
    • /
    • 2020
  • By enabling musicians to freely control the elements involved in sound production and tone generation across a variety of timbres, synthesizers have revolutionized and permanently changed music since the 1960s. The Paik-Abe Video Synthesizer, a masterpiece by video art maestro Nam June Paik, is a prominent example of reinterpreting this new musical instrument in the realm of video and audio. This article examines the Paik-Abe Video Synthesizer as an innovative instrument for playing video from the perspective of audiovisual art, and establishes its aesthetic value and significance through both artistic and technical analysis. The instrument, which embodied the concepts of image sampling and real-time interactive video as an image-based multi-channel music production tool, contributed to establishing a new relationship between sound and image within audiovisual art. The fact that the video synthesizer not only adds image to sound, but also presents a complete fusion of image and sound as an image instrument with musical characteristics, makes it highly meaningful in this age of synesthesia.

Virtual Contamination Lane Image and Video Generation Method for the Performance Evaluation of the Lane Departure Warning System (차선 이탈 경고 시스템의 성능 검증을 위한 가상의 오염 차선 이미지 및 비디오 생성 방법)

  • Kwak, Jae-Ho;Kim, Whoi-Yul
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.24 no.6
    • /
    • pp.627-634
    • /
    • 2016
  • In this paper, an augmented video generation method to evaluate the performance of a lane departure warning system is proposed. In our system, the input is a video of a road scene with ordinary clean lanes, and the output video has the same content except that the lanes are synthesized with contamination. Two approaches are used to synthesize the contaminated lane image: example-based image synthesis and background-based image synthesis. Example-based synthesis assumes a situation in which contamination is applied on top of the lane, while background-based synthesis handles the situation in which the lane is partly erased due to aging. A new contamination pattern generation method using a Gaussian function is also proposed in order to produce contamination of various shapes and sizes. The contaminated lane video is then generated by shifting the synthesized image by an empirically obtained lane movement amount. Our experiments show that the similarity between the generated contaminated lane images and real lane images is over 90%. Furthermore, we verify the reliability of the video generated by the proposed method through an analysis of the change in lane recognition rate: the recognition rate on the generated video is very similar to that on real contaminated lane video.
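
The Gaussian contamination pattern can be sketched roughly as below; the blob parameters, dirt color, and blending are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: a 2-D Gaussian "contamination" blob blended onto a lane image.
import numpy as np

def gaussian_blob(h, w, cy, cx, sigma_y, sigma_x):
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-(((y - cy) ** 2) / (2 * sigma_y ** 2) +
                    ((x - cx) ** 2) / (2 * sigma_x ** 2)))

def contaminate(lane_img, cy, cx, sigma_y=20, sigma_x=40, color=(90, 80, 70)):
    """Blend a dirt-colored blob into lane_img (HxWx3 uint8) using a Gaussian alpha map."""
    h, w = lane_img.shape[:2]
    alpha = gaussian_blob(h, w, cy, cx, sigma_y, sigma_x)[..., None]
    dirt = np.empty_like(lane_img, dtype=np.float64)
    dirt[:, :] = color
    out = (1 - alpha) * lane_img + alpha * dirt
    return out.astype(np.uint8)
```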

Efficient Image Size Selection for MPEG Video-based Point Cloud Compression

  • Jia, Qiong;Lee, M.K.;Dong, Tianyu;Kim, Kyu Tae;Jang, Euee S.
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.825-828
    • /
    • 2022
  • In this paper, we propose an efficient image size selection method for video-based point cloud compression. The current MPEG video-based point cloud compression reference encoding process sets a threshold on the size of the images produced while converting point cloud data into images. Because the converted images are compressed and restored by a legacy video codec, the image size is one of the main factors influencing compression efficiency. If the image size can be made smaller than the size determined by the threshold, compression efficiency can be improved. We therefore studied how to improve compression efficiency by selecting the best-fit image size generated during video-based point cloud compression. Experimental results show that the proposed method can reduce the encoding time by 6 percent without loss of coding performance compared to test model version 15.0 of the video-based point cloud encoder.
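
A hedged sketch of the general best-fit idea (not the V-PCC reference code): shrink the atlas image to the smallest block-aligned size that still contains all packed patches. The patch placements, block size, and minimum dimensions below are hypothetical.

```python
# Illustrative best-fit image size selection for packed patches.
def best_fit_image_size(patches, block=16, min_w=64, min_h=64):
    """patches: iterable of (u0, v0, width, height) placements in pixels."""
    max_u = max(u + w for u, v, w, h in patches)
    max_v = max(v + h for u, v, w, h in patches)
    align = lambda x, m: ((x + m - 1) // m) * m   # round up to a block multiple
    return max(align(max_u, block), min_w), max(align(max_v, block), min_h)

# Usage: width, height = best_fit_image_size([(0, 0, 128, 96), (128, 0, 64, 200)])
```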


The Impact of Video Quality and Image Size on the Effectiveness of Online Video Advertising on YouTube

  • Moon, Jang Ho
    • International Journal of Contents
    • /
    • v.10 no.4
    • /
    • pp.23-29
    • /
    • 2014
  • Online video advertising is now an increasingly important tool for marketers to reach and connect with their consumers. The purpose of this study was to empirically investigate the impact of video format on online video advertising. More specifically, this study explored whether online video quality and image size influence viewer responses to online video advertising. The results of an experimental study conducted on YouTube suggested that enhanced video quality may have an important impact on the effectiveness of online advertising, and that the concept of presence is key to understanding the effects of enhanced video quality in online advertising.

Analysis on the Video Image Effects in <Chang Hen Ge>, China's Performing Arts Work of Cultural Tourism (중국의 문화관광 공연작품 <장한가>에 나타난 영상이미지 효과 분석)

  • Yook, Jung-Hak
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.6
    • /
    • pp.77-85
    • /
    • 2013
  • This study analyzes the effects that video imagery has on <Chang Hen Ge>, staged in Xi'an (Seo-an) and billed as China's first large-scale historic dance drama; it focuses on investigating which video images are used to achieve these effects in presenting the specific themes and materials of <Chang Hen Ge>. 'Image' here means a 'reflection of an object', as in film, television, and so on, and its coverage is extensive. The root of the word 'image' is the Latin imitari, signifying a concrete, mental, and visual representation. In other words, 'video image' can be considered a combination of the two related words 'video' and 'image'. Video is not merely the sum of traditional art genres, such as literary value, theatrical qualities, and the artistry of a scenario, but a whole product that integrates the original functions of all kinds of art and connects them with humans' subtle creation of images. The effects of video image represented in <Chang Hen Ge> are as follows: first, the expressive effect of connotative meaning, reflecting the spirit of the age and its culture; second, imaginary identification; third, scene transformation; fourth, dramatic interest through immersion; and finally, a visual effect achieved through the scale of the performance.