• Title/Summary/Keyword: Sequence images


A Study on Adaptive Information Hiding Technique for Copyright Protection of Digital Images (디지털 영상물의 저작권 보호를 위한 적응적 정보 은닉 기술에 관한 연구)

  • Park, Kang-Seo;Chung, Tae-Yun;Oh, Sang-Rok;Park, Sang-Hee
    • Proceedings of the KIEE Conference / 1998.07g / pp.2427-2429 / 1998
  • Digital watermarking is a technique that embeds an invisible signal into multimedia data such as audio, video, and images for copyright protection, including owner identification and copy-control information. This paper proposes a new watermark embedding and extraction technique that extends the direct-sequence spread-spectrum technique. The proposed technique approximates the frequency content of pixels in the spatial domain using a Laplacian mask and embeds the watermark adaptively, taking the HVS into account to reduce image degradation. In the extraction process, the technique strengthens the high-frequency components of the image and extracts the watermark by demodulation. All of these processes are performed in the spatial domain to reduce processing time.

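As a rough illustration of the adaptive spread-spectrum idea described in this abstract, the sketch below embeds watermark bits with pseudo-random ±1 planes whose amplitude follows the local Laplacian response. The function names, parameters, and the non-blind correlation detector are illustrative assumptions, not the paper's exact algorithm (the paper extracts the mark without the original image by high-pass filtering).

```python
import numpy as np

def embed_watermark(image, bits, key=0, strength=2.0):
    """Spread-spectrum watermark embedding in the spatial domain.

    Each bit is spread over the image with a key-seeded pseudo-random
    +/-1 plane, and the embedding amplitude is scaled by the local
    Laplacian magnitude so more energy goes into busy regions where
    the human visual system is less sensitive.
    """
    img = image.astype(np.float64)
    # 3x3 Laplacian as a cheap spatial estimate of local frequency content
    lap = np.abs(
        -4 * img
        + np.roll(img, 1, 0) + np.roll(img, -1, 0)
        + np.roll(img, 1, 1) + np.roll(img, -1, 1)
    )
    alpha = strength * lap / (lap.max() + 1e-9)      # adaptive gain per pixel
    rng = np.random.default_rng(key)
    watermarked = img.copy()
    for b in bits:                                    # one PN plane per bit
        pn = rng.choice([-1.0, 1.0], size=img.shape)
        watermarked += (1 if b else -1) * alpha * pn
    return np.clip(watermarked, 0, 255).astype(np.uint8)

def extract_bits(watermarked, original, n_bits, key=0):
    """Non-blind correlation detector: regenerate the same PN planes and
    read each bit from the sign of the correlation with the residual."""
    residual = watermarked.astype(np.float64) - original.astype(np.float64)
    rng = np.random.default_rng(key)
    return [int(np.sum(residual * rng.choice([-1.0, 1.0], size=residual.shape)) > 0)
            for _ in range(n_bits)]
```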

The Role of Dynamic Contrast Enhanced MR Mammography in Differentiation between Benign and Malignant Breast Lesions

  • 한송이;차은숙;정상설;김학희;변재영;이재문
    • Proceedings of the KSMRM Conference / 2002.11a / pp.135-135 / 2002
  • Purpose: To assess the diagnostic accuracy of dynamic contrast-enhanced MR mammography in differentiating between benign and malignant lesions. Materials and methods: Ninety-three patients with suspicious mammographic, sonographic, or palpable findings underwent pre- or postoperative contrast-enhanced MR imaging of the breast using a three-dimensional fast low-angle shot (3D FLASH) sequence (16/4 msec [repetition time/echo time], 20° flip angle, 3-mm slice thickness with no slice gap, 256 × 256 in-plane matrix) covering both breasts. T1-weighted images were obtained before and after bolus administration of gadopentetate dimeglumine (0.15 mmol/kg). Subtraction images and time-signal intensity curves of regions of interest were obtained sequentially and correlated with the pathologic diagnoses of the lesions.

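For readers unfamiliar with how the subtraction images and time-signal intensity curves mentioned above are formed, a minimal sketch follows; the array layout and the relative-enhancement definition are assumptions for illustration, not the study's processing pipeline.

```python
import numpy as np

def roi_time_intensity_curve(dynamic_series, roi_mask):
    """Subtraction images and a ROI time-signal-intensity curve.

    dynamic_series : array (n_timepoints, H, W), frame 0 = pre-contrast
    roi_mask       : boolean array (H, W) marking the lesion ROI
    Returns the per-timepoint subtraction images and the mean relative
    enhancement inside the ROI, (S(t) - S0) / S0.
    """
    series = dynamic_series.astype(np.float64)
    pre = series[0]
    subtraction = series[1:] - pre               # post-contrast minus pre-contrast
    s0 = pre[roi_mask].mean()
    curve = (series[:, roi_mask].mean(axis=1) - s0) / (s0 + 1e-9)
    return subtraction, curve
```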

Caption Extraction in News Video Sequence using Frequency Characteristic

  • Youglae Bae;Chun, Byung-Tae;Seyoon Jeong
    • Proceedings of the IEEK Conference / 2000.07b / pp.835-838 / 2000
  • Popular methods for extracting text regions in video images are generally based on analysis of the whole image, such as merge-and-split methods and two-frame comparison, and therefore require long computing times. This paper suggests a faster method of extracting text regions that does not process the whole image. The proposed method uses line sampling, the FFT, and neural networks to extract text in real time. Text areas generally lie in the higher-frequency domain and can therefore be characterized using the FFT. Candidate text areas are found by feeding these high-frequency characteristics to a neural network, and the final text area is extracted by verifying the candidates. Experimental results show a perfect candidate extraction rate and a text extraction rate of about 92%. The strengths of the proposed algorithm are its simplicity, real-time processing by avoiding whole-image processing, and fast skipping of images that do not contain text.

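A simplified sketch of the line-sampling/FFT stage described above; the sampling step, frequency cutoff, and fixed threshold (standing in for the paper's neural network decision) are illustrative assumptions.

```python
import numpy as np

def candidate_text_rows(frame, step=8, cutoff=0.25, energy_thresh=0.35):
    """Flag rows that may contain caption text using line sampling and the FFT.

    `frame` is a 2-D grayscale array. Instead of processing the whole frame,
    every `step`-th row is sampled, its 1-D FFT is taken, and the fraction of
    spectral energy above `cutoff` is measured. Text strokes produce strong
    high-frequency content, so rows whose high-frequency ratio exceeds
    `energy_thresh` become candidates for later verification.
    """
    gray = frame.astype(np.float64)
    h, w = gray.shape
    candidates = []
    for y in range(0, h, step):
        spectrum = np.abs(np.fft.rfft(gray[y] - gray[y].mean()))
        total = spectrum.sum() + 1e-9
        high = spectrum[int(cutoff * len(spectrum)):].sum()
        if high / total > energy_thresh:
            candidates.append(y)
    return candidates
```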

A Segmentation Method for a Moving Object on A Static Complex Background Scene. (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.3 / pp.321-329 / 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images are calculated from three consecutive input frames and used to obtain both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background with background area projection (BAP), and missing parts of the AI are recovered with the help of the OI. The boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundary of the moving object. Experimental results for a fast-moving object on a complex background scene are included.

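The double-difference step that yields a coarse motion area from three consecutive frames can be sketched as follows; the threshold value and the stopping point (before BAP and active-contour refinement) are simplifications of the method described above.

```python
import numpy as np

def three_frame_motion_mask(prev_frame, curr_frame, next_frame, thresh=15):
    """Coarse motion area from three consecutive frames (double difference).

    The two absolute difference images are thresholded and intersected,
    keeping pixels that changed in both transitions, i.e. the area swept by
    the moving object around the current frame.
    """
    d1 = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32)) > thresh
    d2 = np.abs(next_frame.astype(np.int32) - curr_frame.astype(np.int32)) > thresh
    return d1 & d2
```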

Cardio-Angiographic Sequence Coding Using Neural Network Adaptive Vector Quantization (신경회로망 적응 VQ를 이용한 심장 조영상 부호화)

  • 주창희;최종수
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.4 / pp.374-381 / 1991
  • The use of digital images for diagnosis in hospitals is steadily increasing, and image coding is indispensable for storing and compressing the enormous volume of diagnostic images economically and effectively. This paper presents an adaptive two-stage vector quantization scheme based on Kohonen's neural network for the compression of cardio-angiography, a typical class of radiographic image sequences, and compares and reviews the performance of the coding scheme. To exploit the known characteristics of change in cardio-angiography, relatively large image blocks are quantized in the first stage; in the second stage, the blocks whose quantization error exceeds a threshold are subdivided and vector quantized using a neural network with frequency-sensitive competitive learning. This scheme is adopted because the changes in cardio-angiography arise from two sources: motion of the heart itself and of the body, and the injected contrast dye. Computer simulation shows that good image reproduction can be obtained at a bit rate of 0.78 bits/pixel.

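A compact sketch of frequency-sensitive competitive learning for codebook training, the mechanism named in the second stage above; block extraction, the two-stage split, and all hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def fscl_codebook(blocks, codebook_size=64, epochs=5, lr=0.05, seed=0):
    """Train a VQ codebook with frequency-sensitive competitive learning (FSCL).

    Each training vector is assigned to the codeword minimizing the distance
    scaled by how often that codeword has already won, which keeps rarely used
    codewords competitive and avoids dead units. `blocks` is an (N, D) array
    of flattened image blocks; the returned codebook is (codebook_size, D).
    """
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].astype(np.float64)
    wins = np.ones(codebook_size)                      # usage counts
    for _ in range(epochs):
        for x in blocks[rng.permutation(len(blocks))].astype(np.float64):
            dist = np.sum((codebook - x) ** 2, axis=1)
            winner = int(np.argmin(wins * dist))       # frequency-sensitive distortion
            codebook[winner] += lr * (x - codebook[winner])
            wins[winner] += 1
    return codebook
```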

A Propagation Programming Neural Network for Real-time matching of Stereo Images (스테레오 영상의 실시간 정합을 위한 보간 신경망 설계)

  • Kim, Jong-Man
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.05c / pp.194-199 / 2003
  • A depth-error correction effect for maladjusted stereo cameras with a calibrated pixel-distance parameter is presented. The proposed neural network technique is a real-time computation method, based on the theory of inter-node diffusion, for determining safe distances to suddenly appearing objects during driving. Distance computation by stereo vision, modeled on human binocular vision, involves two main steps: finding corresponding points between the stereo images, and interpolating full image data from the nonlinear image data of the objects. Both steps demand considerable memory and computation time. Therefore, a reliable neural-network algorithm for real-time matching of objects is derived, consisting of a dynamic programming algorithm based on sequence-matching techniques.

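Since the abstract points to a dynamic programming algorithm based on sequence matching, the sketch below shows a bare-bones scanline DP for stereo correspondence; the cost model and penalties are illustrative assumptions and do not reproduce the paper's inter-node diffusion network.

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp=32, occ_cost=20.0):
    """Dynamic-programming correspondence for one stereo scanline.

    Each left pixel is matched to a right pixel shifted by disparity d
    (cost = absolute intensity difference); changing disparity between
    neighbouring pixels pays a smoothness/occlusion penalty. The table
    is filled left to right and the disparity is read out by backtracking.
    """
    left, right = left_row.astype(np.float64), right_row.astype(np.float64)
    n = len(left)
    cost = np.full((n, max_disp + 1), np.inf)
    for x in range(n):
        for d in range(min(max_disp, x) + 1):
            match = abs(left[x] - right[x - d])
            if x == 0:
                cost[x, d] = match
            else:
                # stay at the same disparity for free, or pay to change it by one
                prev = cost[x - 1, max(0, d - 1):d + 2]
                cost[x, d] = match + min(prev.min() + occ_cost, cost[x - 1, d])
    disparity = np.zeros(n, dtype=int)
    disparity[-1] = int(np.argmin(cost[-1]))
    for x in range(n - 2, -1, -1):                     # greedy backtracking
        d = disparity[x + 1]
        window = cost[x, max(0, d - 1):d + 2]
        disparity[x] = max(0, d - 1) + int(np.argmin(window))
    return disparity
```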

Adaptive Linear Predictive Coding of Time-varying Images Using Multidimensional Recursive Least-squares Ladder Filters

  • Nam Man K.;Kim Woo Y.
    • Journal of the military operations research society of Korea / v.13 no.1 / pp.1-18 / 1987
  • This paper presents several adaptive linear predictive coding techniques based upon extensions of recursive ladder filters. A 2-D recursive ladder filter is extended to a 3-D case that can adaptively track both the spatial and the temporal changes of moving images. Using the 2-D/3-D ladder filter and a previous-frame predictor, two types of adaptive predictor-control schemes are proposed in which the prediction error at each pel can be kept at or close to a minimum level. We also investigate several modifications of the basic encoding methods. The performance of the 2-D/3-D ladder filters, their adaptive control schemes, and the coding variations is evaluated by computer simulations on a real sequence and compared with the results of motion-compensation and frame-differential coders. As a validity test of the developed ladder filters, the error signals of the different predictors are compared and evaluated.

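A much-reduced sketch of the adaptive prediction idea: a three-tap causal predictor (left neighbour, upper neighbour, previous frame) updated with a plain RLS recursion. The paper's ladder (lattice) realization and its 2-D/3-D orderings are not reproduced here; the neighbour set and forgetting factor are assumptions.

```python
import numpy as np

def adaptive_predictive_coding(frames, lam=0.98, delta=100.0):
    """Adaptive linear predictive coding of an image sequence (simplified).

    Each pixel is predicted from the pixel to the left, the pixel above,
    and the same pixel in the previous frame; the coefficients are updated
    with a standard recursive least squares (RLS) recursion with forgetting
    factor `lam`. Returns the prediction-error (residual) frames.
    """
    w = np.zeros(3)                      # predictor coefficients
    P = np.eye(3) / delta                # inverse correlation matrix
    residuals = np.zeros_like(frames, dtype=np.float64)
    for t in range(1, len(frames)):
        cur, prev = frames[t].astype(np.float64), frames[t - 1].astype(np.float64)
        for y in range(1, cur.shape[0]):
            for x in range(1, cur.shape[1]):
                u = np.array([cur[y, x - 1], cur[y - 1, x], prev[y, x]])
                e = cur[y, x] - w @ u                     # prediction error
                k = P @ u / (lam + u @ P @ u)             # RLS gain
                w += k * e
                P = (P - np.outer(k, u @ P)) / lam
                residuals[t, y, x] = e
    return residuals
```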

Fuzzy Classifier and Bispectrum for Invariant 2-D Shape Recognition (2차원 불변 영상 인식을 위한 퍼지 분류기와 바이스펙트럼)

  • 한수환;우영운
    • Journal of Korea Multimedia Society / v.3 no.3 / pp.241-252 / 2000
  • In this paper, a translation-, rotation-, and scale-invariant system for the recognition of closed 2-D shapes using the bispectrum of a contour sequence and a weighted fuzzy classifier is derived and compared with a recognition process using a competitive neural algorithm, LVQ (Learning Vector Quantization). The bispectrum, based on third-order cumulants, is applied to the contour sequence of an image to extract fifteen feature vectors for each planar image. These bispectral feature vectors, which are invariant to translation, rotation, and scale transformation, can be used to represent two-dimensional planar images and are fed into a weighted fuzzy classifier. Experiments with eight different aircraft shapes are presented to illustrate the relatively high performance of the proposed recognition system.

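To make the bispectral feature idea concrete, the sketch below resamples a closed contour, takes the FFT of its centroid-distance sequence, and collects a few normalized bispectrum magnitudes B(f1, f2) = X(f1) X(f2) X*(f1+f2). The paper's exact fifteen cumulant-based features and its weighted fuzzy classifier are not reproduced, so everything here is an illustrative assumption.

```python
import numpy as np

def bispectral_features(contour, n_points=64, n_features=15):
    """Bispectrum features of a closed contour sequence (illustrative sketch).

    The contour is resampled to a fixed length and reduced to a 1-D sequence
    of centroid distances, which removes translation; resampling and
    magnitude normalization reduce scale effects, and starting-point shifts
    of a closed contour only change the spectrum phase.
    """
    contour = np.asarray(contour, dtype=np.float64)
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = contour[idx]
    r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)   # centroid distance
    r = (r - r.mean()) / (r.std() + 1e-9)                # scale normalization
    X = np.fft.fft(r)
    feats, half = [], n_points // 2
    for f1 in range(1, half):
        for f2 in range(1, f1 + 1):
            if f1 + f2 < half and len(feats) < n_features:
                feats.append(np.abs(X[f1] * X[f2] * np.conj(X[f1 + f2])))
    feats = np.array(feats[:n_features])
    return feats / (feats.max() + 1e-9)                  # magnitude-normalized features
```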

Small-scale structures in the dust cloud associated with 17P/Holmes outburst

  • Ham, Ji-Beom;Ishiguro, Masateru;Kuroda, Daisuke;Fukushima, Hideo;Watanabe, Jun-Ichi
    • The Bulletin of The Korean Astronomical Society / v.35 no.1 / pp.92-92 / 2010
  • The short-period comet 17P/Holmes is one of the most remarkable comets because of its outburst in 2007. It orbits the Sun at distances between 2.1 AU and 5.2 AU with an orbital period of 6.9 years. On 2007 October 23, its brightness suddenly increased by about a million times, from 17 mag to 2.5 mag. We observed 17P/Holmes soon after the outburst, on October 25, 27, and 28, using the 105-cm telescope at the Ishigakijima Astronomical Observatory, Japan, taking images with V-, R-, and I-band filters simultaneously. Total exposure times were 15 (October 25), 69 (October 27), and 37 (October 28) minutes in each filter. The composite images provide a good signal-to-noise ratio and help us recognize faint structures embedded in the dust cloud. We examined a sequence of images using a digital filter that enhances small-scale structures. As a result of the data analysis, we confirm (1) a radially expanding structure coming out from the nucleus of the comet, and (2) dozens of blobs that moved radially away from the nucleus. In this presentation, we introduce the observations and data reductions and consider the origins of these fine structures.

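The abstract does not specify the digital filter used to enhance small-scale structures; a common choice for cometary comae is simple unsharp masking (division by a smoothed copy of the image), sketched below purely as an illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_small_scale(image, sigma=10.0):
    """Enhance small-scale structure in a comet coma image (unsharp masking).

    Dividing the image by a heavily smoothed copy removes the bright, slowly
    varying coma and leaves localized features such as jets and blobs.
    """
    img = image.astype(np.float64)
    smooth = gaussian_filter(img, sigma)
    return img / (smooth + 1e-9)
```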

View synthesis with sparse light field for 6DoF immersive video

  • Kwak, Sangwoon;Yun, Joungil;Jeong, Jun-Young;Kim, Youngwook;Ihm, Insung;Cheong, Won-Sik;Seo, Jeongil
    • ETRI Journal / v.44 no.1 / pp.24-37 / 2022
  • Virtual view synthesis, which generates novel views with the characteristics of actually acquired images, is an essential technical component for delivering immersive video with realistic binocular disparity and smooth motion parallax. It is typically achieved by warping the given images to the designated viewing position, blending the warped images, and filling the remaining holes. For 6DoF use cases with large motion, patch-based warping is preferable to conventional methods that operate per pixel, and in that case the quality of the synthesized image depends strongly on how the warped images are blended. Based on this observation, we propose a novel blending architecture that exploits the similarity of ray directions and the distribution of depth values. Results show that the proposed method synthesizes better views than the well-designed synthesizers used within the Moving Picture Experts Group immersive video activity (MPEG-I). Moreover, we describe a GPU-based implementation that synthesizes and renders views at near real-time rates, considering applicability to immersive video services.
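
A toy sketch of a blending step driven by ray-direction similarity and depth distribution, the two cues named in this abstract; the array layout, weighting functions, and parameters are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

def blend_warped_views(warped, depths, ray_angles, valid, angle_sigma=5.0):
    """Blend warped source views into a target view (illustrative sketch).

    warped     : (n_views, H, W, 3) source images warped to the target pose
    depths     : (n_views, H, W) warped depth maps
    ray_angles : (n_views, H, W) angle (deg) between source and target rays
    valid      : (n_views, H, W) boolean mask of pixels that survived warping
    Each valid contribution is weighted by ray-direction similarity (smaller
    angle -> larger weight) and by closeness to the nearest depth at that
    pixel, so occluded, far-away contributions are suppressed.
    """
    near = np.where(valid, depths, np.inf).min(axis=0)        # per-pixel nearest depth
    near = np.where(np.isfinite(near), near, 0.0)             # pixels no view reaches
    w_angle = np.exp(-(ray_angles / angle_sigma) ** 2)
    w_depth = np.exp(-np.abs(depths - near[None]) / (near[None] + 1e-6))
    weight = np.where(valid, w_angle * w_depth, 0.0)
    total = weight.sum(axis=0) + 1e-9
    blended = (weight[..., None] * warped).sum(axis=0) / total[..., None]
    return blended, total < 1e-6                               # blended image and hole mask
```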