• Title/Summary/Keyword: Inter-image Transformation (영상 간 변환)


Development of Distortion Correction Technique in Tilted Image for River Surface Velocity Measurement (하천 표면영상유속 측정을 위한 경사영상 왜곡 보정 기술 개발)

  • Kim, Hee Joung;Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.2
    • /
    • pp.88-96
    • /
    • 2021
  • In surface image velocimetry, a wide area of a river is photographed at an angle to measure its velocity, which inevitably causes image distortion. A distorted image can be corrected into an orthogonal image by a 2D projective coordinate transformation using reference points on the same plane as the water surface, but this approach is limited by the uncertainty of water-level changes in the event of a flood. In this study, we therefore developed a tilted-image correction technique that corrects the distortion of oblique images without resetting the reference points, coping with water-level changes by exploiting the geometric relationship among the coordinates of the reference points set at a high position, the camera, and the vertical distance between the water surface and the camera. We also conducted a full-scale river experiment to verify the reference-point transformation equation and the corrected images, and to measure the surface velocity. The verification showed the proposed tilted-image correction to be more than 97% accurate, and the measured surface velocity differed by only about 4% from the value calculated with the proposed method, indicating high accuracy. Applying the proposed method to an image-based fixed automatic discharge measurement system can improve the accuracy of discharge measurement during floods, when the water level changes rapidly.
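
A minimal sketch of the 2D projective coordinate transformation mentioned in this abstract, not the authors' implementation: four reference points on the water-surface plane are mapped to plan-view coordinates to orthorectify an oblique river frame. The file name and point values below are hypothetical.

```python
# Sketch: orthorectify an oblique river image with a planar homography.
import cv2
import numpy as np

def orthorectify(oblique_img, image_pts, ground_pts, out_size):
    """Warp an oblique image onto an orthogonal (plan-view) grid.

    image_pts  : 4x2 pixel coordinates of reference points in the oblique image
    ground_pts : 4x2 corresponding plan-view coordinates (e.g. metres scaled to pixels)
    out_size   : (width, height) of the rectified output
    """
    H, _ = cv2.findHomography(np.float32(image_pts), np.float32(ground_pts))
    return cv2.warpPerspective(oblique_img, H, out_size)

# Hypothetical example values
img = cv2.imread("river_oblique.jpg")
image_pts  = [(112, 480), (1830, 455), (1610, 930), (300, 950)]
ground_pts = [(0, 0), (800, 0), (800, 400), (0, 400)]
ortho = orthorectify(img, image_pts, ground_pts, (800, 400))
```

The water-level compensation that is the paper's actual contribution (shifting the reference plane using the camera-to-surface geometry) is not shown here.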

Real-time Montage System Design using Contents Based Image Retrieval (내용 기반 영상 검색을 이용한 실시간 몽타주 시스템 설계)

  • Choi, Hyeon-Seok;Bae, Seong-Joon;Kim, Tae-Yong;Choi, Jong-Soo
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.313-322
    • /
    • 2006
  • In this paper, we introduce a content-based image retrieval system that helps a user find the images he or she needs and reconfigures those images automatically. With this system, we attempt to realize the language of motion pictures, that is, montage, from the viewpoint of the user. The real-time montage system introduced in this paper uses the discrete Fourier transform: features are extracted from the image the user selects and compared for similarity with the images in the database, which enables fast and effective retrieval. In addition, motion images of the user are acquired in real time by camera tracking, and the acquired motion images are automatically recomposed with the retrieved images. In this way, images can be recombined easily and quickly according to the user's intention. The system is a new-media design (entertainment) tool that invites the user to participate: the user is no longer a passive consumer of one-way image channels but an active subject of image reproduction. It is expected to serve as a foundation for a new style of user-centered, media-based entertainment.
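
A rough sketch, under my own assumptions, of the discrete-Fourier-transform comparison this abstract describes: each image is reduced to a low-frequency DFT magnitude signature, and database images are ranked by distance to the query signature. The feature size and normalization are illustrative choices, not the authors' settings.

```python
# Sketch: DFT-magnitude signatures for content-based image retrieval.
import numpy as np
import cv2

def dft_signature(gray, size=16):
    """Low-frequency DFT magnitude block used as a retrieval feature."""
    f = np.fft.fftshift(np.fft.fft2(cv2.resize(gray, (128, 128))))
    mag = np.log1p(np.abs(f))
    c = mag.shape[0] // 2
    block = mag[c - size // 2:c + size // 2, c - size // 2:c + size // 2]
    return block / np.linalg.norm(block)

def rank_by_similarity(query_gray, database_grays):
    """Return database indices sorted from most to least similar."""
    q = dft_signature(query_gray)
    dists = [np.linalg.norm(q - dft_signature(g)) for g in database_grays]
    return np.argsort(dists)
```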


3D Coordinates Transformation in Orthogonal Stereo Vision (직교식 스테레오 비젼 시스템에서의 3차원 좌표 변환)

  • Yoon, Hee-Joo;Cha, Sun-Hee;Cha, Eui-Young
    • Annual Conference of KIPS
    • /
    • 2005.05a
    • /
    • pp.855-858
    • /
    • 2005
  • This system tracks the movement of fish in a tank by simultaneously acquiring independent images from an orthogonal stereo vision system, processing the acquired images to obtain coordinates, and generating 3D coordinates from them. The proposed method consists of three main parts: simultaneous image acquisition from two cameras, processing of the acquired images and detection of object positions, and generation of 3D coordinates. Two image streams (a front view and a top view) are captured at 8 frames per second with a frame grabber; moving objects are extracted by differencing against a background image that is updated in real time, clustered by labeling, and the centroid coordinates of each cluster are detected. The detected coordinates are then corrected in 3D using line equations to produce the coordinates of the moving object.
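
A minimal sketch of the orthogonal-stereo idea in this abstract, with assumptions of my own: the front view is taken to provide (x, z) and the top view (x, y) in pixel coordinates, background differencing and labeling yield one centroid per view, and the two centroids are merged into a single 3D point. The merging rule and all parameter values are illustrative; the paper's line-equation correction is not reproduced.

```python
# Sketch: merge front-view and top-view centroids into a 3D coordinate.
import cv2
import numpy as np

def centroid(frame, background, thresh=30):
    """Largest-blob centroid from a background-difference image.
    Frames are expected to be single-channel (grayscale) uint8 images."""
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                   # no foreground blob found
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return centroids[largest]                   # (col, row)

def to_3d(front_frame, top_frame, front_bg, top_bg):
    front = centroid(front_frame, front_bg)     # gives x and z (pixels)
    top = centroid(top_frame, top_bg)           # gives x and y (pixels)
    if front is None or top is None:
        return None
    x = (front[0] + top[0]) / 2.0               # x is seen by both views
    return np.array([x, top[1], front[1]])      # (x, y, z) in pixel units
```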


Hardware Implementation of Integer Transform and Quantization for H.264 (하드웨어 기반의 H.264 정수 변환 및 양자화 구현)

  • 임영훈;정용진
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1182-1191
    • /
    • 2003
  • In this paper, we propose a new hardware architecture for the integer transform, quantizer, inverse quantizer, and inverse integer transform of the new video coding standard H.264/JVT. We describe the algorithm and derive a hardware architecture, emphasizing small area for low cost and low power consumption. The proposed architecture has been verified on a PCI-interfaced emulation board using an Altera APEX-II FPGA and by ASIC synthesis with a Samsung 0.18 um CMOS cell library. The synthesis result shows that the proposed hardware can operate at 100 MHz, processing more than 1,300 QCIF video frames per second. The hardware is intended to be used as a core module in a complete H.264 video encoder/decoder ASIC for real-time multimedia applications.
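
For reference, a worked software sketch of the standard H.264 4x4 forward integer core transform that this paper implements in hardware; this is the textbook form, not the authors' RTL, and the quantization scaling step is omitted.

```python
# Sketch: H.264 4x4 forward integer core transform, Y = Cf * X * Cf^T.
import numpy as np

Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_integer_transform(block4x4):
    """Apply the core transform to a 4x4 residual block (integer arithmetic only)."""
    return Cf @ block4x4 @ Cf.T

# Hypothetical residual block
x = np.array([[ 5, 11,  8, 10],
              [ 9,  8,  4, 12],
              [ 1, 10, 11,  4],
              [19,  6, 15,  7]])
print(forward_integer_transform(x))
```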

Denoising Images by Soft-Threshold Technique Using the Monotonic Transform and the Noise Power of Wavelet Subbands (단조변환 및 웨이블릿 서브밴드 잡음전력을 이용한 Soft-Threshold 기법의 영상 잡음제거)

  • Park, Nam-Chun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.4
    • /
    • pp.141-147
    • /
    • 2014
  • Wavelet shrinkage is a technique that reduces the wavelet coefficients, using a threshold determined from the variance of the coefficients, so as to minimize the mean square error (MSE) between the signal and the noisy signal. In this paper, using a monotonic transform and the power of the wavelet subbands, new thresholds applicable to the high- and low-frequency wavelet bands are proposed, and these thresholds are applied in a soft-threshold (ST) scheme to denoise images corrupted by additive Gaussian noise. The resulting PSNRs are compared with those obtained by the VisuShrink technique and by the method of [15], and the comparison demonstrates the validity of the proposed technique.
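
A minimal sketch of soft-threshold wavelet denoising in the spirit of this abstract; it uses the universal (VisuShrink-style) threshold rather than the subband-power thresholds the paper proposes, and the wavelet family and level are arbitrary choices.

```python
# Sketch: soft-threshold denoising of an image in the wavelet domain.
import numpy as np
import pywt

def soft_threshold_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Noise sigma estimated from the finest diagonal subband (median rule)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```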

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state through facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific hidden Markov models (HMMs) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that, in addition to the traditional transition model, allows transitions between emotional states without passing through the neutral state. Facial features are localized in the video sequence using template matching and optical flow, and the feature displacements traced by the optical flow serve as the input parameters to the HMM for expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
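
A rough sketch, under my own assumptions, of the optical-flow tracking stage described above: facial feature points located in the first frame are tracked with pyramidal Lucas-Kanade, and their displacements form the observation vector that would be quantized and fed to the expression HMM. The HMM stage itself is omitted.

```python
# Sketch: per-frame facial feature displacements via pyramidal Lucas-Kanade.
import cv2
import numpy as np

def track_displacements(prev_gray, next_gray, feature_pts):
    """Return per-point (dx, dy) displacements between two grayscale frames."""
    pts = np.float32(feature_pts).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    disp = (new_pts - pts).reshape(-1, 2)
    return disp[status.ravel() == 1]    # keep only successfully tracked points
```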


Registration between High-resolution Optical and SAR Images Using linear Features (선형정보를 이용한 고해상도 광학영상과 SAR 영상 간 기하보정)

  • Han, You-Kyung;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.2
    • /
    • pp.141-150
    • /
    • 2011
  • Precise image-to-image registration is required to process multi-sensor data together. The purpose of this paper is to develop an algorithm that registers high-resolution optical and SAR images using linear features. As a pre-processing step, an initial alignment was performed using manually selected tie points to remove dislocations caused by differences in scale, rotation, and translation between the images. The Canny edge operator was then applied to both images to extract linear features, and these features were used in a cost function that finds matching points based on their similarity. Outliers with larger geometric differences than typical matching points were eliminated, and the remaining points were used to construct a new transformation model, which combined a piecewise linear function with a global affine transformation, to increase the accuracy of the geometric correction.
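
A simplified sketch of the matching-to-transformation step: given candidate correspondences found on the Canny edges of the optical and SAR images, outliers are rejected with RANSAC and a global affine model is estimated. The paper's edge-similarity cost function and piecewise linear refinement are not shown, and the threshold values are assumptions.

```python
# Sketch: Canny edge extraction and global affine estimation with RANSAC.
import cv2
import numpy as np

def edges(img, low=50, high=150):
    """Canny edge map used to restrict candidate matching locations."""
    return cv2.Canny(img, low, high)

def global_affine(optical_pts, sar_pts):
    """Estimate an affine transform mapping SAR points onto the optical image."""
    src = np.float32(sar_pts).reshape(-1, 1, 2)
    dst = np.float32(optical_pts).reshape(-1, 1, 2)
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, inliers.ravel().astype(bool)
```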

A Still Image Coding of Wavelet Transform Mode by Rearranging DCT Coefficients (DCT계수의 재배열을 통한 웨이브렛 변환 형식의 정지 영상 부호화)

  • Kim, Jeong-Sik;Kim, Eung-Seong;Lee, Geun-Yeong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.5
    • /
    • pp.464-473
    • /
    • 2001
  • Since the DCT algorithm divides an image into uniform blocks in both the spatial and the frequency domains, it has the weakness that it cannot reflect the human visual system (HVS) efficiently. To avoid this problem, we propose a new algorithm that combines the merits of the DCT and the wavelet transform. The proposed algorithm exploits the high energy compaction of the DCT and rearranges the DCT coefficients into a wavelet-transform layout, so that it can utilize the interband and intraband correlations of wavelets simultaneously. Each coefficient is then quantized according to the characteristics of its band. For coding, the quantized values of the important DCT coefficients have a symmetric distribution in which larger magnitudes occur with lower probability. Exploiting this characteristic, we propose a new still-image coding algorithm with a symmetric, bidirectional tree structure that is simple and decodes quickly. Compared with JPEG at the same bit rate, the proposed method yields better image quality both objectively and subjectively.
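
A minimal sketch, under my own assumptions, of the rearrangement idea: the (u, v)-th coefficient from every 8x8 DCT block is gathered into its own subband image, producing a wavelet-like multiresolution layout that tree-structured coders can exploit. The quantization and tree coding stages are not shown.

```python
# Sketch: regroup 8x8 block-DCT coefficients into wavelet-style subbands.
import numpy as np
from scipy.fftpack import dct

def block_dct_to_subbands(img, b=8):
    h, w = img.shape
    h, w = h - h % b, w - w % b                       # crop to a multiple of b
    blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")
    # subbands[u, v] is an (h/b, w/b) image of the (u, v) coefficients
    return coeffs.transpose(2, 3, 0, 1)
```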


Experiment for 3D Coregistration between Scanned Point Clouds of Building using Intensity and Distance Images (강도영상과 거리영상에 의한 건물 스캐닝 점군간 3차원 정합 실험)

  • Jeon, Min-Cheol;Eo, Yang-Dam;Han, Dong-Yeob;Kang, Nam-Gi;Pyeon, Mu-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.1
    • /
    • pp.39-45
    • /
    • 2010
  • This study selected matching points for the automatic registration of two terrestrial LiDAR point clouds of a building by applying the SIFT algorithm to keypoints observed simultaneously in the two-dimensional intensity images acquired together with the point clouds. The RANSAC algorithm was then applied to remove mismatches and improve the registration accuracy. The three-dimensional rotation and the horizontal and vertical translations between the two point clouds were computed as transformation parameters and compared with the result obtained manually. In a test on the College of Science building at Konkuk University, the differences between the transformation parameters obtained by automatic matching and those obtained by hand were 0.011 m, 0.008 m, and 0.052 m in the X, Y, and Z directions, suggesting that the method can serve as a basis for automatic registration.
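
A rough sketch of the pipeline in this abstract, with my own assumptions rather than the authors' code: SIFT keypoints are matched between the two intensity images, mismatches are filtered with RANSAC, and the surviving 3D point pairs (looked up in the corresponding range data) yield a rigid rotation and translation via SVD.

```python
# Sketch: SIFT + RANSAC matching on intensity images, then a rigid 3D fit.
import cv2
import numpy as np

def match_intensity_images(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)   # reject mismatches
    keep = mask.ravel() == 1
    return p1[keep], p2[keep]

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t (Kabsch/SVD), P and Q are Nx3."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp
```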

Distance Estimation Between Vanishing Point and Moving Object (소실점과 움직임 객체간의 거리 추정)

  • Kim, Dong-Wook
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.5
    • /
    • pp.637-642
    • /
    • 2011
  • In this paper, a new technique for estimating the distances between a vanishing point and moving objects is proposed. A vanishing point is estimated for the input image and used to compute the distance from the vanishing point to each moving object; using the obtained distances, the moving objects are extracted. Simulation results show the performance of the method on a test image sequence.
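
A minimal sketch of the idea in this abstract, not the paper's method: a vanishing point is approximated from the intersections of dominant Hough lines, and the pixel distance from that point to each moving-object centroid is reported. The Canny and Hough parameters and the median-of-intersections estimate are assumptions of my own.

```python
# Sketch: crude vanishing-point estimate and distance to an object centroid.
import cv2
import numpy as np

def vanishing_point(gray):
    lines = cv2.HoughLines(cv2.Canny(gray, 50, 150), 1, np.pi / 180, 150)
    if lines is None:
        return None
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
            A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) > 1e-3:        # skip near-parallel lines
                pts.append(np.linalg.solve(A, [r1, r2]))
    return np.median(pts, axis=0)                   # robust intersection estimate

def distance_to_object(vp, object_centroid):
    return float(np.linalg.norm(np.asarray(vp) - np.asarray(object_centroid)))
```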