• Title/Summary/Keyword: Integral Image

An Accurate Moving Distance Measurement Using the Rear-View Images in Parking Assistant Systems (후방영상 기반 주차 보조 시스템에서 정밀 이동거리 추출 기법)

  • Kim, Ho-Young;Lee, Seong-Won
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.12
    • /
    • pp.1271-1280
    • /
    • 2012
  • In recent parking assistant systems, the distance to an object behind the car is often measured by range sensors such as ultrasonic sensors or radars. However, installing additional sensors on an existing vehicle can be difficult and incurs extra cost. Alternatively, motion stereo techniques that extract distance information using only an image sensor have been proposed; however, in the stereo rectification step, motion stereo requires good features and exact matching results. In this paper, we propose a fast algorithm that extracts accurate distance information for the parallel parking situation using consecutive images acquired by a rear-view camera. The proposed algorithm uses a quadrangle transform of the image, horizontal line integral projection, and block-based correlation measurement. In an experiment with the Magna parallel test sequence, the results show that line-accurate distance measurement is possible with the image sequence from a rear-view camera.
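
The horizontal line integral projection and correlation measurement mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's algorithm: the quadrangle transform and block partitioning are omitted, and the correlation is a plain normalized cross-correlation over candidate vertical shifts.

```python
import numpy as np

def horizontal_integral_projection(img):
    """Sum each row of a grayscale image into a 1-D profile."""
    return img.sum(axis=1).astype(np.float64)

def estimate_vertical_shift(prev, curr, max_shift=20):
    """Estimate the vertical displacement between two frames by
    correlating their row-sum profiles over candidate shifts."""
    p = horizontal_integral_projection(prev)
    c = horizontal_integral_projection(curr)
    n = len(p)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping segments of the two profiles under shift s
        if s >= 0:
            a, b = p[s:], c[:n - s]
        else:
            a, b = p[:n + s], c[-s:]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

Reducing each frame to a 1-D projection before matching is what makes this kind of method fast; a full 2-D correlation over the same search range would cost orders of magnitude more.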

Integral Imaging Pickup Method of Bio-Medical Data using GPU and Octree (GPU와 옥트리를 이용한 바이오 메디컬 데이터의 집적 영상 픽업 기법)

  • Jang, Young-Hee;Park, Chan;Jung, Ji-Sung;Park, Jae-Hyeung;Kim, Nam;Ha, Jung-Sung;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.6
    • /
    • pp.1-9
    • /
    • 2010
  • Recently, 3D stereoscopic displays such as 3D stereoscopic cinema and 3D stereoscopic TV have attracted a lot of interest. In general, a stereo image pair can be used for 3D stereoscopic display. On the other hand, for 3D autostereoscopic display, elemental images should be generated by rendering the scene from every camera position in a lens array. Since a lens array corresponds to a large number of cameras, generating the elemental images for a 3D virtual space takes a long time; in particular, if a large bio-medical volume dataset lies in the 3D virtual space, it takes even longer. To alleviate this problem, in this paper we construct an octree for a given bio-medical volume dataset and then propose a method to generate the elemental images through efficient GPU rendering of the octree data. Experimental results show that the proposed method improves on the conventional one, although a more efficient method is still desirable.
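
The benefit of an octree over volume data is that renderers can skip empty sub-cubes entirely. A minimal CPU-side sketch of such a structure is shown below; the node layout and the build routine are assumptions for illustration, not the paper's GPU data structure.

```python
import numpy as np

class OctreeNode:
    """Hypothetical octree node over a cubic sub-volume (minimal sketch)."""
    def __init__(self, origin, size, children=None, empty=False):
        self.origin, self.size = origin, size   # corner index and edge length
        self.children = children or []          # 8 children, or [] for a leaf
        self.empty = empty                      # leaf contains no visible voxels

def build_octree(vol, origin=(0, 0, 0), size=None, min_size=2):
    """Recursively subdivide a volume; prune sub-cubes that are all zero,
    so a renderer can skip them without touching their voxels."""
    x, y, z = origin
    size = vol.shape[0] if size is None else size
    block = vol[x:x + size, y:y + size, z:z + size]
    if not block.any():                         # fully transparent region
        return OctreeNode(origin, size, empty=True)
    if size <= min_size:
        return OctreeNode(origin, size)         # dense leaf
    h = size // 2
    kids = [build_octree(vol, (x + dx, y + dy, z + dz), h, min_size)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)]
    return OctreeNode(origin, size, children=kids)

def count_nonempty_leaves(node):
    """Number of leaves a renderer would actually have to visit."""
    if node.children:
        return sum(count_nonempty_leaves(c) for c in node.children)
    return 0 if node.empty else 1
```

For typical bio-medical volumes, where most voxels are background, the fraction of non-empty leaves is small, which is where the rendering speed-up comes from.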

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.5
    • /
    • pp.575-580
    • /
    • 2010
  • Eye gaze as a form of input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and the mouse; however, with increasing accuracy and decreasing cost of eye gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms for fast, sub-pixel-precise pupil center detection and 2D eye gaze estimation based on a deformable template matching methodology. We propose a deformable angular integral search algorithm based on minimum intensity values to localize the eyeball (iris outer boundary) in grayscale eye region images; it finds the center of the pupil, which is then used by our second proposed algorithm for 2D eye gaze tracking. First, we detect the eye regions with Intel OpenCV AdaBoost Haar cascade classifiers and assign the approximate eyeball size depending on the eye region size. Secondly, the pupil center is detected using the DAISMI (Deformable Angular Integral Search by Minimum Intensity) algorithm. Then, using the percentage of black pixels over the eyeball circle area, the image is binarized (black and white) for use in the next part, the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE starts from the initial pupil center coordinates, refines them, and estimates the final gaze direction and eyeball size. We have performed extensive experiments and achieved very encouraging results, and we discuss the effectiveness of the proposed method through several experimental results.
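
The core idea behind an angular-integral, minimum-intensity search can be sketched as below: from each candidate center, intensities are summed along rays cast at regular angles, and the candidate with the lowest sum wins, since a dark pupil centered there keeps every ray in dark pixels. This is only a loose illustration of the principle, not the published DAISMI algorithm (which deforms the template and works on detected eye regions).

```python
import numpy as np

def angular_integral_score(img, cx, cy, radius, n_angles=36):
    """Sum image intensity along rays cast from (cx, cy); a dark pupil
    centered at (cx, cy) yields a low score."""
    h, w = img.shape
    total = 0.0
    for k in range(n_angles):
        theta = 2 * np.pi * k / n_angles
        for r in range(radius):
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                total += img[y, x]
    return total

def find_pupil_center(img, radius=5):
    """Exhaustively pick the candidate whose angular integral is minimal
    (a real implementation would restrict the candidate set)."""
    h, w = img.shape
    best, best_score = (0, 0), np.inf
    for cy in range(radius, h - radius):
        for cx in range(radius, w - radius):
            s = angular_integral_score(img, cx, cy, radius)
            if s < best_score:
                best_score, best = s, (cx, cy)
    return best
```

Because the score integrates over many ray samples, single bright specular highlights perturb it far less than they would a per-pixel minimum search, which is one reason intensity-integral formulations are attractive for eye images.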

Optical implementation of unidirectional integral imaging based on pinhole model (핀홀 모델 기반의 1차원 집적 영상 기법의 광학적 구현)

  • Shin, Dong-Hak;Kim, Nam-Woo;Lee, Joon-Jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.2
    • /
    • pp.337-343
    • /
    • 2007
  • Since three-dimensional (3D) images reconstructed by the integral imaging technique depend on the resolution of the elemental images, the amount of ray information in the elemental images grows large when high-resolution 3D images are required. In this paper, to overcome this problem, a new unidirectional integral imaging method based on a pinhole model is proposed. The proposed method provides a new type of unidirectional elemental image, obtained simply by magnifying a single horizontal pixel line of each elemental image to the vertical size of the lenslet using ray analysis based on the pinhole model, and uses it to display 3D images. In the proposed method, the ray information of the elemental images is reduced by sacrificing vertical parallax. The feasibility of the proposed scheme is experimentally demonstrated and its results are presented.
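
The magnification step described above, taking one horizontal line per elemental image and stretching it to the lenslet height, can be sketched in a few lines. The 4-D array layout and the choice of the central row are assumptions for illustration; the paper derives the line selection from pinhole-model ray analysis.

```python
import numpy as np

def unidirectional_elemental_images(elemental, lenslet_h):
    """Replace each elemental image by its central row repeated to the
    lenslet height, discarding vertical parallax.

    elemental: array of shape (rows, cols, h, w), a grid of elemental
    images; returns an array of shape (rows, cols, lenslet_h, w)."""
    rows, cols, h, w = elemental.shape
    center = elemental[:, :, h // 2, :]          # one pixel line per lenslet
    return np.repeat(center[:, :, np.newaxis, :], lenslet_h, axis=2)
```

Since only one row per elemental image has to be captured or stored, the ray information is reduced by roughly the lenslet height, which is exactly the trade against vertical parallax the abstract describes.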

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.4
    • /
    • pp.317-329
    • /
    • 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, preprocessing of the image using a difference-image method is conducted to realize image segmentation of the laser line. Second, feature points at the integer-pixel level are selected as the initial laser line centers by the eigenvalues of the Hessian matrix. Third, according to the light intensity distribution of a laser line, which obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian matrix eigenvalues for the candidate centers of the laser line is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function captures the characteristic that one eigenvalue approaches zero, and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. The decision function of the proposed model thus assigns higher values to the real centers by jointly considering the properties in the longitudinal and transverse directions of the laser line. Moreover, this method provides a decision value from 0 to 1 for arbitrary candidate centers, which yields a normalized measure for different laser lines in different images. Pixels whose normalized result is close to 1 are determined to be the real centers by progressive scanning of the image columns. Finally, the zero point of a second-order Taylor expansion in the eigenvector's direction is employed to further refine the extracted central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and achieves a higher recognition rate in two groups of experiments.
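
The Hessian-eigenvalue ridge property the abstract relies on, one eigenvalue near zero along the line and one strongly negative across it, can be sketched as follows. This computes the per-pixel Hessian eigenvalues and a simple ridge response; the Gaussian and sinusoidal recognition functions, their normalized combination, and the subpixel Taylor refinement from the paper are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(img, sigma=2.0):
    """Per-pixel eigenvalues of the Hessian of a Gaussian-smoothed image.
    For a bright ridge (laser line), one eigenvalue is strongly negative
    across the line and the other is near zero along it."""
    Ixx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
    Iyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    Ixy = gaussian_filter(img, sigma, order=(1, 1))   # d2/dxdy
    half_diff = np.sqrt(((Ixx - Iyy) / 2) ** 2 + Ixy ** 2)
    mean = (Ixx + Iyy) / 2
    return mean - half_diff, mean + half_diff          # lam1 <= lam2

def ridge_response(img, sigma=2.0):
    """Simple ridge measure: -lam1, large where the most negative
    eigenvalue indicates the transverse direction of a bright line
    (a stand-in for the paper's normalized decision function)."""
    lam1, _ = hessian_eigenvalues(img, sigma)
    return np.maximum(-lam1, 0.0)
```

Thresholding such a response per column gives the integer-pixel candidates; the paper's contribution is the normalized 0-to-1 decision function that makes that threshold transferable across images.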

Bandwidth Efficient Summed Area Table Generation for CUDA (CUDA를 이용한 효율적인 합산 영역 테이블의 생성 방법)

  • Ha, Sang-Won;Choi, Moon-Hee;Jun, Tae-Joon;Kim, Jin-Woo;Byun, Hye-Ran;Han, Tack-Don
    • Journal of Korea Game Society
    • /
    • v.12 no.5
    • /
    • pp.67-78
    • /
    • 2012
  • A summed area table allows filtering of arbitrary-width box regions for every pixel in constant time per pixel. This characteristic makes it beneficial in image processing applications where the sum or average of the surrounding pixel intensities is required. Although calculating the summed area table of image data is primarily a memory-bound job consisting of row- or column-wise summation, previous works had to endure excessive accesses to the high-latency global memory in order to exploit data parallelism. In this paper, we propose an efficient algorithm for generating the summed area table in a GPGPU environment, where the input is decomposed into square sub-images and intermediate data are propagated between them. By doing so, global memory accesses are almost halved compared to previous methods, making efficient use of the available memory bandwidth. The results show a substantial increase in performance.
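
The constant-time box filtering the abstract describes works like this: build the table once with two cumulative sums, then answer any box-sum query with four lookups. A minimal CPU reference (the GPU decomposition into sub-images is the paper's contribution and is not shown):

```python
import numpy as np

def summed_area_table(img):
    """SAT[i, j] = sum of img[:i, :j]; a zero row and column are padded
    on so box queries need no boundary checks."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(sat, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] from four table lookups,
    independent of the box size."""
    return (sat[bottom, right] - sat[top, right]
            - sat[bottom, left] + sat[top, left])
```

Because every query is four reads regardless of box size, the whole cost of box filtering shifts to table generation, which is why bandwidth-efficient SAT construction matters on GPUs.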

FPGA Implementation of SURF-based Feature extraction and Descriptor generation (SURF 기반 특징점 추출 및 서술자 생성의 FPGA 구현)

  • Na, Eun-Soo;Jeong, Yong-Jin
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.4
    • /
    • pp.483-492
    • /
    • 2013
  • SURF is an algorithm that extracts feature points and generates their descriptors from input images; it is used in many applications such as object recognition, tracking, and constructing panorama pictures. Although SURF is known to be robust to changes of scale, rotation, and viewpoint, it is hard to run in real time due to its complex and repetitive computations. In our experiment on a 3.3 GHz Pentium, it takes 240 ms to extract feature points and create descriptors for a VGA image containing about 1,000 feature points, which means that a software implementation cannot meet the real-time requirement, especially in embedded systems. In this paper, we present a hardware architecture that can compute the SURF algorithm very fast while consuming minimal hardware resources. The two key concepts of our architecture are parallelism (for repetitive computations) and efficient line-memory usage (obtained by analyzing memory access patterns). As a result of FPGA synthesis on a Xilinx Virtex-5 LX330, it occupies 101,348 LUTs and 1,367 KB of on-chip memory, achieving a performance of 30 frames per second at a 100 MHz clock.
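
The repetitive computation at the heart of SURF detection is evaluating box-filter approximations of Hessian second derivatives over the integral image. The sketch below shows one such response, a Dyy-style filter built from three stacked boxes weighted +1, -2, +1; the box geometry follows the smallest SURF scale but the normalization weights are simplified, so treat it as an illustration rather than the exact SURF filter.

```python
import numpy as np

def integral_image(img):
    """Zero-padded integral image: ii[i, j] = sum of img[:i, :j]."""
    return np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] with four lookups into the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

def dyy_response(ii, y, x):
    """Approximate second derivative in y at (y, x) with three 3-row by
    5-column boxes (+1, -2, +1), the shape of SURF's 9x9 Dyy filter."""
    x0, x1 = x - 2, x + 3
    top = rect_sum(ii, y - 4, x0, y - 1, x1)   # rows y-4 .. y-2
    mid = rect_sum(ii, y - 1, x0, y + 2, x1)   # rows y-1 .. y+1
    bot = rect_sum(ii, y + 2, x0, y + 5, x1)   # rows y+2 .. y+4
    return top - 2.0 * mid + bot
```

Each response is a fixed, small number of memory reads regardless of filter size, which is precisely the access pattern the paper's line-memory organization is designed to feed in parallel.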

An adaptive Fuzzy Binarization (적응 퍼지 이진화)

  • Jeon, Wang-Su;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.6
    • /
    • pp.485-492
    • /
    • 2016
  • Binarization plays a very important role in separating the foreground from the background in the field of computer vision. In this study, an adaptive fuzzy binarization is proposed. An α-cut control ratio is obtained from the distribution of grey levels of pixels in a sliding window, and binarization is performed using that value. To obtain the α-cut, existing thresholding methods whose execution speed is fast are used. The threshold values are set as the centers of the membership functions, and the fuzzy intervals of the functions are specified by the distribution of grey levels of the pixels. Then the α-cut control ratio is calculated using the specified functions, and binarization is performed according to the membership degree of each pixel. The experimental results show that the proposed method segments the foreground and the background better than existing binarization methods and decreases loss of the foreground.
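
The windowed membership-then-cut structure described above can be sketched as follows. Nearly every detail here is an assumption for illustration: the window mean stands in for the fast thresholding methods the paper uses, the triangular membership and the fixed α are simplifications, and the paper's control of the α-cut ratio from the grey-level distribution is not reproduced.

```python
import numpy as np

def triangular_membership(g, center, spread):
    """Triangular fuzzy membership function centered at the threshold."""
    return np.clip(1.0 - np.abs(g - center) / spread, 0.0, 1.0)

def adaptive_fuzzy_binarize(img, win=16, alpha=0.5):
    """Per-window fuzzy binarization sketch: the window mean is the
    membership center, the grey-level spread sets the fuzzy interval,
    and dark pixels whose membership falls below the alpha-cut are
    labeled foreground (dark objects on a bright background)."""
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = img[y:y + win, x:x + win].astype(np.float64)
            center = block.mean()
            spread = max(block.std(), 1e-6) * 2.0
            mu = triangular_membership(block, center, spread)
            fg = (block < center) & (mu < alpha)
            out[y:y + win, x:x + win] = fg.astype(np.uint8)
    return out
```

The point of deriving the cut per window rather than globally is that uneven illumination shifts each window's center and spread together, so the same α keeps working across the image.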

Study on the Difference of Standardized Uptake Value in Fusion Image of Nuclear Medicine (핵의학 융합영상의 표준섭취계수 차이에 관한 연구)

  • Kim, Jung-Soo;Park, Chan-Rok
    • Journal of radiological science and technology
    • /
    • v.41 no.6
    • /
    • pp.553-560
    • /
    • 2018
  • PET-CT, which integrates PET with CT using ionizing radiation, and PET-MRI, which integrates PET with MRI based on magnetic resonance phenomena, are judged to have a limitation in applying the semi-quantitative index, the standardized uptake value (SUV), at the same level, owing to fundamental differences in image acquisition principles and reconstruction; hence, their correlation was analyzed to provide clinical information. Thirty subjects in a pre-treatment state were injected with 18F-FDG (5.18 MBq/kg) and scanned consecutively without delay time on a Biograph mMR 3T (Siemens, Munich) and a Biograph mCT 64 (Siemens, Germany), both integrated scanners, under optimized conditions apart from the structural differences between the two scanners. Measuring SUVmax with a volume region of interest placed over evenly distributed radiopharmaceutical uptake in the captured images, the mean SUVmax values of PET-CT and PET-MRI were 2.94±0.55 and 2.45±0.52, respectively, and the PET-MRI value was measured lower by -20.85±7.26% than that of PET-CT. There was also a statistically significant difference in SUVs between the two scanners (P<0.001); hence, the SUVs of PET-CT and PET-MRI cannot be interpreted clinically at the same level. Therefore, for patients who undergo cross follow-up tests with PET-CT and PET-MRI, diagnostic information should be analyzed taking the SUV differences between the two scanners into account.
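
For reference, the body-weight-normalized SUV compared in this study is the standard ratio of measured tissue concentration to injected dose per body weight; the helper below is a generic textbook formula, not code from the study, and the reported -20.85±7.26% is a per-patient mean, not the ratio of the two group means.

```python
def suv_bw(concentration_kbq_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized standardized uptake value:
    SUV = tissue concentration / (injected dose / body weight).
    With dose in MBq and weight in kg, MBq/kg equals kBq/g, so the
    ratio is dimensionless when tissue density is taken as 1 g/mL."""
    return concentration_kbq_ml / (injected_dose_mbq / body_weight_kg)

def percent_difference(suv_a, suv_b):
    """Relative difference of b with respect to a, in percent (negative
    when b is lower, the sign convention used in the abstract)."""
    return (suv_b - suv_a) / suv_a * 100.0
```

Because SUV divides out dose and weight but not scanner-dependent factors (attenuation correction from CT versus MR-derived maps, reconstruction), identical uptake can still yield different SUVs on the two systems, which is the study's point.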

Phantom Image Evaluations Depending on the Quality Control-Uniformity of Brain Perfusion SPECT Scanner (뇌 관류 SPECT 스캐너의 정도관리-균일도에 따른 팬텀 영상 평가)

  • Kim, Jung-Soo;Yang, Hyun-Jin;Kim, Joon;Park, Chan-Rok
    • Journal of radiological science and technology
    • /
    • v.46 no.1
    • /
    • pp.29-36
    • /
    • 2023
  • To ensure highly reliable diagnostic performance, this study comparatively analyzed the spatial resolution of SPECT images and its interrelationship with changes in the system uniformity of the gamma camera through phantom analysis. This study chose six quality control (uniformity) results from a triple-head SPECT scanner operated in a university hospital in Seoul over six months. The study then measured the spatial resolution (FWHM) of images reconstructed by injecting radiopharmaceuticals into a Jaszczak phantom and performing SPECT scanning under the same conditions as clinical ones, using the analysis program ImageJ. The quality controls performed by the experimental institution showed that the differential uniformity of the UFOV ranged from 2.76% to 7.61% (4.46±2.07), and the integral uniformity of the UFOV ranged from 1.98% to 5.42% (3.01±1.43). Meanwhile, quantitative evaluations of the phantom images depending on the changes in uniformity of the SPECT scanner detector showed that as the uniformity values of the UFOV and CFOV decreased, the FWHM values of the phantom images decreased from 8.5 mm to 5.8 mm. That is, it was quantitatively identified that the higher the uniformity of the detector, the better the spatial resolution of the images (P<0.05). It is very important to perform continuous and consistent quality control of nuclear medicine systems, and users should be clearly conscious of this.
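
The integral and differential uniformity figures quoted above follow the standard NEMA-style definitions, sketched below. The window length and the flood-image handling (smoothing, masking to UFOV/CFOV) are simplified assumptions; real QC software applies the NEMA preprocessing steps first.

```python
import numpy as np

def integral_uniformity(counts):
    """Integral uniformity over a field of view:
    IU = (max - min) / (max + min) * 100 (%). Lower is better."""
    cmax, cmin = float(np.max(counts)), float(np.min(counts))
    return (cmax - cmin) / (cmax + cmin) * 100.0

def differential_uniformity(counts, window=5):
    """Differential uniformity: the worst integral uniformity over every
    `window`-pixel run in each row and each column of the flood image."""
    worst = 0.0
    for arr in (counts, counts.T):
        for row in arr:
            for i in range(len(row) - window + 1):
                worst = max(worst, integral_uniformity(row[i:i + window]))
    return worst
```

Differential uniformity is always at most the integral value, since every window is a subset of the field of view; the study's finding is that trends in these percentages track the phantom FWHM.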