• Title/Summary/Keyword: Pixel space


Improved measurement uncertainty of photon detection efficiency for single pixel Silicon photomultiplier

  • Yang, Seul Ki;Lee, Hye-Young;Jeon, Jina;Kim, Sug-Whan;Lee, Jik;Park, Il H.
    • The Bulletin of The Korean Astronomical Society / v.37 no.2 / pp.210.1-210.1 / 2012
  • We report a technique used to improve the measurement uncertainty of the photon detection efficiency (PDE) of a 1 mm² single-pixel SiPM. The setup consists of a 470 nm LED light source, two 2-inch integrating spheres, and two NIST-calibrated silicon photodiodes with a ±2.4% calibration error. With a ray-tracing simulation of the experimental setup, we predict the number of photons entering the SiPM and the measurement uncertainty. For the MPPC, the Hamamatsu-quoted PDE (1600 micro-pixels), including crosstalk and afterpulse, is 23.5% at 470 nm. Using the new low-calibration-error photodiodes and the ray-tracing simulation, our result has a ±3% measurement uncertainty. The technical details of the measurement and simulation are presented together with the results and their implications.
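
A minimal sketch of the error budget such a measurement implies, assuming the photodiode calibration error quoted in the abstract and placeholder values for the ray-tracing and statistical terms (this is not the authors' actual analysis):

```python
# Illustrative sketch (not the authors' code): propagate independent relative
# uncertainties into a combined PDE uncertainty with the root-sum-of-squares rule.
# All numerical values below are placeholders except the 2.4% calibration error.
import math

def combined_relative_uncertainty(*relative_errors):
    """Root-sum-of-squares combination of independent relative errors."""
    return math.sqrt(sum(e ** 2 for e in relative_errors))

n_detected = 2.35e4          # photons counted by the SiPM (placeholder)
n_incident = 1.0e5           # photons predicted by ray tracing (placeholder)
pde = n_detected / n_incident

u_calibration = 0.024        # +/-2.4% photodiode calibration error (from the abstract)
u_raytrace = 0.015           # ray-tracing model uncertainty (assumed placeholder)
u_statistics = 0.01          # counting statistics (assumed placeholder)

u_total = combined_relative_uncertainty(u_calibration, u_raytrace, u_statistics)
print(f"PDE = {pde:.3f} +/- {u_total * 100:.1f}% (relative)")
```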


Application of Deep Learning to Solar Data: 2. Generation of Solar UV & EUV images from magnetograms

  • Park, Eunsu;Moon, Yong-Jae;Lee, Harim;Lim, Daye
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.81.3-81.3 / 2019
  • In this study, we apply a conditional generative adversarial network (cGAN), a deep learning method, to image-to-image translation from solar magnetograms to solar UV and EUV images. For this, we train a model using pairs of SDO/AIA UV and EUV images in 9 wavelengths and their corresponding SDO/HMI line-of-sight magnetograms from 2011 to 2017, excluding August and September of each year. We evaluate the model by comparing pairs of real SDO/AIA images and the corresponding generated ones in August and September. Our results are as follows. First, we successfully generate SDO/AIA-like solar UV and EUV images from SDO/HMI magnetograms. Second, our model has pixel-to-pixel correlation coefficients (CC) higher than 0.8 except for 171. Third, our model slightly underestimates the pixel values in terms of the relative error (RE), but the errors are quite small. Fourth, considering CC and RE together, the 1600 and 1700 photospheric UV images, which have structures quite similar to the corresponding magnetograms, give the best results compared with the other lines. This methodology can be applied to many scientific fields that use several different filter images.
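
As a rough illustration of the two evaluation metrics named in the abstract, the sketch below computes a pixel-to-pixel correlation coefficient and a relative error between a generated image and its target; the arrays are random placeholders, not SDO data:

```python
# Illustrative sketch (not the authors' pipeline): pixel-to-pixel correlation
# coefficient (CC) and relative error (RE) between a generated image and the
# corresponding real image.
import numpy as np

def pixel_cc(generated, target):
    """Pearson correlation coefficient over all pixels."""
    g = generated.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    return np.corrcoef(g, t)[0, 1]

def relative_error(generated, target):
    """Relative error of the total pixel value (negative -> underestimate)."""
    return (generated.sum() - target.sum()) / target.sum()

rng = np.random.default_rng(0)
target = rng.random((1024, 1024))                               # stand-in for a real AIA image
generated = target + 0.05 * rng.standard_normal(target.shape)   # stand-in for a cGAN output

print(f"CC = {pixel_cc(generated, target):.3f}, RE = {relative_error(generated, target):+.3%}")
```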


Performance Analysis of Retinex-based Image Enhancement According to Color Domain and Gamma Correction Adaptation (Color Domain 및 Gamma Correction 적용에 따른 Retinex 기반 영상개선 알고리즘의 효과 분석)

  • Kim, Donghyung
    • Journal of Korea Society of Digital Industry and Information Management / v.15 no.1 / pp.99-107 / 2019
  • Retinex-based image enhancement exploits the property that human visual perception is more sensitive to differences from surrounding pixel values than to the pixel values themselves. Retinex-based algorithms produce enhanced images with different characteristics depending on the color space and gamma correction applied. In this paper, we set up eight experimental conditions according to the choice of color space and the application of gamma correction, and analyze the objective and subjective performance of the Retinex-based image enhancement algorithm under each condition to guide its implementation. When gamma correction is applied, images with quantitatively lower entropy and lower contrast are obtained. Applying the Retinex technique in the HSI color space rather than the RGB color space yields higher overall subjective image quality while preserving color.
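
A minimal single-scale Retinex sketch, applied to an intensity channel with optional gamma correction, illustrates the kind of pipeline variants being compared; the sigma and gamma values are assumptions, not the paper's settings:

```python
# Minimal single-scale Retinex (SSR) sketch on an intensity plane with optional
# gamma correction. An assumption-laden simplification, not the author's code.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(intensity, sigma=80.0, eps=1e-6):
    """log(I) - log(Gaussian-blurred I): emphasizes differences from the surround."""
    blurred = gaussian_filter(intensity, sigma)
    return np.log(intensity + eps) - np.log(blurred + eps)

def enhance_intensity(intensity, sigma=80.0, gamma=None):
    """Apply SSR to a [0,1] intensity plane, rescale to [0,1], optionally gamma-correct."""
    r = single_scale_retinex(intensity, sigma)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    if gamma is not None:
        r = np.power(r, gamma)        # gamma correction on the enhanced result
    return r

# Example: enhance the intensity channel of an HSI/HSV image held as a numpy array.
rng = np.random.default_rng(1)
i_channel = rng.random((480, 640))    # placeholder intensity plane
enhanced = enhance_intensity(i_channel, sigma=80.0, gamma=1.0 / 2.2)
```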

Face Detection Using Face Geometry (얼굴 기하에 기반한 얼굴 검출 알고리듬)

  • 류세진;은승엽
    • Proceedings of the IEEK Conference / 2002.06d / pp.49-52 / 2002
  • This paper presents a fast algorithm for face detection in color images on the internet. We use the Mahalanobis distance between a standard skin color and the actual pixel color in the IQ color space to segment skin-color regions, which serve as candidate face regions. The locations of the eye and mouth regions are then found by computing average pixel values along horizontal and vertical pixel lines. The geometry of the mouth and eye locations is compared with a standard face geometry to eliminate false face regions. Our method is simple and fast, so it can be applied to a face search engine for the internet.
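
A small sketch of skin-color segmentation with a Mahalanobis distance in IQ space, in the spirit of the abstract; the skin-color statistics and threshold below are illustrative placeholders, not the authors' values:

```python
# Sketch of skin segmentation via Mahalanobis distance in IQ space.
# The skin model statistics are placeholders, normally estimated from samples.
import numpy as np

def rgb_to_iq(rgb):
    """Convert an (H, W, 3) RGB image in [0,1] to its I and Q components (YIQ)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return np.stack([i, q], axis=-1)

def skin_mask(rgb, mean_iq, cov_iq, threshold=2.5):
    """Mark pixels whose Mahalanobis distance to the skin model is below threshold."""
    diff = rgb_to_iq(rgb) - mean_iq
    inv_cov = np.linalg.inv(cov_iq)
    d2 = np.einsum('...i,ij,...j->...', diff, inv_cov, diff)
    return np.sqrt(d2) < threshold

mean_iq = np.array([0.15, 0.03])                          # placeholder skin mean in IQ
cov_iq = np.array([[0.004, 0.0005], [0.0005, 0.002]])     # placeholder skin covariance
```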


Triqubit-State Measurement-Based Image Edge Detection Algorithm

  • Wang, Zhonghua;Huang, Faliang
    • Journal of Information Processing Systems / v.14 no.6 / pp.1331-1346 / 2018
  • To address the problem that gradient-based edge detection operators are sensitive to noise and produce pseudo edges, a triqubit-state measurement-based edge detection algorithm is presented in this paper. Combining local and global image structure information, triqubit superposition states are used to represent pixel features in order to locate image edges. The algorithm consists of three steps. First, an improved partial differential method is used to smooth the defect image. Second, the triqubit state is characterized by three elements, pixel saliency, edge statistical characteristics, and gray-scale contrast, to map the defect image from the gray space to the quantum space. Third, the edge image is output according to quantum measurement, local gradient maximization, and neighborhood chain-code searching. Simulation experiments indicate that, compared with other methods, our algorithm produces fewer pseudo edges and achieves higher edge detection accuracy.

Human Hand Detection Using Color Vision (컬러 시각을 이용한 사람 손의 검출)

  • Kim, Jun-Yup;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.21 no.1 / pp.28-33 / 2012
  • Visual sensing of human hands plays an important part in many man-machine interaction/interface systems. Most existing vision-based hand detection techniques depend on the color cues of human skin. The RGB color image from a vision sensor is often transformed to another color space as a preprocessing step for hand detection, because the color space transformation is assumed to increase detection accuracy. However, the actual effect of the color space transformation has not been well investigated in the literature. This paper presents a comparative evaluation of the pixel classification performance of hand-skin detection in four widely used color spaces: RGB, YIQ, HSV, and normalized rgb. The experimental results indicate that using the normalized red-green color values is the most reliable across different backgrounds, lighting conditions, individuals, and hand postures. Nonlinear classification of pixel colors using a multilayer neural network is also proposed to improve detection accuracy.
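
The normalized red-green chromaticity the paper finds most reliable can be computed as below; the box classifier is only a stand-in for the multilayer neural network described in the abstract:

```python
# Sketch of the normalized red-green (rg) chromaticity computation; the
# threshold box is an illustrative placeholder, not the trained classifier.
import numpy as np

def normalized_rg(rgb):
    """Per-pixel chromaticity r = R/(R+G+B), g = G/(R+G+B)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-12   # avoid division by zero
    r = rgb[..., 0:1] / s
    g = rgb[..., 1:2] / s
    return np.concatenate([r, g], axis=-1)

def simple_skin_mask(rgb, r_range=(0.36, 0.46), g_range=(0.28, 0.36)):
    """Crude box classifier in normalized rg space (placeholder for the MLP)."""
    rg = normalized_rg(rgb.astype(np.float64))
    r, g = rg[..., 0], rg[..., 1]
    return (r_range[0] < r) & (r < r_range[1]) & (g_range[0] < g) & (g < g_range[1])
```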

The Detection Scheme of Graph Area from Sea Level Measurements Recording Paper Images (조위관측기록지 이미지에서 그래프 영역 검출 기법)

  • Yu, Young-Jung;Kim, Young-Ju;Park, Seong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.11 / pp.2555-2562 / 2010
  • In this paper, we propose a method that extracts the sea level measurement graph from a scanned recording-paper image with minimal user interaction. First, a single pixel inside the graph area is selected. Background pixels are then determined automatically using the distance between the selected pixel and the other pixels in the LAB color space. In each vertical line, the pixel nearest to the selected pixel in the LAB color space is extracted, and the graph area is determined from those pixels. Experimental results show that sea level measurement graphs can be extracted with little interaction from various recording-paper images.
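
A simplified sketch of the column-wise nearest-pixel search in LAB space described above; the seed coordinate stands in for the user's single selection, and the background-pixel step is omitted:

```python
# Sketch of the per-column nearest-pixel search in LAB color space.
import numpy as np
from skimage.color import rgb2lab

def extract_graph_curve(rgb, seed_row, seed_col):
    """For each column, return the row whose LAB color is closest to the seed pixel."""
    lab = rgb2lab(rgb)                             # rgb: float image in [0, 1], shape (H, W, 3)
    seed = lab[seed_row, seed_col]
    dist = np.linalg.norm(lab - seed, axis=-1)     # Euclidean distance to the seed color
    return dist.argmin(axis=0)                     # one row index per vertical line (column)

# Usage (placeholder data standing in for a scanned recording-paper image):
rng = np.random.default_rng(2)
page = rng.random((600, 800, 3))
curve_rows = extract_graph_curve(page, seed_row=300, seed_col=400)
```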

Depth Upsampler Using Color and Depth Weight (색상정보와 깊이정보 가중치를 이용한 깊이영상 업샘플러)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.16 no.7 / pp.431-438 / 2016
  • In this paper, we present an upsampling technique for depth map images using color and depth weights. First, we construct a high-resolution image using bilinear interpolation. Next, we detect a common edge region using the RGB color space, the HSV color space, and the depth image. If an interpolated pixel belongs to the common edge region, we calculate color and depth weights over its 3×3 neighboring pixels and compute a cost value to determine the boundary pixel value. Finally, the pixel value with the minimum cost is taken as the pixel value of the high-resolution depth image. Simulation results show that the proposed algorithm achieves good performance in terms of PSNR and subjective visual quality.
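
A simplified sketch of the joint color/depth weighting on an edge pixel's 3×3 neighborhood; the cost terms and bandwidths here are assumptions rather than the paper's exact formulation:

```python
# Sketch: choose the candidate depth in a 3x3 neighborhood whose combined
# color/depth cost is smallest. Bandwidths sigma_c and sigma_d are placeholders.
import numpy as np

def refine_edge_pixel(cand_depths, cand_colors, pixel_color, interp_depth,
                      sigma_c=10.0, sigma_d=5.0):
    """cand_depths: (3, 3) candidate depth samples around the pixel,
    cand_colors: (3, 3, 3) their colors in the high-resolution color image,
    pixel_color: (3,) color of the pixel being refined,
    interp_depth: its bilinearly interpolated depth."""
    color_cost = np.linalg.norm(cand_colors - pixel_color, axis=-1) / sigma_c
    depth_cost = np.abs(cand_depths - interp_depth) / sigma_d
    cost = color_cost + depth_cost
    i, j = np.unravel_index(cost.argmin(), cost.shape)
    return cand_depths[i, j]
```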

Denoising of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, averaging or summing the pixel values of multiple exposures of the same area, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all, it takes a long time. If we can process a single-shot image well and achieve similar performance, we can overcome those weaknesses. Recent developments in deep learning have enabled things that were not possible with earlier algorithm-based programming, one of which is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). We used r-band camcol2 south data from SDSS Stripe 82. From all fields, we used image data stacked from only 22 individual exposures and, paired with each stacked image, the single-pass data included in that stack. All fields were cut into 128×128 pixel patches, giving 17930 images in total; 14234 pairs were used to train the cGAN and 3696 pairs were used to verify the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was 7.67×10⁻⁴, compared with 1.24×10⁻³ for the original input data. We also applied the model to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other denoising methods. In addition, in photometry, the number count of sources matched between the stacked and cGAN images is larger than that between the single-pass and stacked images, especially for fainter objects, and the magnitude completeness is also better for fainter objects. With this work, it becomes possible to reliably observe objects one magnitude fainter.
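
The RMS figures quoted above can be reproduced in form (not in value) with a simple computation like the following; the arrays are random stand-ins for 128×128 cutouts, not SDSS data:

```python
# Illustrative computation of the RMS pixel error between a generated image and
# the stacked target, and between the single-pass input and the stacked target.
import numpy as np

def rms_error(a, b):
    """Root-mean-square difference of pixel values between two images."""
    return np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

rng = np.random.default_rng(3)
stacked = rng.random((128, 128))                                     # stand-in for the stacked target
single = stacked + 1.2e-3 * rng.standard_normal(stacked.shape)       # noisy single-pass stand-in
generated = stacked + 7.7e-4 * rng.standard_normal(stacked.shape)    # stand-in cGAN output

print(f"RMS(generated, stacked) = {rms_error(generated, stacked):.2e}")
print(f"RMS(single, stacked)    = {rms_error(single, stacked):.2e}")
```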


Real-Time Hierarchical Techniques for Rendering of Translucent Materials and Screen-Space Interpolation (반투명 재질의 렌더링과 화면 보간을 위한 실시간 계층화 알고리즘)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of Korea Game Society / v.7 no.1 / pp.31-42 / 2007
  • In the natural world, most materials, such as skin, marble, and cloth, are translucent; their appearance is smooth and soft compared with metals or mirrors. In this paper, we propose a new GPU-based hierarchical rendering technique for translucent materials, based on the dipole diffusion approximation, that runs at interactive rates. Incident-light information on the surfaces (position, normal, and irradiance) is stored in 2D textures by rendering from the primary light's view. Huge numbers of pixel photons are clustered into quad-tree image pyramids. For each pixel, we select clusters (sets of photons) and approximate the multiple subsurface scattering term with those clusters. We also introduce a hierarchical screen-space interpolation technique that exploits spatial coherence with early-z culling on the GPU. We build image pyramids of the screen using mipmaps and a pixel shader; each pixel of a pyramid stores the position, normal, and spatial similarity of its child pixels. If a pixel's similarity is high, we shade that pixel once and interpolate the result across the corresponding pixels. Result images show that our method can interactively render deformable translucent objects by approximating hundreds of thousands of photons with only hundreds of clusters, without any preprocessing. Since the entire process uses an image-space approach on the GPU, our method is less dependent on scene complexity.
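
A CPU-side sketch of the screen-space interpolation decision: a coarse pyramid pixel whose children agree in normal and depth can be shaded once and interpolated; the thresholds are placeholders, and the paper implements this on the GPU with mipmaps and early-z culling:

```python
# Sketch of the per-pyramid-pixel similarity test that decides between shading
# and interpolation. Thresholds are illustrative placeholders.
import numpy as np

def children_similar(normals, depths, normal_thresh=0.95, depth_thresh=0.05):
    """normals: (2, 2, 3) child normals, depths: (2, 2) child depths.

    Returns True when the four children are similar enough that the shading
    result can be interpolated instead of recomputed for each child."""
    n = normals.reshape(4, 3)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    pairwise_dot = n @ n.T                                 # cosine similarity between children
    normals_agree = pairwise_dot.min() > normal_thresh
    depths_agree = (depths.max() - depths.min()) < depth_thresh
    return bool(normals_agree and depths_agree)
```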
