• Title/Summary/Keyword: single pixel

Search Results: 282

A study on Adaptive Multi-level Median Filter using Direction Information Scales (방향성 정보 척도를 이용한 적응적 다단 메디안 필터에 관한 연구)

  • 김수겸
    • Journal of Advanced Marine Engineering and Technology / v.28 no.4 / pp.611-617 / 2004
  • Pixel classification is one of the basic issues in image processing. The general characteristics of the pixels belonging to various classes are discussed, and the fundamental principles of pixel classification are given. A pixel classification scheme based on an image direction measure is then proposed, and an adaptive multi-level median filter is presented as a typical application of pixel classification. Using the direction information measure, an image can be divided into two types of areas: smooth areas and edge areas. A single-direction multi-level median filter is used in smooth areas, and a multi-direction multi-level median filter is applied in edge areas. Moreover, an adaptive mechanism is proposed to adjust the filter type and the size of the filter window. As a result, a better trade-off between detail preservation and noise filtering is obtained.
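
The abstract describes the classify-then-filter scheme only at a high level. The sketch below illustrates the idea with a simple gradient-based direction measure and a plain median combination rule; both are assumptions for illustration, since the paper's exact direction-information scale and multi-level filter definitions are not given in the abstract.

```python
import numpy as np

def direction_measure(patch):
    """Crude direction-information measure: ratio of the dominant directional
    gradient energy to the total gradient energy (assumed form, not the paper's)."""
    gy, gx = np.gradient(patch.astype(float))
    g45 = (gx + gy) / np.sqrt(2.0)
    g135 = (gx - gy) / np.sqrt(2.0)
    energies = np.array([np.sum(gx**2), np.sum(gy**2),
                         np.sum(g45**2), np.sum(g135**2)])
    return energies.max() / (energies.sum() + 1e-12)  # near 1 -> strong edge direction

def adaptive_multilevel_median(img, win=3, edge_thresh=0.5):
    """Classify each pixel as smooth/edge and apply a single- or multi-direction
    median filter accordingly (illustrative only)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            if direction_measure(patch) < edge_thresh:
                # smooth area: median over the full window
                out[i, j] = np.median(patch)
            else:
                # edge area: medians along several directions, then a simple combination
                c = pad
                dirs = [patch[c, :], patch[:, c],
                        np.diagonal(patch), np.diagonal(np.fliplr(patch))]
                out[i, j] = np.median([np.median(d) for d in dirs])
    return out
```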

Analysis of lenticular 3D liquid crystal displays using 3D pixel simulator

  • Kim, Hwi;Jung, Kyoung-Ho;Yun, Hae-Young;Lee, Seung-Hoon;Kim, Hee-Sub;Shin, Sung-Tae
    • 한국정보디스플레이학회:학술대회논문집 / 2009.10a / pp.443-446 / 2009
  • In this paper, an accurate ray-tracing-based visual analysis method for lenticular 3D liquid crystal displays (LCDs) is presented, together with some analysis results. In the developed method, a geometric-optics analysis is performed on a single 3D unit pixel of the 3D lenticular LCD. It is shown that the display characteristics of 3D lenticular LCD panels of arbitrary size can be evaluated through this 3D unit pixel analysis. The analysis results for a few representative structures of 3D lenticular LCDs are compared.
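
As a rough illustration of the kind of geometric-optics computation such a 3D unit-pixel analysis rests on, the sketch below refracts a single ray at a lenticular lens surface with Snell's law; the geometry, refractive indices, and ray are assumed placeholder values, not the paper's simulator.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with normal n (Snell's law).
    n1, n2 are refractive indices; returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                      # normal given on the far side; flip it
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Example: a ray leaving a sub-pixel exits the lenticular lens into air.
# Assumed numbers: lens material n = 1.5, air n = 1.0, cylindrical surface center below.
ray_dir = np.array([0.1, 1.0])         # mostly upward, slight sideways tilt
hit_point = np.array([0.05, 0.0])      # where the ray meets the lens surface
center = np.array([0.0, -0.48])        # center of the cylindrical surface
normal = hit_point - center            # outward surface normal at the hit point
print("refracted direction:", refract(ray_dir, normal, 1.5, 1.0))
```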


Efficient Single Image Dehazing by Pixel-based JBDCP and Low Complexity Transmission Estimation (저 복잡도 전달량 추정 및 픽셀 기반 JBDCP에 의한 효율적인 단일 영상 안개 제거 방법)

  • Kim, Jong-Ho
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.14 no.5 / pp.977-984 / 2019
  • This paper proposes a single-image dehazing method that uses a low-complexity transmission estimation and a pixel-based JBDCP (Joint Bright and Dark Channel Prior) for effective application to hazy outdoor images. Conventional transmission estimation includes a refinement process with high computational complexity and memory requirements. We propose a transmission estimation that combines pixel- and block-based dark channel information, which significantly reduces the complexity while accurately preserving edge information. Moreover, by obtaining a different air-light for each pixel position of the image using the pixel-based JBDCP, the transmission can be estimated in a way that reflects the image characteristics. Experimental results on various hazy images show that the proposed method exhibits excellent dehazing performance with low complexity compared to conventional methods, so it can be applied in various fields, including real-time devices.
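
A minimal sketch of the dark-channel dehazing pipeline the method builds on is shown below, assuming the standard global air-light estimate and default parameters (win, omega, t0); the paper's pixel-based JBDCP per-pixel air-light and combined pixel/block dark channel are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, win=15):
    """Block-based dark channel: per-pixel minimum over RGB followed by a local
    minimum filter; win=1 degenerates to the pixel-based dark channel."""
    return minimum_filter(img.min(axis=2), size=win)

def estimate_airlight(img, dark, top=0.001):
    """Global air-light from the brightest dark-channel pixels. The paper instead
    derives a per-pixel air-light from its pixel-based JBDCP."""
    idx = np.argsort(dark.ravel())[-max(1, int(top * dark.size)):]
    return img.reshape(-1, 3)[idx].max(axis=0)

def dehaze(img, omega=0.95, t0=0.1, win=15):
    """Classic dark-channel-prior dehazing based on I = J*t + A*(1 - t)."""
    A = estimate_airlight(img, dark_channel(img, win))
    t = 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6), win)  # transmission
    t = np.clip(t, t0, 1.0)[..., None]                              # avoid blow-up
    return np.clip((img - A) / t + A, 0.0, 1.0)                     # scene radiance
```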

Autoencoder-Based Defense Technique against One-Pixel Adversarial Attacks in Image Classification (이미지 분류를 위한 오토인코더 기반 One-Pixel 적대적 공격 방어기법)

  • Jeong-hyun Sim;Hyun-min Song
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1087-1098 / 2023
  • The rapid advancement of artificial intelligence (AI) technology has led to its proactive utilization across various fields. However, this widespread adoption of AI-based systems has raised concerns about the increasing threat of attacks on these systems. In particular, deep neural networks, commonly used in deep learning, have been found vulnerable to adversarial attacks that intentionally manipulate input data to induce model errors. In this study, we propose a method to protect image classification models from visually imperceptible One-Pixel attacks, where only a single pixel is altered in an image. The proposed defense technique utilizes an autoencoder model to remove potential threat elements from input images before forwarding them to the classification model. Experimental results, using the CIFAR-10 dataset, demonstrate that the autoencoder-based defense approach significantly improves the robustness of pretrained image classification models against One-Pixel attacks, with an average defense rate enhancement of 81.2%, all without the need for modifications to the existing models.
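
A minimal sketch of the defense idea follows, assuming a small convolutional autoencoder in PyTorch for CIFAR-10-sized inputs; the paper's exact architecture and training procedure are not given in the abstract.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Small convolutional autoencoder used as an input-purification step.
    The architecture is illustrative, not the one from the paper."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_predict(classifier, autoencoder, images):
    """Pass possibly-attacked images through the autoencoder before the
    unmodified pretrained classifier, as the abstract describes."""
    with torch.no_grad():
        purified = autoencoder(images)   # remove single-pixel perturbations
        return classifier(purified).argmax(dim=1)
```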

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, which averages or sums the pixel values of multiple exposures of a specific area, is regarded as the only way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish and, above all, it requires a long observing time. If a single-shot image can be handled well enough to achieve similar performance, these weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with earlier algorithm-based programming, one of which is generating data with more information from data with less information. As part of this, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). We used r-band camcol2 south data from SDSS Stripe 82. From all fields, we used image data stacked from only 22 individual images and, paired with each stacked image, the single-pass data included in that stack. All fields were cut into $128{\times}128$ pixel images, giving 17930 images in total; 14234 pairs were used to train the cGAN and 3696 pairs were used to verify the result. The RMS error of pixel values between the data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared to $1.24{\times}10^{-3}$ for the original input data. We also applied the method to a few test galaxy images, and the generated images were qualitatively closer to the stacked images than those from other de-noising methods. In addition, in photometry, the number count of sources matched between the cGAN output and the stacked image is larger than that between the single-pass and stacked images, especially for fainter objects, and the magnitude completeness also improved for fainter objects. With this work, it becomes possible to reliably observe objects about 1 magnitude fainter.
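
The stacking baseline and the RMS figure of merit referred to in the abstract are simple to state; a small sketch follows (array names, shapes, and values are assumed placeholders), with the cGAN itself omitted for brevity.

```python
import numpy as np

def stack_images(frames):
    """Image stacking: average many exposures of the same field, raising S/N at
    the cost of observing time (and of objects with fast proper motion)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def rms_error(pred, target):
    """Root-mean-square pixel error, the metric quoted in the abstract."""
    return np.sqrt(np.mean((pred - target) ** 2))

# e.g. 22 single-pass 128x128 cutouts of one field (placeholder data)
frames = [np.random.rand(128, 128) for _ in range(22)]
stacked = stack_images(frames)
print(rms_error(frames[0], stacked))   # single shot vs. stacked reference
```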


Precision Analysis of the Depth Measurement System Using a Single Camera with a Rotating Mirror (회전 평면경과 단일 카메라를 이용한 거리측정 시스템의 정밀도 분석)

  • ;;;Chun Shin Lin
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.11 / pp.626-633 / 2003
  • A theoretical analysis of a depth measurement system that uses a single camera and a rotating mirror has been performed. A camera in front of a rotating mirror acquires a sequence of reflected images, from which depth information is extracted. For an object point at a longer distance, the corresponding pixel in the sequence of images moves at a higher speed, and depth measurement based on such pixel movement is investigated. Since the mirror rotates about an axis parallel to the vertical axis of the image plane, the image of an object moves only horizontally, which eases the task of finding corresponding image points. In this paper, the principle of depth measurement based on the relation between the pixel movement speed and the depth of objects is investigated, and the mathematics necessary to implement the technique is derived and presented. The factors affecting the measurement precision have been studied: the analysis shows that the measurement error increases as the depth increases, and the rotational angle of the mirror between two image-takings also affects the measurement precision. Experimental results using a real camera-mirror setup are reported.
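
Because the mirror geometry constrains image motion to be horizontal, the displacement of a row between two image-takings can be measured by simple 1D correlation. The sketch below shows only that measurement step; the mapping from displacement to depth is left as a placeholder calibration function, since the abstract does not give the formula.

```python
import numpy as np

def horizontal_shift(row_a, row_b, max_shift=40):
    """Estimate the horizontal displacement (in pixels) of one image row relative
    to another by maximizing normalized cross-correlation over integer shifts."""
    a = (row_a - row_a.mean()) / (row_a.std() + 1e-12)
    b = (row_b - row_b.mean()) / (row_b.std() + 1e-12)
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(a, np.roll(b, s))   # np.roll wraps; acceptable for modest shifts
        if score > best_score:
            best, best_score = s, score
    return best

def depth_from_shift(shift_px, calib):
    """Placeholder mapping from pixel displacement to depth; in the paper this comes
    from the camera-mirror geometry and the mirror rotation angle."""
    return calib(shift_px)
```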

Improved measurement uncertainty of photon detection efficiency for single pixel Silicon photomultiplier

  • Yang, Seul Ki;Lee, Hye-Young;Jeon, Jina;Kim, Sug-Whan;Lee, Jik;Park, Il H.
    • The Bulletin of The Korean Astronomical Society / v.37 no.2 / pp.210.1-210.1 / 2012
  • We report a technique used to improve the measurement uncertainty of the photon detection efficiency (PDE) of a $1mm^2$ single-pixel SiPM. The setup consists of a 470 nm LED light source, two 2-inch integrating spheres, and two NIST-calibrated silicon photodiodes with a ${\pm}2.4%$ calibration error. With a ray-tracing simulation of our experimental setup, we predict the number of photons entering the SiPM and the measurement uncertainty. For the MPPC, the PDE suggested by Hamamatsu (1600 micro pixels), including crosstalk and afterpulse, is 23.5% at 470 nm. By using the new low-calibration-error photodiodes and the ray-tracing simulation, our result has a ${\pm}3%$ measurement uncertainty. The technical details of the measurement and simulation are presented along with the results and their implications.
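
The quantity being measured reduces to a ratio with propagated uncertainties. A back-of-the-envelope sketch follows; the photon counts and the ray-tracing uncertainty contribution are assumed placeholders, with only the ${\pm}2.4%$ photodiode calibration error taken from the abstract.

```python
import math

def photon_detection_efficiency(n_detected, n_incident):
    """PDE is the fraction of incident photons that produce a detected signal."""
    return n_detected / n_incident

def combined_rel_uncertainty(*rel_uncertainties):
    """Combine independent relative uncertainties in quadrature, e.g. the photodiode
    calibration error and the ray-tracing prediction uncertainty."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

pde = photon_detection_efficiency(2.1e5, 1.0e6)   # placeholder counts, not paper data
u_rel = combined_rel_uncertainty(0.024, 0.018)    # 2.4% calibration + assumed ray-tracing term
print(f"PDE = {pde:.3f} +/- {pde * u_rel:.3f}")
```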


EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing an image-level constraint between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by using epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which constructs the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
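
The pixel-level constraint rests on standard epipolar algebra: for a relative pose (R, t) and intrinsics K, matched pixels must satisfy x2^T F x1 ≈ 0. The sketch below computes only that residual; it is not the EpiLoc network or its loss weighting.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def fundamental_from_pose(R, t, K):
    """F = K^-T [t]_x R K^-1 for the relative pose (R, t) between two views."""
    E = skew(t) @ R
    K_inv = np.linalg.inv(K)
    return K_inv.T @ E @ K_inv

def epipolar_residuals(pts1, pts2, F):
    """Algebraic epipolar error x2^T F x1 for matched pixels (Nx2 arrays); a pose
    network can be penalized when this is far from zero."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    return np.einsum('ni,ij,nj->n', x2, F, x1)
```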

The effects of pixel density, sub-pixel structure, luminance, and illumination on legibility of smartphone (화소 밀집도, 화소 하부구조, 휘도, 조명 조도가 스마트폰 가독성에 미치는 영향)

  • Park, JongJin;Li, Hyung-Chul O.;Kim, ShinWoo
    • Science of Emotion and Sensibility / v.17 no.3 / pp.3-14 / 2014
  • Since the domestic introduction of the iPhone in 2009, smartphone use has increased rapidly, and many tasks previously performed by various devices are now performed on smartphones. In this process, reading small text on a small smartphone screen has become highly important. This research tested how smartphone display factors (pixel density, sub-pixel structure, luminance) and an environmental factor (illumination) affect legibility-related discomfort in text reading. The results indicated that legibility-related discomfort is largely affected by pixel density: people experience inconvenience when the pixel density drops below 300 PPI. Illumination had a limited effect; participants reported more discomfort when stimuli were presented under varying levels of illumination rather than a single illumination level. Sub-pixel structure and luminance did not affect legibility-related discomfort. Based on the results, we suggest lower-limit resolutions for text legibility on smart devices (smartphones, tablet computers) of different sizes.

SATELLITE ORBIT AND ATTITUDE MODELING FOR GEOMETRIC CORRECTION OF LINEAR PUSHBROOM IMAGES

  • Park, Myung-Jin;Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2002.10a / pp.543-547 / 2002
  • In this paper, we introduce an improved camera modeling method for linear pushbroom images over the method proposed by Orun and Natarajan (ON). The ON model shows an accuracy of within 1 pixel if more than 10 ground control points (GCPs) are provided. In general, there is a high correlation between platform position and attitude parameters, and the ON model ignores attitude variation in order to overcome this correlation. We propose a new method that obtains an optimal solution set of parameters without ignoring the attitude variation: we first assume that the attitude parameters are constant and estimate the platform position parameters, and then estimate the attitude parameters using the estimated position parameters. As a result, we can set up an accurate camera model for a linear pushbroom satellite scene. In particular, the camera model can be applied to the surrounding scenes, because our model provides sufficient information on the satellite's position and attitude not only for a single scene but for a whole imaging segment. We tested the method on two images: one with a pixel size of 6.6m$\times$6.6m acquired from EOC (Electro Optical Camera), and the other with a pixel size of 10m$\times$10m acquired from SPOT. Our camera modeling procedure was applied to the images and gave satisfactory results, with root mean square errors of 0.5 pixel and 0.3 pixel using 25 GCPs and 23 GCPs, respectively.
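
The two-step strategy in the abstract, estimating position with attitude held fixed and then estimating attitude from the updated positions, is an alternating least-squares loop. The sketch below shows only that control flow; the residual functions are toy stand-ins, since the pushbroom collinearity equations are paper-specific.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_by_alternation(pos_residuals, att_residuals, pos0, att0, n_iter=5):
    """Alternately refine platform position and attitude parameters instead of
    ignoring attitude variation; the residual functions stand in for the paper's
    pushbroom collinearity equations evaluated at the GCPs."""
    pos = np.asarray(pos0, dtype=float)
    att = np.asarray(att0, dtype=float)
    for _ in range(n_iter):
        pos = least_squares(lambda p: pos_residuals(p, att), pos).x  # attitude held fixed
        att = least_squares(lambda a: att_residuals(pos, a), att).x  # position held fixed
    return pos, att

# Toy usage: quadratic stand-ins for the collinearity residuals (not the paper's model).
pos_res = lambda p, a: p - np.array([1.0, 2.0]) + 0.1 * a.sum()
att_res = lambda p, a: a - np.array([0.01, 0.02, 0.03]) + 0.05 * p.sum()
print(estimate_by_alternation(pos_res, att_res, np.zeros(2), np.zeros(3)))
```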
