• Title/Summary/Keyword: Image pixel


Depth Up-Sampling via Pixel-Classifying and Joint Bilateral Filtering

  • Ren, Yannan;Liu, Ju;Yuan, Hui;Xiao, Yifan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3217-3238
    • /
    • 2018
  • In this paper, a depth image up-sampling method is put forward using pixel classification and joint bilateral filtering. By analyzing the edge maps derived from the high-resolution color image and the low-resolution depth map respectively, pixels in the up-sampled depth map can be classified into four categories: edge points, edge-neighbor points, texture points, and smooth points. First, the joint bilateral up-sampling (JBU) method is used to generate an initial up-sampled depth image. Then, for each pixel category, a different refinement method is employed to modify the initial up-sampled depth image. Experimental results show that the proposed algorithm reduces blurring artifacts with a lower bad pixel rate (BPR).
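The initial JBU step the abstract mentions can be sketched as follows. This is a minimal, naive implementation (not the authors' exact pipeline): the guide image is assumed grayscale in [0, 1], and the window radius and Gaussian sigmas are illustrative choices.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, sigma_s=1.0, sigma_r=0.1):
    """Naive joint bilateral up-sampling (JBU) sketch.

    depth_lr : (h, w) low-resolution depth map
    color_hr : (H, W) high-resolution grayscale guide image in [0, 1]
    Returns an (H, W) up-sampled depth map.
    """
    h, w = depth_lr.shape
    H, W = color_hr.shape
    sy, sx = H / h, W / w                    # up-sampling factors
    r = 2                                    # half-window, low-res coordinates
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / sy, x / sx          # position in the low-res grid
            y0 = min(h - 1, int(round(cy)))
            x0 = min(w - 1, int(round(cx)))
            wsum = vsum = 0.0
            for j in range(max(0, y0 - r), min(h, y0 + r + 1)):
                for i in range(max(0, x0 - r), min(w, x0 + r + 1)):
                    # spatial weight in low-res coordinates
                    ws = np.exp(-((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the high-res guide image
                    gy = min(H - 1, int(j * sy))
                    gx = min(W - 1, int(i * sx))
                    wr = np.exp(-((color_hr[y, x] - color_hr[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    wsum += ws * wr
                    vsum += ws * wr * depth_lr[j, i]
            out[y, x] = vsum / wsum if wsum > 0 else depth_lr[y0, x0]
    return out
```

The paper's contribution then refines this initial result per pixel class; that refinement is not reproduced here.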

High-sensitivity NIR Sensing with Stacked Photodiode Architecture

  • Hyunjoon Sung;Yunkyung Kim
    • Current Optics and Photonics
    • /
    • v.7 no.2
    • /
    • pp.200-206
    • /
    • 2023
  • Near-infrared (NIR) sensing technology using CMOS image sensors is used in many applications, including automobiles, biological inspection, surveillance, and mobile devices. An intuitive way to improve NIR sensitivity is to thicken the light absorption layer (silicon). However, thickening the silicon yields limited NIR sensitivity and brings other disadvantages, such as diminished optical performance (e.g., crosstalk) and difficulty in processing. In this paper, a pixel structure for NIR sensing using a stacked CMOS image sensor is introduced. The stacked CMOS image sensor has two photodetection layers: a conventional photodiode layer and a bottom photodiode. The bottom photodiode is used as the NIR absorption layer, so the suggested pixel structure does not change the thickness of the conventional photodiode. To verify the suggested pixel structure, sensitivity was simulated using an optical simulator. As a result, sensitivity was improved by a maximum of 130% and 160% at wavelengths of 850 nm and 940 nm, respectively, with a pixel size of 1.2 µm. Therefore, the proposed pixel structure is useful for NIR sensing without thickening the silicon.

Block-Based Low-Power CMOS Image Sensor with a Simple Pixel Structure

  • Kim, Ju-Yeong;Kim, Jeongyeob;Bae, Myunghan;Jo, Sung-Hyun;Lee, Minho;Choi, Byoung-Soo;Choi, Pyung;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.23 no.2
    • /
    • pp.87-93
    • /
    • 2014
  • In this paper, we propose a block-based low-power complementary metal oxide semiconductor (CMOS) image sensor (CIS) with a simple pixel structure for power efficiency. This method, which uses an additional computation circuit, makes it possible to reduce the power consumption of the pixel array. In addition, the computation circuit for a block-based CIS is very flexible for various types of pixel structures. The proposed CIS was designed and fabricated using a standard 0.18 µm CMOS process, and the performance of the fabricated chip was evaluated. From the resulting image, the proposed block-based CIS can calculate the contrast difference within each block and control the operating voltage of the unit blocks. Finally, we confirmed that the power consumption of the proposed CIS with a simple pixel structure can be reduced.

Image Encryption Using Phase-Based Virtual Image and Interferometer

  • Seo, Dong-Hoan;Kim, Soo-Joong
    • Journal of the Optical Society of Korea
    • /
    • v.6 no.4
    • /
    • pp.156-160
    • /
    • 2002
  • In this paper, we propose an improved optical security system using three phase-encoded images and the principle of interference. This optical system, based on a Mach-Zehnder interferometer, consists of one phase-encoded virtual image to be encrypted and two phase-encoded images, an encrypting image and a decrypting image, where every pixel in the three images has a phase value of '0' or 'π'. The proposed encryption is performed by the multiplication of an encrypting image and a phase-encoded virtual image which does not contain any information from the decrypted image. Therefore, even if unauthorized users steal and analyze the encrypted image, they cannot reconstruct the required image. This virtual image protects the original image from counterfeiting and unauthorized access. The decryption of the original image is simply performed by interference between a reference wave and a direct pixel-to-pixel mapping image of the encrypted image with a decrypting image. Computer simulations confirmed the effectiveness of the proposed optical technique for optical security applications.
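Because every pixel carries a phase of 0 or π, the multiplication of phase-encoded images behaves like a sign (or XOR) operation. The sketch below models this numerically, representing exp(i·0) = +1 and exp(i·π) = −1; the optical interferometric decryption step is abstracted as a second multiplication, and the 4×4 images and seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_encode(bits):
    """Map binary pixels to binary phases: bit 0 -> exp(i*0) = +1, bit 1 -> exp(i*pi) = -1."""
    return np.where(bits == 0, 1.0, -1.0)

# hypothetical 4x4 binary virtual image to protect, and a random key (encrypting) image
virtual = rng.integers(0, 2, size=(4, 4))
key = rng.integers(0, 2, size=(4, 4))

# encryption: pixel-wise multiplication of the two phase-encoded images
encrypted = phase_encode(virtual) * phase_encode(key)

# decryption: for 0/pi phases the key is its own conjugate, so multiplying
# by the key's phase image again recovers the virtual image's phases
decrypted = encrypted * phase_encode(key)
recovered = (decrypted < 0).astype(int)      # -1 -> bit 1, +1 -> bit 0
```

Without the key, `encrypted` is a uniformly random ±1 array and reveals nothing about `virtual`, which mirrors the security claim in the abstract.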

Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research
    • /
    • v.44 no.4
    • /
    • pp.149-155
    • /
    • 2019
  • Background: Magnetic resonance (MR) image-guided radiation therapy systems enable real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue. However, the delivered dose may differ due to volumetric changes during the image registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information required for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield units of the deformed CT. Using a deep learning model (U-NET), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared to deformed CT images generated using deformable registration. A pixel-to-pixel match was conducted to compare the synthetic and deformed CT images. Results and Discussion: In the two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixels/deformed planning CT pixels), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting possible use for real-time RT. A deep learning model may further improve the match rate of synthetic CT with larger MR imaging datasets.
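The pixel-to-pixel match metric can be sketched as below: classify each pixel into one of the five tissue types by Hounsfield unit, then count agreeing pixels. The HU boundaries here are assumed illustrative values, not the paper's actual thresholds.

```python
import numpy as np

# assumed HU class boundaries separating air / lung / fat / soft tissue / bone
BINS = [-950, -700, -100, 100]

def tissue_class(ct):
    """Map a Hounsfield-unit image to tissue class indices 0..4 via the boundaries."""
    return np.digitize(ct, BINS)

def pixel_match_rate(synthetic_ct, deformed_ct):
    """Fraction of pixels whose tissue class agrees between the two CT images."""
    return float(np.mean(tissue_class(synthetic_ct) == tissue_class(deformed_ct)))
```

A per-section average (as reported in the abstract) would apply `pixel_match_rate` slice by slice and average the results.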

Sub-Pixel Analysis of Hyperspectral Image Using Linear Spectral Mixing Model and Convex Geometry Concept

  • Kim, Dae-Sung;Kim, Yong-Il;Lim, Young-Jae
    • Korean Journal of Geomatics
    • /
    • v.4 no.1
    • /
    • pp.1-8
    • /
    • 2004
  • In middle-resolution remote sensing, the Ground Sampled Distance (GSD) that the detector senses and samples is generally larger than the actual size of the objects (or materials) of interest, so several objects are embedded in a single pixel. In this case, as it is impossible to detect these objects with conventional spatial-based image processing techniques, detection has to be carried out at the sub-pixel level through spectral properties. In this paper, we explain the sub-pixel analysis algorithm known as the Linear Spectral Mixing (LSM) model, which we tested on Hyperion data. To find the endmembers used as prior knowledge for the LSM model, we applied the concept of convex geometry to the two-dimensional scatter plot. Atmospheric correction and Minimum Noise Fraction techniques are presented for the pre-processing of the Hyperion data. As the LSM model is the simplest approach to sub-pixel analysis, the results of our experiment are not outstanding, but they demonstrate that sub-pixel analysis yields much more information than conventional image classification.
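The LSM model expresses each pixel spectrum as a linear combination of endmember spectra whose abundance fractions sum to one. A minimal sketch, enforcing the sum-to-one constraint as a heavily weighted extra least-squares equation (the paper's exact solver and constraints, e.g. non-negativity, are not specified here):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing sketch: solve pixel ~ E @ a with sum(a) = 1.

    pixel      : (bands,) observed spectrum
    endmembers : (bands, m) endmember spectra as columns
    Returns the (m,) abundance vector a.
    """
    bands, m = endmembers.shape
    # append the sum-to-one constraint as a heavily weighted extra row
    A = np.vstack([endmembers, np.full((1, m), 1e3)])
    b = np.append(pixel, 1e3)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

For example, a pixel mixed from two endmembers with fractions 0.3 and 0.7 unmixes back to approximately those fractions.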


CCD Pixel Correction Table Generation for MSC

  • Kim Young Sun;Kong Jong-Pil;Heo Haeng-Pal;Park Jong-Euk;Paik Hong-Yul
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.471-474
    • /
    • 2004
  • Not all CCD pixels generate a uniform value for uniform radiance, due to manufacturing process variations and individual pixel characteristics. In addition, image data compression is essential for real-time image transmission because of the high line rate and the limited RF bandwidth. This pixel non-uniformity and the lossy compression make CCD pixel correction necessary in the on-orbit condition. In the MSC system, the NUC unit, which is a part of the MSC PMU, is in charge of the correction for each CCD pixel. The correction is performed with gain and offset tables for each pixel and each TDI mode. These correction tables are generated and programmed into the PMU flash memory through various image data tests on the ground. They can also be uploaded from the ground station after on-orbit calibration. This paper describes the principle of the table generation and the method of testing non-uniformity after NUC.
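A per-pixel gain/offset non-uniformity correction of the kind described can be sketched as a two-point calibration: from a dark and a bright flat-field capture, choose gain and offset so every pixel maps its own response onto common target levels. This is a generic NUC sketch, not the MSC table-generation procedure itself.

```python
import numpy as np

def build_correction_table(dark, bright, target_dark, target_bright):
    """Per-pixel gain/offset from two flat-field captures (two-point NUC sketch).

    corrected = gain * raw + offset, chosen so that each pixel's own
    dark/bright responses map onto the common target levels.
    """
    gain = (target_bright - target_dark) / (bright - dark)
    offset = target_dark - gain * dark
    return gain, offset

def correct(raw, gain, offset):
    """Apply the correction table to a raw frame."""
    return gain * raw + offset
```

Separate tables would be built per TDI mode, as the abstract notes.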


A Method for Estimating Local Intelligibility for Adaptive Digital Image Decimation (적응형 디지털 영상 축소를 위한 국부 가해성 추정 기법)

  • 곽노윤
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.4 no.4
    • /
    • pp.391-397
    • /
    • 2003
  • This paper presents a digital image decimation algorithm that generates the value of each decimated element as the average of a target pixel value and the value of a neighboring intelligible element, to adaptively reflect the merits of the ZOD and FOD methods in the decimated image. First, a target pixel located at the center of a sliding window is selected; then the gradient amplitudes of its right neighbor and its lower neighbor are calculated using a first-order derivative operator. Second, each gradient amplitude is divided by the sum of the two gradient amplitudes to generate its intelligible weight. Next, the value of the neighboring intelligible element is obtained by adding the value of the right neighbor times its intelligible weight to the value of the lower neighbor times its intelligible weight. The decimated image is acquired by applying this process repeatedly to all pixels in the input image.
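The steps above can be sketched for 2:1 decimation as follows. The choice of first-order derivative operator is an assumption here (a simple forward difference against the target pixel); the paper's exact operator and window handling may differ.

```python
import numpy as np

def adaptive_decimate(img):
    """2:1 decimation sketch: average of target pixel and weighted neighbors."""
    H, W = img.shape
    out = np.zeros((H // 2, W // 2))
    for oy in range(H // 2):
        for ox in range(W // 2):
            y, x = 2 * oy, 2 * ox                    # target pixel
            right, lower = img[y, x + 1], img[y + 1, x]
            # assumed first-order gradient amplitudes of the two neighbors
            gr = abs(float(right) - float(img[y, x]))
            gl = abs(float(lower) - float(img[y, x]))
            s = gr + gl
            # intelligible weights: each amplitude over the sum
            wr, wl = (gr / s, gl / s) if s > 0 else (0.5, 0.5)
            neighbor = wr * right + wl * lower       # neighboring intelligible element
            out[oy, ox] = 0.5 * (img[y, x] + neighbor)
    return out
```

On a flat region both gradients vanish, the weights fall back to 0.5 each, and the output equals the input level, as expected.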


FY-2C S-VISSR2.0 Navigation by MTSAT Image Navigation (MTSAT Image Navigation 알고리즘을 이용한 FY-2C S-VISSR2.0 Navigation)

  • Jeon, Bong-Ki;Kim, Tae-Hoon;Kim, Tae-Young;Ahn, Sang-Il;Sakong, Young-Bo
    • Proceedings of the KSRS Conference
    • /
    • 2007.03a
    • /
    • pp.251-256
    • /
    • 2007
  • The FY-2C satellite is a Chinese geostationary meteorological satellite launched in October 2004 and operated at 105°E, and its observation images cover the Korean peninsula. The navigation algorithm for FY-2C S-VISSR2.0 [1] has not been published, so the Simplified Mapping Block information included in S-VISSR2.0 must be used for navigation. Since the Simplified Mapping Block provides information only at 5-degree intervals, interpolation must be used to obtain navigation information for all coordinates of the observed region. However, interpolation errors can grow with distance from the reference points. Therefore, in this paper we apply the MTSAT Image Navigation algorithm [2], which can provide navigation information for all coordinates, to FY-2C S-VISSR2.0 and analyze the differences from the Simplified Mapping Block. For the analysis, we compared the column and line values of the Simplified Mapping Block and the MTSAT Image Navigation algorithm at 5-degree grid points (latitude/longitude), compared the quality of the geolocated images, and compared both against the coastline of the WDB2 map data. The results show that the column and line values at the grid points differed by less than 0.5. In the geolocated image comparison, there was no difference near the grid points, but farther from the grid points the image generated by the MTSAT Image Navigation algorithm was of higher quality. In the comparison with the WDB2 coastline, both methods produced the same errors: along the image column axis the error averaged 1.847 pixels (maximum 6, minimum 0), and along the line axis it averaged 0.135 pixels (maximum 4, minimum 0).


A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.1
    • /
    • pp.16-25
    • /
    • 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automated precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image. A thresholding technique like the Otsu method is then applied to distinguish vegetation parts from the background. An obvious demerit of threshold-based segmentation is that classification of each pixel into vegetation or background is carried out solely using the color feature of the pixel itself, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method which employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input data of the neural network for each pixel are the 2-dimensional gray-level values surrounding the pixel. To generate a gray-scale image from a raw RGB color image, the well-known Excess Green minus Excess Red index was used. Experimental results using 80 field images of 4 vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and their harmonic mean.
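The Excess Green minus Excess Red (ExG−ExR) index used to build the gray-scale input can be sketched as below, using the commonly cited definitions ExG = 2g − r − b and ExR = 1.4r − g on chromatic coordinates (r + g + b = 1 per pixel); the paper is assumed to follow this standard form.

```python
import numpy as np

def exg_minus_exr(rgb):
    """Excess Green minus Excess Red index for an (H, W, 3) RGB image.

    Uses chromatic coordinates r, g, b normalized so r + g + b = 1 per pixel,
    then returns ExG - ExR = (2g - r - b) - (1.4r - g).
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg - exr
```

A pure green pixel scores 3.0 and a pure red pixel −2.4, so vegetation stands out strongly before the MLP classification stage.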