• Title/Summary/Keyword: Image Intensity


Speckle Removal of SAR Imagery Using a Point-Jacobian Iteration MAP Estimation

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.1
    • /
    • pp.33-42
    • /
    • 2007
  • In this paper, an iterative MAP approach using a Bayesian model, based on the lognormal distribution for image intensity and a Gibbs random field (GRF) for image texture, is proposed for despeckling SAR images corrupted by multiplicative speckle noise. When the image intensity is logarithmically transformed, the speckle noise becomes approximately additive Gaussian noise and tends to a normal distribution much faster than the intensity itself. Markov random fields (MRFs) have been used to model spatially correlated, signal-dependent phenomena in speckled SAR images. The MRF is incorporated into digital image analysis by viewing pixel types as states of molecules in a lattice-like physical system defined on a GRF. Because of the MRF-GRF equivalence, the assignment of an energy function to the physical system determines its Gibbs measure, which is used to model molecular interactions. The proposed Point-Jacobian Iterative MAP estimation method was first evaluated using simulation data generated by the Monte Carlo method. The methodology was then applied to data acquired by ESA's ERS satellite over the Nonsan area of the Korean Peninsula. In the extensive experiments of this study, the proposed method demonstrated the capability to suppress speckle noise and estimate the noise-free intensity.
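
    The key modeling step above — that a logarithmic transform turns multiplicative speckle into additive noise — can be sketched as follows. This is a minimal NumPy illustration with assumed parameters (a unit-mean gamma speckle model standing in for 4-look SAR speckle), not the authors' estimator:

    ```python
    import numpy as np

    def to_log_domain(intensity):
        """Log-transform speckled intensity. Under the multiplicative model
        I = R * n, we get log I = log R + log n: the noise term becomes
        additive (and, per the abstract, approximately Gaussian)."""
        return np.log(np.maximum(intensity, 1e-12))

    rng = np.random.default_rng(0)
    reflectance = np.full(1000, 5.0)                        # noise-free intensity R
    speckle = rng.gamma(shape=4.0, scale=0.25, size=1000)   # unit-mean speckle n
    observed = reflectance * speckle                        # multiplicative model
    log_obs = to_log_domain(observed)                       # additive-noise domain
    ```

    In the log domain, standard additive-noise estimators (such as the MAP iteration described above) can be applied before exponentiating back.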

Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo;Yoo, Kook-Yeol;Park, Ju H.;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.1
    • /
    • pp.1-9
    • /
    • 2019
  • In this paper, we propose a method of generating a 2D gray image from 3D LiDAR reflection intensity. The proposed method uses a Fully Convolutional Network (FCN) to generate the gray image from the 2D reflection intensity projected from the 3D LiDAR intensity. Both the encoder and decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer, and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI data set, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method improves peak signal-to-noise ratio by 8.56 dB and structural similarity index measure by 0.33 compared with conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method could be used as an assistance tool in night-time driving systems for autonomous vehicles.
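
    The convolution block described above (3×3 convolution, batch normalization, activation) can be sketched in plain NumPy for a single 2-D feature map. This is an illustrative toy, not the authors' FCN; the edge padding and per-map normalization are assumptions:

    ```python
    import numpy as np

    def conv_block(x, kernel):
        """One FCN-style block: 3x3 convolution, batch normalization,
        then ReLU activation, on a single 2-D feature map."""
        h, w = x.shape
        pad = np.pad(x, 1, mode="edge")          # keep output size equal to input
        out = np.zeros((h, w), dtype=float)
        for i in range(h):                        # 3x3 convolution (correlation form)
            for j in range(w):
                out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
        # batch normalization over the map (epsilon for numerical stability)
        out = (out - out.mean()) / np.sqrt(out.var() + 1e-5)
        return np.maximum(out, 0.0)               # ReLU activation
    ```

    Stacking such blocks, with downsampling in the encoder and upsampling in the decoder, gives the symmetric architecture the abstract describes.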

Wide Dynamic Range CMOS Image Sensor with Adjustable Sensitivity Using Cascode MOSFET and Inverter

  • Seong, Donghyun;Choi, Byoung-Soo;Kim, Sang-Hwan;Lee, Jimin;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology
    • /
    • v.27 no.3
    • /
    • pp.160-164
    • /
    • 2018
  • In this paper, a wide dynamic range complementary metal-oxide-semiconductor (CMOS) image sensor with adjustable sensitivity, using a cascode metal-oxide-semiconductor field-effect transistor (MOSFET) and an inverter, is proposed. The characteristics of the CMOS image sensor were analyzed through experimental results. The proposed active pixel sensor consists of eight transistors operated under various light intensity conditions. The cascode MOSFET operates as a constant current source, and the current it generates varies with the light intensity. The proposed CMOS image sensor has a wide dynamic range under high illumination owing to its logarithmic response to light intensity. A CMOS inverter is added to the proposed active pixel sensor; its role is to select either the conventional mode or the wide dynamic range mode. The cascode MOSFET allows current to flow when the CMOS inverter is turned on. The number of pixels is 140 (H) × 180 (V), and the CMOS image sensor architecture is composed of a pixel array, a multiplexer (MUX), shift registers, and biasing circuits. The sensor was fabricated using a 0.35 μm 2-poly 4-metal standard CMOS process.
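
    The dynamic-range behavior the abstract describes — roughly linear response in the conventional regime and logarithmic compression under high illumination — can be modeled numerically. The piecewise form, threshold, and scaling below are illustrative assumptions, not the measured transfer curve of the fabricated sensor:

    ```python
    import math

    def pixel_response(light, threshold=1.0):
        """Toy pixel transfer curve: linear below `threshold` (conventional
        mode), logarithmic above it (wide dynamic range mode), continuous
        at the switch point."""
        if light <= threshold:
            return light
        return threshold + math.log(light / threshold)
    ```

    Logarithmic compression is what lets a few volts of output swing cover many decades of illumination, at the cost of reduced contrast in bright scenes.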

Image Segmentation Based on Fusion of Range and Intensity Images (거리영상과 밝기영상의 fusion을 이용한 영상분할)

  • Chang, In-Su;Park, Rae-Hong
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.9
    • /
    • pp.95-103
    • /
    • 1998
  • This paper proposes an image segmentation algorithm based on the fusion of range and intensity images. Based on Bayesian theory, a priori knowledge is encoded by a Markov random field (MRF). A maximum a posteriori (MAP) estimator is constructed using features extracted from the range and intensity images. Objects are approximated by local planar surfaces in the range images, and the parameter space is constructed from the surface parameters estimated pixelwise. In the intensity images, the α-trimmed variance constitutes the intensity feature. An image is segmented by optimizing the MAP estimator, which is constructed using a likelihood function based on edge information. Computer simulation results show that the proposed fusion algorithm effectively segments images independently of shadow, noise, and light blurring.
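
    The α-trimmed variance used above as the intensity feature can be sketched directly: sort the samples in a local window, discard a fraction α from each tail, and take the variance of the remainder. The window handling here is a minimal illustration:

    ```python
    import numpy as np

    def alpha_trimmed_variance(window, alpha=0.1):
        """Alpha-trimmed variance of an intensity window: robust to a
        small fraction of outliers in either tail."""
        v = np.sort(np.asarray(window, dtype=float).ravel())
        k = int(alpha * v.size)                  # samples trimmed per tail
        trimmed = v[k:v.size - k] if k > 0 else v
        return float(trimmed.var())
    ```

    Trimming makes the feature insensitive to isolated outliers (e.g. specular highlights), which is why it is preferred over the plain variance for noisy intensity data.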


A Basic Study on the Conversion of Color Image into Musical Elements based on a Synesthetic Perception (공감각인지기반 컬러이미지-음악요소 변환에 관한 기초연구)

  • Kim, Sung-Il
    • Science of Emotion and Sensibility
    • /
    • v.16 no.2
    • /
    • pp.187-194
    • /
    • 2013
  • The final aim of the present study is to build a system for converting a color image into musical elements based on synesthetic perception, emulating the human synesthetic skill of associating a color image with a specific sound. This is done on the basis of the similarities between the physical frequency information of light and of sound. As a first step, an input true-color image is converted into hue, saturation, and intensity domains based on a color model conversion theory. In the next step, musical elements including note, octave, loudness, and duration are extracted from each domain of the HSI color model. A fundamental frequency (F0) is extracted from the hue and intensity histograms, while loudness and duration are extracted from the intensity and saturation histograms, respectively. In experiments, the proposed system for converting a color image into musical elements was implemented using standard C and Microsoft Visual C++ (ver. 6.0). Through the proposed system, the extracted musical elements were synthesized to generate a sound source in WAV file format. The simulation results revealed that the musical elements extracted from an input RGB color image were reflected in the output sound signals.
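
    A hue-to-pitch mapping of the kind the abstract describes can be sketched as follows. The 12-semitone quantization, the middle-C base frequency, and the use of the HSV hue channel (which coincides with HSI hue) are illustrative assumptions, not the paper's exact histogram-based F0 extraction:

    ```python
    import colorsys

    def hue_to_note_frequency(r, g, b, base_freq=261.63):
        """Map a pixel's hue (from RGB in [0, 1]) to one of 12
        equal-tempered semitones in the octave above base_freq."""
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)   # hue in [0, 1)
        semitone = int(round(h * 12)) % 12        # quantize hue to 12 notes
        return base_freq * 2.0 ** (semitone / 12.0)
    ```

    Applied per pixel (or per histogram bin, as in the paper), such a mapping yields note frequencies that can then be synthesized into a WAV waveform.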


Intensity Correction of 3D Stereoscopic Images Using Binarization-Based Region Segmentation (이진화기반 영역분할을 이용한 3D입체영상의 밝기보정)

  • Kim, Sang-Hyun;Kim, Jeong-Yeop
    • The KIPS Transactions:PartB
    • /
    • v.18B no.5
    • /
    • pp.265-270
    • /
    • 2011
  • In this paper, we propose a method for intensity correction using binarization-based region segmentation in 3D stereoscopic images. In the proposed method, the right image of a 3D stereoscopic pair is segmented using binarization, and small regions in the segmented image are eliminated. For each region in the right image, a corresponding region in the left image is determined through region matching using the correlation coefficient. During region matching, to prevent overlap between regions, we remove a portion of the area close to the region boundary using a morphological filter. The intensity correction between the left and right images is then performed through histogram specification between the corresponding regions. Simulation results show that the proposed method yields a smaller matching error than the conventional method when the right image is generated from the left image using block-based motion compensation.
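
    The histogram specification step above — remapping one region's gray levels so their cumulative distribution matches the corresponding region's — can be sketched as a CDF lookup. This is a minimal 1-D illustration, not the paper's full per-region pipeline:

    ```python
    import numpy as np

    def specify_histogram(source, reference):
        """Remap `source` gray levels so their cumulative distribution
        matches that of `reference` (histogram specification)."""
        s_values, s_counts = np.unique(source, return_counts=True)
        r_values, r_counts = np.unique(reference, return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size
        r_cdf = np.cumsum(r_counts) / reference.size
        # for each source level's CDF value, look up the reference level
        mapped = np.interp(s_cdf, r_cdf, r_values)
        lut = dict(zip(s_values, mapped))
        return np.vectorize(lut.get)(source)
    ```

    Applying this per matched region pair equalizes the left/right intensity statistics, which is what reduces the block-matching error reported above.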

Rural Land Cover Classification using Multispectral Image and LIDAR Data (다중분광영상과 LIDAR자료를 이용한 농업지역 토지피복 분류)

  • Jang Jae-Dong
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.2
    • /
    • pp.101-110
    • /
    • 2006
  • The accuracy of rural land cover classification using airborne multispectral images and LIDAR (Light Detection And Ranging) data was analyzed. The multispectral image consists of three bands: green, red, and near-infrared. An intensity image was derived from the first returns of the LIDAR, and a vegetation height image was calculated as the difference between the elevation of the first returns and a DEM (Digital Elevation Model) derived from the last returns. Using the maximum likelihood classification method, the three multispectral bands, the LIDAR vegetation height image, and the intensity image were employed for land cover classification. The overall classification accuracy using all five images was 85.6%, about 10% higher than that using only the three multispectral bands. The accuracy of the rural land cover map was improved because the LIDAR data clearly distinguished the heights of different crops, and of crops versus trees, and because the LIDAR intensity was used in the classification.
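
    The vegetation height derivation described above is a per-pixel difference between the first-return elevation and the bare-earth DEM from the last returns. A minimal sketch (the clipping of negative differences to zero is an assumption for handling noise, not stated in the abstract):

    ```python
    import numpy as np

    def vegetation_height(first_return_elev, dem_from_last_returns):
        """Vegetation height image: first-return elevation minus the
        bare-earth DEM; small negative differences are treated as noise
        and clipped to zero."""
        diff = np.asarray(first_return_elev, float) - np.asarray(dem_from_last_returns, float)
        return np.maximum(diff, 0.0)
    ```

    Stacked with the three multispectral bands and the intensity image, this height layer forms the five-band input to the maximum likelihood classifier.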

Development of an Image Tracking Technique for a Moving Target (이동중인 표적에 대한 영상추적기법의 개발)

  • 양승윤;이종헌;이만형
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1988.10a
    • /
    • pp.183-186
    • /
    • 1988
  • The problem addressed in this paper is the accurate tracking of a dynamic target using outputs from a forward-looking infrared (FLIR) sensor as measurements. The important variations are 1) the spread of the target intensity pattern in the FLIR image plane, 2) the target motion characteristics, and 3) the rms value and both the spatial and temporal correlation of the background noise. Based on these insights, design modifications and on-line adaptation capabilities are incorporated to enable this type of filter to track highly maneuverable targets, such as air-to-air missiles, with spatially distributed and changing image intensity profiles against background clutter.


Quantitative Analysis of MR Image in Cerebral Infarction Period (뇌경색 시기별 MR영상의 정량적 분석)

  • Park, Byeong-Rae;Ha, Kwang;Kim, Hak-Jin;Lee, Seok-Hong;Jeon, Gye-Rok
    • Journal of radiological science and technology
    • /
    • v.23 no.1
    • /
    • pp.39-47
    • /
    • 2000
  • In this study, we compared and analyzed the signal intensities of DWI (diffusion-weighted images), used for early diagnosis of cerebral infarction, together with T2-weighted and FLAIR images, classified by infarction period following ischemic stroke. To compare the three types of image, we performed polynomial warping and an affine transform for image matching. Using the proposed algorithm, we calculated the signal intensity differences between DWI and T2WI and between DWI and FLAIR. The quantified values from the calculated data closely matched those obtained manually. We quantified each period and performed pseudo-color mapping by comparing signal intensities against the previously obtained manual data, and compared the quantified results of this paper with the decisions of physicians. The examined means and standard deviations for each infarction stage are as follows: the DWI−T2WI signal intensity differences are 197.7±6.9 in the hyperacute, 110.2±5.4 in the acute, and 67.8±7.2 in the subacute stage; the DWI−FLAIR differences are 199.8±7.5 in the hyperacute, 115.3±8.0 in the acute, and 70.9±5.8 in the subacute stage. The cerebral infarction period can thus be quantified and determined objectively. According to this study, DWI is very accurate for early diagnosis. We classified the period of infarction occurrence to analyze the diseased and normal regions in the DWI, T2WI, and FLAIR images.
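
    The tabulated quantities above are per-stage means and standard deviations of a signal-intensity difference image inside a lesion region. After image matching, the computation reduces to the following sketch (the mask-based region selection is an assumed detail):

    ```python
    import numpy as np

    def stage_difference_stats(dwi, other, mask):
        """Mean and standard deviation of the DWI minus other-sequence
        (T2WI or FLAIR) signal-intensity difference inside a lesion mask."""
        d = (np.asarray(dwi, float) - np.asarray(other, float))[np.asarray(mask, bool)]
        return float(d.mean()), float(d.std())
    ```

    Computing these statistics per registered image pair and per infarction stage yields figures directly comparable to the reported 197.7±6.9 (hyperacute) through 67.8±7.2 (subacute) ranges.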


Image Recognition Based on Nonlinear Equalization and Multidimensional Intensity Variation (비선형 평활화와 다차원의 명암변화에 기반을 둔 영상인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.5
    • /
    • pp.504-511
    • /
    • 2014
  • This paper presents a hybrid recognition method based on nonlinear histogram equalization and the multidimensional intensity variation of an image. Nonlinear histogram equalization based on an adaptively modified function is applied to improve image quality by adjusting brightness. Multidimensional intensity variation, which considers the extent of 4-step changes in brightness between adjacent pixels, is also applied to accurately reflect the attributes of the image. Statistical correlation, measured by the normalized cross-correlation (NCC) coefficient, is applied to comprehensively measure the similarity between images; the NCC is computed from the intensity variation in each of two directions (x-axis and y-axis). The proposed method has been applied to the problem of recognizing 50 face images of 40×40 pixels. The experimental results show that the proposed method has superior recognition performance compared to methods that perform no histogram equalization or only linear histogram equalization.
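
    The NCC similarity measure used above is the zero-mean, unit-norm dot product of two images. A minimal sketch on flattened arrays (the per-direction variation images the paper feeds into it are omitted):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation coefficient between two images,
        in [-1, 1]; invariant to brightness offset and contrast scaling."""
        a = np.asarray(a, float).ravel()
        b = np.asarray(b, float).ravel()
        a = a - a.mean()                # remove brightness offset
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ```

    Because NCC is invariant to affine intensity changes, it pairs naturally with the histogram-equalization preprocessing the paper describes.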