• Title/Summary/Keyword: single pixel

Search Results: 281

Detection of Group of Targets Using High Resolution Satellite SAR and EO Images (고해상도 SAR 영상 및 EO 영상을 이용한 표적군 검출 기법 개발)

  • Kim, So-Yeon;Kim, Sang-Wan
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.111-125 / 2015
  • In this study, target detection using both high-resolution satellite SAR and Electro-Optical (EO) images, such as TerraSAR-X and WorldView-2, is performed with the characteristics of the targets taken into account. The targets of interest are stationary and appear as clustered (group) targets. After target detection in the SAR image using a Constant False Alarm Rate (CFAR) algorithm, a series of processing steps is performed to reduce false alarms, including pixel clustering, network clustering and coherence analysis. We further extend the algorithm by adopting fast and effective ellipse detection in the EO image using the randomized Hough transform, which significantly reduces the number of false alarms. The performance of the proposed algorithm was tested and analyzed on TerraSAR-X SAR and WorldView-2 EO images. As a result, the average false-alarm rate for groups of targets is 1.8 groups/64 km², and the false-alarm rate for single targets ranges from 0.03 to 0.3 targets/km². The results show that groups of targets are successfully identified with very few false alarms.
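
The CFAR pre-screening and pixel-clustering steps described above can be illustrated with a short sketch: a minimal cell-averaging CFAR on a SAR intensity image followed by connected-component clustering. The window sizes, false-alarm probability, minimum cluster size, and the exponential-clutter assumption are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def ca_cfar(intensity, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR: flag pixels that exceed a threshold estimated
    from the surrounding training cells (guard cells excluded)."""
    intensity = np.asarray(intensity, dtype=float)
    win = 2 * (guard + train) + 1                     # full window size
    inner = 2 * guard + 1                             # guard window size
    total = uniform_filter(intensity, size=win) * win ** 2
    guard_sum = uniform_filter(intensity, size=inner) * inner ** 2
    clutter_sum = total - guard_sum                   # sum over the N training cells
    n_train = win ** 2 - inner ** 2
    alpha = pfa ** (-1.0 / n_train) - 1.0             # scaling for exponential clutter
    return intensity > alpha * clutter_sum

def cluster_detections(detections, min_pixels=5):
    """Group CFAR hits into candidate targets and drop tiny clusters."""
    labels, _ = label(detections)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                      # ignore the background label
    return np.isin(labels, np.nonzero(sizes >= min_pixels)[0])
```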

Object Tracking And Elimination Using Lod Edge Maps Generated from Modified Canny Edge Maps (수정된 캐니 에지 맵으로부터 만들어진 LOD 에지 맵을 이용한 물체 추적 및 소거)

  • Park, Ji-Hun;Jang, Yung-Dae;Lee, Dong-Hun;Lee, Jong-Kwan;Ham, Mi-Ok
    • The KIPS Transactions: Part B / v.14B no.3 s.113 / pp.171-182 / 2007
  • We propose a simple method for tracking a nonparameterized object contour in a single video stream with a moving camera and changing background, together with a method for eliminating the tracked object by replacing it with background scenery obtained from other frames. We first track the object using level-of-detail (LOD) Canny edge maps, then generate a background for each image frame and replace the tracked object with a background image from another frame that is not occluded by the object. Our tracking is based on LOD modified Canny edge maps and graph-based routing operations on those maps; more edge pixels become available as we descend the LOD hierarchy. Accurate tracking is achieved by reducing the influence of irrelevant edges, selecting the stronger edge pixels and relying on the current frame's edge pixels as much as possible. The background scene for the first frame is determined from camera motion between two image frames, and subsequent background scenes are computed from previous ones; these computed backgrounds are used to eliminate the tracked object from the scene. To remove the tracked object, we generate an approximate background for the first frame, and background images for subsequent frames are derived from the first-frame background or from previous frames, based on the computed camera motion. Experimental results show that the method works well for moderate camera movement with small changes in object shape.
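
As a rough illustration of the LOD edge-map idea, the sketch below builds a hierarchy of Canny edge maps in which progressively relaxed thresholds yield more edge pixels at deeper levels; the threshold schedule and level count are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def lod_canny_maps(gray, levels=4, high=200.0, low_ratio=0.5, decay=0.7):
    """Return binary edge maps from the strongest (fewest edges) to the
    weakest (most edges) level of detail."""
    maps = []
    for _ in range(levels):
        edges = cv2.Canny(gray, high * low_ratio, high)
        maps.append(edges > 0)
        high *= decay                  # relax thresholds for the next LOD level
    return maps

# Usage idea: prefer edge pixels from the strongest map and fall back down the
# hierarchy only where the contour cannot be routed.
# frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
# edge_maps = lod_canny_maps(frame)
```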

Classification of Natural and Artificial Forests from KOMPSAT-3/3A/5 Images Using Deep Neural Network (심층신경망을 이용한 KOMPSAT-3/3A/5 영상으로부터 자연림과 인공림의 분류)

  • Baek, Won-Kyung;Lee, Yong-Suk;Park, Sung-Hwan;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.1965-1974 / 2021
  • Satellite remote sensing can be actively used for forest monitoring. In particular, it is meaningful to use the Korea Multi-Purpose Satellites, which are independently operated by Korea, for monitoring Korean forests. Recently, several studies have exploited information from satellite remote sensing data via machine learning approaches. Forest information produced through machine learning can support the efficiency of traditional forest monitoring methods such as in-situ surveys or qualitative analysis of aerial images. Because the performance of machine learning approaches depends greatly on the characteristics of the study area and data, it is important to identify the most suitable model among the various candidates. In this study, the performance of a deep neural network for classifying artificial versus natural forests was analyzed in Samcheok, Korea. The resulting pixel accuracy was about 0.857, and the F1 scores for natural and artificial forests were about 0.917 and 0.433, respectively. Although the F1 score for artificial forest was low, the artificial- and natural-forest F1 scores improved by about 0.06 and 0.10, respectively, compared with a single-layer sigmoid artificial neural network. Based on these results, a more appropriate model for forest type classification should be sought by additionally applying models based on convolutional neural networks.
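
For reference, the reported figures (pixel accuracy and per-class F1) can be computed from a predicted label map and a reference map as in the sketch below; the binary class coding is an assumption for illustration.

```python
import numpy as np

def pixel_accuracy(pred, ref):
    return float(np.mean(pred == ref))

def f1_score(pred, ref, positive):
    tp = np.sum((pred == positive) & (ref == positive))
    fp = np.sum((pred == positive) & (ref != positive))
    fn = np.sum((pred != positive) & (ref == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# pred, ref: 2-D label maps with 0 = natural forest, 1 = artificial forest (assumed coding)
# print(pixel_accuracy(pred, ref), f1_score(pred, ref, 0), f1_score(pred, ref, 1))
```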

Image Processing of Pseudo-rate-distortion Function Based on MSSSIM and KL-Divergence, Using Multiple Video Processing Filters for Video Compression (MSSSIM 및 쿨백-라이블러 발산 기반 의사 율-왜곡 평가 함수와 복수개의 영상처리 필터를 이용한 동영상 전처리 방법)

  • Seok, Jinwuk;Cho, Seunghyun;Kim, Hui Yong;Choi, Jin Soo
    • Journal of Broadcast Engineering / v.23 no.6 / pp.768-779 / 2018
  • In this paper, we propose a novel video quality function based on MSSSIM for selecting an appropriate video processing filter, together with a mathematical selection rule that assigns one of multiple processing filters to each pixel block in a picture frame, so as to maintain video quality while reducing the bitrate of the compressed video. From the viewpoint of video compression, because the video-quality and bitrate characteristics differ from frame to frame and between areas within the same frame, a single filter with fixed properties cannot satisfy the goals of increasing video quality and decreasing bitrate everywhere. Consequently, to maintain subjective video quality while decreasing the bitrate, we propose a methodology that uses MSSSIM as the measure of subjective video quality, KL-divergence as the measure of bitrate, and a method for combining the two measurements. Moreover, using the proposed combined measurement with multiple image filters of mutually different properties as pre-processing filters, we verify that video can be compressed while maintaining quality and decreasing the bitrate.
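
A pseudo rate-distortion score of this flavour can be sketched as below: a structural-similarity term stands in for subjective quality and a KL-divergence term between pixel histograms stands in for the bitrate change, and the candidate filter with the lowest combined score is kept for each block. The weighting, the histogram-based rate proxy, and the use of single-scale SSIM in place of MSSSIM are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from skimage.metrics import structural_similarity
from scipy.special import rel_entr

def histogram(block, bins=64):
    """Normalized pixel-value histogram of an 8-bit block."""
    h, _ = np.histogram(block, bins=bins, range=(0, 255))
    h = h.astype(float) + 1e-12            # avoid zero bins in the KL term
    return h / h.sum()

def pseudo_rd_score(original, filtered, lam=0.5):
    """Lower is better: distortion term (1 - SSIM) plus a weighted 'rate' term."""
    ssim = structural_similarity(original, filtered, data_range=255)
    kl = float(np.sum(rel_entr(histogram(filtered), histogram(original))))
    return (1.0 - ssim) + lam * kl

def select_filter(block, filters, lam=0.5):
    """Apply each candidate pre-filter to the block and keep the best-scoring one."""
    candidates = [f(block) for f in filters]
    scores = [pseudo_rd_score(block, c, lam) for c in candidates]
    return candidates[int(np.argmin(scores))]

# hypothetical candidate pre-filters:
# from scipy.ndimage import gaussian_filter, median_filter
# filters = [lambda b: b,
#            lambda b: gaussian_filter(b, sigma=0.8),
#            lambda b: median_filter(b, size=3)]
```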

Classification of Forest Vertical Structure Using Machine Learning Analysis (머신러닝 기법을 이용한 산림의 층위구조 분류)

  • Kwon, Soo-Kyung;Lee, Yong-Suk;Kim, Dae-Seong;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.229-239 / 2019
  • All vegetation colonies have a layered structure, called the forest vertical structure, which is nowadays considered an important indicator for estimating a forest's vital condition, diversity and environmental effect, and which therefore needs to be surveyed. However, the vertical structure is an internal structure, so it is generally investigated through field surveys, the traditional forest inventory method, which costs a great deal of time and budget. In this study, we therefore propose a method for classifying the vertical structure of forests using remotely sensed aerial photographs and machine learning, which is capable of mining large amounts of data, in order to reduce the time and budget required for vertical-structure investigation. Classification was performed with a Support Vector Machine (SVM) using RGB aerial photographs and LiDAR (Light Detection and Ranging) DSM (Digital Surface Model) and DTM (Digital Terrain Model) data. The pixel-based accuracy was 66.22% compared with field survey results. The classification accuracy was relatively high for single-layer stands, whereas multi-layer stands proved difficult to classify. The results of this study are expected to further advance machine learning research on vegetation structure as more vegetation and image data are collected in the future.
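
A minimal per-pixel SVM pipeline in the spirit of the study is sketched below: RGB bands are stacked with a LiDAR canopy height model (DSM minus DTM) and fed to an SVM classifier. The feature set and hyperparameters are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def build_features(rgb, dsm, dtm):
    """Stack RGB bands and the canopy height model into an (n_pixels, 4) matrix."""
    chm = dsm - dtm                                   # canopy height model
    feats = np.dstack([rgb, chm[..., None]])
    return feats.reshape(-1, feats.shape[-1])

def classify_vertical_structure(rgb, dsm, dtm, labels, train_mask):
    """Train on pixels where train_mask is True, then predict every pixel."""
    X = build_features(rgb, dsm, dtm)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")    # assumed hyperparameters
    clf.fit(X[train_mask.ravel()], labels.ravel()[train_mask.ravel()])
    return clf.predict(X).reshape(labels.shape)       # per-pixel layer classes
```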

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems / v.29 no.1 / pp.1-16 / 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks: grid-based uniform control nodes are first set on both the input images and the binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing, and 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256 as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images, and ablation experiments further demonstrate the effectiveness of the SASA neuron and the CRED algorithm. The improvements in average IoU obtained by using the SASA and CRED modules individually add up to the improvement of the full model, indicating that the two modules contribute to different stages of the model and data in the training process.
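
A grid-based random elastic deformation in the spirit of the CRED augmentation can be sketched as below: random offsets at sparse control nodes are upsampled to a dense displacement field by bilinear interpolation, and the same warp is applied to the image and its binary label. Grid spacing and offset magnitude are illustrative assumptions.

```python
import numpy as np
import cv2

def random_elastic_deform(image, label, grid=8, max_offset=10.0, seed=None):
    """Warp an image and its binary label with the same random grid deformation."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # random offsets at sparse control nodes, upsampled to a dense field
    dx = cv2.resize(rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32),
                    (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32),
                    (w, h), interpolation=cv2.INTER_LINEAR)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = xs + dx, ys + dy
    warped_image = cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
    warped_label = cv2.remap(label, map_x, map_y, cv2.INTER_NEAREST)
    return warped_image, warped_label
```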

Transferring Calibrations Between on Farm Whole Grain NIR Analysers

  • Clancy, Phillip J.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1210-1210 / 2001
  • On-farm analysis of protein, moisture and oil in cereals and oil seeds is quickly being adopted by Australian farmers. The benefits of being able to measure protein and oil in grains and oil seeds are several: optimizing crop payments, monitoring the effects of fertilization, blending on farm to meet market requirements, and off-farm marketing (selling crop with load-by-load analysis). However, farmers are not NIR spectroscopists, and the process of calibrating instruments has to be the duty of the supplier. With the potential number of on-farm analysers in the thousands, calibrating each instrument individually would be impossible, let alone coping with the problems encountered when updating calibrations from season to season. NIR Technology Australia has therefore developed a mechanism for standardizing their range of Cropscan 2000G NIR analysers so that a single calibration can be transferred from the master instrument to every slave instrument. Whole-grain analysis has been developed over the last 10 years using near-infrared transmission through a sample of grain with a pathlength varying from 5-30 mm. A continuous spectrum from 800-1100 nm provides the optimal wavelength coverage for these applications, and a grating-based spectrophotometer has proven to be the best means of producing this spectrum. The most important aspect of standardizing NIR instruments is to duplicate the spectral information: the task is to align the spectra of the slave instruments to the master instrument in terms of wavelength positioning, and then to adjust the spectral response at each wavelength so that the slave instruments mimic the master instrument. The Cropscan 2000G and 2000B Whole Grain Analysers use flat-field spectrographs to produce a spectrum from 720-1100 nm and a silicon photodiode array detector to collect the spectrum at approximately 10 nm intervals. The concave holographic gratings used in the flat-field spectrographs are produced by photolithography, so each grating is an exact replica of the original. To align wavelengths, an NIR wheat sample scanned on the master and slave instruments provides three check points in the spectrum for a more exact alignment. Once the wavelengths are matched, many samples of wheat, approximately 10, exhibiting absorbances from 2 to 4.5 absorbance units, are scanned on the master and then on each slave. Using a simple linear regression, a slope and bias adjustment is calculated for each pixel of the detector. This process corrects the spectral response at each wavelength so that the slave instruments produce the same spectra as the master instrument; it is important to use as broad a range of absorbances as possible so that good slope and bias estimates can be obtained. These slope and bias (S&B) factors are then downloaded into the slave instruments. Calibrations developed on the master instrument can then be downloaded onto the slave instruments and perform similarly to the master instrument. The data shown in this paper illustrate the process of calculating these S&B factors and the transfer of calibrations for wheat, barley and sorghum between several instruments.
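
The per-pixel slope-and-bias (S&B) standardization described above amounts to a simple linear regression per detector pixel, as in the sketch below; the array shapes and variable names are assumptions for illustration.

```python
import numpy as np

def fit_slope_bias(master, slave):
    """master, slave: (n_samples, n_pixels) spectra of the same samples scanned
    on the master and on one slave instrument."""
    n_pixels = master.shape[1]
    slope = np.empty(n_pixels)
    bias = np.empty(n_pixels)
    for p in range(n_pixels):
        # regress the master response on the slave response at this pixel
        slope[p], bias[p] = np.polyfit(slave[:, p], master[:, p], deg=1)
    return slope, bias

def standardize(spectrum, slope, bias):
    """Correct a slave spectrum so that it mimics the master instrument."""
    return slope * spectrum + bias
```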


Development of Agricultural Products Screening System through X-ray Density Analysis

  • Eunhyeok Baek;Young-Tae Kwak
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.105-112 / 2023
  • In this paper, we propose a new method for displaying colored defects by measuring relative density from the wide-area and local densities of an X-ray image. The relative density of a pixel represents its difference from the surrounding pixels, and we also propose a colorization of X-ray images that marks these pixels as normal or defective. Traditional methods mainly inspect materials such as plastics and metals, which differ greatly from the inspected object in transmittance. The proposed method can detect defects such as sprouts or holes in images obtained from an X-ray inspection machine. In the experiments, defects that could not be seen with the naked eye, such as pests or sprouts, were colored in a specific color so that the results can be used in an agricultural product screening system. Products uniformly filled with a single ingredient, such as potatoes, carrots, and apples, can be screened effectively, whereas bumpy products such as peppers and paprika are handled less well. An advantage of this method is that, unlike machine learning, it does not require large amounts of data. The proposed method can be applied to X-ray screening systems, not only for agricultural products but also in manufacturing processes such as processed food and parts manufacturing, to actively select defective products.
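
A rough sketch of the relative-density idea is given below: each pixel is compared with the mean of its surrounding neighbourhood, and pixels deviating beyond a threshold are overlaid in a marker colour. The window size, threshold and colour are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relative_density(xray, window=31):
    """Difference between each pixel and the mean of its local neighbourhood."""
    img = xray.astype(np.float32)
    return img - uniform_filter(img, size=window)

def colorize_defects(xray, threshold=15.0, color=(255, 0, 0)):
    """Overlay pixels whose relative density deviates beyond the threshold."""
    defect = np.abs(relative_density(xray)) > threshold
    rgb = np.dstack([xray, xray, xray]).astype(np.uint8)
    rgb[defect] = color                    # mark suspect pixels (e.g. sprouts, holes)
    return rgb, defect
```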

Verification of Indicator Rotation Correction Function of a Treatment Planning Program for Stereotactic Radiosurgery (방사선수술치료계획 프로그램의 지시자 회전 오차 교정 기능 점검)

  • Chung, Hyun-Tai;Lee, Re-Na
    • Journal of Radiation Protection and Research / v.33 no.2 / pp.47-51 / 2008
  • Objective: This study analyzed errors due to rotation or tilt of the magnetic resonance (MR) imaging indicator during image acquisition for stereotactic radiosurgery, and verified the error correction procedure of a commercially available stereotactic neurosurgery treatment planning program. Materials and Methods: Software virtual phantoms were built from stereotactic images generated with a commercial programming language, Interactive Data Language (version 5.5). The image slice thickness was 0.5 mm, the pixel size was 0.5 × 0.5 mm, the field of view was 256 mm, and the image resolution was 512 × 512. The images were generated under the DICOM 3.0 standard so that they could be used with Leksell GammaPlan®. For verification of the rotation error correction function of Leksell GammaPlan®, 45 measurement points were arranged in five axial planes. On each axial plane there were nine measurement points along a square of side 100 mm; the center of the square was located on the z-axis, with one measurement point on the z-axis itself. The five axial planes were placed at z = -50.0, -30.0, 0.0, 30.0 and 50.0 mm, respectively. The virtual phantom was rotated by 3° around one of the x, y, and z axes, by 3° around two of the axes, and by 3° around all three axes. The positional errors of the rotated measurement points were measured with Leksell GammaPlan® and the correction function was verified. Results: The image registration error of the virtual phantom images was 0.1 ± 0.1 mm, which is within the requirement for stereotactic images. The maximum theoretical errors in the positions of the measurement points were 2.6 mm for a rotation around one axis, 3.7 mm for a rotation around two axes, and 4.5 mm for a rotation around three axes. The measured positional errors were 0.1 ± 0.1 mm for a rotation around a single axis and 0.2 ± 0.2 mm for rotations around two and three axes. These small errors verify that the rotation error correction function of Leksell GammaPlan® works correctly. Conclusion: A virtual phantom was built to verify software functions of a stereotactic neurosurgery treatment planning program. The error correction function of the commercial treatment planning program worked within the nominal error range. The virtual phantom of this study can be applied in many other fields to verify various functions of treatment planning programs.
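
The geometry of this check can be reproduced with the short sketch below, which rotates the 45 measurement points by 3° about one or more axes and reports the displacement of each point from its nominal position; the point layout follows my reading of the description above, and the output can be compared with the theoretical maxima quoted in the abstract.

```python
import numpy as np

def rotation_matrix(axis, deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Nine points per plane: a 100 mm square centred on the z-axis plus its centre,
# repeated on the five axial planes listed in the abstract (units: mm).
xy = [(x, y) for x in (-50.0, 0.0, 50.0) for y in (-50.0, 0.0, 50.0)]
points = np.array([[x, y, z] for z in (-50.0, -30.0, 0.0, 30.0, 50.0) for x, y in xy])

# Example: 3-degree rotations about all three axes applied in sequence.
R = rotation_matrix("x", 3) @ rotation_matrix("y", 3) @ rotation_matrix("z", 3)
displacement = np.linalg.norm(points @ R.T - points, axis=1)
print(displacement.max())   # compare with the theoretical maxima quoted above
```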

Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung;Lee, Jun Ho;Jeong, Do Hwan;Hong, Jin Suk;Kim, Young Soo;Kim, Yeon Soo;Kim, Hyun Sook
    • Korean Journal of Optics and Photonics / v.28 no.3 / pp.108-115 / 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer covering the wavelength region of 400-900 nm, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ±7.68°. The detector at the final image plane is a 640 × 480 charge-coupled device with a 24 μm pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test method with an interferometer is often applied in the assembly of precision optics, but it was excluded from our study because of the large residual wavefront error (WFE) in the optical design of 210 nm (0.35λ at 600 nm) root-mean-square (RMS); we therefore used a single-pass test method with a Shack-Hartmann sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. During the procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer.
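
As a small illustration related to the single-pass test, the sketch below summarizes a reconstructed Shack-Hartmann wavefront map as an RMS wavefront error after removing piston and tip/tilt with a least-squares plane fit; the map units and the plane-fit removal are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def rms_wavefront_error(wavefront, pupil_mask):
    """RMS of a wavefront map (e.g. in nm) over the pupil, after removing
    piston and tip/tilt with a least-squares plane fit."""
    ys, xs = np.nonzero(pupil_mask)
    w = wavefront[pupil_mask]
    A = np.column_stack([np.ones(xs.size), xs.astype(float), ys.astype(float)])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = w - A @ coeffs
    return float(np.sqrt(np.mean(residual ** 2)))
```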