• Title/Summary/Keyword: single pixel

Search Results: 282

IGRINS First Light Instrumental Performance

  • Park, Chan;Yuk, In-Soo;Chun, Moo-Young;Pak, Soojong;Kim, Kang-Min;Pavel, Michael;Lee, Hanshin;Oh, Heeyoung;Jeong, Ueejeong;Sim, Chae Kyung;Lee, Hye-In;Le, Huynh Anh Nguyen;Strubhar, Joseph;Gully-Santiago, Michael;Oh, Jae Sok;Cha, Sang-Mok;Moon, Bongkon;Park, Kwijong;Brooks, Cynthia;Ko, Kyeongyeon;Han, Jeong-Yeol;Nah, Jakyuong;Hill, Peter C.;Lee, Sungho;Barnes, Stuart;Park, Byeong-Gon;T., Daniel
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.52.2-52.2
    • /
    • 2014
  • The Immersion Grating Infrared Spectrometer (IGRINS) is an unprecedentedly compact infrared cross-dispersed echelle spectrograph with high-resolution, high-sensitivity optical performance. A silicon immersion grating is used in this instrument for the first time in this field. IGRINS will cover the entire portion of the wavelength range between 1.45 and 2.45 μm accessible from the ground in a single exposure, with a spectral resolution of 40,000. Individual volume phase holographic (VPH) gratings serve as cross-dispersing elements for separate spectrograph arms covering the H and K bands. On the 2.7 m Harlan J. Smith telescope at the McDonald Observatory, the slit size is 1″ × 15″. IGRINS has a 0.27″ pixel⁻¹ plate scale on a 2048 × 2048 pixel Teledyne Scientific & Imaging HAWAII-2RG detector with a SIDECAR ASIC cryogenic controller. The instrument includes four subsystems: a calibration unit, an input relay optics module, a slit-viewing camera, and nearly identical H and K spectrograph modules. The use of a silicon immersion grating and a compact white pupil design allows the spectrograph collimated beam size to be 25 mm, which permits the entire cryogenic system to be contained in a moderately sized rectangular vacuum chamber. The fabrication and assembly of the optical and mechanical hardware components were completed in 2013. In this presentation, we describe the major design characteristics of the instrument and the early performance estimated from first light commissioning at the McDonald Observatory.
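
The plate scale and detector format quoted above imply the slit-viewer field of view, and the resolving power implies the smallest resolvable wavelength step. A quick sanity-check sketch (the constants are copied from the abstract; everything else is illustrative):

```python
# Back-of-the-envelope numbers implied by the abstract (a sketch, not
# instrument documentation).

PLATE_SCALE = 0.27      # arcsec per pixel (HAWAII-2RG)
DETECTOR_SIDE = 2048    # pixels per side
R = 40_000              # resolving power

fov_arcsec = PLATE_SCALE * DETECTOR_SIDE   # detector side length projected on sky
fov_arcmin = fov_arcsec / 60.0

def resolution_element_um(wavelength_um: float) -> float:
    """Smallest resolvable wavelength interval at resolving power R."""
    return wavelength_um / R

print(f"FOV per side: {fov_arcsec:.1f} arcsec ({fov_arcmin:.2f} arcmin)")
print(f"Resolution element at 2.0 um: {resolution_element_um(2.0) * 1e4:.2f} Angstrom")
```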

An Application of Artificial Intelligence System for Accuracy Improvement in Classification of Remotely Sensed Images (원격탐사 영상의 분류정확도 향상을 위한 인공지능형 시스템의 적용)

  • 양인태;한성만;박재국
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.20 no.1
    • /
    • pp.21-31
    • /
    • 2002
  • This study applied neural network theory and fuzzy set theory to improve classification accuracy in remotely sensed images. Remotely sensed data have been used to map land cover, and the accuracy is dependent on a range of factors related to the data set and methods used. Thus, the accuracy of maps derived from conventional supervised image classification techniques is a function of factors related to the training, allocation, and testing stages of the classification. Conventional image classification techniques assume that all the pixels within the image are pure, that is, that each represents an area of homogeneous cover of a single land-cover class. However, this assumption is often untenable, as pixels of mixed land-cover composition are abundant in an image. Mixed pixels are a major problem in land-cover mapping applications. For each pixel, the strengths of class membership derived in the classification may be related to its land-cover composition. The concept of a pixel having a degree of membership in all classes is fundamental to fuzzy-sets-based classification techniques. A major problem with the fuzzy-sets and probabilistic methods is that they are slow and computationally demanding. For analyzing large data sets with rapid processing, alternative techniques are required. One particularly attractive approach is the use of artificial neural networks. These are non-parametric techniques which have been shown to be generally capable of classifying data as accurately as, or more accurately than, conventional classifiers. An artificial neural network, once trained, may classify data extremely rapidly, as the classification process reduces to the solution of a large number of extremely simple calculations which may be performed in parallel.
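
The contrast the abstract draws between hard ("pure pixel") and fuzzy classification can be illustrated in a few lines. This is a generic sketch with made-up class means, not the study's classifier:

```python
# A minimal sketch: a hard classifier forces each pixel into one class,
# while a fuzzy classifier assigns a degree of membership to every class.
# Class means and pixel values are hypothetical.

def fuzzy_memberships(pixel, class_means):
    """Membership grades inversely proportional to distance from each class mean."""
    eps = 1e-9
    inv = [1.0 / (abs(pixel - m) + eps) for m in class_means]
    total = sum(inv)
    return [v / total for v in inv]  # grades sum to 1

def hard_label(pixel, class_means):
    """Conventional 'pure pixel' assignment: the single nearest class wins."""
    return min(range(len(class_means)), key=lambda i: abs(pixel - class_means[i]))

means = [10.0, 50.0, 90.0]      # hypothetical class means (e.g., water, crop, urban)
mixed_pixel = 30.0              # halfway between the first two classes

print(hard_label(mixed_pixel, means))         # → 0 (forced to one class despite the tie)
print(fuzzy_memberships(mixed_pixel, means))  # near-equal grades for classes 0 and 1
```

A mixed pixel gets almost identical membership grades for the two classes it straddles, which is exactly the information a hard label throws away.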

Flow Visualization in the Branching Duct by Using Particle Imaging Velocimetry (입자영상유속계를 이용한 분기관내 유동가시화)

  • No, Hyeong-Un;Seo, Sang-Ho;Yu, Sang-Sin
    • Journal of Biomedical Engineering Research
    • /
    • v.20 no.1
    • /
    • pp.29-36
    • /
    • 1999
  • The objective of this study is to analyze the flow field in a branching duct by visualizing the flow phenomena with a PIV system. A bifurcation model was fabricated from transparent acrylic resin to visualize the whole flow field with the PIV system. Water was used as the working fluid and conifer powder as the tracer particles. The single-frame and two-frame methods of the PIV system and the two-frame grey-level correlation method were applied to obtain velocity vectors from the images captured in the flow field. The velocity distributions in a lid-driven cavity flow were compared with the so-called standard experimental data, obtained by the 4-frame method, in order to validate the experimental results of the PIV measurements. The flow patterns of a Newtonian fluid in a branching duct were successfully visualized using the PIV system, and the sub-pixel and area interpolation methods were used to obtain the final velocity vectors. The velocity vectors obtained from the PIV system are in good agreement with the numerical results of the three-dimensional branch flow. The results of the numerical analyses and the PIV experiments for the three-dimensional flows in the branching duct show the recirculation zone distal to the branching point, and the recirculation lengths and heights from the two different methods are in good agreement.
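
The two-frame correlation idea described above can be sketched with a synthetic particle image. This toy (FFT-based circular cross-correlation with a fabricated displacement) is illustrative, not the paper's processing chain:

```python
import numpy as np

# Seed random "particles" in frame 1, shift them by a known displacement to
# make frame 2, then recover that displacement as the peak of the
# cross-correlation between the two interrogation windows.

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
dx, dy = 3, 5                                  # true particle displacement (pixels)
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))

# Cross-correlation via FFT (circular, which is fine for this periodic toy case).
corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
peak = tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))
print("recovered displacement (dy, dx):", peak)   # → (5, 3)
```

Real PIV adds windowing, sub-pixel peak interpolation (as the paper notes), and outlier rejection on top of this correlation core.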

Haze Removal of Electro-Optical Sensor using Super Pixel (슈퍼픽셀을 활용한 전자광학센서의 안개 제거 기법 연구)

  • Noh, Sang-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.6
    • /
    • pp.634-638
    • /
    • 2018
  • Haze is a factor that degrades the performance of various image processing algorithms, such as those for detection, tracking, and recognition using an electro-optical sensor. For robust operation of an outdoor electro-optical sensor-based unmanned system, an algorithm capable of effectively removing haze is needed. As a haze removal method using a single electro-optical sensor, the dark channel prior, which exploits statistical properties of electro-optical imagery, is the most widely known. Previous methods used a square filter in the process of estimating the transmission with the dark channel prior. When a square filter is used, the haze removal effect becomes weaker as the filter size grows, while an excessively small filter causes over-saturation and loss of color information in the image. Since the filter size greatly affects the performance of the algorithm, a relatively large filter is generally used, or a small filter is chosen per image so that no over-saturation occurs. In this paper, we propose an improved haze removal method using color image segmentation. The parameters of the color image segmentation are set automatically according to the information complexity of the image, and over-saturation is avoided by estimating the transmission based on these parameters.
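
For context, the standard dark-channel-prior transmission estimate the abstract builds on can be sketched as follows; the paper's contribution replaces the square min-filter neighborhood with color-segmentation regions. The patch size, omega, and variable names here are illustrative:

```python
import numpy as np

# A minimal numpy sketch of the dark channel prior transmission estimate
# (a square min-filter, as in the "previous methods" the abstract describes).

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels and a square patch."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=3):
    """t = 1 - omega * dark_channel(I / A); haze-free regions keep t near 1."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)

hazy = np.full((8, 8, 3), 0.8)      # uniformly hazy toy image
A = np.array([1.0, 1.0, 1.0])       # estimated atmospheric light
t = transmission(hazy, A)
print(float(t[0, 0]))               # ≈ 1 - 0.95 * 0.8
```

The trade-off the paper targets is visible here: the result depends directly on the square `patch` neighborhood, which segmentation-based regions replace.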

Performance Evaluation of Snow Detection Using Himawari-8 AHI Data (Himawari-8 AHI 적설 탐지의 성능 평가)

  • Jin, Donghyun;Lee, Kyeong-sang;Seo, Minji;Choi, Sungwon;Seong, Noh-hun;Lee, Eunkyung;Han, Hyeon-gyeong;Han, Kyung-soo
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1025-1032
    • /
    • 2018
  • Snow cover, precipitation that remains as snow on the surface, is the single largest component of the cryosphere; it plays an important role in maintaining the energy balance between the Earth's surface and the atmosphere and affects the regulation of the Earth's surface temperature. Since snow cover is mainly distributed in areas where human access is difficult, snow detection using satellites is actively performed, and snow detection in forest areas is as important a step as distinguishing between cloud and snow. In this study, we applied the Normalized Difference Snow Index (NDSI) and the Normalized Difference Vegetation Index (NDVI), used for forest-area snow detection by existing polar-orbiting satellites, to a geostationary satellite. For the remaining areas, snow detection was performed using the R1.61μm anomaly technique and the NDSI. In an indirect validation against the Visible Infrared Imaging Radiometer Suite (VIIRS) snow cover data, the Probability of Detection (POD) was 99.95% and the False Alarm Ratio (FAR) was 16.63%. We also performed a qualitative validation using Himawari-8 Advanced Himawari Imager (AHI) RGB images, which showed that areas corresponding to miss pixels of the VIIRS snow cover product are mixed with areas corresponding to false pixels of this research.
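
The indices and validation scores named above have standard definitions, sketched here with made-up toy numbers (the counts below are not the paper's data):

```python
# Standard definitions of the indices and categorical validation scores
# used in the abstract; the example inputs are illustrative only.

def ndsi(green, swir):
    """Normalized Difference Snow Index: snow is bright in green, dark in SWIR."""
    return (green - swir) / (green + swir)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def pod(hits, misses):
    """Probability of Detection: hits / (hits + misses)."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False Alarm Ratio: false alarms / (hits + false alarms)."""
    return false_alarms / (hits + false_alarms)

print(round(ndsi(0.6, 0.1), 3))   # high NDSI → likely snow
print(round(ndvi(0.5, 0.1), 3))   # high NDVI → vegetated surface
print(round(pod(9995, 5), 4))     # e.g. the 99.95 % POD reported above
```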

Local Prominent Directional Pattern for Gender Recognition of Facial Photographs and Sketches (Local Prominent Directional Pattern을 이용한 얼굴 사진과 스케치 영상 성별인식 방법)

  • Makhmudkhujaev, Farkhod;Chae, Oksam
    • Convergence Security Journal
    • /
    • v.19 no.2
    • /
    • pp.91-104
    • /
    • 2019
  • In this paper, we present a novel local descriptor, the Local Prominent Directional Pattern (LPDP), to describe facial images for gender recognition. To achieve a clearly discriminative representation of local shape, the presented method encodes a target pixel with the prominent directional variations in the local structure, based on an analysis of the statistics contained in the histogram of such directional variations. The use of statistical information comes from the observation that a local neighboring region with an edge going through it demonstrates similar gradient directions; hence, the prominent accumulations of such gradient directions provide a solid basis for representing the shape of that local structure. Unlike the sole use of the gradient direction of a target pixel in existing methods, our coding scheme selects prominent edge directions accumulated from more samples (e.g., the surrounding neighboring pixels), which, in turn, minimizes the effect of noise by suppressing the noisy accumulations of single or fewer samples. In this way, the presented encoding strategy captures the shape of local structures more discriminatively while ensuring robustness to subtle changes such as local noise. We conduct extensive experiments on gender recognition datasets containing a wide range of challenges, such as illumination, expression, age, and pose variations as well as sketch images, and observe better performance of the LPDP descriptor against existing local descriptors.
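
The core idea, coding a pixel by the prominent gradient direction accumulated over its neighborhood rather than by its own gradient alone, can be sketched as follows. This is a simplification for illustration, not the authors' exact LPDP coding:

```python
import numpy as np

# Accumulate a histogram of quantized gradient directions over a patch and
# keep the most-voted (prominent) direction; isolated noisy gradients cannot
# outvote the consistent directions along a real edge.

def prominent_direction(patch, n_bins=8):
    """Most-voted quantized gradient direction over the whole neighborhood."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > 1e-6                          # ignore flat (edge-free) pixels
    angles = np.arctan2(gy[mask], gx[mask])    # in [-pi, pi]
    # quantize to n_bins direction bins (centered bins avoid edge effects)
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_bins + 0.5).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins)
    return int(hist.argmax())

patch = np.zeros((5, 5))
patch[3:, :] = 1.0          # horizontal edge: gradients point consistently in +y
print(prominent_direction(patch))   # → 6
```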

Detection of the Coastal Wetlands Using the Sentinel-2 Satellite Image and the SRTM DEM Acquired in Gomsoman Bay, West Coasts of South Korea (Sentinel-2 위성영상과 SRTM DEM을 활용한 연안습지 탐지: 서해안 곰소만을 사례로)

  • CHOUNG, Yun-Jae;KIM, Kyoung-Seop;PARK, Insun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.2
    • /
    • pp.52-63
    • /
    • 2021
  • In previous research, coastal wetlands were detected using vegetation indices or land cover classification maps derived from the multispectral bands of satellite or aerial imagery. This approach had various limitations for detecting coastal wetlands with high accuracy, owing to the difficulty of acquiring both land cover and topographic information from a single remote sensing data set. This research proposes an efficient methodology for detecting coastal wetlands using a Sentinel-2 satellite image and the SRTM (Shuttle Radar Topography Mission) DEM (Digital Elevation Model) acquired over Gomsoman Bay on the west coast of South Korea, through the following steps. First, an NDWI (Normalized Difference Water Index) image was generated using the green and near-infrared bands of the given Sentinel-2 satellite image. Then, a binary image separating land and water was generated from the NDWI image using the pixel value 0.2 as the threshold, and another binary image separating areas above and below sea level was generated from the SRTM DEM using the pixel value 0 as the threshold. Finally, the coastal wetland map was generated by overlay analysis of these two binary images. The generated coastal wetland map had 94% overall accuracy. In addition, other types of wetlands, such as inland or mountain wetlands, were not detected in the generated coastal wetland map, which means that the map can be used for coastal wetland management tasks.
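
The three processing steps above can be sketched on toy rasters. The band values, elevations, and the exact overlay operator (a logical AND here, which the abstract does not spell out) are illustrative assumptions:

```python
import numpy as np

# Toy 2x2 rasters standing in for the Sentinel-2 bands and the SRTM DEM.
green = np.array([[0.30, 0.10], [0.25, 0.05]])
nir   = np.array([[0.10, 0.30], [0.05, 0.40]])
dem_m = np.array([[-1.0, 2.0], [-0.5, 5.0]])   # elevation in metres

# Step 1: NDWI from the green and near-infrared bands.
ndwi = (green - nir) / (green + nir)

# Step 2: the two binary images, thresholded at the values from the abstract.
water_side = ndwi > 0.2          # NDWI threshold 0.2 separates water from land
below_sea  = dem_m < 0.0         # DEM threshold 0 m separates above/below sea level

# Step 3: overlay analysis (assumed here to be a logical AND of the masks).
wetland = water_side & below_sea
print(wetland)
```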

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.65-65
    • /
    • 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to the classification of multisensor data, as a data fusion scheme at the pixel level, is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. For modeling and eliminating the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self-certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be much less reliable than the others and might even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is used to measure degrees of reliability. A team of positively dependent bands can gather more information jointly than a team of independent ones, but when bands are negatively dependent, the combined analysis of these bands may give worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a data fusion scheme at the decision level is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates being considered for merging. In the first level, the image is partitioned into a number of regions, each a set of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
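
The band-dependence idea motivating the split above can be illustrated with a generic mutual-information estimate between quantized bands. The estimator and names are illustrative, not the paper's self/conditional information variation measures:

```python
import numpy as np

# Estimate the mutual information between two bands from their joint
# histogram: dependent bands share information (redundant), independent
# bands do not, which is the kind of evidence used to group bands.

def mutual_information(a, b, bins=8):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())  # in nats

rng = np.random.default_rng(1)
band1 = rng.random(10_000)
band2_dep = band1 + 0.05 * rng.random(10_000)   # strongly dependent band
band2_ind = rng.random(10_000)                  # independent band

print(mutual_information(band1, band2_dep))  # large: redundant, analyze jointly
print(mutual_information(band1, band2_ind))  # near zero: keep in separate subsets
```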

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.10
    • /
    • pp.4968-4986
    • /
    • 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the preexisting DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A vital superpixel feature that enhances matching precision is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified cross bilateral filter is leveraged to augment the final depth field. For training and evaluation, experiments were conducted using the Make3D Range Image Dataset and vividly demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. This approach can be used to automatically convert 2D images into stereo for 3D visualization, producing anaglyph images that are visually superior in realism and simultaneously more immersive.
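
Moving from a per-pixel to a per-superpixel depth estimate can be illustrated with a toy pooling step. Real superpixels would come from a segmentation algorithm such as SLIC; here the segment labels and depths are hand-made:

```python
import numpy as np

# Pool a noisy per-pixel depth estimate within each superpixel: one depth
# value per segment is both cheaper to transfer and spatially coherent.

rng = np.random.default_rng(2)
true_depth = np.repeat([[1.0, 4.0]], 8, axis=0).repeat(4, axis=1)  # 8x8, two planes
noisy = true_depth + 0.3 * rng.standard_normal(true_depth.shape)

labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                     # two hand-made "superpixels": left and right halves

pooled = np.empty_like(noisy)
for s in np.unique(labels):
    pooled[labels == s] = noisy[labels == s].mean()   # one depth per superpixel

print("pixel-level MAE:     ", float(abs(noisy - true_depth).mean()))
print("superpixel-level MAE:", float(abs(pooled - true_depth).mean()))
```

When segments align with depth discontinuities, averaging inside each segment suppresses per-pixel noise without blurring across the depth edge.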

Boundary Depth Estimation Using Hough Transform and Focus Measure (허프 변환과 초점정보를 이용한 경계면 깊이 추정)

  • Kwon, Dae-Sun;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.1
    • /
    • pp.78-84
    • /
    • 2015
  • Depth estimation is often required for robot vision, 3D modeling, and motion control. Previous methods are based on focus measures calculated for a series of images captured by a single camera at different distances between the camera and the object. These methods, however, have the disadvantage of taking a long time to calculate the focus measure, since the mask operation is performed for every pixel in the image. In this paper, we estimate the depth using the focus measure of only the boundary pixels located between objects, in order to minimize the depth estimation time. To detect the boundaries of objects consisting of straight lines and circles, we use the Hough transform and then estimate the depth using the focus measure. We performed various experiments on PCB images and obtained more effective depth estimation results than previous methods.
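
Depth-from-focus as described above can be sketched with a synthetic focus stack. The focus measure (sum of squared Laplacian responses) and the box-blur defocus model are illustrative choices, not the paper's implementation; the paper's speed-up comes from evaluating such a measure only at Hough-detected boundary pixels:

```python
import numpy as np

# Evaluate a simple focus measure over a stack of differently defocused
# images and pick the frame where the edge is sharpest.

def focus_measure(img):
    """Sum of squared discrete Laplacian responses: high for in-focus images."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float((lap ** 2).sum())

def blur(img, times):
    """Crude repeated box blur to mimic increasing defocus."""
    for _ in range(times):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

sharp = np.zeros((16, 16))
sharp[8:, :] = 1.0                                          # a crisp boundary edge
stack = [blur(sharp.copy(), t) for t in (4, 2, 0, 2, 4)]    # simulated focus sweep
best = int(np.argmax([focus_measure(f) for f in stack]))
print("best-focus frame index:", best)   # → 2 (the unblurred frame)
```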