• Title/Summary/Keyword: Data Classification Scheme


Standardizing Agriculture-related Land Cover Classification Scheme Using IKONOS Satellite Imagery (IKONOS 영상자료를 이용한 농업관련 토지피복 분류기준 설정 연구)

  • 홍성민;정인균;김성준
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2004.03a
    • /
    • pp.261-265
    • /
    • 2004
  • The purpose of this study is to present a standardized scheme for providing agriculture-related information at the various spatial resolutions of satellite images, including Landsat ETM+, KOMPSAT-1 EOC, ASTER VNIR, and IKONOS panchromatic and multi-spectral images. The satellite images were interpreted specifically to identify agricultural areas, crop types, and agricultural facilities and structures. The results were compared with the land cover/land use classification systems suggested by the Ministry of Construction & Transportation based on the NGIS (National Geographic Information System) and by the Ministry of Environment based on satellite remote sensing data. As a result, a high-resolution agricultural land cover map was produced from IKONOS imagery. The IKONOS results will be provided to the KOMPSAT-2 project for agricultural applications.

A Study on the Basic Database Structure of Accident Data Management for the Purpose of Railway Safety Management (철도안전관리를 위한 사고자료관리 D/B구조에 관한 기초연구)

  • Hong Seon Ho;Wang Jong Bae;Kwak Sang Log;Lee Yoo Jun
    • Proceedings of the KSR Conference
    • /
    • 2003.10b
    • /
    • pp.241-246
    • /
    • 2003
  • In this paper, the necessity and application scope of a risk-analysis D/B for assessing railway safety conditions are introduced. In addition, normalization analysis, one of the DB development procedures, has been conducted. The structure of accident data management is derived through an analysis of the classification schemes used in Korea. Based on these procedures, improvements to the railway accident classification and management scheme necessary for accident risk assessment are also presented.
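
A normalized accident database of the kind described above separates the classification scheme from the accident records themselves. The following sketch is hypothetical (the table and column names are my own, not the paper's actual D/B design) and only illustrates that separation with the standard library's sqlite3:

```python
# Hypothetical sketch of a normalized accident-record structure: accident
# records reference a separate classification-scheme table instead of
# embedding free-text categories, so the scheme can evolve independently.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accident_class (
    class_id   INTEGER PRIMARY KEY,
    category   TEXT NOT NULL          -- e.g. derailment, collision, fire
);
CREATE TABLE accident (
    accident_id INTEGER PRIMARY KEY,
    occurred_on TEXT NOT NULL,
    location    TEXT NOT NULL,
    class_id    INTEGER NOT NULL REFERENCES accident_class(class_id)
);
""")
con.execute("INSERT INTO accident_class VALUES (1, 'derailment')")
con.execute("INSERT INTO accident VALUES (10, '2003-07-14', 'Busan line', 1)")

# Risk analysis can then aggregate by classification code via a join.
row = con.execute("""
    SELECT a.occurred_on, c.category
    FROM accident a JOIN accident_class c USING (class_id)
""").fetchone()
print(row)
```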

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.65-65
    • /
    • 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with a different number, position, and width of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many applications of image classification, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to the classification of multisensor data, as a data fusion scheme at the pixel level, is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. For modeling and eliminating the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be much less reliable than the others and may even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is utilized to measure each band's degree of reliability. A team of positively dependent bands can jointly gather more information than a team of independent ones, but when bands are negatively dependent, their combined analysis may yield worse information.
    Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a data fusion scheme at the decision level is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. The hierarchical clustering algorithm is based on similarity measures between all pairs of candidates being considered for merging. In the first level, the image is partitioned into regions, sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from the first level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.
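
The band-splitting step above can be sketched with off-the-shelf information measures. This is not the authors' implementation: histogram entropy and mutual information stand in for the paper's self and conditional information variation, and the bands, bin count, and dependence threshold are all illustrative assumptions.

```python
# Sketch: screen bands by self information (entropy), then group bands whose
# pairwise dependence (mutual information) exceeds a threshold; each group
# would be classified separately and fused at decision level.
import numpy as np

def entropy(band, bins=16):
    """Histogram-based Shannon entropy of one band (self certainty)."""
    p, _ = np.histogram(band, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information between two bands (dependence)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def split_bands(bands, mi_threshold=0.1):
    """Group band indices so strongly dependent bands share a subset."""
    subsets = []
    for i, band in enumerate(bands):
        for subset in subsets:
            if any(mutual_information(band, bands[j]) >= mi_threshold
                   for j in subset):
                subset.append(i)
                break
        else:
            subsets.append([i])
    return subsets

rng = np.random.default_rng(0)
b0 = rng.normal(size=10000)
b1 = b0 + 0.1 * rng.normal(size=10000)   # strongly dependent on b0
b2 = rng.normal(size=10000)              # independent band

assert all(entropy(b) > 1.0 for b in (b0, b1, b2))  # all bands "reliable"
print(split_bands([b0, b1, b2]))          # b0 and b1 grouped, b2 separate
```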

A determination of linear decision function using GA and its application to the construction of binary decision tree (유전 알고리즘을 이용한 선형 결정 함수의 결정 및 이진 결정 트리 구성에의 적용)

  • 정순원;박귀태
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.271-274
    • /
    • 1996
  • In this paper, a new scheme for determining a linear decision function is proposed. In this scheme, the weights of the linear decision function are obtained by a genetic algorithm. By properly selecting the fitness function of the genetic algorithm, a result that considers the balance between clusters as well as the classification error can be obtained, which is an advantage when applying this scheme to the construction of a binary decision tree. The proposed scheme is applied to artificial two-dimensional data and real multi-dimensional data. Experimental results show the usefulness of the proposed scheme.
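
A toy sketch of the core idea, evolving the weights of a linear decision function with a genetic algorithm. The fitness function, GA parameters, and data below are my own assumptions for illustration; the paper's fitness additionally weighs cluster balance.

```python
# GA search for weights (w1, w2, w0) of f(x) = w1*x1 + w2*x2 + w0,
# classifying by the sign of f(x).
import random

def fitness(w, points, labels):
    """Fraction of points correctly separated by sign(w1*x1 + w2*x2 + w0)."""
    correct = 0
    for (x1, x2), y in zip(points, labels):
        pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
        correct += (pred == y)
    return correct / len(points)

def ga_linear_decision(points, labels, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, points, labels), reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(3)               # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:               # Gaussian mutation
                child[rng.randrange(3)] += rng.gauss(0, 0.2)
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, points, labels))

# Linearly separable toy data: class 1 lies above the line x1 + x2 = 1.
pts = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.3, 0.3), (0.7, 0.7)]
lbl = [0, 0, 1, 1, 0, 1]
w = ga_linear_decision(pts, lbl)
print(fitness(w, pts, lbl))
```

In the paper's setting, such a GA-found hyperplane becomes the test at one internal node of a binary decision tree, and the procedure recurses on the two resulting partitions.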

A Resetting Scheme for Process Parameters using the Mahalanobis-Taguchi System

  • Park, Chang-Soon
    • The Korean Journal of Applied Statistics
    • /
    • v.25 no.4
    • /
    • pp.589-603
    • /
    • 2012
  • The Mahalanobis-Taguchi system (MTS) is a statistical tool for classifying normal and abnormal groups in multivariate data structures. In addition to the classification itself, the MTS provides a method for selecting variables useful for the classification, which is especially efficient when the abnormal-group data are scattered without a specific directionality. When a feedback adjustment procedure that controls process input variables through measurements of the process output is not practical, a resetting procedure can be an alternative. This article proposes a resetting procedure using the MTS, together with a method for identifying the input variables to reset by means of the contribution. Identifying root-cause parameters using the existing dimension-reduced contribution tends to be difficult because of the varied correlation structures of multivariate data. However, an improved decision becomes possible when it is used together with the location-centered contribution and the individual-parameter contribution.
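
The distance underlying MTS can be sketched briefly: the normal group defines a reference mean and covariance, and an observation's squared Mahalanobis distance, scaled by the number of variables, flags it as abnormal when large. The data and cut-off below are illustrative assumptions, not the article's process data.

```python
# Minimal Mahalanobis-distance sketch of the MTS normal/abnormal split.
import numpy as np

def mahalanobis_d2(x, mean, cov_inv):
    """Squared Mahalanobis distance of x from the reference distribution."""
    d = x - mean
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(0)
normal_group = rng.normal(0, 1, size=(200, 3))      # reference (normal) data
mean = normal_group.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_group, rowvar=False))
k = normal_group.shape[1]

obs_normal = np.zeros(3)                 # near the reference center
obs_abnormal = np.array([5.0, 5.0, 5.0])

# MTS convention: scale the squared distance by the number of variables k,
# so normal-group observations average around 1.
print(mahalanobis_d2(obs_normal, mean, cov_inv) / k)     # small
print(mahalanobis_d2(obs_abnormal, mean, cov_inv) / k)   # large: abnormal
```

The contribution analysis the article discusses then asks which input variables drive a large distance, pointing at the parameters to reset.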

Study on scheme for screening, quantification and interpretation of trace amounts of hazardous inorganic substances influencing hazard classification of a substance in REACH registration (REACH 물질 등록 시 분류에 영향을 주는 미량 유해 무기물질의 스크리닝·정량·해석을 위한 체계도 연구)

  • Kwon, Hyun-ah;Park, Kwang Seo;Son, Seung Hwan;Choe, Eun Kyung;Kim, Sanghun
    • Analytical Science and Technology
    • /
    • v.32 no.6
    • /
    • pp.233-242
    • /
    • 2019
  • Substance identification is the first step of REACH registration. It is essential in terms of the Classification, Labelling and Packaging (CLP) regulation, because even trace amounts of impurities or additives can affect the classification. In this study, a scheme for the screening, quantification, and interpretation of trace amounts of hazardous inorganic substances is proposed to detect the presence of more than 0.1% of hazardous inorganic substances that affect the hazard classification. An exemplary list of hazardous inorganic substances was created from the substances of very high concern (SVHCs) in REACH. Among 201 SVHCs, there were 67 inorganic SVHCs containing one or more (typically two to three) heavy metals, such as As, Cd, Co, Cr, Pb, Sb, and Sn, in their molecular formulas. The inorganic SVHCs are listed in Excel format with a search function for these heavy metals, so that each hazardous inorganic substance can be looked up by the heavy metal it contains, together with the calculated ratio of the metal's atomic weight to the molecular weight of the substance. A case study was conducted to confirm the validity of the established scheme with zinc oxide (ZnO). In a substance consisting of ZnO, Pb was screened by XRF analysis and measured at 0.04% (w/w) by ICP-OES analysis. After referring to the list, the Pb was interpreted as an impurity, but not as an impurity relevant to the classification. Future studies are needed to expand this exemplary list of hazardous inorganic substances using proper regulatory data sources.
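
The atomic-weight-to-molecular-weight ratio in the scheme can be illustrated with a short calculation. The helper below is my own (not the study's Excel tool), using approximate standard atomic weights and lead chromate (PbCrO4) as a worked example:

```python
# Mass fraction of a heavy metal in a hazardous inorganic substance, and a
# worst-case back-calculation from a measured impurity level against the
# 0.1 % classification cut-off.
ATOMIC_WEIGHT = {"Pb": 207.2, "Cr": 52.00, "O": 16.00}   # approximate values

def metal_mass_fraction(formula_counts, metal):
    """Mass fraction of `metal` given an atomic composition,
    e.g. {"Pb": 1, "Cr": 1, "O": 4} for lead chromate (PbCrO4)."""
    mw = sum(ATOMIC_WEIGHT[el] * n for el, n in formula_counts.items())
    return ATOMIC_WEIGHT[metal] * formula_counts[metal] / mw

frac = metal_mass_fraction({"Pb": 1, "Cr": 1, "O": 4}, "Pb")
print(round(frac, 3))        # Pb is about 64.1 % of PbCrO4 by mass

# If ICP-OES measures 0.04 % (w/w) Pb, the worst-case content of a Pb-bearing
# hazardous substance is the measured level divided by that fraction:
worst_case_pct = 0.04 / frac
print(round(worst_case_pct, 3))   # below the 0.1 % cut-off
```

Dividing the measured metal percentage by the mass fraction gives the largest amount of the hazardous substance that could account for the measurement, which is how a sub-0.1% result lets the impurity be dismissed as classification-irrelevant.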

The Automatic Management of Classification Scheme with Interoperability on Heterogeneous Data (이기종 데이터 간 상호운용적 분류체계 관리를 위한 분류체계 자동화 방안)

  • Lee, Won-Goo;Hwang, Myung-Gwon;Lee, Min-Ho;Shin, Sung-Ho;Kim, Kwang-Young;Yoon, Hwa-Mook;Sung, Won-Kyung;Jeon, Do-Heon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.12
    • /
    • pp.2609-2618
    • /
    • 2011
  • Under the 21st-century knowledge-based economy, convergence and complexity in science and technology are increasing. Interoperability between heterogeneous domains is a very important consideration in scholarly information services as well as in information standardization. We therefore suggest a systematic method by which content management and service organizations can flexibly extend their classification schemes. In particular, this paper shows that an automatic method for achieving interoperability between heterogeneous scholarly classification code structures is effective in enhancing information service systems.

A Dynamic Variable Window-based Topographical Classification Method Using Aerial LiDAR Data (항공 라이다 데이터를 이용한 동적 가변 윈도우 기반 지형 분류 기법)

  • Sung, Chul-Woong;Lee, Sung-Gyu;Park, Chang-Hoo;Lee, Ho-Jun;Kim, Yoo-Sung
    • Spatial Information Research
    • /
    • v.18 no.5
    • /
    • pp.13-26
    • /
    • 2010
  • In this paper, a dynamic variable window-based topographical classification method is proposed in which the classification unit changes depending on topographical properties. In the proposed scheme, to improve classification efficiency, the unit of topographical classification changes dynamically according to the topographical properties and repeated patterns. The classification efficiency and accuracy of the proposed method are also analyzed experimentally to find the optimal maximum decision window size. According to the experimental results, the proposed dynamic variable window-based method maintains accuracy similar to that of a fixed window-size method while remarkably reducing computing time.
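
The variable-window idea can be sketched as a quadtree-style recursion: classify a whole window in one decision when it is homogeneous, and subdivide otherwise. The homogeneity test (height range within a tolerance) and the toy elevation grid below are my own assumptions, not the paper's LiDAR feature set.

```python
# Quadtree-style variable windows over a height grid: homogeneous windows are
# labeled in one decision; heterogeneous ones are split into four subwindows.
import numpy as np

def classify_window(grid, out, r, c, size, tol=1.0, min_size=1):
    """Label grid[r:r+size, c:c+size]; return the number of decisions made."""
    block = grid[r:r+size, c:c+size]
    if size <= min_size or block.max() - block.min() <= tol:
        out[r:r+size, c:c+size] = round(float(block.mean()))  # one decision
        return 1
    half = size // 2
    calls = 0
    for dr in (0, half):          # recurse into the four quadrants
        for dc in (0, half):
            calls += classify_window(grid, out, r+dr, c+dc, half, tol, min_size)
    return calls

terrain = np.zeros((8, 8))
terrain[4:, 4:] = 10.0            # one elevated quadrant
labels = np.full((8, 8), -1)
n_decisions = classify_window(terrain, labels, 0, 0, 8)
print(n_decisions)                # far fewer decisions than 64 per-pixel ones
```

On this toy grid the top-level window is heterogeneous, but each of its four quadrants is uniform, so four decisions cover all 64 cells; this is the source of the computing-time savings over a fixed window.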

Efficient Data Management for Hull Condition Assessment

  • Jaramillo, David;Cabos, Christian;Renard, Philippe
    • International Journal of CAD/CAM
    • /
    • v.6 no.1
    • /
    • pp.9-17
    • /
    • 2006
  • Performing inspections for hull condition monitoring and assessment, as stipulated in the IACS unified requirements and IMO's Condition Assessment Scheme (CAS) (IMO Resolution MEPC.94(46), 2001, Condition Assessment Scheme; IMO Resolution MEPC.111(50), 2003, amendments to regulation 13G and the addition of new regulation 13H), involves a huge amount of measurement data to be collected, processed, analysed, and maintained. The information to be recorded consists of thickness measurements and visual assessments of coating and cracks. The amount of data and the increasing requirements with respect to condition assessment demand efficient computer support. Currently, due to the lack of standardization for this kind of data, thickness measurements are recorded manually on ship drawings or in tables. In this form, handling the measurements is tedious and error-prone, and assessment is difficult. Data reporting and analysis take a long time, leading to some repairs being performed only at the next docking of the ship or making an additional docking necessary. The recently started EU-funded project CAS addresses this topic and develops, as a first step, a data model for Hull Condition Monitoring and Assessment (HCMA) based on XML technology. The model includes a simple geometry representation to facilitate graphically supported data collection as well as easy visualisation of the measurement results. To ensure compatibility with the current way of working, the content of the data model is strictly confined to the requirements of the measurement process. Appropriate data interfaces to classification software will enable rapid assessment by the classification societies, improving the process in terms of time and cost savings. In particular, decision-making can be done while the ship is still in dock for maintenance.
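
To make the XML data-model idea concrete, here is a hypothetical sketch (the element and attribute names are my own, not the project's actual HCMA schema) of carrying a thickness measurement with a simple geometry reference, and of the kind of derived assessment a classification tool could compute from it:

```python
# Hypothetical XML record for one hull thickness measurement, built and
# post-processed with the standard library.
import xml.etree.ElementTree as ET

root = ET.Element("hullConditionReport", ship="IMO9000001", date="2006-05-10")
ET.SubElement(root, "thicknessMeasurement",
              plate="deck-plate-12", x="104.2", y="8.5",
              nominal_mm="12.0", measured_mm="10.6")

# An assessment tool can compute diminution directly from the recorded data.
for meas in root.iter("thicknessMeasurement"):
    nominal = float(meas.get("nominal_mm"))
    measured = float(meas.get("measured_mm"))
    meas.set("diminution_pct", f"{100 * (nominal - measured) / nominal:.1f}")

xml_str = ET.tostring(root, encoding="unicode")
print(xml_str)
```

Keeping the measured values in a structured, machine-readable form is what enables the rapid, graphically supported assessment the abstract describes, instead of manual annotation on drawings.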

Memory-Efficient NBNN Image Classification

  • Lee, YoonSeok;Yoon, Sung-Eui
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2017
  • Naive Bayes nearest neighbor (NBNN) is a simple image classifier based on identifying nearest neighbors. NBNN uses original image descriptors (e.g., SIFT) without vector quantization, preserving the discriminative power of the descriptors, and it generalizes well. However, it has a distinct disadvantage: its memory requirement can be prohibitively high when processing a large amount of data. To deal with this problem, we apply a spherical hashing binary code embedding technique to compactly encode data without significantly losing classification accuracy. We also propose using an inverted index to identify nearest neighbors among binarized image descriptors. To demonstrate the benefits of our method, we apply it to two existing NBNN techniques on an image dataset. Using a 64-bit code length, we reduce memory use 16-fold with higher runtime performance and no significant loss of classification accuracy. This result is achieved by our compact encoding scheme, which loses little information from the original image descriptors.
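
The memory-saving idea can be sketched in a few lines. This is a simplification of the abstract's method: random hyperplane projections stand in for spherical hashing, and the descriptor dimension, dataset, and index layout are illustrative assumptions.

```python
# Binarize descriptors into 64-bit integer codes, index them with an
# inverted index (code -> descriptor ids), and match in Hamming space.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM, BITS = 16, 64
hyperplanes = rng.normal(size=(BITS, DIM))   # random-projection "hash"

def binarize(desc):
    """64-bit binary code of one descriptor, stored as a Python int."""
    bits = (hyperplanes @ desc) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Build the inverted index over a toy descriptor database.
database = rng.normal(size=(100, DIM))
index = defaultdict(list)
for i, d in enumerate(database):
    index[binarize(d)].append(i)

def nearest(query_desc):
    q = binarize(query_desc)
    if q in index:                            # exact-code hit via the index
        return index[q][0]
    # Fall back to a Hamming-distance scan over the stored codes.
    return min((hamming(q, code), ids[0]) for code, ids in index.items())[1]

query = database[42] + 0.001 * rng.normal(size=DIM)  # near-duplicate of id 42
print(nearest(query))
```

Each 64-bit code replaces a full floating-point descriptor (a 128-dimensional float SIFT descriptor is 512 bytes, versus 8 bytes per code), which is where the order-of-magnitude memory reduction comes from.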