• Title/Summary/Keyword: Data Classification Scheme


Conceptual Data Modeling on the KRR-1&2 Decommissioning Database

  • Park, Hee-Seoung;Park, Seung-Kook;Lee, Kune-Woo;Park, Jin-Ho
    • Nuclear Engineering and Technology
    • /
    • v.34 no.6
    • /
    • pp.610-618
    • /
    • 2002
  • A study of conceptual data modeling to realize a decommissioning database for the KRR-1&2 was carried out. In this study, the current state of overseas decommissioning databases was investigated to serve as a reference. The scope of the construction of the decommissioning database was set up based on user requirements. Then, a theory of database construction was established and a classification scheme for the decommissioning information was drawn up. The facility information, work information, radioactive waste information, and radiological information handled by the decommissioning database were extracted through interviews with an expert group, and the system configuration of the decommissioning database was decided upon. A code composed of 17 bits was produced considering the construction, scheme, and information. The results of the conceptual data modeling and the classification scheme will be used as basic data for a prototype design of the decommissioning database.
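The paper does not give the layout of the 17-bit code, but the idea of packing the four information categories (facility, work, waste, radiological) into a single code can be sketched as below. The field names and bit widths are illustrative assumptions chosen only to sum to 17 bits, not the paper's actual layout.

```python
# Sketch of a 17-bit decommissioning-information code.
# Field widths (5 + 4 + 4 + 4 = 17 bits) are assumptions, not from the paper.
FIELDS = [("facility", 5), ("work", 4), ("waste", 4), ("radiological", 4)]

def pack(values):
    """Pack a dict of per-category values into one 17-bit integer."""
    code = 0
    for name, width in FIELDS:
        v = values[name]
        if not 0 <= v < (1 << width):
            raise ValueError(f"{name} out of range for {width} bits")
        code = (code << width) | v
    return code

def unpack(code):
    """Recover the per-category values from a packed 17-bit code."""
    values = {}
    for name, width in reversed(FIELDS):
        values[name] = code & ((1 << width) - 1)
        code >>= width
    return values
```

Packing classification facets into a fixed-width code like this makes the code sortable and cheap to index, which suits a database key.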

A Study on Development of Policy Attributes Taxonomy for Data-based Decision Making (데이터기반 의사결정을 위한 정책 및 사업 속성 분류체계 개발 연구)

  • Kim, Sarang
    • The Journal of Information Systems
    • /
    • v.29 no.3
    • /
    • pp.1-34
    • /
    • 2020
  • Purpose Due to the complexity of the policy environment in modern society, mixing a variety of policy instruments aimed at multiple functions is accepted as a basic element of policy design. However, given the current state of written policy specifications, neither public officers nor policy researchers can easily grasp frameworks such as the policy portfolio. The purpose of this study is to develop a "Policy Attributes Taxonomy" that identifies and classifies public programs to help make decisions for allocative efficiency with effectiveness-based information. Design/methodology/approach To figure out the main scheme and classification criteria of the Policy Attributes Taxonomy, which represents the characteristics of public policies, previous theories and research on policy components were explored. In addition, to test the taxonomic feasibility of a given information system, a set of "Feasibility Standards" was drawn from the "requirements for well-organized criteria" in the major taxonomy literature. Finally, the current government classification system in the area of social services was tested to visualize the application of the Taxonomy and Standards. Findings Program Taxonomy Schemes were set comprising "policy goals", "policy targets", "policy tools", "logical relation", and "delivery system". Each program and project can be condensed into these attributes, making their designs more easily distinguishable. A policy portfolio can readily be drawn up by extracting certain characteristics according to this scheme. Moreover, this taxonomy could be used for the rearrangement of the present "Program Budget System" or the estimation of a "Basic Income".

Despeckling and Classification of High Resolution SAR Imagery (고해상도 SAR 영상 Speckle 제거 및 분류)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.5
    • /
    • pp.455-464
    • /
    • 2009
  • Lee (2009) proposed a boundary-adaptive despeckling method using a Bayesian model that is based on the lognormal distribution for image intensity and a Markov random field (MRF) for image texture. This method employs Point-Jacobian iteration to obtain a maximum a posteriori (MAP) estimate of the despeckled imagery. The boundary-adaptive algorithm is designed to use less information from more distant neighbors as a pixel gets closer to a boundary. This reduces the possibility of involving pixel values from an adjacent region with different characteristics. The boundary-adaptive scheme was comprehensively evaluated using simulated data, and the effectiveness of boundary adaptation was demonstrated in Lee (2009). This study, as an extension of Lee (2009), suggests a modified iteration algorithm for MAP estimation that enhances computational efficiency and incorporates classification. Experiments on simulated data show that boundary adaptation yields clear boundaries as well as reduced classification error. The boundary-adaptive scheme has also been applied to high-resolution TerraSAR-X data acquired over the west coast of Youngjong-do, and the results imply that it can improve analytical accuracy in SAR applications.
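The core boundary-adaptive idea, shrinking the neighborhood as a pixel approaches a region boundary so that samples with different statistics are never mixed in, can be illustrated with a deliberately simplified 1-D averaging sketch. This is only an illustration of the neighborhood rule, not the paper's lognormal/MRF MAP estimator or its Point-Jacobian iteration.

```python
def boundary_adaptive_smooth(signal, boundaries, max_radius=3):
    """Average each sample over a window whose radius shrinks near
    region boundaries.  A boundary at index b separates samples
    ..b-1 from samples b.., so the window never crosses into the
    adjacent region.  Simplified 1-D illustration only."""
    out = []
    for i in range(len(signal)):
        # distance to the nearest boundary caps the window radius
        d = (min((i - b) if i >= b else (b - 1 - i) for b in boundaries)
             if boundaries else max_radius)
        r = min(max_radius, d)
        window = signal[max(0, i - r): i + r + 1]
        out.append(sum(window) / len(window))
    return out
```

Right at a boundary the radius drops to zero and the sample is left untouched, which is what preserves the sharp edge while interior pixels are smoothed.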

Classification System Model Design for Algorithm Education for Elementary and Secondary Students (초중등학생 대상 알고리즘 교육을 위한 분류체계 모형 설계)

  • Lee, Young-ho;Koo, Duk-hoi
    • Journal of The Korean Association of Information Education
    • /
    • v.21 no.3
    • /
    • pp.297-307
    • /
    • 2017
  • The purpose of this study is to propose an algorithm classification system for algorithm education for elementary and secondary students. We define the components of an algorithm and express the algorithm classification system by an analytico-synthetic method. The contents of the study are as follows. First, we conducted a theoretical review of the purposes of classification and of classification itself. Second, the contents and limitations of previously proposed classification systems for algorithm content were examined. In addition, we examined the contents and selection criteria of algorithms used in algorithm education research. Third, the algorithm components were redefined using the core ideas and crosscutting concepts proposed by the NRC. The crosscutting concept of the algorithm was subdivided into algorithm data structures and algorithm design strategies, and its contents are presented using an analytico-synthetic classification scheme. Finally, the validity of the proposed contents was verified through review by an expert group. The study on the algorithm classification system is expected to provide many implications for content selection and training methods in algorithm education.

The Classification of the Software Quality by the Rough Tolerance Class

  • Choi, Wan-Kyoo;Lee, Sung-Joo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.249-253
    • /
    • 2004
  • When we assess software quality on the basis of software measurements, the transitive property, which is a requirement for an equivalence relation, is not always satisfied. Therefore, we propose a scheme for classifying software quality that employs a tolerance relation instead of an equivalence relation. Given an experimental data set, the proposed scheme generates the tolerance classes for the elements of the data set, and generates the tolerance ranges for classifying software quality by clustering the means of the tolerance classes. Through experiments, we showed that the proposed scheme produces very useful and valid results. That is, the tolerance ranges generated by the proposed scheme can be used without problems as criteria for classifying software quality.
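A minimal sketch of the distinction the abstract draws: a tolerance relation (reflexive and symmetric, but not transitive) yields possibly overlapping classes, whose means can then be clustered into ranges. The threshold relation |x − y| ≤ eps and the simple 1-D merge step are illustrative assumptions; the paper does not spell out its exact procedures in this abstract.

```python
def tolerance_classes(data, eps):
    """Tolerance classes under x ~ y iff |x - y| <= eps.  Unlike
    equivalence classes, these may overlap without coinciding,
    because the relation need not be transitive."""
    return [sorted(y for y in data if abs(x - y) <= eps)
            for x in sorted(set(data))]

def tolerance_ranges(data, eps):
    """Cluster the class means into ranges: consecutive means within
    eps of each other merge into one range (a simple 1-D clustering
    stand-in for the paper's procedure)."""
    means = sorted({sum(c) / len(c) for c in tolerance_classes(data, eps)})
    ranges = [[means[0], means[0]]]
    for m in means[1:]:
        if m - ranges[-1][1] <= eps:
            ranges[-1][1] = m
        else:
            ranges.append([m, m])
    return [tuple(r) for r in ranges]
```

For data = [1.0, 1.4, 1.8] with eps = 0.5, the classes of 1.0 and 1.8 overlap in 1.4 yet differ, which is exactly the non-transitivity that rules out an equivalence relation.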

Text Classification for Patents: Experiments with Unigrams, Bigrams and Different Weighting Methods

  • Im, ChanJong;Kim, DoWan;Mandl, Thomas
    • International Journal of Contents
    • /
    • v.13 no.2
    • /
    • pp.66-74
    • /
    • 2017
  • Patent classification is becoming more critical as patent filings have been increasing over the years. Despite comprehensive studies in the area, several issues remain in classifying patents at the IPC hierarchical levels. Not only the structural complexity but also the shortage of patents at the lower levels of the hierarchy causes a decline in classification performance. Therefore, we propose a new classification method based on different criteria, namely the categories defined by domain experts in trend analysis reports, i.e. the Patent Landscape Report (PLR). Several experiments were conducted with the purpose of identifying the types of features and the weighting methods that lead to the best classification performance using a Support Vector Machine (SVM). Two types of features (nouns and noun phrases) and five different weighting schemes (TF-idf, TF-rf, TF-icf, TF-icf-based, and TF-idcef-based) were experimented on.
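The unigram/bigram TF-idf weighting underlying these experiments can be sketched in plain Python as below. The SVM training step is omitted, and whitespace tokenization plus idf = log(N/df) are simplifying assumptions; the paper's other schemes (TF-rf, TF-icf, etc.) replace the idf factor.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(docs, use_bigrams=False):
    """TF-idf vectors over unigrams, optionally plus bigrams.
    tf is the raw term count; idf = log(N / df)."""
    feats = []
    for d in docs:
        toks = d.lower().split()
        f = toks + (ngrams(toks, 2) if use_bigrams else [])
        feats.append(Counter(f))
    n = len(docs)
    df = Counter(t for f in feats for t in f)  # document frequency
    return [{t: c * math.log(n / df[t]) for t, c in f.items()} for f in feats]
```

A term that appears in fewer documents gets a larger idf factor, so rarer (more discriminative) features dominate the vector handed to the classifier.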

Land Cover Classification of RapidEye Satellite Images Using Tasseled Cap Transformation (TCT)

  • Moon, Hogyung;Choi, Taeyoung;Kim, Guhyeok;Park, Nyunghee;Park, Honglyun;Choi, Jaewan
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.1
    • /
    • pp.79-88
    • /
    • 2017
  • The RapidEye satellite sensor has various spectral wavelength bands and can capture large areas with high temporal resolution. It therefore offers advantages in generating various types of thematic maps, including land cover maps. In this study, we applied a supervised classification scheme to generate high-resolution land cover maps using RapidEye images. To improve the classification accuracy, object-based classification was performed after adding the brightness, yellowness, and greenness bands derived by Tasseled Cap Transformation (TCT) and a Normalized Difference Water Index (NDWI) band. It was experimentally confirmed that the classification results obtained by adding the TCT and NDWI bands as input data showed high classification accuracy compared with the land cover map generated using the original RapidEye images.
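The NDWI band added to the input stack follows the standard formula NDWI = (Green − NIR)/(Green + NIR), while each TCT band is a sensor-specific linear combination of the spectral bands. The sketch below uses placeholder coefficients, not the RapidEye coefficients used in the study.

```python
def ndwi(green, nir):
    """Normalized Difference Water Index for one pixel:
    (G - NIR) / (G + NIR); positive over water, negative
    over vegetation (NIR-bright) surfaces."""
    return (green - nir) / (green + nir)

def tct_band(bands, coeffs):
    """A Tasseled Cap-style band: a weighted sum of the input
    spectral bands.  The coefficients are sensor-specific;
    pass placeholder values here, not the paper's."""
    return sum(b * c for b, c in zip(bands, coeffs))
```

Stacking such derived bands alongside the original reflectances gives the classifier extra, physically meaningful axes (brightness, greenness, wetness) to separate classes on.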

A Study on the Application of Interpolation and Terrain Classification for Accuracy Improvement of Digital Elevation Model (수지표고지형의 정확도 향상을 위한 지형의 분류와 보간법의 상용에 관한 연구)

  • 문두열
    • Journal of Ocean Engineering and Technology
    • /
    • v.8 no.2
    • /
    • pp.64-79
    • /
    • 1994
  • In this study, terrain classification using quantitative classification parameters, together with suitable interpolation methods, was applied to improve the accuracy of digital elevation models and to increase the practical use of aerial photogrammetry. The terrain was classified into three groups using quantitative classification parameters: the ratio of horizontal to inclined area, the magnitude of the harmonic vectors, the deviation of the vectors, and the number of breaklines; a suitable interpolation method was then proposed for each group. The accuracy of the digital elevation models was also improved in the case of large grid intervals by applying a combined interpolation suitable for each terrain group. As a result of this study, an algorithm was obtained that objectively classifies the topography of the area of interest and decides the optimal data interpolation scheme for the given topography.
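As an illustration of interpolating grid elevations from scattered points, here is a minimal inverse-distance-weighted (IDW) interpolator. IDW stands in generically: the paper assigns a different, terrain-group-specific interpolation method to each class rather than a single scheme.

```python
def idw(points, x, y, power=2):
    """Inverse-distance-weighted elevation at (x, y) from scattered
    (px, py, z) samples.  Nearer points get larger weights
    w = 1 / d**power; an exact hit returns its z directly."""
    num = den = 0.0
    for px, py, z in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return z  # query coincides with a data point
        w = 1.0 / d2 ** (power / 2)
        num += w * z
        den += w
    return num / den
```

Because the result is a convex combination of the sample elevations, IDW never overshoots the data range, which is why it behaves poorly near breaklines and why terrain-dependent method selection, as in this study, pays off.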


Text Document Classification Scheme using TF-IDF and Naïve Bayes Classifier (TF-IDF와 Naïve Bayes 분류기를 활용한 문서 분류 기법)

  • Yoo, Jong-Yeol;Hyun, Sang-Hyun;Yang, Dong-Min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.10a
    • /
    • pp.242-245
    • /
    • 2015
  • Recently, owing to the spread of large-scale data in the digital economy, the era of big data is arriving. With big data, the leakage of unstructured text data such as technical documents, confidential documents, and false-information documents is causing serious problems. To prevent this, the need for techniques to sort and process documents consisting of unstructured text data has increased. In this paper, we propose a novel text classification scheme that learns from training data sets and correctly classifies unstructured text data into two categories, True and False. For the performance evaluation, we implement our proposed scheme using a Naïve Bayes document classifier and TF-IDF modules from a Python library, and compare it with an existing document classifier.
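A from-scratch sketch of a two-class (True/False) multinomial Naïve Bayes text classifier with Laplace smoothing, illustrating the classifier family named in the abstract; it is not the authors' library-based implementation, and whitespace tokenization over raw counts (rather than TF-IDF features) is a simplification.

```python
import math
from collections import Counter

class NaiveBayesText:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for d, c in zip(docs, labels):
            self.counts[c].update(d.lower().split())
        self.vocab = {t for cnt in self.counts.values() for t in cnt}
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = math.log(self.prior[c])
            for t in doc.lower().split():
                # Laplace-smoothed token likelihood P(t | c)
                lp += math.log((self.counts[c][t] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Working in log space avoids underflow from multiplying many small probabilities, and the add-one smoothing keeps unseen tokens from zeroing out a class.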


Standardizing Agriculture-related Land Cover Classification Scheme using IKONOS Satellite Imagery (IKONOS 영상자료를 이용한 농업지역 토지피복 분류기준 설정)

  • Hong Seong-Min;Jung In-Kyun;Kim Seong-Joon
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.4
    • /
    • pp.253-259
    • /
    • 2004
  • The purpose of this study is to present a standardized scheme for providing agriculture-related information at the various spatial resolutions of satellite images, including Landsat ETM+, KOMPSAT-1 EOC, ASTER VNIR, and IKONOS panchromatic and multispectral images. The satellite images were interpreted especially to identify agricultural areas, crop types, and agricultural facilities and structures. The results were compared with the land cover/land use classification systems suggested by the National Geographic Information Institute, based on aerial photographs, and by the Ministry of Environment, based on satellite remote sensing data. As a result, a high-resolution agricultural land cover map was produced from the IKONOS imagery. The classification result from the IKONOS image will be provided to the KOMPSAT-2 project for agricultural applications.