• Title/Summary/Keyword: research data repository

175 search results

Does "Pyegi" Mean Holding One's Breath? (폐기는 숨을 참는 것인가?)

  • Ahn, Hun Mo;Na, Sam Sik;Kang, Han Joo
    • Journal of Korean Medical Ki-Gong Academy / v.20 no.1 / pp.1-14 / 2020
  • Objective: The purpose of this study is to grasp the concept of Pyegi and to identify points to attend to when studying the classics of Korean Medicine. Methods: We searched domestic and foreign databases for examples of Pyegi used in the classics and for prior research on Pyegi, Taesik, and respiration. Results: In the Kanseki Repository, Pyegi appeared 1,710 times in 335 works; in the Korean Traditional Medicine Knowledge DB, it appeared 61 times in 21 works. In the Korean Classics Comprehensive DB, a total of 25 records were retrieved for Pyegi, 10 of which had been translated. Conclusions: The term "Pyegi" should not be interpreted in a purely literal sense. To capture the intention of the original sentences, "Pyegi" should be translated as the breathing-control exercises used in Naedan training methods.

A Study on the Private Key Backup and Restoration using Biometric Information in Blockchain Environment

  • Seungjin, Han
    • Journal of the Korea Society of Computer and Information / v.28 no.3 / pp.59-65 / 2023
  • As research on blockchain applications in various fields increases, managing the private keys that identify blockchain users has become important: if you lose your private key, you lose access to all your data. To address this problem, blockchain wallets, private key recovery using partial information, and private key recovery through distributed storage have previously been proposed. In this paper, we propose a safe private key backup and recovery method using Shamir's Secret Sharing (SSS) scheme and biometric information, and we evaluate its safety with respect to robustness during message exchange, replay attacks, man-in-the-middle attacks, and forgery and tampering attacks.
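The recovery scheme above builds on Shamir's Secret Sharing. Below is a minimal sketch of the generic split/recover mechanics over a prime field; the biometric binding and message-exchange protocol from the paper are not reproduced, and the prime, threshold, and share counts are illustrative assumptions.

```python
# Minimal sketch of Shamir's Secret Sharing (SSS) over a prime field.
# The biometric binding step described in the abstract is not shown; PRIME,
# threshold and share counts are illustrative (a real 256-bit wallet key
# would need a larger prime or chunking).
import secrets

PRIME = 2**127 - 1  # Mersenne prime defining the finite field

def split_secret(secret: int, threshold: int, num_shares: int):
    """Encode `secret` as f(0) of a random degree (threshold - 1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbelow(PRIME)              # stand-in for a wallet private key
    shares = split_secret(key, threshold=3, num_shares=5)
    assert recover_secret(shares[:3]) == key    # any 3 of the 5 shares suffice
```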

Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model

  • Hussain. A;Balaji Srikaanth. P
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.959-979 / 2024
  • Rice pest identification is essential in modern agriculture for the health of rice crops. As global rice consumption rises, yields and quality must be maintained. Various methodologies have been employed to identify pests, encompassing sensor-based technologies, deep learning, and remote sensing models. Visual inspection by professionals and farmers remains essential, but integrating technology such as satellites, IoT-based sensors, and drones enhances efficiency and accuracy. A computer vision system processes images to detect pests automatically and provides real-time data for proactive and targeted pest management. With this motive in mind, this research provides a novel farmland fertility algorithm with a deep learning-based automated rice pest detection and classification (FFADL-ARPDC) technique. The FFADL-ARPDC approach classifies rice pests from rice plant images. Before processing, FFADL-ARPDC removes noise and enhances contrast using bilateral filtering (BF). Rice crop images are then processed with the NASNetLarge deep learning architecture to extract image features, and the FFA is used for hyperparameter tuning of NASNetLarge, which aids in enhancing classification performance. Using an Elman recurrent neural network (ERNN), the model accurately categorises 14 types of pests. The FFADL-ARPDC approach is thoroughly evaluated on a benchmark dataset available in a public repository. With an accuracy of 97.58%, the FFADL-ARPDC model exceeds existing pest detection methods.
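A rough sketch of the preprocessing and feature-extraction stages described in the abstract, under stated assumptions: bilateral filtering for denoising, a frozen NASNetLarge backbone for features, and a SimpleRNN layer standing in for the Elman recurrent classifier head. The farmland fertility hyperparameter search is omitted, and the dummy image, layer sizes, and 14-class output are assumptions drawn from or added to the abstract.

```python
# Hedged sketch: bilateral filtering -> NASNetLarge features -> Elman-style RNN head.
# All sizes and the dummy input are illustrative; this is not the paper's exact model.
import cv2
import numpy as np
import tensorflow as tf

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Denoise with a bilateral filter, then resize for NASNetLarge (331x331)."""
    smoothed = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    resized = cv2.resize(smoothed, (331, 331))
    return tf.keras.applications.nasnet.preprocess_input(resized.astype("float32"))

# Frozen NASNetLarge backbone producing a pooled feature vector per image.
backbone = tf.keras.applications.NASNetLarge(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(331, 331, 3))
backbone.trainable = False

# An Elman-style recurrent head over the feature vector, ending in 14 pest classes.
classifier = tf.keras.Sequential([
    tf.keras.layers.Reshape((1, backbone.output_shape[-1])),
    tf.keras.layers.SimpleRNN(128, activation="tanh"),   # Elman-type recurrence
    tf.keras.layers.Dense(14, activation="softmax"),
])

dummy_bgr = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in field image
features = backbone(tf.expand_dims(preprocess(dummy_bgr), 0))
print(classifier(features).shape)  # (1, 14) class probabilities
```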

Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.7-12 / 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that can extract features for accurate classification without the computational difficulties of other methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits the use of this method for problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a new version of CA-PCA that extracts nonlinear features, and we evaluate its performance in experiments using data sets from the UCI Machine Learning Repository.
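For orientation, the snippet below contrasts linear PCA with plain kernel PCA as feature extractors ahead of a simple classifier. It is not the proposed CA-KPCA (the class-augmentation step is omitted), and the scikit-learn wine data merely stands in for the UCI data sets used in the paper's experiments.

```python
# Illustrative sketch of kernel-trick feature extraction for classification.
# Plain Kernel PCA, not the paper's CA-KPCA; dataset and parameters are assumptions.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, extractor in [("linear PCA", PCA(n_components=5)),
                        ("kernel PCA", KernelPCA(n_components=5, kernel="rbf", gamma=0.01))]:
    model = make_pipeline(StandardScaler(), extractor, KNeighborsClassifier())
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```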

Analysis of Repository Systems for Designing a Archive System of Large Science Data (대용량 과학데이터 아카이브 시스템 설계를 위한 리포지터리 시스템 분석)

  • Lim, Jongtae;Seo, Indeok;Song, Heesub;Yoo, Seunghun;Jeong, Jaeyun;Cho, Jungkwon;Paul, Aniruddha;Ko, Geonsik;Kim, Byounghoon;Park, Yunjeong;Song, Jinwoo;Lee, Seohee;Jeon, Hyeonwook;Choi, Minwoong;Noh, Yeonwoo;Choi, Dojin;Kim, Yeonwoo;Bok, Kyoungsoo;Lee, Jeonghoon;Lee, Sanghwon;Yoo, Jaesoo
    • Proceedings of the Korea Contents Association Conference / 2016.05a / pp.21-22 / 2016
  • This paper analyzes existing repository systems for the design of a large-scale scientific data archive system. To design an archive system architecture that efficiently collects and stores large volumes of scientific data, we analyze various scientific data repository systems currently in service. Based on this analysis, we derive the technical requirements for designing a large-scale scientific data archive system architecture.

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.125-140 / 2013
  • We call a data set in which the number of records belonging to one class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such, and specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies of imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN), and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The process of constructing and predicting with our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I and ANN_I models are constructed using the imbalanced data set, and an SVM_B model is constructed using the balanced data set. SVM_I is superior in sensitivity and SVM_B is superior in specificity. For a record on which both SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by the discrimination rules obtained from the ANN and the decision tree. For records on which SVM_I and SVM_B make different predictions, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn'. The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold in the above discrimination rules can therefore be changed to any value depending on the data. To evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
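A hedged sketch of the hybrid decision logic: an SVM trained on the imbalanced data, a second SVM trained on an oversampled balanced copy, and the ANN score used to break ties when the two disagree. The decision-tree rule extraction is condensed into a single score threshold (0.5 here; the paper tunes its own value, e.g. 0.285, per data set), and the synthetic data are purely illustrative.

```python
# Sketch of the SVM_I / SVM_B / ANN_I tie-break structure; not the paper's exact models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)

# Oversample the minority class to build the balanced training set.
n_major = int((y == 0).sum())
X_bal = np.vstack([X[y == 0], resample(X[y == 1], n_samples=n_major, random_state=0)])
y_bal = np.array([0] * n_major + [1] * n_major)

svm_i = SVC().fit(X, y)                      # sensitivity-oriented model (imbalanced data)
svm_b = SVC().fit(X_bal, y_bal)              # specificity-oriented model (balanced data)
ann_i = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)

def hybrid_predict(X_new, threshold=0.5):
    """Agreeing SVM predictions win; disagreements fall back to an ANN-score rule."""
    p_i, p_b = svm_i.predict(X_new), svm_b.predict(X_new)
    score = ann_i.predict_proba(X_new)[:, 1]          # ANN_I output value
    tie_break = (score >= threshold).astype(int)      # same form as the paper's rules
    return np.where(p_i == p_b, p_i, tie_break)

print(hybrid_predict(X[:10]))
```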

Development of Cyber R&D Platform on Total System Performance Assessment for a Potential HLW Repository ; Application for Development of Scenario through QA Procedures (고준위 방사성폐기물 처분 종합 성능 평가 (TSPA)를 위한 Cyber R&D Platform 개발 ; 시나리오 도출 과정에서의 품질보증 적용 사례)

  • Seo Eun-Jin;Hwang Yong-soo;Kang Chul-Hyung
    • Proceedings of the Korean Radioactive Waste Society Conference / 2005.06a / pp.311-318 / 2005
  • Transparency of the Total System Performance Assessment (TSPA) is the key issue in enhancing public acceptance of a permanent high-level radioactive waste repository. To ensure this, all TSPA work must be carried out under Quality Assurance. The integrated Cyber R&D Platform was developed by KAERI using the T2R3 principles applicable to the five major steps of R&D. The proposed system is implemented as a web-based system so that all participants in TSPA can access it. It is composed of FEAS (FEp to Assessment through Scenario development), which presents the systematic flow from FEPs to assessment methods in a flow chart; PAID (Performance Assessment Input Databases), which presents the PA (Performance Assessment) input data sets in the web-based system; and a QA system recording those data. All information is integrated into the Cyber R&D Platform so that every data item in the system can be checked whenever necessary. For a more user-friendly system, an upgrade including an input data and documentation package is under development. In the next R&D phase, the Cyber R&D Platform will be connected with the assessment tool for TSPA, so that all information is expected to be searchable in one unified system.

Use of Multivariate Statistical Approaches for Decoding Chemical Evolution of Groundwater near Underground Storage Caverns (다변량통계기법을 이용한 지하저장시설 주변의 지하수질 변동에 관한 연구)

  • Lee, Jeonghoon
    • Journal of the Korean earth science society / v.35 no.4 / pp.225-236 / 2014
  • Multivariate statistical analyses have been extensively applied to hydrochemical measurements to analyze and interpret the data. This study examines anthropogenic factors identified by applying correspondence analysis (CA) and principal component analysis (PCA) to a hydrogeochemical data set. The goal was to synthesize the hydrogeochemical information using these multivariate statistical techniques while incorporating hydrogeochemical speciation results calculated with the widely used program WATEQ4F, included in NETPATH. The selected case study is an LPG underground storage cavern site located in southeastern Korea. The highly alkaline groundwaters in this study area are an analogue for the repository system; high pH, Al speciation, and possible precipitation of calcite characterize these groundwaters. Available groundwater quality monitoring data were used to confirm the statistical models. The present study focused on understanding the hydrogeochemical attributes and establishing the phase changes after two anthropogenic effects (disinfection activity and cement pore water) were introduced in the study area. Comparisons between the two statistical results presented here and the findings of previous investigations highlight the descriptive capabilities of PCA using calculated saturation indices, and of CA, as exploratory tools in hydrogeochemical research.
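As a minimal illustration of the multivariate workflow described above, the sketch below standardizes a small hydrochemical table and inspects PCA loadings. The variables and the calcite saturation-index column are invented placeholders, and the WATEQ4F/NETPATH speciation calculation is not reproduced.

```python
# Hedged sketch: PCA on standardized hydrochemical variables plus a saturation index.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
samples = pd.DataFrame({
    "pH":         rng.normal(9.5, 0.8, 40),    # highly alkaline groundwaters
    "Ca":         rng.lognormal(2.0, 0.4, 40),
    "HCO3":       rng.lognormal(4.0, 0.3, 40),
    "Cl":         rng.lognormal(3.0, 0.5, 40),
    "SI_calcite": rng.normal(0.3, 0.6, 40),    # stand-in for a WATEQ4F output
})

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(samples))
loadings = pd.DataFrame(pca.components_.T, index=samples.columns,
                        columns=["PC1", "PC2"])
print(loadings.round(2))   # which variables dominate each extracted factor
```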

A Research in Applying Big Data and Artificial Intelligence on Defense Metadata using Multi Repository Meta-Data Management (MRMM) (국방 빅데이터/인공지능 활성화를 위한 다중메타데이터 저장소 관리시스템(MRMM) 기술 연구)

  • Shin, Philip Wootaek;Lee, Jinhee;Kim, Jeongwoo;Shin, Dongsun;Lee, Youngsang;Hwang, Seung Ho
    • Journal of Internet Computing and Services / v.21 no.1 / pp.169-178 / 2020
  • Reductions in troops and human resources, together with the need to improve combat power, have led the Korean Department of Defense to actively adopt 4th Industrial Revolution technologies (Artificial Intelligence, Big Data). Defense information systems have been developed in various ways according to the tasks and particular needs of each military branch. To take full advantage of 4th Industrial Revolution technology, the closed defense data management system must be improved. However, establishing and using data standards across all information systems for defense big data and artificial intelligence is limited by security issues, the business characteristics of each branch, and the difficulty of standardizing large-scale systems. Based on the interworking requirements of each system, data sharing is currently limited to direct linkage under interoperability agreements between systems. To implement smart defense using 4th Industrial Revolution technology, it is urgent to prepare a system that can share defense data and make good use of it. To support this technically, it is critical to develop Multi Repository Meta-Data Management (MRMM), which supports systematic standard management of defense data, manages enterprise standards and standard mappings for each system, and promotes data interoperability through linkage between standards in accordance with the Defense Interoperability Management Development Guidelines. We introduce MRMM and implement it using vocabulary similarity based on machine learning and statistical approaches. Based on MRMM, we expect to simplify the standardization and integration of all military databases using artificial intelligence and big data, leading to a substantial reduction in the defense budget while increasing combat power toward implementing smart defense.
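The vocabulary-similarity idea can be illustrated by scoring candidate mappings between data-element names from two systems with character-n-gram TF-IDF cosine similarity, as in the hedged sketch below. The terms are invented examples; MRMM itself and its guideline-based workflow are not reproduced.

```python
# Sketch of statistical vocabulary matching between two metadata vocabularies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

system_a = ["unit_identifier", "equipment_serial_no", "mission_start_date"]   # invented terms
system_b = ["unit_id", "equip_serial_number", "msn_start_dt", "supply_code"]  # invented terms

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
vectors = vectorizer.fit_transform(system_a + system_b)
sim = cosine_similarity(vectors[:len(system_a)], vectors[len(system_a):])

for i, term in enumerate(system_a):
    j = sim[i].argmax()
    print(f"{term:22s} -> {system_b[j]:22s} (score {sim[i, j]:.2f})")
```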

The State-of-the Art of the Borehole Disposal Concept for High Level Radioactive Waste (고준위방사성폐기물의 시추공 처분 개념 연구 현황)

  • Ji, Sung-Hoon;Koh, Yong-Kwon;Choi, Jong-Won
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.10 no.1 / pp.55-62 / 2012
  • As an alternative to high-level radioactive waste disposal in a subsurface repository, deep borehole disposal is being reviewed by several advanced nuclear countries. In this study, the state of the art of borehole disposal research was reviewed, and the feasibility of borehole disposal on the Korean peninsula was discussed. In the deep borehole disposal concept, radioactive waste is emplaced in the 3-5 km depth section of a deep borehole; the concept is known to have advantages in performance and cost owing to the layered structure of deep groundwater and the small surface disposal facility required. The results show that it is necessary to acquire data on the deep geologic conditions of the Korean peninsula and to research the engineered barrier system, numerical modeling tools, and disposal techniques for deep borehole disposal.