• Title/Summary/Keyword: Data omission (누락)

Search Results: 263

Ergonomic evaluation for the safety of machinery and construction of modified data based for a hand and an arm of Korean human body (한국인 인체 중 손과 팔의 기계류 안전에 대한 인간공학적 평가 및 DB 구축)

  • Cho, Bong-Jo;Son, Kwon;Kim, Seong-Jin;Jeong, Yun-Seok
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.11a
    • /
    • pp.219-223
    • /
    • 2002
  • In this study, based on Korean anthropometric data, a database of design and measurement data for machinery safety, intended for application to affective environments, was constructed and evaluated ergonomically. The study covers a total of 23 body-dimension items related to the international standard for machinery safety (ISO 15534). For Korean data items that did not match the standard, which was measured on a European population, or that were missing from measurement, a technique was developed that generates Korean anthropometric data corresponding to the standard by applying corrections under several assumptions.


Classification of Statistical Error Types Through Analysis of Wind and Flood Damage History Data (풍수해 피해이력 자료 분석을 통한 통계적 오류유형 분류)

  • Kim, Ku-Yoon;Lee, Mi-Ran;Lee, Jun-Woo
    • Proceedings of the Korean Society of Disaster Information Conference
    • /
    • 2022.10a
    • /
    • pp.135-136
    • /
    • 2022
  • Recently, as the frequency of natural disasters such as typhoons and localized torrential rain has increased under the influence of climate change, casualties and property damage from storm and flood disasters have grown. In Korea, natural-disaster damage statistics are provided through the Disaster Annual Report (재해연보), which summarizes the year's natural disasters with aggregate statistics by period, province, river basin, month, and cause, together with details such as casualties and, for facility damage, the damaged area, damage amount, and recovery cost. The Ministry of the Interior and Safety enters municipal damage statistics collected through the National Disaster Information System, and omissions and clerical errors can occur during this entry process. When a storm and flood disaster with rising economic losses occurs, the data cannot be used as statistics for research and analysis unless damage-cost aggregation and damage-amount estimation are built from accurate records. To address this problem, this study classified the error types in damage-history data through a period-by-period, municipality-level analysis of the Disaster Annual Reports from 1985 to 2018.


Real-Time Hybrid Broadcasting Algorithm Considering Data Property in Mobile Computing Environments (이동 컴퓨팅 환경에서 데이타 특성을 고려한 실시간 혼성 방송 알고리즘)

  • Yoon Hyesook;Kim Young-Kuk
    • Journal of KIISE:Information Networking
    • /
    • v.32 no.3
    • /
    • pp.339-349
    • /
    • 2005
  • In recent years, data broadcast technology has been recognized as a very effective data delivery mechanism in mobile computing environments with a large number of clients. In particular, a hybrid broadcast algorithm in a real-time environment, which integrates one-way broadcast and on-demand broadcast, has the advantage of adapting client requests to a limited up-link bandwidth and following changes in the data access pattern. However, previous hybrid broadcasting algorithms have a problem in how they track changes in the data access pattern: requests for data items contained in the periodic broadcasting schedule diminish because those items are already being broadcast. To address this, existing approaches either remove data items from the periodic broadcasting schedule over a few cycles by multiplying a cooling factor, or probe the demand for data items by deliberately extracting them from the schedule. Both are artificial methods that do not consider the properties of the data. In this paper, we propose real-time adaptive hybrid broadcasting based on data type (RTAHB-DT), which broadcasts with data properties taken into account, and analyze the performance of our algorithm through a simulation study.
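The abstract does not include pseudocode; as a rough sketch of the one-way/on-demand split that any hybrid broadcast scheme shares (the class name, slot policy, and item labels below are illustrative assumptions, not RTAHB-DT itself):

```python
from collections import Counter
from itertools import cycle

class HybridBroadcaster:
    """Toy hybrid broadcast: alternate a fixed periodic schedule
    (hot items) with on-demand service of queued client requests."""

    def __init__(self, hot_items):
        self.periodic = cycle(hot_items)   # one-way broadcast cycle
        self.pending = Counter()           # on-demand requests via the up-link

    def request(self, item):
        self.pending[item] += 1            # a client asks for a cold item

    def next_broadcast(self, slot):
        # even slots: periodic channel; odd slots: serve the most-demanded item
        if slot % 2 == 0 or not self.pending:
            return next(self.periodic)
        item, _ = self.pending.most_common(1)[0]
        del self.pending[item]
        return item

bc = HybridBroadcaster(["A", "B", "C"])
bc.request("X"); bc.request("X"); bc.request("Y")
sched = [bc.next_broadcast(s) for s in range(4)]
print(sched)  # periodic and on-demand slots interleave
```

A real scheme would also adapt the periodic set as demand shifts, which is exactly the step the paper argues should consider data properties.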

Missing Data Imputation Using Permanent Traffic Counts on National Highways (일반국토 상시 교통량자료를 이용한 교통량 결측자료 추정)

  • Ha, Jeong-A;Park, Jae-Hwa;Kim, Seong-Hyeon
    • Journal of Korean Society of Transportation
    • /
    • v.25 no.1 s.94
    • /
    • pp.121-132
    • /
    • 2007
  • Up to now, permanent traffic volumes have been counted by Automatic Vehicle Classification (AVC) equipment on national highways. When the counted data contain missing items or errors, the data must be revised to remain statistically reliable. This study was carried out to estimate corrected data based on autoregression and the seasonal AutoRegressive Integrated Moving Average (ARIMA) model. Verification with seasonal ARIMA showed that the longer the missing period, the greater the error. Autoregression produced better verification results than seasonal ARIMA: traffic data are affected by the present state more than by past patterns. However, autoregression can be applied only where the data include similar neighborhood patterns, and even then the data cannot be corrected when they are missing due to low quality or errors. Therefore, such data should be corrected using past patterns and seasonal ARIMA when the missing period is short.
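As a minimal stand-in for the autoregressive gap-filling idea (a toy first-order model fitted by least squares, not the paper's exact specification; the sample counts are invented):

```python
def ar1_impute(series):
    """Fill None gaps with a first-order autoregression x_t = a*x_{t-1} + b,
    fitted by least squares on consecutive observed pairs."""
    pairs = [(series[i - 1], series[i]) for i in range(1, len(series))
             if series[i - 1] is not None and series[i] is not None]
    n = len(pairs)
    sx = sum(p[0] for p in pairs); sy = sum(p[1] for p in pairs)
    sxx = sum(p[0] ** 2 for p in pairs); sxy = sum(p[0] * p[1] for p in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    b = (sy - a * sx) / n                            # intercept
    filled = list(series)
    for i in range(1, len(filled)):
        if filled[i] is None and filled[i - 1] is not None:
            filled[i] = a * filled[i - 1] + b        # one-step-ahead prediction
    return filled

counts = [100, 110, 121, None, None, 150]   # hourly counts with a gap
print(ar1_impute(counts))
```

Note how the error compounds across consecutive predicted values, which mirrors the paper's finding that longer missing periods yield larger errors.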

Utility Analysis of Federated Learning Techniques through Comparison of Financial Data Performance (금융데이터의 성능 비교를 통한 연합학습 기법의 효용성 분석)

  • Jang, Jinhyeok;An, Yoonsoo;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.405-416
    • /
    • 2022
  • Current AI technology improves quality of life by applying machine learning to data. When machine learning is used, transmitting distributed data and collecting it in one place requires a de-identification process because of the risk of privacy infringement. De-identification damages and omits information, which degrades the performance of machine learning and complicates preprocessing. Accordingly, in 2016 Google announced federated learning, a method of learning without gathering the data onto one server and thus without de-identifying it. This paper analyzes its effectiveness by comparing, on real financial data, the learning performance of data that passed through k-anonymity de-identification and of differential-privacy synthetic data. In the experiment, accuracy was 79% for k=2, 76% for k=5, 52% for k=7, 50% for ε=1, 82% for ε=0.1, and 86% for federated learning.
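The core aggregation step of federated learning (FedAvg-style) can be sketched as a sample-weighted average of client model parameters; this is a generic illustration, not the paper's experimental setup:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation sketch: average each model parameter across
    clients, weighted by the number of local training samples.
    Raw data never leaves a client; only weights are transmitted."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two clients with a 2-parameter model; client 1 holds twice the data
w_global = fed_avg([[1.0, 0.0], [4.0, 3.0]], [200, 100])
print(w_global)
```

The server then redistributes `w_global` for the next local-training round; privacy comes from exchanging parameters instead of records.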

A Study on Metadata-based Data Quality Management in a Container Terminal (컨테이너터미널의 메타데이터 기반 데이터 품질관리 방안에 관한 연구)

  • Kang, Yang-Suk;Choi, Hyung-Rim;Kim, Hyun-Soo;Hong, Soon-Goo;Jung, Jae-Un;Park, Jae-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.2
    • /
    • pp.321-329
    • /
    • 2009
  • Due to the massive increase in data that must be managed, problems in data quality management have become an issue. In addition, the lack of integrated data management causes duplication of data, low-quality services, and missing data. To overcome these problems, this study examines a method of data quality management. To do this, metadata was defined, its current management status was analyzed from various viewpoints, and metadata management was applied to a container terminal. For the "A" container terminal, we performed data standardization, reflected major constraints, and developed a pilot metadata repository. The contributions of this study are the improvement of data quality in the container terminal and the practical application of the metadata management method. Its limitations are the partial implementation of metadata management in the company; interoperability of metadata management for business-to-business data integration is left for future research.
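A metadata repository can drive quality checks by screening records against declared field definitions; a minimal sketch with hypothetical field names (the paper's actual repository schema is not given in the abstract):

```python
# Hypothetical metadata entries: each field carries a type and a
# required flag, used to flag missing or malformed record values.
SCHEMA = {
    "container_no": {"type": str, "required": True},
    "weight_kg":    {"type": float, "required": True},
    "remarks":      {"type": str, "required": False},
}

def validate(record):
    """Return a list of data-quality issues found against SCHEMA."""
    issues = []
    for field, meta in SCHEMA.items():
        if field not in record or record[field] is None:
            if meta["required"]:
                issues.append(f"missing required field: {field}")
        elif not isinstance(record[field], meta["type"]):
            issues.append(f"wrong type for {field}")
    return issues

print(validate({"container_no": "MSKU1234567", "weight_kg": 21500.0}))  # no issues
print(validate({"weight_kg": "heavy"}))  # missing field + type error
```

Centralizing such definitions in one repository is what prevents the duplication and omission problems the study describes.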

A Study on Archiving of Government Survey Data (정부 여론조사자료 아카이브 구축방안에 관한 연구)

  • Nam, Young-Joon;Seo, Man-Deok
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.9 no.1
    • /
    • pp.175-196
    • /
    • 2009
  • The government and public institutions conduct public-opinion surveys as a means of making scientific policy decisions across all of the country's sectors: society, culture, welfare, education, and so on. Government survey data have been managed by each governmental department, but improper data collection and management have led to omissions and losses of some data. There are also limitations on statistical use of and access to the data because they are managed dispersedly in the form of printed materials. Accordingly, to make the survey data available for long-term preservation and use, this study suggests a collection policy, evaluation criteria, metadata, procedures and methods for converting materials, and a preservation format.

A Study on the Guideline for the Data Deletion (데이터 폐기 지침 마련을 위한 기초 연구)

  • Lim, Tae-Hoon;Seo, Jik-Soo;Kim, Sun-Young
    • Journal of Information Management
    • /
    • v.41 no.4
    • /
    • pp.165-186
    • /
    • 2010
  • This study aims to suggest the basis and criteria for a data deletion guideline that makes information systems effective and reduces the cost of system management. To frame the guideline, we reviewed the laws and policies of the USA, UK, and Australia and the domestic laws and regulations related to the deletion of records. From this review, we prepared a draft guideline and gathered opinions about it. Through this research and survey, we produced a guideline covering the criteria, process, and method of data deletion. Applying the guideline to a sample organization, we found no problems in deleting the unused data.

Application and Comparison of Data Mining Technique to Prevent Metal-Bush Omission (메탈부쉬 누락예방을 위한 데이터마이닝 기법의 적용 및 비교)

  • Sang-Hyun Ko;Dongju Lee
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.139-147
    • /
    • 2023
  • The metal bush assembly process inserts and compresses a metal bush that serves to reduce noise and ensure stable compression in the rotating section. In this process, head-diameter defects and placement defects of the metal bush occur due to bush omission, non-pressing, and poor press-fitting. Among these causes, this study aims to prevent defects due to omission of the metal bush by using signals from sensors attached to the equipment. In particular, metal bush omission is predicted through various data mining techniques using the left load-cell value, right load-cell value, current, and voltage as independent variables. Because defect data for bush omission are difficult to obtain, the data are imbalanced. Data imbalance refers to a large difference in the number of data belonging to each class, which can be a problem when performing classification prediction. To solve this problem, oversampling and composite sampling techniques were applied in this study. In addition, simulated annealing was applied to optimize the sampling-related parameters and the hyper-parameters of the data mining techniques used for omission prediction. The metal bush omission was predicted using actual data from manufacturing company M, and the classification performance was examined. All applied techniques showed excellent results; in particular, the proposed methods combining Random Forest with SA and combining MLP with SA showed better results.
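Random oversampling, the simplest of the imbalance remedies mentioned, duplicates minority-class samples until the classes balance; a sketch with invented sensor values (the paper additionally uses composite sampling and SA-tuned parameters, not shown):

```python
import random

def oversample(X, y, seed=0):
    """Random oversampling sketch: duplicate minority-class rows
    until every class matches the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        X_out += rows
        y_out += [label] * target
    return X_out, y_out

X = [[1.2], [1.1], [1.3], [1.2], [4.7]]   # e.g. load-cell readings
y = [0, 0, 0, 0, 1]                       # 1 = bush omitted (rare)
Xb, yb = oversample(X, y)
print(yb.count(0), yb.count(1))           # classes now balanced
```

Balancing is done on the training split only, so that duplicated minority rows never leak into the evaluation data.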

Improvement of Factory Data in Industrial Land Information System (산업입지정보시스템 공장정보 개선에 관한 연구)

  • Choe, Yu-Jeong;Lim, Jae-Deok;Kim, Seong-Geon
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.10
    • /
    • pp.97-106
    • /
    • 2020
  • The factory information provided by the Industrial Location Information System (ILIS) is supplied as raw data by the Korea Industrial Complex Corporation and registered after a filtering process, so updates of new factory information are slow. In this study, to solve this update problem, we compared the factory information of the existing ILIS with the building data of the road-name address system, which has a relatively fast renewal cycle, and with real-estate building data, and extracted new factory information. In the comparison, a method was proposed to match spatial objects of different types: point data and polygon data. Attribute matching and object matching were performed, and the attribute values of new factory information were extracted. Accuracy evaluation of the proposed spatial analysis method showed 79% accuracy, and the matching technique confirmed the feasibility of converging road-name address data, real-estate data, and the factory information of ILIS.
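Matching point records to polygon footprints typically reduces to a point-in-polygon test; a minimal ray-casting sketch with made-up coordinates (illustrative of the point-to-polygon step, not the paper's exact matching method):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from pt crosses; an odd count means the point lies inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                           # crossing is to the right
                inside = not inside
    return inside

building = [(0, 0), (4, 0), (4, 3), (0, 3)]   # footprint polygon
print(point_in_polygon((2, 1), building))     # factory point inside
print(point_in_polygon((5, 1), building))     # point outside
```

Once a point lands in a footprint, the two objects' attributes can be joined, which is the attribute-matching step the study describes.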