• Title/Abstract/Keywords: data journal

공공데이터 융합역량 수준에 따른 데이터 기반 조직 역량의 연구 (A Study on the Data-Based Organizational Capabilities by Convergence Capabilities Level of Public Data)

  • 정병호;주형근
    • 디지털산업정보학회논문지
    • /
    • Vol. 18, No. 4
    • /
    • pp.97-110
    • /
    • 2022
  • The purpose of this study is to analyze the level of public data convergence capabilities in administrative organizations and to identify the variables that matter for data-based organizational capabilities. The theoretical background covers public data and its use activation, joint use, convergence, administrative organization, and convergence constraints, drawing on the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. The research model posits that data-based organizational capabilities are affected by data-based administrative capability, public data operation capabilities, and public data operation constraints; it also examines whether these capabilities differ by the level of data convergence capability. The analysis was conducted with hierarchical cluster analysis and multiple regression analysis. First, the hierarchical cluster analysis yielded three groups: a group that uses only structured public data, a group that uses both structured and unstructured public data, and a group that uses both public and private data. Second, the critical variables for data-based organizational operation capabilities were data-based administrative planning and administrative technology, supervisory organizations and technical systems for public data convergence, and data sharing and market transaction constraints. Finally, the significant independent variables for data-based organizational competencies differ across groups. As a theoretical implication, this research updates the management information systems literature by explicating the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. As a practical implication, reinforcing the use of public data requires establishing data standardization and search convenience and eliminating lukewarm attitudes and self-interested behavior toward data sharing.
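As a rough illustration of the two-step analysis described above (hierarchical clustering followed by regression), here is a minimal Python sketch; the survey scores and variable names are invented placeholders, not the authors' instrument.

```python
# Minimal sketch of the paper's two-step analysis: hierarchical clustering
# into three groups, then multiple regression. Data are illustrative only.
import pandas as pd
import statsmodels.api as sm
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical survey scores (one row per administrative organization).
df = pd.DataFrame({
    "admin_planning":     [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.2, 3.0, 2.7],
    "admin_technology":   [3.0, 4.3, 2.6, 4.6, 3.7, 2.9, 4.1, 3.2, 2.5],
    "sharing_constraint": [4.0, 2.2, 4.4, 2.0, 3.1, 4.2, 2.4, 3.5, 4.1],
    "org_capability":     [3.1, 4.4, 2.7, 4.8, 3.8, 2.6, 4.5, 3.3, 2.4],
})

# Step 1: hierarchical cluster analysis (Ward linkage), cut into three
# groups, mirroring the study's three convergence-capability clusters.
Z = linkage(df[["admin_planning", "admin_technology", "sharing_constraint"]],
            method="ward")
df["group"] = fcluster(Z, t=3, criterion="maxclust")
print(df["group"].value_counts().to_dict())

# Step 2: multiple regression of organizational capability on the
# hypothesized predictors (pooled here; the study also re-estimates
# the model within each cluster to compare groups).
X = sm.add_constant(df[["admin_planning", "admin_technology",
                        "sharing_constraint"]])
print(sm.OLS(df["org_capability"], X).fit().params.round(2))
```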

원천 데이터 품질이 빅데이터 분석결과의 유용성과 활용도에 미치는 영향 (An Empirical Study on the Effects of Source Data Quality on the Usefulness and Utilization of Big Data Analytics Results)

  • 박소현;이국희;이아연
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 24, No. 4
    • /
    • pp.197-214
    • /
    • 2017
  • This study sheds light on source data quality in big data systems. Previous studies of big data success have called for further examination of quality factors and the importance of source data. This study extracted the quality factors of source data from the user's viewpoint and empirically tested the effects of source data quality on the usefulness and utilization of big data analytics results. Based on previous research and a focus group evaluation, four quality factors were established: accuracy, completeness, timeliness, and consistency. After setting up 11 hypotheses on how source data quality contributes to the usefulness, utilization, and ongoing use of big data analytics results, an e-mail survey was conducted at the department level in domestic firms using big data. The results of the hypothesis tests identified the characteristics and impact of source data quality in big data systems and yielded meaningful findings about big data characteristics.
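The study measured the four factors through survey items; as a purely illustrative alternative, one could also profile them directly on a dataset. The metrics and thresholds in the following sketch are assumed operationalizations, not the paper's measurement items.

```python
# Illustrative profiling of the four source-data quality factors named in
# the study (accuracy, completeness, timeliness, consistency). All metrics
# and thresholds below are placeholder assumptions.
import pandas as pd

def quality_profile(df: pd.DataFrame, ts_col: str, now=None) -> dict:
    now = now or pd.Timestamp.now(tz="UTC")
    ts = pd.to_datetime(df[ts_col], errors="coerce", utc=True)
    return {
        # completeness: share of non-missing cells
        "completeness": float(df.notna().mean().mean()),
        # timeliness: share of records newer than 30 days (assumed window)
        "timeliness": float((now - ts < pd.Timedelta(days=30)).mean()),
        # consistency: share of timestamps that parsed into a valid format
        "consistency": float(ts.notna().mean()),
        # accuracy proxy: share of rows passing a domain rule (assumed)
        "accuracy": float((df.select_dtypes("number") >= 0).all(axis=1).mean()),
    }

sample = pd.DataFrame({
    "reading": [1.2, -0.5, 3.4, None],
    "logged_at": ["2024-05-01", "bad-date", "2024-05-20", "2024-04-02"],
})
print(quality_profile(sample, ts_col="logged_at"))
```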

A case study of ECN data conversion for Korean and foreign ecological data integration

  • Lee, Hyeonjeong;Shin, Miyoung;Kwon, Ohseok
    • Journal of Ecology and Environment
    • /
    • Vol. 41, No. 5
    • /
    • pp.142-144
    • /
    • 2017
  • In recent decades, as monitoring and research on long-term ecological change have become increasingly important, attempts are being made worldwide to integrate and manage ecological data in unified frameworks. In particular, domestic ecological data in South Korea must first be standardized against predefined common protocols before integration, since they are often scattered across many different systems in various forms. Foreign ecological data, in turn, must be converted into a proper unified format to be used alongside domestic data in association studies. In this study, our interest is in integrating ECN (Environmental Change Network) data with Korean domestic ecological data under our unified framework. For this purpose, we employed our semi-automatic data conversion tool to standardize the foreign data and used ground beetle (Carabidae) datasets collected from 12 ECN observatory sites. We believe that this systematic conversion of domestic and foreign ecological data into a standardized format will prove useful for data integration and association analysis in many ecological and environmental studies.
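The conversion step might look something like the sketch below, which renames and recodes ECN-style records into a common schema; the field mapping and sample records are invented for illustration, not taken from the paper's tool.

```python
# Minimal sketch of a field-mapping conversion: foreign (ECN-style) records
# are renamed and recoded into a common domestic schema. The mapping table
# and sample rows are invented for illustration.
import pandas as pd

FIELD_MAP = {           # foreign column -> common-schema column (assumed)
    "SITECODE": "site_id",
    "SDATE": "observed_on",
    "SPECIES": "scientific_name",
    "COUNT": "abundance",
}

def convert(foreign: pd.DataFrame) -> pd.DataFrame:
    out = foreign.rename(columns=FIELD_MAP)[list(FIELD_MAP.values())]
    out["observed_on"] = pd.to_datetime(out["observed_on"]).dt.date
    out["abundance"] = pd.to_numeric(out["abundance"], errors="coerce")
    return out

ecn_like = pd.DataFrame({
    "SITECODE": ["T01", "T05"],
    "SDATE": ["1994-06-14", "1994-06-21"],
    "SPECIES": ["Pterostichus madidus", "Abax parallelepipedus"],
    "COUNT": ["12", "3"],
})
print(convert(ecn_like))
```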

A Study on Big Data Analytics Services and Standardization for Smart Manufacturing Innovation

  • Kim, Cheolrim;Kim, Seungcheon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 14, No. 3
    • /
    • pp.91-100
    • /
    • 2022
  • Major developed countries are seriously considering smart factories as a way to increase their manufacturing competitiveness. A smart factory is a customized factory that incorporates ICT across the entire process from product planning to design, distribution, and sales, which can reduce production costs and allow flexible responses to the consumer market. A smart factory converts physical signals into digital signals; connects machines, parts, factories, manufacturing processes, people, and supply chain partners to one another; and uses the collected data so that the smart factory platform operates intelligently, with enhancing personalized value as the key goal. The success or failure of a smart factory therefore depends on whether big data is secured and utilized. Standardized communication and collaboration are required to acquire big data smoothly inside and outside the factory, and the value of big data can be maximized through big data analysis. This study examines big data analysis and standardization in smart factories. It first reviews manufacturing innovation by country, smart factory construction frameworks, key elements of smart factory implementation, and big data analysis and visualization. On this basis, we propose services that can be offered when building big data infrastructure in smart factories, such as a big data infrastructure construction process, big data platform components, big data modeling, big data quality management components, big data standardization, and big data implementation consulting. We expect this proposal to serve as a guide for companies that want to introduce a smart factory and build big data infrastructure.

데이터처리전문기관의 역할 및 보안 강화방안 연구: 버몬트주 데이터브로커 비교를 중심으로 (A Study on the Role and Security Enhancement of the Expert Data Processing Agency: Focusing on a Comparison of Data Brokers in Vermont)

  • 김수한;권헌영
    • 한국IT서비스학회지
    • /
    • Vol. 22, No. 3
    • /
    • pp.29-47
    • /
    • 2023
  • With the recent advancement of information and communication technologies such as artificial intelligence, big data, cloud computing, and 5G, data is being produced and digitized in unprecedented amounts. As a result, data has emerged as a critical resource for the future economy, and countries overseas have been revising their laws on data protection and utilization. In Korea, the 'Data 3 Act' was revised in 2020, introducing institutional measures that distinguish personal information, pseudonymized information, and anonymous information for research, statistics, and the preservation of public records. Combining pseudonymized personal information is expected to increase the added value of data, and to this end the "Expert Data Combination Agency" and "Expert Data Agency" (hereinafter, the Expert Data Processing Agency) systems were introduced. To compare these domestic systems with a similar overseas counterpart, we turn to the state of Vermont in the United States, which recently enacted the country's first "Data Broker Act" as a measure to protect personal information held by data brokers. This study compares and analyzes the roles and functions of the Expert Data Processing Agency and data brokers, and identifies differences in designation standards, security measures, and related requirements, in order to suggest ways to contribute to activating the data economy and enhancing information protection.

A Simulation Framework for Wireless Compressed Data Broadcast

  • Seokjin Im
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 11, No. 2
    • /
    • pp.315-322
    • /
    • 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide reliable information services regardless of the number of clients. Wireless data broadcast is an information service technique that ensures such scalability by delivering data to all clients simultaneously, regardless of their number. In wireless data broadcasting, clients access the wireless channel linearly to locate the data they need, so client access time is strongly affected by the length of the broadcast cycle. Broadcasting based on data compression can shorten the broadcast cycle and thus reduce client access time. A simulation framework that can evaluate the performance of data broadcasting under different data compression algorithms is therefore essential. In this paper, we propose a simulation framework for evaluating the performance of data broadcasting with data compression. We design the framework so that different compression algorithms can be applied according to the characteristics of the data. Besides evaluating performance with respect to the data, the proposed framework can also evaluate performance with respect to the data scheduling technique and the kinds of queries the clients process. We implement the proposed framework and use it, with compression algorithms applied, to evaluate and demonstrate the performance of compressed data broadcasting.
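A toy version of the framework's core measurement, average client access time over one broadcast cycle with and without compression, could look like the following; the item sizes, flat schedule, and 0.6 compression ratio are illustrative assumptions, not the paper's settings.

```python
# Toy simulation: clients tune in at a random moment and wait until one
# randomly chosen item has been fully broadcast. Compression shrinks the
# cycle and hence the expected access time.
import random

def avg_access_time(item_sizes, trials=10_000, seed=42):
    rng = random.Random(seed)
    cycle = sum(item_sizes)              # broadcast cycle length (ticks)
    starts, t = [], 0
    for s in item_sizes:                 # start offset of each item
        starts.append(t); t += s
    total = 0.0
    for _ in range(trials):
        tune_in = rng.uniform(0, cycle)
        i = rng.randrange(len(item_sizes))
        end = starts[i] + item_sizes[i]  # tick at which the item completes
        # wait until the item's end, wrapping to the next cycle if missed
        total += (end - tune_in) if tune_in <= starts[i] else (cycle - tune_in + end)
    return total / trials

items = [100] * 50                       # uncompressed item sizes (assumed)
compressed = [60] * 50                   # assumed 0.6 compression ratio
print("plain     :", avg_access_time(items))
print("compressed:", avg_access_time(compressed))
```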

Scientific Data 학술지 분석을 통한 데이터 논문 현황에 관한 연구 (An Investigation on Scientific Data for Data Journal and Data Paper)

  • 정은경
    • 정보관리학회지
    • /
    • Vol. 36, No. 1
    • /
    • pp.117-135
    • /
    • 2019
  • Data journals and data papers have emerged under the open science paradigm as scholarly activities for data sharing and reuse, and they continue to grow. This paper analyzes a total of 713 papers published in Scientific Data, an influential multidisciplinary data journal, in terms of authorship, citations, and subject areas. The authors' main subject areas turned out to be the life sciences and physics, and the average number of co-authors is 12. Viewing co-authorship as a network shows that particular groups of researchers co-author in a closed fashion. The subject areas of the citations do not differ greatly from those of the data paper authors, although the high share of citations to methodology-oriented journals can be seen as a characteristic of data papers. A co-word network built from the author keywords shows that the subject areas of the data papers center on biology, with specific sub-areas such as marine ecology, cancer, genomes, databases, and temperature. These results show that, although Scientific Data is a multidisciplinary data journal, it is concentrated in the life sciences, where discussion of data journal publishing began early.
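A small sketch of the co-word analysis mentioned above: build a keyword co-occurrence network and rank central terms with networkx. The keyword lists here are invented examples, not the 713-article dataset.

```python
# Co-word (keyword co-occurrence) network: keywords are nodes, and edge
# weights count the papers in which two keywords appear together.
from itertools import combinations
import networkx as nx

papers_keywords = [                      # illustrative author keywords
    ["genome", "cancer", "database"],
    ["marine ecology", "temperature", "database"],
    ["genome", "database"],
]

G = nx.Graph()
for kws in papers_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        # increment edge weight for every paper where both keywords co-occur
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# central keywords indicate the thematic core of the journal
print(sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1])[:3])
```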

Research on Railway Safety Common Data Model and DDS Topic for Real-time Railway Safety Data Transmission

  • Park, Yunjung;Kim, Sang Ahm
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 21, No. 5
    • /
    • pp.57-64
    • /
    • 2016
  • In this paper, we propose the design of a railway safety common data model that provides a common transformation method for delivering data collected from railway facilities in the field to the real-time railway safety monitoring and control system. The common data model is divided into five abstract sub-models according to the characteristics of the data: 'StateInfoMessage', 'ControlMessage', 'RequestMessage', 'ResponseMessage', and 'ExtendedXXXMessage'. This structure accommodates the acquisition of diverse heterogeneous data and a common method of converting it to the DDS (Data Distribution Service) format, so that the data can be shared with the sub-systems of the real-time railway safety monitoring and control system. The paper presents the design of the common data model and its DDS Topic expression for DDS communication, along with two data transformation case studies conducted to verify the model design.
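One plausible rendering of the abstract sub-models, sketched here as Python dataclasses with one DDS Topic per message type; the fields and Topic naming are assumptions, since the paper's schema is not reproduced in the abstract, and a real DDS deployment would define these as IDL types.

```python
# Hedged sketch of the sub-models as dataclasses. Field names are invented
# stand-ins for the paper's schema.
from dataclasses import dataclass

@dataclass
class Message:                      # common header shared by all sub-models
    source_id: str                  # originating facility/device (assumed)
    timestamp_ms: int

@dataclass
class StateInfoMessage(Message):    # periodic state reports from the field
    state: str

@dataclass
class ControlMessage(Message):      # commands sent to field equipment
    command: str

@dataclass
class RequestMessage(Message):      # queries issued to a subsystem
    request_type: str

@dataclass
class ResponseMessage(Message):     # replies paired with a RequestMessage
    request_ref: str
    payload: str

# One Topic per sub-model is one plausible DDS mapping (assumed).
TOPICS = {cls.__name__: f"RailwaySafety/{cls.__name__}"
          for cls in (StateInfoMessage, ControlMessage,
                      RequestMessage, ResponseMessage)}
print(TOPICS)
```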

추세 시계열 자료의 부트스트랩 적용 (Applying Bootstrap to Time Series Data Having Trend)

  • 박진수;김윤배;송기범
    • 한국경영과학회지
    • /
    • Vol. 38, No. 2
    • /
    • pp.65-73
    • /
    • 2013
  • In simulation output analysis, the bootstrap is a resampling technique applicable when the data are insufficient to be statistically significant. The moving block bootstrap, the stationary bootstrap, and the threshold bootstrap are typical bootstrap methods for autocorrelated time series data. They are nonparametric methods for stationary time series that correctly describe the original data. In simulation output analysis, however, they may not be usable because an increasing or decreasing trend makes the data set non-stationary. In such cases, we can remove the trend by differencing the data, which yields stationarity, and then draw bootstrap samples from the differenced, stationary data. Finally, applying the inverse transform to the bootstrapped data produces pseudo-samples of the original data. In this paper, we introduce the applicability of bootstrap methods to time series data having a trend and verify it through statistical analyses.
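The procedure reads directly as code: difference out the trend, apply a moving block bootstrap to the stationary differences, then cumulatively sum back to obtain pseudo-samples. The block length and toy series below are illustrative choices.

```python
# Difference -> moving block bootstrap -> reintegrate, as described above.
import numpy as np

def trend_bootstrap(x, block_len=5, seed=0):
    rng = np.random.default_rng(seed)
    d = np.diff(x)                              # remove trend by differencing
    blocks = [d[i:i + block_len]                # overlapping moving blocks
              for i in range(len(d) - block_len + 1)]
    picks = rng.integers(0, len(blocks), size=len(d) // block_len + 1)
    d_star = np.concatenate([blocks[p] for p in picks])[:len(d)]
    # reintegrate: anchor at x[0] and cumulatively sum the resampled steps
    return x[0] + np.concatenate([[0.0], np.cumsum(d_star)])

rng = np.random.default_rng(1)
series = 0.5 * np.arange(100) + rng.normal(0, 1, 100)  # linear trend + noise
pseudo = trend_bootstrap(series)
print(series[:5].round(2), pseudo[:5].round(2))
```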

Saliency Score-Based Visualization for Data Quality Evaluation

  • Kim, Yong Ki;Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 15, No. 4
    • /
    • pp.289-294
    • /
    • 2015
  • Data analysts explore collections of data in search of valuable information using various techniques and tricks. "Garbage in, garbage out" is a well-recognized idiom that emphasizes the importance of data quality in data analysis. It is therefore crucial to validate data quality at an early stage of analysis, and an effective method for evaluating it is required. In this paper, a method to visually characterize the quality of data using the notion of a saliency score is introduced. The saliency score is a measure comprising five indexes that capture certain aspects of data quality. Experimental results are presented to show the applicability of the proposed method.
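Since the abstract does not enumerate the five indexes, the sketch below combines five stand-in per-attribute indexes (completeness, inlier share, non-duplication, cardinality, numeric conformance) into one equal-weight score, purely to illustrate the shape of such a measure; it is not the paper's formulation.

```python
# A saliency-style score: five per-attribute indexes averaged into one
# quality measure. The indexes and equal weighting are assumptions.
import numpy as np
import pandas as pd

def saliency_score(col: pd.Series) -> float:
    n = len(col)
    num = pd.to_numeric(col, errors="coerce")
    std = num.std()
    if not np.isfinite(std) or std == 0:
        std = 1.0                                # guard against zero/NaN std
    z = (num - num.mean()) / std
    indexes = [
        col.notna().mean(),                      # 1. completeness
        1.0 - (np.abs(z) > 3).mean(),            # 2. inlier share (z-score rule)
        1.0 - col.duplicated().mean(),           # 3. non-duplication
        min(col.nunique() / n, 1.0),             # 4. cardinality
        num.notna().mean(),                      # 5. numeric conformance
    ]
    return float(np.mean(indexes))               # equal weights (assumed)

col = pd.Series([1.0, 2.0, 2.0, 1000.0, None, "oops"])
print(round(saliency_score(col), 3))
```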