• Title/Abstract/Keyword: Technology Data Analysis

Search results: 15,902 (processing time: 0.052 s)

스마트 팩토리의 제조 프로세스 마이닝에 관한 실증 연구 (An Empirical Study on Manufacturing Process Mining of Smart Factory)

  • 김태성 / 대한안전경영과학회지 / Vol.24 No.4 / pp.149-156 / 2022
  • Manufacturing process mining performs various performance analyses on event logs that record production. That is, it analyzes the event-log data accumulated in the information system and extracts useful information needed for business execution. Process mining analyzes actual data extracted from manufacturing execution systems (MES) to enable accurate manufacturing process analysis. To continuously manage and improve manufacturing processes, the processes need to be structured, monitored, and analyzed, but suitable technology for this has been lacking. The purpose of this research is to propose a manufacturing process analysis method using process mining and to establish a manufacturing process mining system by analyzing empirical data. In this research, the manufacturing process was analyzed with process mining technology using transaction data extracted from MES. A relationship model of the manufacturing process and equipment was derived, and various performance analyses were performed on the derived process model from the viewpoints of work, equipment, and time. The results are highly effective in shortening process lead times (bottleneck analysis, time analysis), improving productivity (throughput analysis), and reducing costs (equipment analysis).
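The core step described above, deriving a relationship model of activities from MES transaction data, can be sketched as a directly-follows graph built from an event log. This is a minimal illustration, not the paper's system; the column names and sample rows are invented.

```python
# Sketch: directly-follows relation and step durations from an MES-style event
# log, the basic building block of a process-mining model. Data are illustrative.
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c1", "c2", "c2", "c2"],
    "activity": ["cut", "weld", "paint", "cut", "paint", "weld"],
    "timestamp": pd.to_datetime([
        "2022-01-01 09:00", "2022-01-01 10:00", "2022-01-01 12:30",
        "2022-01-02 09:00", "2022-01-02 09:40", "2022-01-02 11:00",
    ]),
})

log = log.sort_values(["case_id", "timestamp"])
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
log["duration_h"] = (
    log.groupby("case_id")["timestamp"].shift(-1) - log["timestamp"]
).dt.total_seconds() / 3600

# Directly-follows frequencies and mean transition times: the edges of the
# derived process model, usable for bottleneck and time analysis.
dfg = (log.dropna(subset=["next_activity"])
          .groupby(["activity", "next_activity"])
          .agg(freq=("case_id", "size"), mean_h=("duration_h", "mean"))
          .reset_index())
print(dfg)
```

Edges with high `mean_h` or high `freq` into a slow activity are the candidates a bottleneck analysis would flag.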

비모수 프런티어 접근을 통한 ICT 효율성 분석 연구 (An Efficiency Analysis of Information and Communications Technologies (ICT) using Nonparametric Frontier Analysis)

  • 김창희;양홍석;김수욱 / 한국IT서비스학회지 / Vol.16 No.4 / pp.1-13 / 2017
  • This study examines how specific technologies from Information and Communications Technology (ICT), which play a critical role in increasing productivity by promoting the spread of technology across society through the use of big data and mobile or wearable devices, impact the productivity of society and the productivity of added value, respectively. The impact of technology was studied from the perspective of input efficiency levels. For the analysis, we categorized ICT into 16 specific technologies and set the number of companies and the number of employees as input factors, while setting output and added-value output as output factors. We then applied data envelopment analysis (DEA), a form of nonparametric frontier analysis, and measured the productivity and added-value efficiency of each technology. According to the results, 2 technologies under the CRS assumption and 3 under the VRS assumption showed relative efficiency. We also present efficiency improvement strategies for the relatively inefficient technologies, along with a reference set and projection point. In addition, we analyze the scale efficiency (SE), decreasing returns to scale (DRS), and increasing returns to scale (IRS) of each ICT.
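The DEA model the abstract applies can be sketched as the input-oriented CCR envelopment form: one linear program per decision-making unit (DMU), minimizing the radial input contraction θ. This is a generic CCR sketch, not the paper's 16-technology dataset; the input/output numbers are made up.

```python
# Input-oriented CCR DEA: for each DMU o, min theta s.t. a convex-cone
# combination of peers uses at most theta * inputs_o and produces at least
# outputs_o. Efficiency 1.0 means the DMU lies on the frontier.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: inputs (m x n DMUs), Y: outputs (s x n); returns theta per DMU."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [o]], X])           # sum λ_j x_ij <= θ x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum λ_j y_rj >= y_ro
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.fun)
    return np.array(scores)

X = np.array([[2.0, 4.0, 3.0],     # e.g. number of companies per technology
              [3.0, 1.0, 5.0]])    # e.g. number of employees
Y = np.array([[4.0, 2.0, 3.0]])    # e.g. output / added value
print(ccr_efficiency(X, Y).round(3))
```

Under VRS one would add the constraint Σλ_j = 1; comparing CRS and VRS scores yields the scale-efficiency (SE) figures the abstract mentions.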

데이터 사이언티스트의 역량과 빅데이터 분석성과의 PLS 경로모형분석 : Kaggle 플랫폼을 중심으로 (PLS Path Modeling to Investigate the Relations between Competencies of Data Scientist and Big Data Analysis Performance : Focused on Kaggle Platform)

  • 한경진;조근태 / 대한산업공학회지 / Vol.42 No.2 / pp.112-121 / 2016
  • This paper focuses on the competencies of data scientists and the behavioral intentions that affect big data analysis performance. The experiment examined nine core competency factors required of data scientists. We surveyed 103 data scientists who participated in big data competitions on the Kaggle platform and used factor analysis and PLS-SEM as analysis methods. The results show that several key competency factors have a significant effect on big data analysis performance. Theoretically, this study provides a new basis for related research by analyzing the structural relationship between individual competencies and performance; practically, it identifies the priorities among the core competencies that data scientists must have.

Keywords Analysis on the Personal Information Protection Act: Focusing on South Korea, the European Union and the United States

  • Park, Sung-Uk;Park, Moon-Soo;Park, Soo-Hyun;Yun, Young-Mi / Asian Journal of Innovation and Policy / Vol.9 No.3 / pp.339-359 / 2020
  • The policy change brought by the Data 3 Act is one of the issues to note at a time when non-face-to-face business strategies have become important after COVID-19. The Data 3 Act, also called the 'Big Data 3 Act' or 'Data Economy 3 Act,' was implemented in South Korea on August 5, 2020, allowing personal information that cannot identify a particular individual to be used without the individual's consent. With the implementation of the Data 3 Act, a fair economic ecosystem can be established by ensuring fair access to data and its various uses. In this paper, the laws on the protection of personal information, the core of the Data 3 Act, are compared across South Korea, the European Union, and the United States, and implications are derived through a network analysis of keywords.
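The keyword network analysis mentioned above typically starts from a co-occurrence network: keywords are nodes, and two keywords that appear in the same document are linked. The sketch below builds such a network with the standard library and ranks keywords by degree centrality; the keyword lists are invented, not the paper's corpus.

```python
# Sketch: keyword co-occurrence network + degree centrality from per-document
# keyword lists. Degree centrality = neighbors / (n - 1). Data are illustrative.
from itertools import combinations
from collections import Counter, defaultdict

docs = [
    ["personal information", "pseudonymization", "consent"],
    ["personal information", "data economy", "pseudonymization"],
    ["data economy", "credit information", "consent"],
]

# Weighted edges: how often each keyword pair co-occurs in a document.
edges = Counter()
for kws in docs:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

neighbors = defaultdict(set)
for (a, b), w in edges.items():
    neighbors[a].add(b)
    neighbors[b].add(a)

n = len(neighbors)
centrality = {k: len(v) / (n - 1) for k, v in neighbors.items()}
for k, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{k:22s} {c:.2f}")
```

High-centrality keywords are the ones a comparative study would read as shared emphases across the three jurisdictions' laws.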

기술평가 자료를 이용한 중소기업의 생존율 추정 및 생존요인 분석 (A Study on the Survival Probability and Survival Factors of Small and Medium-sized Enterprises Using Technology Rating Data)

  • 이영찬 / 지식경영연구 / Vol.11 No.2 / pp.95-109 / 2010
  • The objectives of this study are to identify the survival function (hazard function) of small and medium-sized enterprises using technology rating data for companies guaranteed by the Korea Technology Finance Corporation (KOTEC), and to determine the factors that affect their survival. To this end, the study uses Kaplan-Meier analysis as a non-parametric method and the Cox proportional hazards model as a semi-parametric one. The 17,396 guaranteed companies assessed from July 1, 2005 to December 31, 2009 were selected as samples (16,504 censored cases and 829 accident cases). Survival time is computed with random censoring (Type III) from July 2005 as the starting point. The results show that Kaplan-Meier analysis and the Cox proportional hazards model can readily estimate the survival and hazard functions and support comparative study across group variables such as industry and technology rating level. In particular, the Cox proportional hazards model proved useful for understanding which technology rating items are meaningful for a company's survival and how much they affect it. These results should provide valuable knowledge for practitioners seeking to identify and manage the items significant for the survival of guaranteed companies in future technology rating.
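The non-parametric half of the method above, the Kaplan-Meier estimator, is short enough to implement directly: at each distinct event time, the survival curve is multiplied by (1 - events / at-risk). The times and censoring flags below are invented, not KOTEC data.

```python
# Minimal Kaplan-Meier estimator sketch (no survival library). event = 1 marks
# an accident; event = 0 marks a right-censored company still alive at `time`.
import numpy as np

def kaplan_meier(time, event):
    """Return (distinct event times, survival probabilities S(t))."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.sort(np.unique(time[event == 1]))
    surv, s = [], 1.0
    for t in ts:
        at_risk = np.sum(time >= t)                 # still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk                 # product-limit step
        surv.append(s)
    return ts, np.array(surv)

t = [6, 6, 6, 7, 10, 13, 16, 22, 23]   # months until accident / censoring
e = [1, 1, 1, 0, 1, 1, 0, 1, 1]
ts, S = kaplan_meier(t, e)
print(dict(zip(ts, S.round(3))))
```

Computing the curve per group (e.g. per technology rating level) and comparing, as the paper does, is just a matter of calling the function on each subsample.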


중소중견 제조기업을 위한 공정 및 품질데이터 통합형 분석 플랫폼 (Process and Quality Data Integrated Analysis Platform for Manufacturing SMEs)

  • 최혜민;안세환;이동형;조용주 / 산업경영시스템학회지 / Vol.41 No.3 / pp.176-185 / 2018
  • With the recent development of manufacturing technology and the diversification of consumer needs, not only have process and quality control become more complicated, but the kinds of process information that manufacturing facilities provide to users have also diversified. The importance of big data analysis has therefore grown. However, most small and medium-sized enterprises (SMEs) lack a systematic infrastructure for big data management and analysis. In particular, because domestic manufacturers rely on foreign vendors for most of their manufacturing facilities, the need for in-house data analysis and manufacturing support applications is increasing, and related research has been conducted in Korea. This study proposes an integrated analysis platform for process and quality analysis that considers the characteristics of manufacturing big data databases (DB). The platform is implemented in two versions, Web and C/S, to enhance accessibility; it performs template-based quality analysis and real-time monitoring. Users can upload data from a local PC or DB and run analyses by combining single analysis modules into templates however they want, since the platform is not optimized for a particular manufacturing process. Java and R are used as the development languages for ease of system supplementation. The platform is expected to be available at a low price and to advance quality analysis capability in SMEs.
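A typical "single analysis module" of the kind such a template-based quality platform combines is a control chart check. The sketch below is a generic Shewhart individuals chart, not a module of the paper's platform; the measurements are invented, and 1.128 is the standard d2 constant for moving ranges of size 2.

```python
# Shewhart individuals chart sketch: estimate sigma from the mean moving range
# and flag samples outside the 3-sigma limits. Data are illustrative.
import numpy as np

def individuals_chart(x):
    x = np.asarray(x, float)
    mr = np.abs(np.diff(x))                  # moving ranges between samples
    center = x.mean()
    sigma_est = mr.mean() / 1.128            # d2 constant for subgroup size 2
    ucl, lcl = center + 3 * sigma_est, center - 3 * sigma_est
    out_of_control = np.where((x > ucl) | (x < lcl))[0]
    return center, lcl, ucl, out_of_control

x = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 15.0, 10.0]   # e.g. a measured dimension
center, lcl, ucl, ooc = individuals_chart(x)
print(f"CL={center:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  flagged={ooc.tolist()}")
```

In a template, a module like this would consume the output of an upstream data-loading module and feed the flagged indices to a monitoring view.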

Research Progress and Development of Technology in Tourism Research: A Bibliometric Analysis

  • Zhong, Lina;Zhu, Mengyao;Sun, Sunny;Law, Rob / Journal of Smart Tourism / Vol.1 No.2 / pp.3-12 / 2021
  • The interaction between technology and tourism has been a dynamic research area recently. This study aims to review the progress and development of technology in tourism research via a bibliometric analysis. We derive the source data from the Web of Science (WoS) core collection and use CiteSpace for bibliometric analysis, including countries, institutions, authors, categories, references, and keywords. The analysis results are as follows: i) The number of published articles on the role of technology in tourism has increased in recent years. ii) Technology-related articles in tourism are abundant in Tourism Management, Journal of Travel Research, and Annals of Tourism Research. iii) The countries with the most contributions are China, the US, and the UK. The most active institutions are the Hong Kong Polytechnic University, University of Central Florida, Bournemouth University, University of Queensland, and Kyung Hee University. iv) The reference analysis results identify eight extensively researched topics from the most cited papers, and the keyword burst analysis results present an emerging trend. This study identifies the effect and development of technology in tourism research. Our findings provide implications for researchers about the current research focus of technology and the future research trend of technology in the tourism field.

선박 추진용 2행정 저속엔진의 고장모드 데이터 개발 및 LSTM 알고리즘을 활용한 특성인자 신뢰성 검증연구 (The Study of Failure Mode Data Development and Feature Parameter's Reliability Verification Using LSTM Algorithm for 2-Stroke Low Speed Engine for Ship's Propulsion)

  • 박재철;권혁찬;김철환;장화섭 / 대한조선학회논문집 / Vol.60 No.2 / pp.95-109 / 2023
  • In the 4th industrial revolution, changes in the technological paradigm have had a direct impact on ship maintenance systems. The 2-stroke low-speed engine system is integrated with the core equipment required for propulsive power. Condition-Based Maintenance (CBM) is defined as a technology that replaces existing calendar-based or running-time-based maintenance with predictive maintenance by monitoring the condition of machinery and diagnosing/prognosing failures. In this study, we established our own framework for CBM technology development, carried out engineering-based failure analysis, data development and management, and data feature analysis and pre-processing, and verified the reliability of the failure mode DB using an LSTM algorithm. We developed various simulated failure mode scenarios for a 2-stroke low-speed engine and produced data on onshore test beds. For the analysis and pre-processing of the normal and abnormal status data acquired through the failure mode simulation experiments, various Exploratory Data Analysis (EDA) techniques and multivariate statistical analysis were used to extract not only performance and efficiency data of the 2-stroke low-speed engine but also key feature data. In addition, by developing an LSTM classification algorithm, we verified the reliability of the various failure mode data with time-series characteristics.
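A pre-processing step any LSTM-based failure-mode classifier needs is slicing the multivariate sensor stream into fixed-length windows shaped (samples, timesteps, features). The sketch below shows only that step; the sensor names, window length, and labels are illustrative assumptions, not the paper's setup.

```python
# Sketch: build sliding windows from an engine-sensor time series for an LSTM
# classifier. A window inherits the failure-mode label of its last time step.
import numpy as np

def make_windows(series, labels, win=4, step=2):
    """series: (T, F) sensor matrix; labels: (T,) per-step failure-mode label."""
    X, y = [], []
    for start in range(0, len(series) - win + 1, step):
        X.append(series[start:start + win])
        y.append(labels[start + win - 1])
    return np.stack(X), np.array(y)

T, F = 20, 3                       # e.g. exhaust temp, rpm, scavenge pressure
rng = np.random.default_rng(0)
series = rng.normal(size=(T, F))   # stand-in for normalized sensor readings
labels = np.r_[np.zeros(12), np.ones(8)].astype(int)  # 0 = normal, 1 = failure mode

X, y = make_windows(series, labels, win=4, step=2)
print(X.shape, y.shape)            # (batch, time, features) input for the LSTM
```

The resulting arrays feed directly into any recurrent classifier; the window length and stride trade off temporal context against the number of training samples.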

Analysis of Computational Science and Engineering SW Data Format for Multi-physics and Visualization

  • Ryu, Gimyeong;Kim, Jaesung;Lee, Jongsuk Ruth / KSII Transactions on Internet and Information Systems (TIIS) / Vol.14 No.2 / pp.889-906 / 2020
  • The analysis of multi-physics systems and the visualization of simulation data are crucial and difficult tasks in computational science and engineering. In Korea, the Korea Institute of Science and Technology Information (KISTI) developed EDISON, a web-based computational science simulation platform, now in its ninth year of service. Hitherto, the EDISON platform has focused on providing a robust simulation environment and various computational science analysis tools. However, as issues around collaborative research grow, data format standardization has become more important. In addition, as the visualization of simulation data becomes more important for users' understanding, the need to analyze the input/output data of each software package has increased. It is therefore necessary to organize the data formats and metadata of the representative software provided by EDISON. In this paper, we analyzed computational fluid dynamics (CFD) and computational structural dynamics (CSD) simulation software in the field of mechanical engineering, where several physical phenomena (fluids, solids, etc.) interact. Additionally, to visualize various simulation result data, we used existing web visualization tools developed by third parties. In conclusion, based on the analysis of these data formats, it is possible to provide a foundation for multi-physics analysis and a web-based visualization environment that will let users focus more conveniently on simulation.
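The metadata organization the abstract calls for can be pictured as a small common record that a web visualizer checks before rendering any solver's output. The field names below are illustrative assumptions, not the EDISON schema.

```python
# Sketch: a minimal common metadata record for CFD/CSD result files, with a
# validation check of the keys a web visualization tool would need.
import json

record = {
    "solver": {"name": "example_cfd_solver", "version": "1.2", "domain": "CFD"},
    "mesh": {"format": "unstructured", "cells": 120000},
    "fields": [
        {"name": "pressure", "unit": "Pa", "location": "cell"},
        {"name": "velocity", "unit": "m/s", "location": "node", "components": 3},
    ],
    "visualization": {"time_steps": 50},
}

def validate(rec):
    """Check the keys a generic web viewer needs before rendering."""
    assert {"solver", "mesh", "fields"} <= rec.keys()
    assert all({"name", "unit"} <= f.keys() for f in rec["fields"])
    return True

print(validate(record))
print(json.dumps(record, indent=2))
```

With such a record attached to every result file, one viewer can serve both CFD and CSD software without per-solver parsing code.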

An Automatic Urban Function District Division Method Based on Big Data Analysis of POI

  • Guo, Hao;Liu, Haiqing;Wang, Shengli;Zhang, Yu / Journal of Information Processing Systems / Vol.17 No.3 / pp.645-657 / 2021
  • Along with rapid economic development, urban areas have expanded quickly, leading to the formation of different types of urban function districts (UFDs), such as central business, residential, and industrial districts. Recognizing the spatial distributions of these districts is of great significance for managing the evolving role of urban planning and helps in developing reliable urban planning programs. In this paper, we propose an automatic UFD division method based on big data analysis of point of interest (POI) data. Considering that the distribution of POI data is unbalanced in geographic space, a dichotomy-based data retrieval method is used to improve the efficiency of the data crawling process. Further, a POI spatial feature analysis method based on the mean shift algorithm is proposed, in which data points with similar attributive characteristics are clustered to form function districts. The proposed method was thoroughly tested in an actual urban case, and the results show its superior performance: the fit to practical situations reaches 88.4%, demonstrating a reasonable UFD division result.
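The mean shift clustering at the core of the method above can be sketched with a flat kernel: each point repeatedly moves to the mean of the original points within a bandwidth, and points converging to the same mode form one cluster (here, one function district). This is a generic toy implementation with invented 2-D coordinates, not the paper's POI pipeline.

```python
# Minimal flat-kernel mean shift sketch for 2-D POI coordinates.
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            # shift each mode to the mean of original points within bandwidth
            near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    # merge modes that converged close together into cluster labels
    labels, centers = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return np.array(labels)

pts = np.array([[0, 0], [0.2, 0.1], [0.1, 0.3],    # one dense POI cluster
                [5, 5], [5.1, 4.9], [4.9, 5.2]])   # another
print(mean_shift(pts, bandwidth=1.0))
```

Unlike k-means, the number of districts is not fixed in advance; the bandwidth controls how fine-grained the resulting division is.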