• Title/Summary/Keyword: Process Data Analysis

Analysis of Business Process in the SCM Sector Using Data Mining (데이터마이닝을 활용한 SCM 부문에서의 비즈니스 프로세스 분석)

  • Lee, Sang-Young;Lee, Yun-Suk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.6 s.44
    • /
    • pp.59-67
    • /
    • 2006
  • Applying BPM, a business process management tool, to the SCM sector enables efficient process management and control. BPM can also effectively integrate the processes that compose SCM. This approach makes it possible to manage and monitor the progress of SCM processes more efficiently and to establish improvement plans by analyzing process performance results. In this paper, we therefore introduce BPM into the SCM environment and present a plan for integrating SCM processes and improving business processes effectively by applying data mining techniques.

Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.251-267
    • /
    • 2015
  • The Indian Buffet Process is a stochastic process on equivalence classes of binary matrices with finitely many rows and infinitely many columns. It can serve as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian Buffet Process from a finite feature model and briefly explain its relation to the beta process. Using a Gaussian linear model, we describe three inference algorithms: Gibbs sampling, stick-breaking, and a variational method, with an application to finding features in image data. We also illustrate the use of the Indian Buffet Process in various types of analysis, such as dyadic data analysis, network data analysis, and independent component analysis.
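
  The generative scheme behind this prior can be sketched directly from the "Indian buffet" metaphor: customer i takes each previously sampled dish k with probability m_k/i (m_k is the number of earlier customers who took dish k), then tries Poisson(α/i) new dishes. A minimal NumPy sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def sample_ibp(alpha, n_customers, rng=None):
    """Draw one binary matrix Z from the Indian Buffet Process prior.

    Row i is customer i's plate: existing dish k is taken with
    probability m_k / i, then Poisson(alpha / i) new dishes are added.
    """
    rng = np.random.default_rng(rng)
    dish_counts = []  # dish_counts[k] = m_k, how many customers took dish k
    rows = []
    for i in range(1, n_customers + 1):
        row = [rng.random() < m / i for m in dish_counts]
        for k, taken in enumerate(row):
            if taken:
                dish_counts[k] += 1
        n_new = rng.poisson(alpha / i)
        dish_counts.extend([1] * n_new)   # each new dish starts with count 1
        row.extend([True] * n_new)
        rows.append(row)
    # pad earlier (shorter) rows with zeros for dishes invented later
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(alpha=2.0, n_customers=10, rng=0)
print(Z.shape)  # (10, K): the number of columns K is itself random
```

  The number of columns grows without bound in expectation (roughly α times the harmonic number of n), which is exactly what makes the process usable as a prior for infinite feature models.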

Development of Failure Reporting Analysis and Corrective Action System

  • Hong, Yeon-Woong
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2006.11a
    • /
    • pp.97-112
    • /
    • 2006
  • FRACAS (Failure Reporting, Analysis and Corrective Action System) is intended to provide management visibility and control for the reliability and maintainability improvement of hardware and associated software, through the timely and disciplined use of failure and maintenance data to generate and implement effective corrective actions that prevent failure recurrence and simplify or reduce maintenance tasks. This process applies to the acquisition, design, development, fabrication, test, and operation of military systems, equipment, and associated computer programs. This paper presents the FRACAS development process and a FRACAS system developed for defense equipment.

Principles of Multivariate Data Visualization

  • Huh, Moon Yul;Cha, Woon Ock
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.3
    • /
    • pp.465-474
    • /
    • 2004
  • Data visualization is the automated process of exploring data sets in an effort to discover the information underlying them. It provides rich visual depictions of the data and has distinct advantages over traditional data analysis techniques: it supports exploring the structure of large-scale data sets, both in the number of observations and in the number of variables, by allowing rich interaction between the data and the end user. We discuss the principles of data visualization and evaluate the characteristics of various visualization tools according to these principles.

Die Design of Drawing for the Copper Bus-bar (동부스바 인발 금형설계)

  • 권혁홍;이정로
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.11 no.4
    • /
    • pp.82-88
    • /
    • 2002
  • Copper bus-bar is made by a drawing process and used in many parts of industry. When designing a drawing die for copper bus-bar, the key design factor is the deformation of the die land caused by the drawing force and the shrink fit. In this paper, the shrink-fit value is determined using a shrink-fit analysis program written in the APDL/UIDL language of the commercial FEM package ANSYS. The shrink-fit analysis enables optimal design of the dies, taking elastic deflections into account: elastic deflection is generated both by shrink-fitting the die inserts and by the stresses computed with the DEFORM software in the drawing-process analysis. These data can be processed as load input for a finite element die-stress analysis, so process simulation and stress analysis are combined during drawing-die design. The stress analysis of the dies is used to determine the optimized dimensions of the die land.

Finding the Optimal Data Classification Method Using LDA and QDA Discriminant Analysis

  • Kim, SeungJae;Kim, SungHwan
    • Journal of Integrative Natural Science
    • /
    • v.13 no.4
    • /
    • pp.132-140
    • /
    • 2020
  • With the recent introduction of artificial intelligence (AI) technology, the use of data is increasing rapidly, and newly generated data is growing just as fast. To obtain reliable analysis results from these data, the first step is to classify them well. However, if only a single machine-learning classification technique is applied, the analysis can suffer from overfitting. To reduce or minimize problems such as overfitting caused by misclassification, it is necessary to derive an optimal classification by applying several classification techniques and comparing their results; interpreting the data with only one technique leads to weak reasoning and poor predictions. This study seeks a method for optimally classifying data by viewing the data from various perspectives and applying both linear and nonlinear classification techniques, such as LDA and QDA, as a step preceding the main analysis. To obtain reliable and sophisticated statistics from big data analysis, the meaning of each variable and the correlations between variables must be analyzed; if the data are classified in a way inconsistent with the hypothesis test from the beginning, even a well-executed analysis will yield unreliable results. In other words, before big data analysis, the data must be classified to suit the purpose of the analysis. This is a step that must be performed before reaching any analytical result, and it offers a method of optimal data classification.
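
  The LDA/QDA comparison described above can be sketched from first principles: both are Gaussian classifiers, LDA with a pooled covariance (linear boundary) and QDA with per-class covariances (quadratic boundary). A minimal NumPy-only sketch on synthetic data; the data-generating choices are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_gaussians(X, y, shared_cov):
    """Fit per-class Gaussians; shared_cov=True gives LDA, False gives QDA."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    if shared_cov:
        pooled = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
        pooled /= (len(y) - len(classes))
        covs = {c: pooled for c in classes}
    else:
        covs = {c: np.cov(X[y == c].T) for c in classes}
    priors = {c: np.mean(y == c) for c in classes}
    return classes, means, covs, priors

def predict(X, model):
    classes, means, covs, priors = model
    scores = []
    for c in classes:
        diff = X - means[c]
        inv = np.linalg.inv(covs[c])
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis terms
        logdet = np.linalg.slogdet(covs[c])[1]
        # log Gaussian density up to a shared constant, plus log prior
        scores.append(-0.5 * (maha + logdet) + np.log(priors[c]))
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
# class 0: round cluster; class 1: strongly elongated cluster, so the
# Bayes-optimal boundary is quadratic and QDA should have the edge
X0 = rng.normal([0, 0], [1.0, 1.0], size=(300, 2))
X1 = rng.normal([2, 0], [0.3, 3.0], size=(300, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 300 + [1] * 300)

for name, shared in [("LDA", True), ("QDA", False)]:
    model = fit_gaussians(X, y, shared)
    acc = np.mean(predict(X, model) == y)
    print(f"{name} training accuracy: {acc:.3f}")
```

  Comparing the two fits on the same data, as the abstract recommends, makes the covariance assumption behind each boundary explicit rather than implicit.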

HUMAN ERRORS DURING THE SIMULATIONS OF AN SGTR SCENARIO: APPLICATION OF THE HERA SYSTEM

  • Jung, Won-Dea;Whaley, April M.;Hallbert, Bruce P.
    • Nuclear Engineering and Technology
    • /
    • v.41 no.10
    • /
    • pp.1361-1374
    • /
    • 2009
  • Due to the need for data for Human Reliability Analysis (HRA), a number of data collection efforts have been undertaken by several organizations. As part of this effort, a human error analysis focused on a set of simulator records of a Steam Generator Tube Rupture (SGTR) scenario was performed using the Human Event Repository and Analysis (HERA) system. This paper summarizes the process and results of the HERA analysis, including a discussion of the usability of the HERA system for human error analysis of simulator data. Five simulator records of an SGTR scenario were analyzed with the HERA process in order to scrutinize the causes and mechanisms of human-related events. From this study, the authors confirmed that HERA is a serviceable system for qualitatively analyzing human performance from simulator data. It was possible to identify human-related events in the simulator data that affected system safety both negatively and positively, and to scrutinize the Performance Shaping Factors (PSFs) and relevant contributory factors for each identified event.

Optimization of VIGA Process Parameters for Power Characteristics of Fe-Si-Al-P Soft Magnetic Alloy using Machine Learning

  • Sung-Min, Kim;Eun-Ji, Cha;Do-Hun, Kwon;Sung-Uk, Hong;Yeon-Joo, Lee;Seok-Jae, Lee;Kee-Ahn, Lee;Hwi-Jun, Kim
    • Journal of Powder Materials
    • /
    • v.29 no.6
    • /
    • pp.459-467
    • /
    • 2022
  • Soft magnetic powder materials are used throughout industry, for example in motors and power converters. When manufacturing Fe-based soft magnetic composites, the size and shape of the soft magnetic powder and its microstructure are closely related to the magnetic properties. In this study, Fe-Si-Al-P alloy powders were manufactured under various sets of process parameters; the parameters of the vacuum induction melting gas atomization (VIGA) process were melt temperature, atomization gas pressure, and gas flow rate. Process-variable data recording minute changes were converted into six types of data for each powder recovery section and used as input variables. As output variables, six characteristics were designated by measuring the particle size, flowability, apparent density, and sphericity of the manufactured powders under each process condition. The sensitivity of the input and output variables was analyzed through the Pearson correlation coefficient, and the six powder characteristics were modeled with an artificial neural network. The prediction results were compared with those obtained through linear regression analysis and response surface methodology, respectively.
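
  The Pearson-correlation sensitivity screening mentioned above can be sketched on hypothetical process data; the variable names, values, and the assumed pressure-to-particle-size relation below are illustrative, not the paper's measurements:

```python
import numpy as np

# hypothetical process-variable samples: melt temperature (deg C),
# atomization gas pressure (bar), gas flow rate (L/min)
rng = np.random.default_rng(7)
melt_temp = rng.normal(1550, 20, 100)
gas_pressure = rng.normal(5.0, 0.5, 100)
gas_flow = rng.normal(300, 30, 100)
# assumed output: median particle size D50 falls as pressure rises
d50 = 80 - 6 * (gas_pressure - 5.0) + rng.normal(0, 1, 100)

inputs = {"melt_temp": melt_temp,
          "gas_pressure": gas_pressure,
          "gas_flow": gas_flow}
for name, x in inputs.items():
    r = np.corrcoef(x, d50)[0, 1]
    # gas_pressure should show a strong negative r; the others, near zero
    print(f"{name:12s} r = {r:+.2f}")
```

  Ranking input variables by |r| against each output in this way is a quick linear-sensitivity screen before fitting a neural network or response surface.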

Big Data Analysis and Prediction of Traffic in Los Angeles

  • Dauletbak, Dalyapraz;Woo, Jongwook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.841-854
    • /
    • 2020
  • This paper explains a method to process, analyze, and predict traffic patterns in Los Angeles County using Big Data and machine learning. The dataset comes from a popular navigation platform in the USA, which tracks road information from connected users' devices and collects reports shared by users through the app. The dataset mainly consists of traffic jams and user-reported traffic incidents such as road closures, hazards, and accidents. The major contribution of this paper is a clear view of how large-scale road traffic data can be stored and processed using the Big Data system Hadoop and its ecosystem (Hive). In addition, the analysis is illustrated with Business Intelligence visuals, and prediction with a classification machine-learning model on the sampled traffic data is presented using Azure ML. The modeling process and the results are interpreted using the metrics accuracy, precision, and recall.
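
  The three evaluation metrics named at the end of the abstract all derive from the confusion matrix; a minimal plain-Python sketch with made-up labels:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

# hypothetical labels: 1 = traffic incident, 0 = no incident
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)

accuracy = (tp + tn) / len(y_true)   # fraction of all predictions correct
precision = tp / (tp + fp)           # of predicted incidents, how many real
recall = tp / (tp + fn)              # of real incidents, how many caught
print(accuracy, precision, recall)   # 0.75 0.75 0.75
```

  Reporting precision and recall alongside accuracy matters for traffic incidents because the classes are typically imbalanced, and accuracy alone can look good while missing most incidents.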

Data-centric Smart Street Light Monitoring and Visualization Platform for Campus Management

  • Somrudee Deepaisarn;Paphana Yiwsiw;Chanon Tantiwattanapaibul;Suphachok Buaruk;Virach Sornlertlamvanich
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.3
    • /
    • pp.216-224
    • /
    • 2023
  • Smart lighting systems have become increasingly popular in the public sector because of trends toward urbanization and intelligent technologies. In this study, we designed and implemented a web application platform to explore and monitor data acquired from lighting devices at Thammasat University (Rangsit Campus, Thailand). The platform provides a convenient interface for administrative and operations staff to monitor, control, and collect data from sensors installed on campus in real time, creating geographically specific big data. Platform development focused on both back-end and front-end applications to allow a seamless process for recording and displaying data from interconnected devices. Responsible staff can interact with devices and acquire data effortlessly, minimizing workforce and human error. The collected data were analyzed using an exploratory data analysis process, and missing-data behavior caused by system outages was also investigated.
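
  The missing-data investigation mentioned at the end of the abstract can be sketched minimally: locate contiguous runs of missing readings, which would correspond to outage windows. A NumPy-only sketch on hypothetical sensor readings:

```python
import numpy as np

def missing_runs(values):
    """Return (start_index, length) pairs for each contiguous run of NaNs."""
    isnan = np.isnan(values).astype(np.int8)
    # pad with zeros so runs touching either end still produce boundaries
    edges = np.diff(np.concatenate(([0], isnan, [0])))
    starts = np.flatnonzero(edges == 1)    # mask switches off -> on
    ends = np.flatnonzero(edges == -1)     # mask switches on -> off
    return [(int(s), int(e - s)) for s, e in zip(starts, ends)]

# hypothetical hourly brightness readings with two outage gaps
readings = np.array([7.1, 7.3, np.nan, np.nan, 6.9, 7.0, np.nan, 7.2])
print(missing_runs(readings))  # [(2, 2), (6, 1)]
```

  Run lengths distinguish brief sensor dropouts from sustained system outages, which is the distinction the exploratory analysis above would need to make.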