• Title/Summary/Keyword: Process Data


Development of Optimal Facility Management (FM) Process Using Spatial-data-based Mean Time Between Failure (MTBF) Analysis (공간정보 기반 MTBF 분석을 활용한 최적의 FM 프로세스 개발)

  • Yoon, Jonghan;Cha, Heesung
    • Korean Journal of Construction Engineering and Management / v.19 no.3 / pp.43-51 / 2018
  • The Facility Management (FM) phase is the most crucial phase of building lifecycle management with respect to building value and life-cycle cost. Nevertheless, a systematic and rational FM process has not yet been established, so facility value and cost are not managed through accurate and proactive FM. This is because there has been little work on constructing an optimal FM process based on rational analysis of FM data. The purpose of this study is to provide an optimal FM process with a quantitative FM data analysis method that uses spatial data. The study investigated the existing FM data structure and derived its limitations from both expert interviews and analysis of practical FM materials. As a solution to these limitations, the study proposes an optimal FM process based on MTBF (Mean Time Between Failures), a quantitative FM data analysis method, and the effect of the proposed process was validated with a case study. The process is expected to enable rational and objective FM data analysis, resulting in accurate and proactive FM, and to serve as useful basic data for developing an effective system for the FM process.
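The abstract does not include the analysis itself; as a minimal illustration of the MTBF step it describes, the sketch below assumes hypothetical failure records tagged with a spatial zone and computes MTBF per zone as total operating time divided by the number of failures observed in that zone.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical failure records: (spatial zone id, failure timestamp).
failures = [
    ("ZONE-3F-HVAC", datetime(2017, 1, 12)),
    ("ZONE-3F-HVAC", datetime(2017, 5, 3)),
    ("ZONE-3F-HVAC", datetime(2017, 11, 21)),
    ("ZONE-1F-PUMP", datetime(2017, 7, 2)),
]

# Observation window over which operating hours are accumulated.
window_start, window_end = datetime(2017, 1, 1), datetime(2018, 1, 1)
operating_hours = (window_end - window_start).total_seconds() / 3600.0

counts = defaultdict(int)
for zone, _ in failures:
    counts[zone] += 1

# MTBF = total operating time / number of failures, reported per spatial zone.
for zone, n in counts.items():
    mtbf = operating_hours / n
    print(f"{zone}: {n} failures, MTBF = {mtbf:.1f} h")
```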

Tabu Search-Genetic Process Mining Algorithm for Discovering Stochastic Process Tree (확률적 프로세스 트리 생성을 위한 타부 검색 -유전자 프로세스 마이닝 알고리즘)

  • Joo, Woo-Min;Choi, Jin Young
    • Journal of Korean Society of Industrial and Systems Engineering / v.42 no.4 / pp.183-193 / 2019
  • Process mining is an analytical technique aimed at obtaining useful information about a process by extracting a process model from an event log. However, most existing process models are deterministic because they do not include stochastic elements such as the occurrence probabilities or execution times of activities. The available information is therefore limited, which restricts analysis and understanding of the process. It is also important to develop an efficient methodology for discovering the process model. Although the genetic process mining algorithm is one of the methods that can handle noisy data, it requires a large amount of computation time when applied to large-volume data. To resolve these issues, this paper defines a stochastic process tree and proposes a tabu search-genetic process mining (TS-GPM) algorithm for it. Specifically, we define a two-dimensional array chromosome to represent a stochastic process tree, a fitness function, a procedure for generating a stochastic process tree, and a model trace as a string of activities generated from the process tree. Furthermore, by storing model traces with low fitness values in the tabu list and comparing candidates against it, we prevent duplicate searches of process trees with low fitness values. To verify the performance of the proposed algorithm, a numerical experiment was performed using two kinds of event log data from previous research. The results show that the suggested TS-GPM algorithm outperforms the GPM algorithm in terms of fitness and computation time.
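The paper's algorithm operates on stochastic process trees; the sketch below illustrates only the tabu-list idea on a heavily simplified candidate representation (a trace string instead of a process tree) with a placeholder fitness function, so every name and parameter in it is an assumption rather than the authors' implementation.

```python
import random

ACTIVITIES = "abcd"  # toy activity alphabet; a "model trace" is a string of activities

def random_candidate(length):
    """Stand-in for generating a candidate process tree; here just a trace string."""
    return "".join(random.choice(ACTIVITIES) for _ in range(length))

def mutate(trace):
    """Minimal genetic operator: replace one activity at a random position."""
    i = random.randrange(len(trace))
    return trace[:i] + random.choice(ACTIVITIES) + trace[i + 1:]

def fitness(trace, log):
    """Placeholder fitness: fraction of log traces the candidate replays exactly."""
    return sum(t == trace for t in log) / len(log)

def tabu_genetic_search(log, generations=50, pop_size=20, tabu_threshold=0.1):
    tabu = set()                       # model traces of low-fitness candidates
    best, best_fit = None, -1.0
    trace_len = len(log[0])
    population = [random_candidate(trace_len) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for cand in population:
            if cand in tabu:           # skip duplicated low-fitness searches
                continue
            f = fitness(cand, log)
            if f > best_fit:
                best, best_fit = cand, f
            if f < tabu_threshold:
                tabu.add(cand)         # remember poor candidates in the tabu list
            else:
                scored.append((f, cand))
        parents = [c for _, c in sorted(scored, reverse=True)[:pop_size // 2]]
        if not parents:
            parents = [random_candidate(trace_len)]
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return best, best_fit

event_log = ["abcd", "abcd", "acbd", "abcd"]
print(tabu_genetic_search(event_log))
```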

Cost-Efficient and Automatic Large Volume Data Acquisition Method for On-Chip Random Process Variation Measurement

  • Lee, Sooeun;Han, Seungho;Lee, Ikho;Sim, Jae-Yoon;Park, Hong-June;Kim, Byungsub
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.2 / pp.184-193 / 2015
  • This paper proposes a cost-efficient and automatic method for acquiring large volumes of data from a test chip without expensive equipment in order to characterize random process variation in an integrated circuit. The method requires only a test chip, a personal computer, a cheap digital-to-analog converter, a controller, and multimeters, so large-volume measurement can be performed on an office desk at low cost. To demonstrate the proposed method, we designed a test chip with a current-mode logic driver and an array of 128 current mirrors that mimic the random process variation of the driver's tail current mirror. Using the method, we characterized the random process variation of the driver's output voltage caused by the random process variation of its tail current mirror from large-volume measurement data. The statistical characteristics of the driver's output voltage calculated from the measured data are compared with Monte Carlo simulation. The differences between the measured and simulated averages and standard deviations are less than 20%, showing that random process variation can easily be characterized at low cost with the proposed automatic large-volume data acquisition method.
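As a rough illustration of the final comparison step only, the sketch below uses synthetic voltage samples in place of the measured and Monte Carlo data and reports the relative differences of the averages and standard deviations; all values are invented, not the paper's results.

```python
import random
import statistics

# Hypothetical driver output voltages (V): one list standing in for the 128-mirror
# measurements, one for Monte Carlo simulation samples. Values are illustrative only.
measured = [random.gauss(0.400, 0.012) for _ in range(128)]
simulated = [random.gauss(0.405, 0.014) for _ in range(1000)]

def relative_diff(a, b):
    """Relative difference in percent, taken with respect to the measured value."""
    return abs(a - b) / abs(a) * 100.0

mu_m, sd_m = statistics.mean(measured), statistics.stdev(measured)
mu_s, sd_s = statistics.mean(simulated), statistics.stdev(simulated)

print(f"mean  : measured {mu_m:.4f} V, simulated {mu_s:.4f} V, "
      f"diff {relative_diff(mu_m, mu_s):.1f}%")
print(f"stdev : measured {sd_m:.4f} V, simulated {sd_s:.4f} V, "
      f"diff {relative_diff(sd_m, sd_s):.1f}%")
```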

Analysis of case reports based on dental hygiene process (치위생과정 기반의 임상치위생 증례보고서 분석)

  • Lee, Su-Young;Choi, Ha-Na
    • Journal of Korean society of Dental Hygiene / v.11 no.5 / pp.749-758 / 2011
  • Objectives: The purpose of this study was to analyse case reports produced through the dental hygiene process and to provide basic data for clinical dental hygiene education. Methods: 154 case reports collected over six years were analysed. The dental hygiene process model was applied to dental hygiene diagnosis, which made the diagnoses clearer. Data analysis was performed with frequency statistics using SPSS 12.0 for Windows. Results: 1. The clients mainly comprised university students in their twenties (91.9%). 2. In the assessment phase, 100% of the clients completed the subjective data examination. 3. When the dental hygiene process model was applied to dental hygiene diagnosis, students identified 23 types of dental hygiene problems; the most frequently recorded problems were gingival bleeding, calculus, and dental plaque deposits. 4. In the dental hygiene intervention plans, fluoride application was the most frequent clinical intervention (98.1%). 5. The intervention results showed that scaling had the highest performance rate (98.7%). Conclusions: The dental hygiene process model is more useful than other diagnostic models in clinical practice based on the dental hygiene process.

One-class Classification based Fault Classification for Semiconductor Process Cyclic Signal (단일 클래스 분류기법을 이용한 반도체 공정 주기 신호의 이상분류)

  • Cho, Min-Young;Baek, Jun-Geol
    • IE interfaces / v.25 no.2 / pp.170-177 / 2012
  • Process control is essential for operating a semiconductor process efficiently. This paper considers fault classification of semiconductor cyclic signals for process control. In general, process signals take different patterns depending on the cause of the fault. If faults can be classified by cause, process control can be improved through definite and rapid diagnosis. One of the most important issues in fault classification is reaching a definite diagnosis, even when classification is performed several times. This paper proposes a method in which a one-class classifier is built for each fault cause. Hotelling's T2 chart, kNNDD (k-Nearest Neighbor Data Description), and distance-based novelty detection are used as the one-class classifiers. PCA (Principal Component Analysis) is also used to reduce the data dimension because process signals are generally very long. In the experiments, data are generated based on real signal patterns from a semiconductor process, and the proposed method is compared with SVM (Support Vector Machine). Most of the experimental results show that the proposed method using distance-based novelty detection performs well in classification and diagnosis problems.
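As an illustration of the pipeline described above (PCA for dimension reduction, then one class-wise detector per fault cause), the sketch below implements only the distance-based novelty detection variant on synthetic cyclic signals; the class structure, thresholding rule, and data are assumptions, not the paper's code.

```python
import numpy as np

def fit_pca(X, n_components):
    """PCA via SVD: returns (mean, components) for projecting long signals."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def project(X, mu, comps):
    return (X - mu) @ comps.T

class DistanceNoveltyDetector:
    """Distance-based novelty detection: a signal is accepted as belonging to a
    fault class if its distance to the class centroid is within a threshold
    learned from that class's training data."""
    def fit(self, Z, quantile=0.95):
        self.center = Z.mean(axis=0)
        d = np.linalg.norm(Z - self.center, axis=1)
        self.threshold = np.quantile(d, quantile)
        return self
    def accepts(self, z):
        return np.linalg.norm(z - self.center) <= self.threshold

# Synthetic cyclic signals (one row per cycle) for two fault causes; lengths are illustrative.
rng = np.random.default_rng(0)
fault_a = rng.normal(0.0, 1.0, size=(50, 200))
fault_b = rng.normal(3.0, 1.0, size=(50, 200))

mu, comps = fit_pca(np.vstack([fault_a, fault_b]), n_components=5)
detectors = {
    "fault A": DistanceNoveltyDetector().fit(project(fault_a, mu, comps)),
    "fault B": DistanceNoveltyDetector().fit(project(fault_b, mu, comps)),
}

# A new signal is assigned to every fault class whose one-class model accepts it.
new_signal = project(rng.normal(3.0, 1.0, size=(1, 200)), mu, comps)[0]
print({name: det.accepts(new_signal) for name, det in detectors.items()})
```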

Nursing Process of Abdominal Surgery Patients (복부수술환자의 간호과정)

  • Yoo, Hyung-Sook
    • Journal of Korean Academy of Nursing Administration / v.8 no.3 / pp.411-430 / 2002
  • Purpose: This study was conducted to develop a nursing process model for abdominal surgery patients using the nursing diagnoses of NANDA, the Nursing Interventions Classification (NIC), and the Nursing Outcomes Classification (NOC). Method: The data in the database were collected from the nursing records of sixty patients with abdominal surgery admitted to a university hospital and from open questionnaires answered by thirteen nurses. A systematic nursing process for the most common nursing diagnoses was developed through statistical analysis using database queries of the clinical database of abdominal surgery patients. Result: 51 nursing diagnoses were identified in abdominal surgery patients. The most common nursing diagnoses were, in order, Pain, Risk for Infection, Sleep Pattern Disturbance, Hyperthermia, and Altered Nutrition: Less Than Body Requirements. Linkage lists from NANDA to NIC and from NANDA to NOC, along with the nursing activities for each nursing diagnosis of abdominal surgery patients, were identified at the unit level. Conclusion: The nursing process for abdominal surgery patients comprised core nursing diagnoses, core nursing interventions, and core nursing outcomes, which provide the most reliable data in the unit and allow nurses to carry out the nursing process easily without full knowledge of the nursing language classification system. It could therefore support nurses' decision making and recording of the nursing process, especially in a computerized patient record system, if a unit nursing process model based on a standardized nursing language system containing the unit's own core nursing process data were developed.


An Operating Methodology of SPC System in LCD Industries

  • Lee, Chang-Young;Nam, Ho-Soo
    • Journal of the Korean Data and Information Science Society / v.16 no.3 / pp.507-514 / 2005
  • In this paper we consider an operating methodology for an SPC (statistical process control) system in the TFT-LCD industry. The main contents are real-time process monitoring, significance testing, outlying glass analysis, process capability analysis, and chart viewing.
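As a small illustration of two of the listed contents, the sketch below computes X-bar control limits for real-time monitoring and a Cpk process capability index; the glass-thickness measurements and specification limits are hypothetical, not taken from the paper.

```python
import statistics

def xbar_limits(subgroup_means, subgroup_ranges, n):
    """Control limits for an X-bar chart using the standard A2 factor for subgroup size n."""
    A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}[n]   # standard SPC constants
    xbar_bar = statistics.mean(subgroup_means)
    r_bar = statistics.mean(subgroup_ranges)
    return xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar

def cpk(data, lsl, usl):
    """Process capability index relative to the lower/upper specification limits."""
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical glass thickness measurements (mm) grouped into subgroups of 5.
subgroups = [[0.701, 0.703, 0.699, 0.702, 0.700],
             [0.704, 0.702, 0.701, 0.703, 0.705],
             [0.698, 0.700, 0.699, 0.701, 0.700]]
means = [statistics.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]

lcl, center, ucl = xbar_limits(means, ranges, n=5)
print(f"X-bar chart: LCL={lcl:.4f}, CL={center:.4f}, UCL={ucl:.4f}")
print("out-of-control subgroups:", [m for m in means if not lcl <= m <= ucl])
print(f"Cpk = {cpk([x for g in subgroups for x in g], lsl=0.690, usl=0.710):.2f}")
```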


A Six Sigma Methodology Using Data Mining : A Case Study of "P" Steel Manufacturing Company (데이터 마이닝 기반의 6 시그마 방법론 : 철강산업 적용사례)

  • Jang, Gil-Sang
    • The Journal of Information Systems / v.20 no.3 / pp.1-24 / 2011
  • Recently, six sigma has been widely adopted in a variety of industries as a disciplined, data-driven problem-solving methodology supported by a handful of powerful statistical tools, in order to reduce variation through continuous process improvement. Data mining has also been widely used to discover unknown knowledge from large volumes of data using modeling techniques such as neural networks, decision trees, and regression analysis. This paper proposes a six sigma methodology based on data mining for effectively and efficiently processing massive data in six sigma projects. The proposed methodology is applied to the hot stove system, a major energy-consuming process in the "P" steel company, to improve heat efficiency by reducing energy consumption. The results yield optimal operating conditions and reduce the hot stove energy cost by 15%.
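The abstract does not state which mining model was used for the hot stove case. As an illustration of one data-mining step inside such a project, the sketch below fits a decision tree (one of the techniques the paper lists) to synthetic process data and reads off a low-energy operating region; the variable names, value ranges, and response model are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical hot-stove process variables: fuel flow, air ratio, dome temperature;
# the target is energy consumption per charge. All values are synthetic.
rng = np.random.default_rng(1)
X = rng.uniform([50.0, 0.9, 1100.0], [80.0, 1.3, 1300.0], size=(500, 3))
y = 0.8 * X[:, 0] + 120.0 * (X[:, 1] - 1.05) ** 2 - 0.01 * X[:, 2] + rng.normal(0, 1, 500)

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Rank process variables by importance, then locate the leaf region with the
# lowest mean energy consumption as a candidate operating condition.
names = ["fuel_flow", "air_ratio", "dome_temp"]
print(dict(zip(names, tree.feature_importances_.round(3))))

leaf_ids = tree.apply(X)
best_leaf = min(set(leaf_ids), key=lambda leaf: y[leaf_ids == leaf].mean())
print("lowest-energy operating region centroid:",
      dict(zip(names, X[leaf_ids == best_leaf].mean(axis=0).round(3))))
```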

A Compact Divide-and-conquer Algorithm for Delaunay Triangulation with an Array-based Data Structure (배열기반 데이터 구조를 이용한 간략한 divide-and-conquer 삼각화 알고리즘)

  • Yang, Sang-Wook;Choi, Young
    • Korean Journal of Computational Design and Engineering / v.14 no.4 / pp.217-224 / 2009
  • Most divide-and-conquer implementations of Delaunay triangulation use a quad-edge or winged-edge data structure because triangles are frequently deleted and created during the merge process. However, the proposed divide-and-conquer algorithm uses an array-based data structure that is much simpler than the quad-edge data structure and requires less memory allocation. The proposed algorithm has two important features. First, the space-partitioning information is represented as a permutation vector sequence in the vertex array, so no additional data are required for the space partitioning. The permutation vector represents adaptively divided regions in two dimensions, and this two-dimensional partitioning is more efficient in the merge process than one-dimensional partitioning. Second, no edges are deleted during the merge process, so no bookkeeping of complex intermediate states for topology changes is necessary. The algorithm is described compactly with the proposed data structures and operators so that it can be implemented easily and efficiently.
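The full triangulation is too long to reproduce here, but the first feature can be illustrated: the sketch below partitions the vertex array in two dimensions purely by reordering a permutation vector, under the simplifying assumption (not specified in the abstract) that the split axis simply alternates at each level.

```python
def partition(points, order, lo, hi, axis=0):
    """Recursively partition the vertex array in two dimensions by reordering a
    permutation vector in place; no auxiliary partitioning structure is kept."""
    if hi - lo <= 3:                      # base cells of 2-3 vertices (an edge or a triangle)
        return [(lo, hi)]
    order[lo:hi] = sorted(order[lo:hi], key=lambda i: points[i][axis])
    mid = (lo + hi) // 2
    # Alternate the split axis so the regions are divided adaptively in 2D.
    return (partition(points, order, lo, mid, 1 - axis) +
            partition(points, order, mid, hi, 1 - axis))

points = [(0.1, 0.9), (0.4, 0.2), (0.8, 0.7), (0.3, 0.5),
          (0.9, 0.1), (0.6, 0.6), (0.2, 0.8), (0.7, 0.3)]
order = list(range(len(points)))          # the permutation vector over vertices
cells = partition(points, order, 0, len(points))
print("permutation vector:", order)
print("cells (index ranges into the permutation vector):", cells)
```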

Study of Script Conversion for Data Extraction of Constrained Objects

  • Choi, Chul Young
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.155-160 / 2022
  • In recent years, Unreal Engine has been increasingly included in studio animation pipelines. In this case there is more than one main software package, and it is very important to transfer data accurately between that software and Unreal Engine. Not only the animation data of the character but also the animation data of objects interacting with the character must be produced and transferred individually, and most objects that interact with the character are constrained to a part of the character. In this paper, I specify the production process for extracting animation data of constrained objects and analyze why users experience difficulties due to the complexity of the rules when executing it. Based on the flowchart defined for user convenience, I then created a program using a Python script to demonstrate that convenience. Finally, by comparing the results generated according to the manual flowchart with the results generated through the script command, it was confirmed that the data were consistent.
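The abstract does not name the source package or publish the script. The sketch below assumes Autodesk Maya as the main software and shows one plausible step such a flowchart might automate: baking constrained objects down to plain keyframes before transfer to Unreal Engine. The node types queried, the baking strategy, and the overall flow are assumptions, not the paper's program.

```python
# A rough sketch, assuming Autodesk Maya's Python module (maya.cmds) and the default
# behavior that a constraint node is parented under the object it drives. It must be
# run inside Maya; the subsequent FBX export step is omitted.
import maya.cmds as cmds

def bake_constrained_objects():
    start = cmds.playbackOptions(q=True, min=True)
    end = cmds.playbackOptions(q=True, max=True)
    constraints = cmds.ls(type="parentConstraint") or []
    targets = []
    for con in constraints:
        driven = cmds.listRelatives(con, parent=True) or []   # the constrained transform
        targets.extend(driven)
    if not targets:
        return []
    # Bake the constrained transforms so their animation becomes plain keyframes
    # that survive the transfer to Unreal Engine, then remove the constraints.
    cmds.bakeResults(targets, t=(start, end), simulation=True)
    cmds.delete(constraints)
    return targets

print(bake_constrained_objects())
```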