• Title/Summary/Keyword: Information codes


A Product Model Centered Integration Methodology for Design and Construction Information (프로덕트 모델 중심의 설계, 시공 정보 통합 방법론)

  • Lee Keun-Hyoung;Kim Jae-Jun
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.99-106
    • /
    • 2002
  • Research on the integration of design and construction information in earlier eras focused on conceptual data models. The development and widespread use of commercial database management systems led many researchers to design database schemas that clarify the relationships among non-graphic data items. Although this work became the foundation for subsequent research, it did not utilize the graphic data available from CAD systems, which were already widely used. The 4D CAD concept suggests a way of integrating graphic data with schedule data. Although this integration opened a new possibility, it remains limited by its data dependency on a specific application. This research suggests a new approach to integrating design and construction information, the 'Product Model Centered Integration Methodology'. The methodology is established through a preliminary study of existing 4D CAD-based approaches, followed by the development and application of the new integration methodology itself. A 'Design Component' can be converted into digital form by an object-based CAD system, and 'Unified Object-based Graphic Modeling' shows how to model a graphic product model with such a system. Because the reusability of design information in later stages depends on how the CAD model is created, modeling guidelines and specifications are suggested. A prototype system for integration, management, and exchange is then presented, built on a 'Product Frameworker' and a 'Product Database' that supports multiple viewpoints. A 'Product Data Model' is designed, and the main data workflows are represented with an 'Activity Diagram', one of the UML diagrams. These artifacts can be used to write program code and develop a prototype that automatically creates activity items in an actual schedule management system. Through validation, the 'Product Model Centered Integration Methodology' is proposed as a new approach to the integration of design and construction information.
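A minimal sketch of the kind of product-to-schedule linking the abstract describes: design components exported from an object-based CAD model are grouped and mapped to activity items for a schedule management system. All class and field names here (DesignComponent, ScheduleActivity, generate_activities) are hypothetical illustrations, not the paper's actual Product Data Model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DesignComponent:
    """A graphic product-model object exported from an object-based CAD system."""
    component_id: str
    component_type: str  # e.g. "column", "slab", "wall"
    level: str           # building storey the component belongs to

@dataclass
class ScheduleActivity:
    """An activity item to be created in a schedule management system."""
    activity_id: str
    description: str
    linked_components: List[str]

def generate_activities(components: List[DesignComponent]) -> List[ScheduleActivity]:
    """Group components by (type, level) and create one activity per group,
    mimicking the automatic creation of activity items from the product model."""
    groups = {}
    for c in components:
        groups.setdefault((c.component_type, c.level), []).append(c.component_id)
    return [
        ScheduleActivity(
            activity_id=f"ACT-{ctype}-{level}",
            description=f"Install {ctype}s on {level}",
            linked_components=ids,
        )
        for (ctype, level), ids in sorted(groups.items())
    ]

if __name__ == "__main__":
    model = [
        DesignComponent("C-01", "column", "1F"),
        DesignComponent("C-02", "column", "1F"),
        DesignComponent("S-01", "slab", "2F"),
    ]
    for act in generate_activities(model):
        print(act)
```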


Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. Object-oriented (OO) concepts have reduced the effort of reusing pre-existing code and have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language (UML) has become a de facto standard for information system designers since it provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, a UML conceptual model captures the attributes and methods that guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. Representing part-whole relationships is natural in real-world domains, since many physical objects are perceived in part-whole terms; even abstract concepts such as roles are easily identified through part-whole perception. A part-whole representation in UML therefore seems reasonable and useful. However, its use is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research on such procedural knowledge is meaningful and timely, because misperceptions of part-whole relationships are hard to filter out in initial conceptual modeling and therefore degrade system usability. The current method for identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a has-a phrase constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite part-whole association; otherwise it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes are developed. To evaluate the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the modeling process itself. Traditional theories on evaluating part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations. Such qualification remains important, but without a practical alternative, posterior inspection is of limited help to modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. It is also recommended to replicate and extend the suggested use of Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
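For readers less familiar with the aggregate/composite distinction the experiment targets, the sketch below contrasts the two in code. It is an illustrative assumption of how the distinction is usually realized in an implementation, not the paper's classification procedure: in composition the whole creates and owns its parts (their lifetimes coincide), while in aggregation the whole merely references parts that exist independently.

```python
class Wheel:
    def __init__(self, position: str):
        self.position = position

class Car:
    """Composition: the Car constructs its own Wheels; they do not exist
    outside the Car and are discarded with it (strong part-whole relationship)."""
    def __init__(self):
        self.wheels = [Wheel(p) for p in ("FL", "FR", "RL", "RR")]

class Garage:
    """Aggregation: the Garage only references Cars created elsewhere;
    the Cars outlive the Garage (weak part-whole relationship)."""
    def __init__(self, cars):
        self.cars = list(cars)

if __name__ == "__main__":
    my_car = Car()            # wheels exist only as parts of this car
    garage = Garage([my_car])
    del garage                # the car (and its wheels) still exist
    print(len(my_car.wheels))  # -> 4
```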

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • A KTX rolling stock is a system consisting of many machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a failure, the knowledge and experience of the maintainer determine how quickly and how well the problem is solved, so the resulting availability of the vehicle varies. Although problem solving generally follows fault manuals, experienced and skilled professionals can diagnose and act quickly by applying personal know-how. Because this knowledge is tacit, it is difficult to pass on completely to successors, and earlier studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research targeting the KTX rolling stock most commonly used on main lines, or systems that extract the meaning of text and search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed from rolling stock failure data collected between 2015 and 2017, and a separate integrated dictionary covering essential terminology and failure codes was built to reflect the specialized vocabulary of the railway rolling stock sector. Given a new failure, past cases are retrieved from the case base, the three most similar failure cases are extracted, and their actual actions are proposed as a diagnostic guide. To overcome the limitation of keyword-matching case retrieval in previous case-based expert system studies on rolling stock failures, this study applies several dimensionality reduction techniques that account for the semantic relationships among failure descriptions when computing similarity, and verifies their usefulness through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract the characteristics of failures, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, analysis of variance confirmed that the performance differences among five algorithms, including a baseline that randomly extracts failure cases with the same failure code and a baseline that applies cosine similarity directly to word vectors, were statistically significant. In addition, performance was examined for different numbers of reduced dimensions to derive settings suitable for practical application. The analysis showed that the word-based cosine similarity outperformed the dimensionality reduction variants using NMF and LSA, while the algorithm using Doc2Vec performed best. In terms of dimensionality, performance improved as the number of dimensions increased, up to an appropriate level. This study confirms the usefulness of methods for extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized KTX rolling stock domain, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using such text data are still scarce in environments, like the one addressed here, with many specialized terms and limited access to data. In this regard, it is significant that this study first presents an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. It is expected to serve as a basic study for developing diagnostic systems that can be used immediately on site.
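A minimal sketch of the retrieval step described above: failure descriptions are vectorized, reduced with LSA (TruncatedSVD), and the three most similar past cases are returned by cosine similarity. The field names and the toy case base are hypothetical; the paper's actual integrated dictionary, failure codes, and the Doc2Vec/NMF variants are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy case base: (failure description, action taken) pairs, invented for illustration.
case_base = [
    ("traction motor overheating alarm on bogie 2", "inspect cooling fan and replace thermal sensor"),
    ("pantograph contact loss at high speed", "check carbon strip wear and adjust uplift force"),
    ("door closing fault on car 5", "clean door rail and recalibrate limit switch"),
    ("brake pressure drop in rear unit", "inspect air pipe joints for leakage"),
]
descriptions = [desc for desc, _ in case_base]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(descriptions)

# LSA: reduce the term space to a small number of latent dimensions.
svd = TruncatedSVD(n_components=3, random_state=0)
reduced = svd.fit_transform(tfidf)

def retrieve_top3(new_failure: str):
    """Return the three most similar past cases and their recommended actions."""
    query = svd.transform(vectorizer.transform([new_failure]))
    sims = cosine_similarity(query, reduced)[0]
    ranked = sorted(range(len(case_base)), key=lambda i: sims[i], reverse=True)[:3]
    return [(descriptions[i], case_base[i][1], round(float(sims[i]), 3)) for i in ranked]

if __name__ == "__main__":
    for desc, action, score in retrieve_top3("overheating alarm from traction motor"):
        print(score, "|", desc, "->", action)
```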

Cache Performance Analysis of Multiprocessor Systems for OLTP Applications based on a Memory-Resident DBMS (메모리 상주 DBMS 기반의 OLTP 응용을 위한 다중프로세서 시스템 캐쉬 성능 분석)

  • Chung, Yong-Wha;Hahn, Woo-Jong;Yoon, Suk-Han;Park, Jin-Won;Lee, Kang-Woo;Kim, Yang-Woo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.4
    • /
    • pp.383-392
    • /
    • 2000
  • Currently, multiprocessors are evaluated almost exclusively with scientific applications. Commercial applications are rarely explored because it is difficult to obtain the source code of commercial DBMSs, and even when the source code is available, as for POSTGRES, understanding it well enough to perform detailed, meaningful performance evaluations is a daunting task for computer architects. To evaluate multiprocessors with commercial applications, we have developed our own DBMS, called EZDB. EZDB is a parallelized DBMS, loosely inspired by POSTGRES and running on top of a software architecture simulator, capable of executing parallel programs written in SQL. Contrary to POSTGRES, EZDB is not intended as a prototype for a production-quality DBMS; its purpose is to easily run and evaluate the performance of commercial applications on multiprocessor architectures. To illustrate the usefulness of EZDB, we present cache performance data collected for the TPC-B benchmark on a shared-memory multiprocessor. The simulation results show that the data structures exhibit unique sharing characteristics and that their locality properties and working sets differ markedly from those of scientific applications.
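EZDB itself is not publicly available, so the sketch below only illustrates the shape of a TPC-B transaction profile of the kind used in the evaluation (one balance update each on account, teller, and branch plus a history insert), with SQLite standing in for the DBMS. The table and column names follow the common TPC-B convention and are not taken from the paper.

```python
import sqlite3

def setup(conn):
    conn.executescript("""
        CREATE TABLE branches (bid INTEGER PRIMARY KEY, bbalance INTEGER);
        CREATE TABLE tellers  (tid INTEGER PRIMARY KEY, bid INTEGER, tbalance INTEGER);
        CREATE TABLE accounts (aid INTEGER PRIMARY KEY, bid INTEGER, abalance INTEGER);
        CREATE TABLE history  (tid INTEGER, bid INTEGER, aid INTEGER, delta INTEGER);
        INSERT INTO branches VALUES (1, 0);
        INSERT INTO tellers  VALUES (1, 1, 0);
        INSERT INTO accounts VALUES (1, 1, 0);
    """)

def tpcb_transaction(conn, aid, tid, bid, delta):
    """One TPC-B style transaction: three balance updates and a history record."""
    cur = conn.cursor()
    cur.execute("UPDATE accounts SET abalance = abalance + ? WHERE aid = ?", (delta, aid))
    cur.execute("SELECT abalance FROM accounts WHERE aid = ?", (aid,))
    balance = cur.fetchone()[0]
    cur.execute("UPDATE tellers SET tbalance = tbalance + ? WHERE tid = ?", (delta, tid))
    cur.execute("UPDATE branches SET bbalance = bbalance + ? WHERE bid = ?", (delta, bid))
    cur.execute("INSERT INTO history VALUES (?, ?, ?, ?)", (tid, bid, aid, delta))
    conn.commit()
    return balance

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    setup(conn)
    print(tpcb_transaction(conn, aid=1, tid=1, bid=1, delta=100))  # -> 100
```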


Isogeometric Analysis of Mindlin Plate Structures Using Commercial CAD Codes (상용 CAD와 연계한 후판 구조의 아이소-지오메트릭 해석)

  • Lee, Seung-Wook;Koo, Bon-Yong;Yoon, Min-Ho;Lee, Jae-Ok;Cho, Seon-Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.24 no.3
    • /
    • pp.329-335
    • /
    • 2011
  • The finite element method (FEM) has been used in many fields of mathematics and engineering. However, the FEM has difficulty describing geometric shapes exactly because of its piecewise linear discretization. Recently, a so-called isogeometric analysis method that uses non-uniform rational B-spline (NURBS) basis functions has been developed. NURBS can describe the geometry exactly and also serve as basis functions for the response analysis. Nevertheless, constructing the NURBS basis functions for analysis is as costly as the meshing process in the FEM. Since the isogeometric method shares geometric data with CAD, it is possible to import model data directly from commercial CAD tools. In this paper, we use the Rhinoceros 3D software to create CAD models and export them as STEP files. The knot vectors and control points of the NURBS are then used in the isogeometric analysis. Through several numerical examples, the accuracy of the isogeometric method is compared with that of the FEM, and the efficiency of the isogeometric method, which includes CAD and CAE in a unified framework, is verified.
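To make the role of the knot vector and control points concrete, the sketch below evaluates B-spline basis functions with the Cox-de Boor recursion and combines them with weights into a NURBS curve point. It is a generic textbook illustration under standard definitions, not the isogeometric Mindlin-plate formulation of the paper.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, p, knots, control_points, weights):
    """Evaluate a NURBS curve point as a weighted rational combination of basis functions."""
    numer = np.zeros(len(control_points[0]))
    denom = 0.0
    for i in range(len(control_points)):
        r = bspline_basis(i, p, u, knots) * weights[i]
        numer += r * np.asarray(control_points[i], dtype=float)
        denom += r
    return numer / denom

if __name__ == "__main__":
    # Quadratic NURBS quarter circle (exact with these weights), a shape that
    # piecewise-linear FEM meshes can only approximate.
    knots = [0, 0, 0, 1, 1, 1]
    pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    w = [1.0, np.sqrt(2) / 2, 1.0]
    print(nurbs_point(0.5, 2, knots, pts, w))  # a point on the unit circle
```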

A Design on the Multimedia Fingerprinting code based on Feature Point for Forensic Marking (포렌식 마킹을 위한 특징점 기반의 동적 멀티미디어 핑거프린팅 코드 설계)

  • Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.4
    • /
    • pp.27-34
    • /
    • 2011
  • This paper presents the design of a dynamic multimedia fingerprinting code used as an anti-collusion code (ACC) for the protection of multimedia content. Conventional ACC fingerprinting codes are designed mathematically by transforming the incidence matrix of a BIBD into its complement matrix, increasing k to k+1. A codevector of the complement matrix is assigned as the fingerprinting code for a user's authority and embedded into the content. In the proposed algorithm, feature points are extracted from the content the user purchased, and the dynamic multimedia fingerprinting code is designed based on them. ACC candidate codes that satisfy the BIBD conditions on v and k+1 are registered in the codebook, and a matrix (hereafter called the "Rhee matrix") is then generated under the λ+1 condition. In the experimental results, the codevectors of the Rhee matrix generated from the content's feature points keep k within the confidence interval at significance level (1-α). The Euclidean distances between rows and between columns of the Rhee matrix yield the same k value as the complement matrices based on BIBD and graphs. Moreover, the first row and column of the Rhee matrix serve as an initial firing vector and as a forensic mark for content protection. Because the connections among the remaining codevectors are recorded in the codebook, tracing a colluded code does not require computing correlation coefficients between the original fingerprinting code and the colluded code; searching the codebook is enough, so tracing the colluders is easy. Thus, the Rhee matrix generated in this paper offers better robustness and fidelity than the mathematically generated BIBD-based ACC matrix.
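To make the conventional BIBD-based ACC construction referred to above concrete, the sketch below builds the incidence matrix of a small (v=7, k=3, λ=1) design and takes its bit complement, which is commonly reported to yield a (k-1)-resilient AND anti-collusion code. The specific design and column assignment are standard textbook choices, an assumption for illustration only, and not the paper's feature-point-driven Rhee matrix construction.

```python
import numpy as np
from itertools import combinations

# Blocks of a (7, 3, 1) BIBD (cyclic construction from the difference set {0, 1, 3} mod 7).
blocks = [sorted(((0 + s) % 7, (1 + s) % 7, (3 + s) % 7)) for s in range(7)]

# Incidence matrix: rows = points (v = 7), columns = blocks (b = 7).
incidence = np.zeros((7, 7), dtype=int)
for j, block in enumerate(blocks):
    for point in block:
        incidence[point, j] = 1

# Conventional AND-ACC codebook: the bit complement of the incidence matrix;
# each column is one user's fingerprinting codevector.
complement = 1 - incidence

if __name__ == "__main__":
    print(complement)
    # Collusion tracing property: the bitwise AND of any two colluders'
    # codevectors is distinct, so the colluding pair can be found by a codebook search.
    patterns = {}
    for a, b in combinations(range(7), 2):
        key = tuple(complement[:, a] & complement[:, b])
        patterns.setdefault(key, []).append((a, b))
    print("all 2-colluder AND patterns unique:", all(len(v) == 1 for v in patterns.values()))
```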

A Study on the Problems and Policy Implementation for Open-Source Software Industry in Korea: Soft System Methodology Approach (소프트시스템 모델 방법론을 통해 진단한 국내 공개 SW 산업의 문제점과 정책전략 연구)

  • Kang, Songhee;Shim, Dongnyok;Pack, Pill Ho
    • The Journal of Society for e-Business Studies
    • /
    • v.20 no.4
    • /
    • pp.193-208
    • /
    • 2015
  • In the knowledge-based society, information technology (IT) has played a key role in economic growth, and in recent years the source of value creation in the IT industry has notably shifted from hardware to software. Among software products, the economic potential of open source has been recognized by many government agencies. Open source refers to software code created through the voluntary, open participation of developers worldwide, and many policies promoting open source activities have been implemented to accelerate growth in the IT industry. In many cases, however, and especially in Korea, the policies promoting the open source industry and its ecosystem have not been considered successful. This study therefore identifies practical reasons for the low performance of the Korean open source industry and suggests pragmatic requisites for effective open source policy. For this purpose, it applies the soft systems methodology (SSM), which is frequently used in academia and industry for problem solving, and links the identified problems with corresponding policy solutions. Given the concerns the Korean open source sector now faces, this study suggests the need for three kinds of government policy addressing different dimensions of the industry: the research and development (R&D) side, the supply side, and the computing environment side. The implications of this research will contribute to practical policy solutions for boosting the open source industry in Korea.

Patterns of Unintentional Domestic Injuries in Korea (우리나라 주택 내에서 발생하는 비의도적 손상의 양상)

  • Lee, Eun-Jung;Lee, Jin-Seok;Kim, Yoon;Park, Kun-Hee;Eun, Sang-Jun;Suh, Soo-Kyung;Kim, Yong-Ik
    • Journal of Preventive Medicine and Public Health
    • /
    • v.43 no.1
    • /
    • pp.84-92
    • /
    • 2010
  • Objectives: To investigate the patterns of unintentional home injuries in Korea. Methods: The study population comprised 12,382,088 people who used National Health Insurance services for injuries (main diagnosis codes S00 to T28) during 2006. Stratified samples (n=459,501) were randomly selected by sex, age group, and severity of injury. A questionnaire was developed based on the International Classification of External Causes of Injury, and 18,000 cases surveyed by telephone were analyzed after being projected onto the population in proportion to the response rates of their strata. Domestic injury cases were then selected for analysis. Results: Domestic injuries (n=3,804) comprised 21.1% of all daily-life injuries during 2006. Women were vulnerable to home injuries, and the elderly and those with lower income (medical-aid users) tended to suffer more severe injuries. Injuries occurred most often due to slipping falls (33.9%), overexertion (15.3%), falls (9.5%), and stumbles (9.4%), with severe injury most often resulting from slipping falls, falls, and stumbles. Increasing age correlated with domestic injury-related disability. Conclusions: The present findings provide basic information for developing home injury prevention strategies, with a focus on the elderly.
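As a rough illustration of the projection step described in the Methods (surveyed cases are weighted back to the source population within each stratum), the sketch below computes hypothetical stratum weights and projected counts. The strata names and all numbers are invented; the study's real strata were defined by sex, age group, and severity.

```python
# Hypothetical strata: population case counts, survey respondents, and
# home-injury cases found among those respondents.
strata = {
    "female_65plus_severe": {"population": 120_000, "responded": 450, "home_injury": 160},
    "male_20to64_mild":     {"population": 900_000, "responded": 600, "home_injury": 90},
}

def project(strata):
    """Weight respondents back to the population within each stratum,
    then sum the projected home-injury counts."""
    total = 0.0
    for name, s in strata.items():
        weight = s["population"] / s["responded"]  # cases represented by each respondent
        projected = s["home_injury"] * weight
        total += projected
        print(f"{name}: weight={weight:.1f}, projected home injuries={projected:,.0f}")
    return total

if __name__ == "__main__":
    print(f"total projected: {project(strata):,.0f}")
```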

A Hierarchical Group-Based CAVLC Decoder (계층적 그룹 기반의 CAVLC 복호기)

  • Ham, Dong-Hyeon;Lee, Hyoung-Pyo;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.26-32
    • /
    • 2008
  • Video compression schemes have been developed and used for many years, and H.264/AVC is currently the most efficient video coding standard. The H.264/AVC baseline profile adopts CAVLC (Context-Adaptive Variable Length Coding) as its entropy coding method. CAVLC achieves better compression ratios than conventional VLC (Variable Length Coding). However, because the CAVLC decoder uses many VLC tables, it requires a large area in hardware, and because it must repeatedly look up those tables, it also performs poorly in software. In this paper, we propose a new hierarchical grouping method for the VLC tables, in which the index of a code in the reconstructed tables is obtained by simple arithmetic operations. With this method, the VLC tables are accessed only once when decoding a symbol. We modeled the proposed algorithm in C, compiled it under ARM ADS1.2, and simulated it with the ARMulator. Experimental results show that the proposed algorithm reduces execution time by about 80% and 15% compared with the H.264/AVC reference software JM (Joint Model) 10.2 and a recently proposed arithmetic-operation algorithm, respectively.
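A much-simplified sketch of the kind of arithmetic, group-based lookup the abstract describes: codewords are grouped by the length of their leading-zero prefix, and the index within a group is computed from the suffix bits instead of scanning a table. The toy code below is Exp-Golomb-like and is only an assumption for illustration; it is not the actual H.264/AVC CAVLC coeff_token tables or the paper's grouping.

```python
def decode_ue(bits: str, pos: int = 0):
    """Decode one unsigned Exp-Golomb codeword starting at bit position `pos`.

    The leading-zero count selects a group of 2**n codewords; the n suffix bits
    give the index inside the group, so no VLC table scan is needed."""
    n = 0
    while bits[pos + n] == "0":          # group selection: count leading zeros
        n += 1
    suffix_start = pos + n + 1           # skip the terminating '1'
    suffix = bits[suffix_start:suffix_start + n]
    index = int(suffix, 2) if suffix else 0
    value = (1 << n) - 1 + index         # arithmetic index instead of a table lookup
    return value, suffix_start + n       # decoded value and next bit position

if __name__ == "__main__":
    # Codewords: '1' -> 0, '010' -> 1, '00111' -> 6, ...
    stream = "1" + "010" + "00111"
    pos, out = 0, []
    while pos < len(stream):
        v, pos = decode_ue(stream, pos)
        out.append(v)
    print(out)  # -> [0, 1, 6]
```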

Analysis, Detection and Prediction of some of the Structural Motifs in Proteins

  • Guruprasad, Kunchur
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.325-330
    • /
    • 2005
  • We are broadly interested in the analysis, detection, and prediction of structural motifs in proteins, in order to infer the compatibility of amino acid sequences with structures in proteins of known three-dimensional structure available in the Protein Data Bank (PDB). In this context, we are analyzing some well-characterized structural motifs. We have analyzed simple motifs such as β-turns and γ-turns by evaluating statistically significant, type-dependent amino acid positional preferences in enlarged representative protein datasets and have revised the amino acid preferences. In doing so, we identified a number of 'unexpected' isolated β-turns with a proline residue at position (i+2). We extended the study to the identification of multiple turns, continuous turns, and peptides corresponding to combinations of individual β- and γ-turns, and examined the hydrogen-bond interactions likely to stabilize these peptides. This led us to develop a database of structural motifs in proteins (DSMP) that primarily allows queries over its fields for well-characterized motifs such as helices, β-strands, turns, β-hairpins, β-α-β units, ψ-loops, β-sheets, and disulphide bridges. We have recently implemented this information for all entries in the current PDB in a relational database called ODSMP, built with Oracle9i so that it is easy to update and maintain, and added a few additional structural motifs. We have also developed another relational database, PSSARD, of amino acid sequences and their associated secondary structure for representative proteins in the PDB. This database allows flexible queries on the compatibility of PDB amino acid sequences with a 'user-defined' super-secondary structure conformation and vice versa, and it has been extended to cover nearly 23,000 protein crystal structures in the PDB. Further, we have analyzed the 'structural plasticity' associated with the β-propeller structural motif and developed a method to automatically detect β-propellers from PDB codes. We have also evaluated the accuracy and consistency of predicting β- and γ-turns in proteins using the residue-coupled model. I will discuss the results of this work and describe the databases and software applications that have been developed.
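A minimal sketch of a commonly used geometric first filter for β-turn detection (four consecutive residues whose Cα(i) to Cα(i+3) distance falls below about 7 Å), checked over a structure with Biopython. The 7 Å cutoff and the omission of the usual non-helical check on residues i+1 and i+2 are simplifying assumptions, not the exact criteria used in the study, and the file name is a placeholder.

```python
import numpy as np
from Bio.PDB import PDBParser

def beta_turn_candidates(pdb_file, chain_id="A", cutoff=7.0):
    """Return residue number pairs (i, i+3) whose C-alpha distance is below the
    cutoff, a common first filter for beta-turn detection."""
    structure = PDBParser(QUIET=True).get_structure("x", pdb_file)
    chain = structure[0][chain_id]
    residues = [r for r in chain if "CA" in r]  # residues that have a C-alpha atom
    turns = []
    for i in range(len(residues) - 3):
        ca_i = residues[i]["CA"].coord
        ca_i3 = residues[i + 3]["CA"].coord
        if np.linalg.norm(ca_i - ca_i3) < cutoff:
            turns.append((residues[i].id[1], residues[i + 3].id[1]))
    return turns

if __name__ == "__main__":
    # Any local PDB file will do; "1abc.pdb" is a placeholder name.
    for start, end in beta_turn_candidates("1abc.pdb"):
        print(f"possible turn spanning residues {start}-{end}")
```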
