• Title/Summary/Keyword: Modeling methodology


Response Surface Methodology based on the D-optimal Design for Cell Gap Characteristic for Flexible Liquid Crystal Display (D-optimal Design을 이용한 Flexible 액정 디스플레이용 셀 갭 특성에 대한 반응 표면 분석)

  • Ko, Young-Don;Hwang, Jeoung-Yeon;Seo, Dae-Shik;Yun, Il-Gu
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2004.11a
    • /
    • pp.510-513
    • /
    • 2004
  • This paper presents a response surface model for the cell gap in the flexible liquid crystal display (LCD) process using response surface methodology (RSM). A D-optimal design is carried out to build the design space, and the cell gap is characterized by a quadratic model. Statistical analysis is used to verify the response surface model. This modeling technique can predict how the desired response, the cell gap, varies with process conditions.
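The RSM step described above can be sketched in a few lines: fit a quadratic model to responses measured at design points. The factor levels, responses, and coefficients below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical illustration: fit a quadratic response surface
#   z = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to cell-gap measurements taken at coded design points.

def quadratic_design_matrix(x1, x2):
    """Expand two process factors into the six quadratic model terms."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Coded factor levels (e.g., two process conditions) over a 3x3 design
x1 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, -1.0, 1.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, -1.0, 0.0, 0.0, 1.0])
true_beta = np.array([5.0, 0.8, -0.5, 0.3, 0.2, 0.1])  # assumed coefficients
z = quadratic_design_matrix(x1, x2) @ true_beta        # noise-free responses

# Least-squares fit recovers the model coefficients
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(x1, x2), z, rcond=None)
print(np.round(beta, 3))
```

With the fitted coefficients in hand, the model can be evaluated at any in-range process condition to predict the cell gap, which is the predictive use the abstract describes.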


Rationally modeling collapse due to bending and external pressure in pipelines

  • Nogueira, Andre C.
    • Earthquakes and Structures
    • /
    • v.3 no.3_4
    • /
    • pp.473-494
    • /
    • 2012
  • The capacity of pipelines to resist collapse under external pressure and bending moment is a major aspect of deepwater pipeline design. Existing design codes present interaction equations that quantify pipeline capacities under such loadings; although reasonably accurate, these equations are based on empirical fitting of bending-strain data and assume a simplistic interaction with external pressure collapse. The rational model for collapse of deepwater pipelines, which are relatively thick with a diameter-to-thickness ratio less than 40, provides a unique theoretical basis since it is derived from first principles such as force equilibrium and compatibility equations. This paper presents the rational model methodology and compares predicted results with recently published full-scale experimental data on the subject. The predictive capabilities of the rational model are shown to be excellent. The methodology is extended to the problem of pipeline collapse under point load, longitudinal bending, and external pressure. Owing to its rational derivation and excellent predictive capabilities, it is recommended that design codes adopt the rational model methodology.
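The code-style interaction equations the abstract contrasts against can be sketched generically: a load case passes when a weighted sum of pressure and moment utilizations stays at or below one. The exponents and capacity values below are illustrative assumptions, not the paper's rational model or any specific code's formula.

```python
# Hypothetical sketch of a design-code-style interaction check for combined
# external pressure and bending. Exponents a, b and the capacities p_c, m_c
# are invented placeholders, not values from the paper or a real code.

def utilization(p, m, p_c, m_c, a=2.0, b=1.0):
    """Return the interaction utilization; <= 1.0 means the load case passes."""
    return (p / p_c) ** a + (m / m_c) ** b

# Example: half the collapse pressure combined with 40% of the moment capacity
u = utilization(p=15.0, m=400.0, p_c=30.0, m_c=1000.0)
print(round(u, 3))  # 0.5**2 + 0.4 = 0.65, so this case passes
```

The rational model replaces such curve-fit forms with capacities derived from equilibrium and compatibility, but the pass/fail check at the design stage takes the same shape.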

A Study on the Software Test Case Development using Systems Engineering Methodology (시스템엔지니어링 방법론을 적용한 소프트웨어 테스트 케이스 개발에 관한 연구)

  • Salim, Shelly;Shin, Junguk;Kim, Jinil
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.14 no.2
    • /
    • pp.83-88
    • /
    • 2018
  • Software has become an integral part of almost any system, driven by the ever-growing demand for automation and artificial intelligence throughout engineering domains. The complexity of software-centric systems is also increasing, which makes software testing essential in software development projects. In this study, we applied systems engineering methodology to generating software test cases. We identified similarities between the requirements analysis and traceability concepts of systems engineering and the test specification contents of software testing. In terms of acceptance testing, software test cases can be considered validation requirements. We also suggest a method to determine test order using a SysML modeling tool.
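The traceability-driven test ordering idea can be sketched as a topological sort over requirement dependencies: a test case runs only after the tests for the requirements it builds on. The requirement IDs, dependency links, and test-case names below are invented for illustration; the paper itself derives the order from a SysML model.

```python
from graphlib import TopologicalSorter

# Hypothetical sketch: requirements trace to test cases, and requirement
# dependencies induce a test execution order. All IDs here are made up.
req_depends_on = {
    "REQ-3": {"REQ-1", "REQ-2"},  # REQ-3 builds on REQ-1 and REQ-2
    "REQ-2": {"REQ-1"},
    "REQ-1": set(),
}
test_for = {"REQ-1": "TC-01", "REQ-2": "TC-02", "REQ-3": "TC-03"}

# static_order() yields each requirement only after all of its dependencies
order = [test_for[r] for r in TopologicalSorter(req_depends_on).static_order()]
print(order)
```

The same traceability table also supports the acceptance-test view in the abstract: each test case doubles as the validation requirement for the requirement it traces to.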

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, advances in IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing; big data analysis will become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each requester of the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering, data analysis technology is spreading, and big data analysis is expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster. It is considered very useful in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This requirement leads to long analysis times when topic modeling is applied to a large number of documents, and to a scalability problem: processing time increases exponentially with the number of objects analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and improves processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of each local topic from a global topic needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling methods. In this paper, we propose a topic modeling approach that solves both problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to those of full topic modeling, and we propose a reasonable method for comparing the results of the two approaches.
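The local-to-global mapping step can be sketched directly: each topic is a probability distribution over a shared vocabulary, and every local topic is assigned to the global (or RGS) topic with the highest cosine similarity. The topic-word distributions below are invented for illustration; in practice they would come from running a topic model such as LDA on each set.

```python
import numpy as np

# Hypothetical sketch of mapping local topics onto global topics by cosine
# similarity of their topic-word distributions. All numbers are made up.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rows: topics; columns: vocabulary terms (aligned across both sets).
global_topics = np.array([
    [0.60, 0.30, 0.05, 0.05],  # global topic 0: heavy on terms 0 and 1
    [0.05, 0.05, 0.50, 0.40],  # global topic 1: heavy on terms 2 and 3
])
local_topics = np.array([
    [0.10, 0.00, 0.45, 0.45],  # resembles global topic 1
    [0.55, 0.35, 0.05, 0.05],  # resembles global topic 0
])

# For each local topic, pick the index of the most similar global topic
mapping = [int(np.argmax([cosine(lt, gt) for gt in global_topics]))
           for lt in local_topics]
print(mapping)
```

With this mapping, the accuracy question in the abstract becomes measurable: count how often a document's local topic maps to the same global topic that full-collection modeling assigns it.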

A Study on General Purpose Analysis Technique for Railway Electrification System (정교한 전차선로의 회로모델링 및 범용성 해석 기법 개발)

  • 홍재승;오광해;창상훈;김발호;김정훈
    • Proceedings of the KSR Conference
    • /
    • 1999.11a
    • /
    • pp.296-301
    • /
    • 1999
  • This paper presents a new static circuit modeling methodology amenable to analyzing the electric railway system. The accuracy and practicability of the proposed approach are demonstrated on a railway system containing two to six trains.
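A static circuit model of a catenary feeder reduces, at each snapshot, to solving a nodal-analysis system G v = i for the node voltages. The sketch below shows that step for a minimal two-node section; the conductances, source voltage, and train current are invented for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of one static nodal-analysis solve for a feeder section:
# a substation bus (held via a stiff Norton source) and one train node modeled
# as a current sink, connected through the catenary conductance g12.
g_src, g12 = 1000.0, 10.0        # siemens (illustrative values)
v_sub, i_train = 25000.0, 100.0  # 25 kV source, 100 A train load (assumed)

G = np.array([[g_src + g12, -g12],
              [-g12,         g12]])
i = np.array([g_src * v_sub, -i_train])  # injections: source at bus, sink at train

v = np.linalg.solve(G, i)  # node voltages [substation, train]
print(np.round(v, 1))
```

Extending the sketch to two to six trains, as in the paper's demonstration, only means adding one node, one conductance entry, and one current sink per train before the same solve.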


Similarity Measurement Using Ontology in Vessel Clearance Process (온톨로지를 이용한 선박 통관 프로세스의 유사성 측정)

  • Yahya, Bernardo N.;Park, Jae-Hun;Bae, Hye-Rim;Mo, Jung-Kwan
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.37 no.2
    • /
    • pp.153-162
    • /
    • 2011
  • The demands of complicated data communications pose a new challenge to port logistics systems. Customers expect ports to handle the administrative data generated while a vessel is docked in port. One port logistics system, known as the Vessel Clearance Process (VCP), manages large numbers of documents related to port of entry. In the VCP, information flows through many organizations, such as the port authority, shipping agents, marine offices, and immigration offices. Therefore, for effective management of the Business Process (BP) of the VCP, a standardized method of BP modeling is essential, especially in heterogeneous system environments. In a port, according to port policy, terms and data are used that are similar to but different from those of other logistics partners, which hinders standardized modeling of the BP. In order to avoid tedious and time-consuming document customization work, more convenient BP modeling for the VCP is essential. This paper proposes an ontology-based process similarity measurement to assist designers in process modeling in the port domain, especially the VCP. We expect this methodology to enable convenient and quick modeling of port business processes.
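One common way to score ontology-based similarity between terms is a Wu-Palmer-style measure over a concept taxonomy: sim(a, b) = 2·depth(LCS) / (depth(a) + depth(b)), where LCS is the lowest common ancestor. The tiny vessel-clearance taxonomy below is invented for illustration; the paper's own measure and ontology may differ.

```python
# Hypothetical sketch of a Wu-Palmer-style taxonomy similarity over a made-up
# ontology of vessel-clearance document concepts.

parent = {
    "Document": None,
    "ClearanceDocument": "Document",
    "CrewList": "ClearanceDocument",
    "PassengerList": "ClearanceDocument",
    "CargoManifest": "Document",
}

def path_to_root(c):
    """Return the ancestor chain from the concept up to the root."""
    path = []
    while c is not None:
        path.append(c)
        c = parent[c]
    return path

def depth(c):
    return len(path_to_root(c))  # root has depth 1

def wup_similarity(a, b):
    ancestors_a = set(path_to_root(a))
    lcs = next(c for c in path_to_root(b) if c in ancestors_a)
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wup_similarity("CrewList", "PassengerList"))  # siblings: high similarity
print(wup_similarity("CrewList", "CargoManifest"))  # only the root is shared
```

Aggregating such term-level scores across matched activities is one way to turn the ontology into the process-level similarity the abstract describes.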

DEVS-based Modeling Methodology for Cybersecurity Simulations from a Security Perspective

  • Kim, Jiyeon;Kim, Hyung-Jong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.5
    • /
    • pp.2186-2203
    • /
    • 2020
  • Security administrators of companies and organizations need to come up with proper countermeasures against cyber-attacks, considering the infrastructures and security policies in their possession. In order to develop and verify such countermeasures, the administrators should be able to reenact both cyber-attacks and defenses. Simulation is useful for such reenactment because it overcomes limitations of real-world reenactment, including high risk and cost. If administrators can design various cyber-attack scenarios and develop simulation models from their own viewpoints, they can simulate the desired situations and observe the results more easily. Simulating cyber-security issues is challenging because there is a lack of theoretical basis for modeling the wide range of the security field, as well as a lack of pre-defined basic components for modeling cyber-attacks. In this paper, we propose a modeling method for cyber-security simulations by developing a basic component and a composite model, called the Abstracted Cyber-Security Unit Model (ACSUM) and the Abstracted Cyber-security SIMulation model (ACSIM), respectively. The proposed models are based on the DEVS (Discrete Event Systems Specification) formalism, a modeling theory for discrete-event simulation. We develop attack scenarios by sequencing attack behaviors using ACSUMs and then build ACSIMs by combining and abstracting the ACSUMs from a security perspective. The concepts of ACSUM and ACSIM enable security administrators to simulate numerous cyber-security issues from their own viewpoints. As a case study, we model a worm scenario using ACSUMs and simulate three types of simulation models based on ACSIM from different security perspectives.
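A DEVS atomic model is defined by a time-advance function, an output function, and internal/external transition functions. The sketch below shows a minimal ACSUM-like atomic unit for one attack behavior and a toy event loop that advances simulated time by the time-advance value; the phase names and timing are invented for illustration, not the paper's models.

```python
# Hypothetical sketch of a DEVS-style atomic model for one attack behavior.
# Phase names and the 2-time-unit scan duration are made-up examples.

class ScanUnit:
    """Atomic model: scan for ta=2 time units, emit 'probe', then go passive."""
    def __init__(self):
        self.phase, self.sigma = "scanning", 2.0

    def time_advance(self):          # ta: time until the next internal event
        return self.sigma

    def output(self):                # lambda: emitted just before transitioning
        return "probe" if self.phase == "scanning" else None

    def internal_transition(self):   # delta_int: move to the next phase
        if self.phase == "scanning":
            self.phase, self.sigma = "passive", float("inf")

def simulate(model, until=10.0):
    """Minimal event loop: jump to each internal event, collect its output."""
    t, events = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        events.append((t, model.output()))
        model.internal_transition()
    return events

print(simulate(ScanUnit()))
```

Sequencing several such units (scan, exploit, propagate) and coupling their outputs to the next unit's inputs is, in spirit, how the ACSUM-to-ACSIM composition in the abstract proceeds.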

Visual Cohesion Improvement Technology by Clustering of Abstract Object (추상화 객체의 클러스터링에 의한 가시적 응집도 향상기법)

  • Lee Jeong-Yeal;Kim Jeong-Ok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.61-69
    • /
    • 2004
  • User interface design needs to support the complex interactions between humans and computers. It also requires comprehensive knowledge of many areas in order to collect customers' requirements and negotiate with them. A user interface designer needs to be a graphics expert, requirements analyst, system designer, programmer, technical expert, social scientist, and so on. Therefore, research on a user interface design methodology that satisfies these various areas of expertise is necessary. In this paper, we propose four phases for visualizing the abstract objects of business events: fold abstract object modeling, task abstract object modeling, transaction abstract object modeling, and form abstract object modeling. As a result, this modeling method enhances the visual cohesion of the UI and helps unskilled designers develop high-quality user interfaces.
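The clustering idea behind visual cohesion can be sketched simply: abstract UI objects are grouped by the business event they serve, so related widgets end up together on the form. The field names and event tags below are invented for illustration and are not the paper's modeling phases.

```python
from collections import defaultdict

# Hypothetical sketch: cluster abstract UI objects (form fields) by the
# business event they belong to. All field and event names are made up.
fields = [
    ("customer_name", "order_entry"),
    ("item_code", "order_entry"),
    ("payment_method", "billing"),
    ("invoice_no", "billing"),
    ("quantity", "order_entry"),
]

clusters = defaultdict(list)
for name, event in fields:
    clusters[event].append(name)  # group each field under its event

print(dict(clusters))
```

Each resulting cluster maps naturally to one visual region of the form, which is the cohesion improvement the abstract claims.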
