• Title/Summary/Keyword: large-scale systems

Development Process of Systems Engineering Management Plan(SEMP) for Large-Scale Complex System Programs (대형 복합 시스템 개발을 위한 효과적인 시스템공학 관리계획 개발 프로세스)

  • 유일상;박영원
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.26 no.4
    • /
    • pp.82-90
    • /
    • 2003
  • Systems engineering, as a methodology for the engineering and management of today's ever-growing complex systems, is a comprehensive and iterative problem-solving process. The process centers on the analysis and management of stakeholders' needs throughout the entire life cycle of a system and searches for an optimized system architecture. Many essential needs and requirements must be met when a system development task is carried out. A Systems Engineering Management Plan (SEMP), as a specification for the system development process, must be established to satisfy the constraints and requirements of stakeholders and to prevent cost overruns and schedule delays. The SEMP defines the technical management functions and comprehensive plans for managing and controlling the entire system development process, specialty engineering processes, etc. Especially in the case of a large-scale complex system development program, where various engineering disciplines such as mechanical, electrical, electronics, control, telecommunication, material, and civil engineering must be synthesized, it is essential to develop a SEMP to ensure systematic and continuous process improvement for quality and to prevent cost/schedule overruns. This study enables process knowledge management on the subject of the SEMP as a core systems engineering management effort; that is, explicitly defining and continuously managing the specification of the development process in terms of its requirements, functions, and process realization using computer-aided systems engineering software. The paper suggests a systematic SEMP development process and demonstrates a data model and schema for the computer-aided systems engineering software RDD-100 for use in the development and management of a SEMP. These are being applied to the ongoing systems engineering technology development task for next-generation high-speed railway systems.

Large eddy simulation of turbulent premixed flame with dynamic sub-grid scale G-equation model in turbulent channel flow (Dynamic Sub-grid Scale G-방정식 모델에 의한 평행평판간 난류의 예 혼합 연소에 관한 대 와동 모사)

  • Ko Sang-Cheol;Park Nam-Seob
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.29 no.8
    • /
    • pp.849-854
    • /
    • 2005
  • The laminar flame concept in turbulent reacting flow is considered applicable to many practical combustion systems. Under the widely used flamelet concept for turbulent premixed combustion, the flame surface is described as an infinitely thin propagating surface, such that the propagating front can be represented as a level contour of a continuous function G. In this study, for the purpose of validating the LES G-equation combustion model, LES of turbulent premixed combustion with a dynamic SGS model of the G-equation in turbulent channel flow is carried out. A constant density assumption is used. The predicted flame propagation speed is in good agreement with the DNS results of G. Bruneaux et al.
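
The flamelet description above reduces front tracking to solving the level-set G-equation ∂G/∂t + u·∇G = S_L|∇G|, with the flame front at the G = 0 contour. A minimal 1-D sketch, not the paper's LES/dynamic-SGS implementation; the grid, velocity, and flame speed values are illustrative assumptions:

```python
import numpy as np

# 1-D level-set G-equation: dG/dt + u * dG/dx = S_L * |dG/dx|.
# The front (G = 0) propagates at laminar flame speed S_L relative to the flow.
def advance_g(g, u, s_l, dx, dt, steps):
    for _ in range(steps):
        dgdx = np.gradient(g, dx)                   # central-difference gradient
        g = g - dt * (u * dgdx - s_l * np.abs(dgdx))
    return g

x = np.linspace(0.0, 1.0, 101)
g0 = x - 0.5                                        # front initially at x = 0.5
g = advance_g(g0, u=0.0, s_l=1.0, dx=x[1] - x[0], dt=1e-3, steps=100)
front = x[np.argmin(np.abs(g))]                     # front at x = 0.4 (moved S_L * t = 0.1)
```

With a quiescent flow (u = 0), the front simply advances at S_L, which gives a quick sanity check before adding a turbulent velocity field.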

Logic circuit design for high-speed computing of dynamic response in real-time hybrid simulation using FPGA-based system

  • Igarashi, Akira
    • Smart Structures and Systems
    • /
    • v.14 no.6
    • /
    • pp.1131-1150
    • /
    • 2014
  • One of the issues in extending the range of applicable problems of real-time hybrid simulation is the computation speed of the simulator when large-scale computational models with a large number of DOF are used. In this study, functionality of real-time dynamic simulation of MDOF systems is achieved by creating a logic circuit that performs the step-by-step numerical time integration of the equations of motion of the system. The designed logic circuit can be implemented to an FPGA-based system; FPGA (Field Programmable Gate Array) allows large-scale parallel computing by implementing a number of arithmetic operators within the device. The operator splitting method is used as the numerical time integration scheme. The logic circuit consists of blocks of circuits that perform numerical arithmetic operations that appear in the integration scheme, including addition and multiplication of floating-point numbers, registers to store the intermediate data, and data busses connecting these elements to transmit various information including the floating-point numerical data among them. Case study on several types of linear and nonlinear MDOF system models shows that use of resource sharing in logic synthesis is crucial for effective application of FPGA to real-time dynamic simulation of structural response with time step interval of 1 ms.
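
The operator-splitting scheme named in the abstract pairs an explicit displacement predictor with an implicit acceleration correction. A hedged NumPy sketch of one step for a linear MDOF system M x'' + C x' + K x = f with Newmark constants beta = 1/4, gamma = 1/2; the paper realizes this arithmetic as FPGA logic circuits, and the matrices here are illustrative assumptions:

```python
import numpy as np

def os_step(M, C, K, x, v, a, f_next, dt):
    # explicit predictor for displacement
    x_pred = x + dt * v + (dt ** 2 / 4.0) * a
    # implicit correction: solve for the new acceleration
    lhs = M + (dt / 2.0) * C + (dt ** 2 / 4.0) * K
    rhs = f_next - C @ (v + (dt / 2.0) * a) - K @ x_pred
    a_next = np.linalg.solve(lhs, rhs)
    x_next = x_pred + (dt ** 2 / 4.0) * a_next
    v_next = v + (dt / 2.0) * (a + a_next)
    return x_next, v_next, a_next

# sanity check: undamped SDOF free vibration, exact solution x(t) = cos(2*pi*t)
M = np.array([[1.0]])
C = np.zeros((1, 1))
K = np.array([[(2.0 * np.pi) ** 2]])
x, v = np.array([1.0]), np.array([0.0])
a = np.linalg.solve(M, -K @ x)
for _ in range(1000):                               # 1000 steps at the 1 ms interval
    x, v, a = os_step(M, C, K, x, v, a, np.zeros(1), 0.001)
```

After one full period the displacement returns very close to 1.0, reflecting the unconditional stability and small period error of the average-acceleration family at a 1 ms step.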

A Walsh-Based Distributed Associative Memory with Genetic Algorithm Maximization of Storage Capacity for Face Recognition

  • Kim, Kyung-A;Oh, Se-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.640-643
    • /
    • 2003
  • A Walsh function based associative memory is capable of storing m patterns in a single pattern storage space through Walsh encoding of each pattern. Furthermore, an input pattern can be matched against the stored patterns extremely fast using algorithmic parallel processing. As such, this special type of memory is ideal for real-time processing of large-scale information. However, this efficiency generates a large amount of crosstalk between stored patterns, which incurs mis-recognition. The crosstalk is a function of the set of different sequencies (numbers of zero crossings) of the Walsh functions associated with the patterns to be stored. This sequency set is thus optimized in this paper to minimize mis-recognition as well as to maximize memory saving. The Walsh memory has been applied to the problem of face recognition, where PCA is applied for dimensionality reduction. The maximum Walsh spectral component and a genetic algorithm (GA) are used to determine the optimal Walsh function set to be associated with the data to be stored. The experimental results indicate that the proposed methods provide a novel and robust technology to achieve error-free, real-time, and memory-saving recognition of large-scale patterns.
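
The storage and crosstalk mechanism described above can be sketched as follows: bipolar patterns modulated by distinct Walsh (Hadamard) carriers are superposed into a single storage vector, and recognition demodulates with each carrier and correlates with the query. The pattern size, carrier sequencies, and random data below are assumptions for illustration; the paper's GA-based sequency optimization and PCA front end are omitted.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
N, m = 128, 4
patterns = rng.choice([-1.0, 1.0], size=(m, N))     # bipolar patterns to store
H = hadamard(N)
seqs = [1, 7, 19, 33]                               # carrier rows (hypothetical choice)

# all m patterns superposed into ONE storage vector of length N
memory = sum(H[s] * patterns[k] for k, s in enumerate(seqs))

def recognize(query):
    # demodulate with each carrier and correlate with the query; the off-carrier
    # terms are the crosstalk that the paper's sequency optimization minimizes
    scores = [float((H[s] * memory) @ query) for s in seqs]
    return int(np.argmax(scores))
```

For the correct carrier the signal term has magnitude N, while crosstalk terms are zero-mean sums over random products, which is why recognition degrades as m grows and why the choice of sequency set matters.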

Status of the technology development of large scale HTS generators for wind turbine

  • Le, T.D.;Kim, J.H.;Kim, D.J.;Boo, C.J.;Kim, H.M.
    • Progress in Superconductivity and Cryogenics
    • /
    • v.17 no.2
    • /
    • pp.18-24
    • /
    • 2015
  • Large wind turbine generators with high temperature superconductors (HTS) are under continuous development because of advantages such as weight and volume reduction and increased efficiency compared with conventional technologies. In addition, the wind turbine market is growing over time, increasing the capacity and energy production of installed wind farms and the electrical power of the installed generators. As a consequence, the contribution of wind power to global electricity demand is rising. In this study, a forecast of wind energy development is first emphasized, followed by a review of the recent status of the technology development of large-scale HTS generators for wind power, an explanation of HTS wire trends, cryogenic cooling system concepts, and HTS field coil stability, and other technological aspects of optimizing HTS generator design: operating temperature, design topology, field coil shape, and levelized cost of energy. Finally, the most relevant projects and designs of HTS generators specifically for offshore wind power systems are also reviewed.

BeanFS: A Distributed File System for Large-scale E-mail Services (BeanFS: 대규모 이메일 서비스를 위한 분산 파일 시스템)

  • Jung, Wook;Lee, Dae-Woo;Park, Eun-Ji;Lee, Young-Jae;Kim, Sang-Hoon;Kim, Jin-Soo;Kim, Tae-Woong;Jun, Sung-Won
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.247-258
    • /
    • 2009
  • Distributed file systems running on a cluster of inexpensive commodity hardware are being recognized as an effective solution to support the explosive growth of storage demand in large-scale Internet service companies. This paper presents the design and implementation of BeanFS, a distributed file system for large-scale e-mail services. BeanFS is adapted to e-mail services as follows. First, a volume-based replication scheme alleviates the metadata management overhead of the central metadata server in dealing with a very large number of small files. Second, BeanFS employs a lightweight consistency maintenance protocol tailored to the simple access patterns of e-mail messages. Third, transient and permanent failures are treated separately, and recovery from transient failures is fast and incurs little overhead.

An Ultra-precision Lathe for Large-area Micro-structured Roll Molds (대면적 미세패턴 롤 금형 가공용 초정밀 롤 선반 개발)

  • Oh, Jeong Seok;Song, Chang Kyu;Hwang, Jooho;Shim, Jong Youp;Park, Chun Hong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.12
    • /
    • pp.1303-1312
    • /
    • 2013
  • We report an ultra-precision lathe designed to machine micron-scale features on a large-area roll mold. The lathe can machine rolls up to 600 mm in diameter and 2,500 mm in length. All axes use hydrostatic oil bearings to exploit their high precision, stiffness, and damping characteristics. The headstock spindle and rotary tooling table are driven by frameless direct drive motors, while coreless linear motors are used for the two linear axes. Finite element method modeling reveals that the effects of structural deformation on the machining accuracy are less than 1 μm. The results of thermal testing show that the maximum temperature rise at the spindle outer surface is approximately 0.5 °C. Finally, performance evaluations of the error motion, micro-positioning capability, and fine-pitch machining demonstrate that the lathe is capable of producing optical-quality surfaces with micron-scale patterns with feature sizes as small as 20 μm on a large-area roll mold.

Runtime Prediction Based on Workload-Aware Clustering (병렬 프로그램 로그 군집화 기반 작업 실행 시간 예측모형 연구)

  • Kim, Eunhye;Park, Ju-Won
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.38 no.3
    • /
    • pp.56-63
    • /
    • 2015
  • Several fields of science have demanded large-scale workflow support, which requires thousands of CPU cores or more. In order to support such large-scale scientific workflows, high-capacity parallel systems such as supercomputers are widely used. To increase the utilization of these systems, most schedulers use a backfilling policy: small jobs are moved ahead to fill holes in the schedule, provided large jobs are not delayed. Since an estimate of the runtime is necessary for backfilling, most parallel systems use the user's estimated runtime. However, this is found to be extremely inaccurate because users overestimate the runtime of their jobs. Therefore, in this paper, we propose a novel system for runtime prediction based on workload-aware clustering, with the goal of improving prediction performance. The proposed method for runtime prediction of parallel applications consists of three main phases. First, feature selection based on factor analysis is performed to identify important input features. Then, a clustering analysis of history data is performed based on a self-organizing map, followed by hierarchical clustering to find the cluster boundaries from the weight vectors. Finally, prediction models are constructed using support vector regression on the clustered workload data. Multiple prediction models, one per clustered data pattern, can reduce the error rate compared with a single model for the whole data set. In the experiments, we use workload logs from parallel systems (i.e., iPSC, LANL-CM5, SDSC-Par95, SDSC-Par96, and CTC-SP2) to evaluate the effectiveness of our approach. Compared with other techniques, experimental results show that the proposed method improves accuracy by up to 69.08%.
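
The pipeline above (cluster the workload history, then fit one regression model per cluster, then route each new job to its cluster's model) can be illustrated with a stand-in sketch: k-means replaces the SOM plus hierarchical clustering, linear least squares replaces SVR, and synthetic two-regime "logs" replace the real traces. None of this is the paper's implementation.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # deterministic farthest-point initialization so the sketch is reproducible
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# synthetic job logs: two well-separated workload regimes with different runtime laws
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.3, size=(100, 2))             # e.g. small-job regime
B = rng.normal(5.0, 0.3, size=(100, 2))             # e.g. large-job regime
X = np.vstack([A, B])
y = np.concatenate([2.0 * A[:, 0], 20.0 * B[:, 0]]) # regime-specific runtimes

centers, labels = kmeans(X, k=2)
models = {}
for j in range(2):                                  # one linear model per cluster
    D = np.c_[X[labels == j], np.ones(int((labels == j).sum()))]
    models[j], *_ = np.linalg.lstsq(D, y[labels == j], rcond=None)

def predict(x):
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))  # route job to nearest cluster
    return float(np.r_[x, 1.0] @ models[j])
```

Because the runtime law differs by regime, a single global linear model would blend the two slopes, whereas the per-cluster models recover each law, which is the intuition behind the paper's reported error reduction.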

A Novel Reference Model for Cloud Manufacturing CPS Platform Based on oneM2M Standard (제조 클라우드 CPS를 위한 oneM2M 기반의 플랫폼 참조 모델)

  • Yun, Seongjin;Kim, Hanjin;Shin, Hyeonyeop;Chin, Hoe Seung;Kim, Won-Tae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.8 no.2
    • /
    • pp.41-56
    • /
    • 2019
  • Cloud manufacturing is a new concept of manufacturing process in which multiple connected factories work like a single factory. A cloud manufacturing system is a kind of large-scale CPS that produces products through the collaboration of distributed manufacturing facilities based on technologies such as cloud computing, IoT, and virtualization. It utilizes diverse, distributed facilities based on centralized information systems, which allows the flexible composition of user-centric, service-oriented large-scale systems. However, a cloud manufacturing system is composed of a large number of highly heterogeneous subsystems, which creates difficulties in interconnection, data exchange, information processing, and system verification. In this paper, we derive the user requirements of various aspects of the cloud manufacturing system, such as functional, human, trustworthiness, timing, data, and composition, based on the CPS Framework, an analysis methodology for CPS. Next, by analyzing the user requirements, we define the system requirements, including scalability, composability, interactivity, dependability, timing, interoperability, and intelligence. We map the defined CPS system requirements to the requirements of oneM2M, the platform standard for IoT, so that support for the system requirements at the level of the IoT platform can be verified through Mobius, an implementation of the oneM2M standard. Analyzing the verification results, we finally propose a large-scale cloud manufacturing platform based on oneM2M that can meet the cloud manufacturing requirements and dependably support the overall features of a cloud manufacturing CPS.

Firm Size and Different Behaviors in IT Investment Decisions

  • Shim, Seon-Young;Lee, Byung-Tae
    • Management Science and Financial Engineering
    • /
    • v.16 no.2
    • /
    • pp.99-114
    • /
    • 2010
  • The influencing factors of large-scale IT investment decisions have rarely been investigated from an empirical perspective. We identify different behaviors in IT investment decisions according to the size of the organization. Large-scale IT investment decisions (e.g., system downsizing) can be the outcome of decision-makers' motivation to adopt and control new IT systems. However, this phenomenon is salient in large-sized organizations rather than small-sized ones. Based on our investigation, we predict general IT decision-making behaviors in organizations when making IT investment decisions.