• Title/Summary/Keyword: Generic System

Search Results: 472

An Exploration of Neural Network Development Methodologies (인공지능 네트워크의 Methodology 개발 상호비교)

  • Lee, Ki-Dong;Meso, Peter
    • Journal of Digital Convergence / v.9 no.4 / pp.91-101 / 2011
  • We examined current publications on artificial neural network development with a view to identifying the methodologies used to develop these networks, how extensive these methodologies are, how they can be categorized, whether they demonstrate a common underlying generic (standard) methodology for the development of artificial neural networks, and how closely these methodologies (and the underlying generic methodology, if established) relate to conventional systems development methodologies.

Computationally-Efficient Algorithms for Multiuser Detection in Short Code Wideband CDMA TDD Systems

  • De, Parthapratim
    • Journal of Communications and Networks / v.18 no.1 / pp.27-39 / 2016
  • This paper derives and analyzes a novel block fast Fourier transform (FFT) based joint detection algorithm, and compares its performance and complexity to those of the Cholesky-based joint detector and single-user detection algorithms. The novel algorithm can operate at chip-rate sampling as well as higher sampling rates. For the performance/complexity analysis, the time division duplex (TDD) mode of a wideband code division multiple access (WCDMA) system is considered. The results indicate that the performance of the block-FFT based joint detector is comparable to that of the Cholesky-based joint detector and far superior to that of single-user detection algorithms, while its complexity is significantly lower than that of the Cholesky-based joint detector and lower than that of the single-user detection algorithms. For the Cholesky-based joint detector, the approximate Cholesky decomposition is applied. Moreover, the novel method can also be applied to any generic multiple-input multiple-output (MIMO) system.
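The complexity advantage of the block-FFT joint detector comes from exploiting the (approximately block-circulant) structure of the system matrix rather than factorizing it. As a minimal sketch of that underlying idea (a plain circulant solve, not the paper's actual multiuser detector), a circulant system can be solved in O(n log n) via the FFT instead of the O(n³) of a Cholesky factorization:

```python
import numpy as np

def solve_circulant_fft(c, b):
    """Solve C x = b, where C is the circulant matrix whose first column is c.

    A circulant matrix is diagonalized by the DFT, so the solve reduces to
    an elementwise division in the frequency domain: O(n log n) work.
    """
    # Eigenvalues of C are the DFT of its first column.
    lam = np.fft.fft(c)
    # Diagonal solve in the frequency domain, then transform back.
    x = np.fft.ifft(np.fft.fft(b) / lam)
    return x.real if np.isrealobj(c) and np.isrealobj(b) else x
```

For real block-circulant matrices (as in the TDD WCDMA case) the same trick is applied blockwise, with one small dense solve per frequency bin.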

A Design for Six Sigma: A Robust Tool in Systems Engineering Process

  • Yoon, Hee-Kweon;Byun, Jai-Hyun
    • Industrial Engineering and Management Systems / v.11 no.4 / pp.346-352 / 2012
  • While systems engineering has been widely applied to complex system development, major budget and schedule overruns have been reported in projects where systems engineering was applied. Meanwhile, many organizations have been deploying Design for Six Sigma (DFSS) to build Six Sigma momentum in the design and development of their products and processes. To explore whether DFSS can complement the systems engineering process, this paper reviews systems engineering with its categories of effort and DFSS with its methodologies. A comparison of the two indicates that DFSS can complement systems engineering in delivering higher-quality products to customers faster and at lower cost. We suggest a simplified systems engineering framework, PADOV, derived from the generic systems engineering process that was applied to the development of the T-50 advanced supersonic trainer aircraft by Korea Aerospace Industries (KAI) with technical assistance from Lockheed Martin. We demonstrate that each phase of the PADOV framework is comprehensively matched to the pertinent categories of systems engineering effort from various standards.

Design and Implementation of the Video Query Processing Engine for Content-Based Query Processing (내용기반 질의 처리를 위한 동영상 질의 처리기의 설계 및 구현)

  • Jo, Eun-Hui;Kim, Yong-Geol;Lee, Hun-Sun;Jeong, Yeong-Eun;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.603-614 / 1999
  • As multimedia application services on high-speed information networks have developed rapidly, the need for a video information management system that lets users retrieve video data efficiently is growing. In this paper, we propose a video data model that integrates free annotations, image features, and spatial-temporal features to improve content-based retrieval of video data. The proposed model can act as a generic video data model for multimedia applications, supporting free annotations, image features, spatial-temporal features, and the structure information of video data within the same framework. We also propose a video query language for efficiently specifying queries that access video clips; it can express various kinds of queries over video content. Finally, we design and implement a query processing engine for efficient video data retrieval based on the proposed metadata model and video query language.


Optical Analysis for the 3D Display with a Lenticular Array (렌티큘러 렌즈 기반 3차원 디스플레이 장치의 광학적 해석방법)

  • Kim, Bong-Sik;Kim, Keon-Woo;Lee, Kil-Hoon;Park, Woo-Sang
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.26 no.7 / pp.534-538 / 2013
  • We propose a generic method to calculate the optical characteristics of a 3D display with a lenticular lens array. The method is based on geometrical optics and takes the specifications of the display panel into account. For an efficient simulation, we first calculate the optical characteristics of a single cylindrical lens and confirm the validity of the method by comparing with results obtained from conventional geometrical optics. We then obtain the full distribution of light intensity at the optimum viewing distance by extending the single-lens results across the horizontal plane of the display panel. From these results, we finally confirm whether 3D images are realized by the system.

Development of Monitoring Tool for Synaptic Weights on Artificial Neural Network (인공 신경망의 시냅스 가중치 관리용 도구 개발)

  • Shin, Hyun-Kyung
    • The KIPS Transactions: Part D / v.16D no.1 / pp.139-144 / 2009
  • The neural network is a generic framework underlying almost the full range of machine learning technologies, and its potential reaches far beyond its current capabilities. Among other characteristics, a neural network acts as an associative memory built from the values structurally stored in its synaptic structure. Because of the innate complexity of neural network systems, multifaceted problems are unavoidable in their practical implementation and maintenance. In this paper, we present the design and implementation details of GUI software that can be a valuable tool for maintaining and developing neural networks. It can display the state of every synaptic weight, together with the nodal relations of the network, at each learning step.
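The paper's tool is GUI software; as a minimal non-GUI sketch of the core mechanism it needs, the following records a snapshot of every synaptic weight after each learning step of a single-layer network trained with the delta rule (all names and the toy network are illustrative, not the paper's implementation):

```python
import numpy as np

class WeightMonitor:
    """Keeps a snapshot of all synaptic weights at each learning step."""
    def __init__(self):
        self.history = []          # list of (step, weight-vector copy)

    def record(self, step, weights):
        self.history.append((step, weights.copy()))

def train(X, y, epochs=20, lr=0.1):
    """Train a one-layer sigmoid unit, logging weights after every update."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    monitor = WeightMonitor()
    step = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 / (1.0 + np.exp(-xi @ w))        # sigmoid output
            w += lr * (yi - pred) * pred * (1 - pred) * xi  # delta rule
            monitor.record(step, w)
            step += 1
    return w, monitor
```

A GUI front end would then render `monitor.history` as, e.g., a per-step heat map of the weight vector alongside the network's nodal diagram.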

A Study on Fault Detection for Photovoltaic Power Modules using Statistical Comparison Scheme (통계학적 비교 기법을 이용한 태양광 모듈의 고장 유무 검출에 관한 연구)

  • Cho, Hyun Cheol;Jung, Young Jin;Lee, Gwan Ho
    • Journal of the Korean Solar Energy Society / v.33 no.4 / pp.89-93 / 2013
  • In recent years, research on photovoltaic power systems has been carried out extensively in the field of renewable energy, generally covering the development of highly efficient solar cells, advanced power conversion systems, and smart monitoring systems. The generic objective of fault detection and diagnosis techniques is to recognize unexpected faults of dynamic systems in a timely manner, so that the economic damage caused by such faults can be reduced through engineering measures. This paper presents a novel fault detection approach for photovoltaic arrays whose modules are electrically connected in series and parallel. In the proposed scheme, we first measure all of the photovoltaic modules in each array using electronic sensing systems and then compare the measurements in turn, locating a faulty module through a statistical computation algorithm. We carry out real-time experiments to demonstrate the proposed methodology using a test-bed system with two 20-watt photovoltaic modules.
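The statistical comparison idea can be sketched as follows: under uniform irradiance, healthy modules in an array produce similar outputs, so a module whose measurement deviates strongly from the rest is flagged. This is an illustrative outlier test (median/MAD), not the paper's exact algorithm:

```python
import numpy as np

def detect_faulty_modules(powers, k=2.0):
    """Return indices of modules whose output deviates more than k robust
    standard deviations from the array median."""
    powers = np.asarray(powers, dtype=float)
    med = np.median(powers)
    # Median absolute deviation, scaled to estimate sigma robustly.
    mad = np.median(np.abs(powers - med)) * 1.4826
    if mad == 0:
        return []                       # all modules agree exactly
    z = np.abs(powers - med) / mad
    return [i for i, zi in enumerate(z) if zi > k]
```

With real sensor data one would average over a measurement window first, so that a passing cloud does not trip the detector.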

TREATING UNCERTAINTIES IN A NUCLEAR SEISMIC PROBABILISTIC RISK ASSESSMENT BY MEANS OF THE DEMPSTER-SHAFER THEORY OF EVIDENCE

  • Lo, Chung-Kung;Pedroni, N.;Zio, E.
    • Nuclear Engineering and Technology / v.46 no.1 / pp.11-26 / 2014
  • The analyses carried out within the Seismic Probabilistic Risk Assessments (SPRAs) of Nuclear Power Plants (NPPs) are affected by significant aleatory and epistemic uncertainties. These uncertainties have to be represented and quantified coherently with the data, information, and knowledge available, to provide reasonable assurance that related decisions can be taken robustly and with confidence. The amount of data, information, and knowledge available for seismic risk assessment is typically limited, so the analysis must rely strongly on expert judgment. In this paper, a Dempster-Shafer Theory (DST) framework for handling uncertainties in NPP SPRAs is proposed and applied to an example case study. This paper makes two main contributions: (i) applying the complete DST framework to SPRA models, showing how to build the Dempster-Shafer structures of the uncertain parameters based on industry generic data, and (ii) embedding Bayesian updating based on plant-specific data into the framework. The results of the application to a case study show that the approach is feasible and effective in (i) describing and jointly propagating aleatory and epistemic uncertainties in SPRA models and (ii) providing 'conservative' bounds on the safety quantities of interest (i.e., Core Damage Frequency, CDF) that reflect the (limited) state of knowledge of the experts about the system of interest.
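The core DST machinery the paper builds on is Dempster's rule of combination, which fuses two independent bodies of evidence over a finite frame of discernment. A minimal sketch (finite frames only, not the paper's full SPRA propagation framework):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments by Dempster's rule.

    m1, m2: dicts mapping frozenset (focal elements) -> mass in [0, 1].
    Masses on intersecting focal elements multiply; mass assigned to the
    empty intersection is 'conflict' and is normalized away.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```

From the combined assignment one reads off belief (sum of masses of subsets) and plausibility (sum of masses of intersecting sets), which bound the epistemic uncertainty on each hypothesis.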

Conceptual Data Modeling: Entity-Relationship Models as Thinging Machines

  • Al-Fedaghi, Sabah
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.247-260 / 2021
  • Data modeling is the process of developing a model to design and develop a data system that supports an organization's various business processes. A conceptual data model represents a technology-independent specification of the structure of the data to be stored within a database. The model aims to provide richer expressiveness and incorporate a set of semantics to (a) support the design, control, and integrity of the data stored in data management structures and (b) coordinate the viewing of connections and ideas in a database. The described structure of the data is often represented in an entity-relationship (ER) model, which was one of the first data-modeling techniques and is likely to remain a popular way of characterizing entity classes, attributes, and relationships. This paper examines the basic ER modeling notions in order to analyze the concepts to which they refer and the ways to represent them. To that end, we apply a new modeling methodology (thinging machine; TM) to ER in terms of its fundamental building constructs: representations of entities, relationships, and attributes. The goal of this venture is to further the understanding of data models and enrich their semantics. Three specific contributions to modeling in this context are incorporated: (a) using the TM model's five generic actions to inject processing into the ER structure; (b) relating the single ontological element of TM modeling (i.e., a thing/machine or thimac) to ER entities and relationships; and (c) proposing a high-level integrated, extended ER model that includes structural and time-oriented notions (e.g., events or behavior).

The Effect on Performance with SCM Dynamic Capabilities in the Pharmaceutical Industry : Mediated Through Cooperational Relationship (제약산업의 SCM 동적역량이 성과에 미치는 영향: 협력관계를 매개로 하여)

  • Seo, Young-Kyu;Song, Dohan;Huh, Hoon
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.192-206 / 2021
  • The pharmaceutical industry provides medicines tied to public health and life. It is a traditionally regulated industry, with R&D (research and development), purchasing, manufacturing, distribution, and consumption under strict government management. Until now, pharmaceutical companies have maintained competitiveness through patent management, new product development, and marketing. However, the industry is changing rapidly, with rising new-product development costs and an expanding generics market. As these changes and the uncertainty of the management environment increase, efforts to improve the competitiveness of the pharmaceutical industry from a new perspective are required. In this study, we examine the impact of the SCM (supply chain management) dynamic capabilities of pharmaceutical companies on corporate performance, mediated through partnerships, as a response to market changes and uncertainty. We posited that the agility, visibility, and flexibility that constitute SCM dynamic capabilities would affect the performance of pharmaceutical companies. The importance of SCM dynamic capabilities and cooperative relationships was assessed through surveys of SCM managers at pharmaceutical companies. Consequently, in the pharmaceutical industry, a regulated industry, we found that SCM dynamic capabilities and cooperative relationships with partner companies have a significant impact on corporate performance.