• Title/Summary/Keyword: Information input algorithm

Search Results: 2,444

Development of Simplified DNBR Calculation Algorithm using Model-Based Systems Engineering Methodology

  • Awad, Ibrahim Fathy;Jung, Jae Cheon
    • Journal of the Korean Society of Systems Engineering / v.14 no.2 / pp.24-32 / 2018
  • System complexity is one of the most common causes of project failure, as it leads to a lack of understanding of the system's functions. Models are therefore developed for communication, and modeling further supports the analysis, design, and understanding of a system. A text-based specification, on the other hand, is useful and easy to develop, but makes it difficult to visualize the physical composition, structure, and behaviour or data exchange of the system. It is therefore necessary to transform the system description into diagrams that clearly depict the behaviour of the system as well as the interactions between components. According to the International Atomic Energy Agency (IAEA) Safety Glossary, a safety system is a system important to safety, provided to ensure the safe shutdown of the reactor or the removal of residual heat from the reactor core, or to limit the consequences of anticipated operational occurrences and design basis accidents. The Core Protection Calculator System (CPCS) in the Advanced Power Reactor 1400 (APR1400) nuclear power plant is a safety-critical system. The CPCS was developed using a systems engineering method focusing on the Departure from Nucleate Boiling Ratio (DNBR) calculation. Owing to the complexity of the system, many diagrams are needed to minimize the risk of ambiguity and misunderstanding, so Model-Based Systems Engineering (MBSE) software was used to model the DNBR algorithm. These diagrams then serve as the baseline of the reverse-engineering process and speed up development. In addition, the use of MBSE ensures that any additional information obtained from auxiliary sources can be input into the system model, ensuring data consistency.
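As a minimal illustration of the quantity the abstract centers on (not the CPCS algorithm itself, whose critical-heat-flux correlations and nodalization are plant-specific), DNBR is the ratio of the predicted critical heat flux to the local heat flux, and a trip would be signalled when its minimum falls below a design limit. Every number below is a hypothetical placeholder:

```python
# Illustrative sketch only; not the CPCS calculation.
def dnbr(critical_heat_flux, local_heat_flux):
    """Departure from Nucleate Boiling Ratio at a single node."""
    return critical_heat_flux / local_heat_flux

def min_dnbr(chf_profile, flux_profile):
    """Minimum DNBR over a channel; a reactor trip would be signalled
    when this falls below the design safety limit."""
    return min(dnbr(c, q) for c, q in zip(chf_profile, flux_profile))

SAFETY_LIMIT = 1.3  # order of magnitude of typical design limits; plant-specific
```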

Optimization of the Kernel Size in CNN Noise Attenuator (CNN 잡음 감쇠기에서 커널 사이즈의 최적화)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.6 / pp.987-994 / 2020
  • In this paper, we study the effect of the kernel size of a CNN layer on the performance of acoustic noise attenuators. The system uses a deep-learning algorithm with a neural-network adaptive prediction filter instead of a conventional adaptive filter. Speech is estimated from a single input speech signal containing noise using a 100-neuron, 16-filter CNN filter trained with the error back-propagation algorithm, exploiting the quasi-periodic property of the voiced sections of the speech signal. A simulation program using the Tensorflow and Keras libraries was written, and simulations were performed to verify the performance of the noise attenuator for each kernel size. The results show that the MSE and MAE values are smallest when the kernel size is about 16, and increase when the size is smaller or larger than 16. For a speech signal, the features are thus best captured when the kernel size is about 16.
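The shape of the kernel-size sweep can be mimicked with a much simpler linear stand-in: fit an FIR one-step predictor of varying length to a quasi-periodic signal by least squares and compare the prediction MSE. This only illustrates the experimental setup, not the paper's CNN; the signal and sizes below are synthetic:

```python
import numpy as np

def prediction_mse(signal, kernel_size):
    """Fit a length-`kernel_size` FIR one-step predictor by least squares
    and return its mean squared prediction error."""
    X = np.array([signal[i:i + kernel_size]
                  for i in range(len(signal) - kernel_size)])
    y = signal[kernel_size:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
t = np.arange(2000)
voiced = np.sin(2 * np.pi * t / 16)                 # quasi-periodic "voiced" part
noisy = voiced + 0.3 * rng.standard_normal(t.size)  # additive noise
errors = {k: prediction_mse(noisy, k) for k in (4, 8, 16, 32)}
```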

Nonlinear Noise Attenuator by Adaptive Wiener Filter with Neural Network (신경망 구조의 적응 Wiener 필터를 이용한 비선형 잡음감쇠기)

  • Haeng-Woo Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.71-76 / 2023
  • This paper studies a method of attenuating nonlinear noise using a Wiener filter with a neural-network structure in an acoustic noise attenuator. The system improves nonlinear noise attenuation performance with a deep-learning algorithm that uses a neural-network Wiener filter instead of a conventional adaptive filter. A voice signal is estimated from a single input signal containing nonlinear noise using hidden layers of 128 and 8 neurons and the error back-propagation algorithm. A simulation program using the Keras library was written, and simulations were performed to verify the attenuation performance for nonlinear noise. The results show that the noise attenuation performance of the system improves significantly when the FNN filter is used instead of the Wiener filter, even when nonlinear noise is present, because the complex structure of the FNN filter represents arbitrary nonlinear characteristics well.
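The classical baseline the paper compares against can be sketched directly: an FIR Wiener filter computed from sample statistics via the normal equations. The sketch below is a generic linear Wiener denoiser on a synthetic signal, not the paper's FNN:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
clean = np.sin(2 * np.pi * np.arange(n) / 20.0)  # target signal
noisy = clean + 0.5 * rng.standard_normal(n)     # noisy observation

taps = 8
# Each row of X holds the last `taps` noisy samples; d is the aligned clean sample.
X = np.array([noisy[i:i + taps] for i in range(n - taps + 1)])
d = clean[taps - 1:]
w = np.linalg.solve(X.T @ X, X.T @ d)  # normal equations -> Wiener weights
denoised = X @ w

mse_before = float(np.mean((noisy[taps - 1:] - d) ** 2))
mse_after = float(np.mean((denoised - d) ** 2))
```

By construction the least-squares solution can do no worse than passing the noisy signal through unchanged, so `mse_after` comes out below `mse_before`.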

Closed-form Expressions for Optimal Transmission Power Achieving Weighted Sum-Rate Maximization in MIMO Systems (MIMO 시스템의 가중합 전송률 최대화를 위한 최적 전송 전력의 닫힌 형태 표현)

  • Shin, Suk-Ho;Kim, Jae-Won;Park, Jong-Hyun;Sung, Won-Jin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.47 no.7 / pp.36-44 / 2010
  • When multi-user MIMO (Multiple-Input Multiple-Output) systems utilize a sum-rate maximization (SRM) scheduler, the throughput of the systems can be enhanced. However, fairness problems may arise because users located near the cell edge, or experiencing poor channel conditions, are less likely to be selected by the SRM scheduler. In this paper, a weighted sum-rate maximization (WSRM) scheduler is used to enhance the fairness performance of MIMO systems. Closed-form expressions for the optimal transmit power allocation of WSRM and the corresponding weighted sum-rate (WSR) are derived for the 6-sector collaborative transmission system. Using the derived results, we propose an algorithm that searches for the optimal power allocation for WSRM in the 3-sector collaborative transmission system. Based on the derived closed-form expressions and the proposed algorithm, we perform computer simulations to compare the performance of the WSRM and SRM schedulers with respect to the sum-rate and the log-sum of average rates. We further verify that the WSRM scheduler efficiently improves fairness by showing enhanced average transmission rates in the low-percentile region.
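The generic optimization behind WSRM is weighted water-filling: maximizing the weighted sum-rate Σᵢ wᵢ log(1 + pᵢgᵢ) under a total power constraint gives the closed form pᵢ = max(wᵢ/μ − 1/gᵢ, 0), with the water level μ found numerically. A sketch of that generic form (the paper's own closed-form expressions are specific to the 3- and 6-sector collaborative setting):

```python
import numpy as np

def weighted_waterfilling(gains, weights, total_power, iters=100):
    """Maximize sum_i w_i * log(1 + p_i * g_i) subject to sum_i p_i = P.
    The KKT conditions give p_i = max(w_i / mu - 1 / g_i, 0); the water
    level mu is found by bisection on the (decreasing) total-power sum."""
    g = np.asarray(gains, dtype=float)
    w = np.asarray(weights, dtype=float)
    lo, hi = 1e-12, float(np.max(w * g))  # at mu = max(w*g), all p_i = 0
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(w / mu - 1.0 / g, 0.0)
        if p.sum() > total_power:
            lo = mu  # too much power allocated -> raise the water level
        else:
            hi = mu
    return np.maximum(w / mu - 1.0 / g, 0.0)
```

For equal weights this reduces to classical water-filling, where the stronger channel receives more power.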

Discovering and Maintaining Semantic Mappings between XML Schemas and Ontologies

  • An, Yuan;Borgida, Alex;Mylopoulos, John
    • Journal of Computing Science and Engineering / v.2 no.1 / pp.44-73 / 2008
  • There is general agreement that the problem of data semantics has to be addressed for XML data to become machine-processable. This problem can be tackled by defining a semantic mapping between an XML schema and an ontology. Unfortunately, creating such mappings is a tedious, time-consuming, and error-prone task. To alleviate this problem, we present a solution that heuristically discovers semantic mappings between XML schemas and ontologies. The solution takes as input an initial set of simple correspondences between element attributes in an XML schema and class attributes in an ontology, and then generates a set of mapping formulas. Once such a mapping is created, it is important and necessary to maintain the consistency of the mapping when the associated XML schema and ontology evolve. In this paper, we first offer a mapping formalism to represent semantic mappings. Second, we present our heuristic mapping discovery algorithm. Third, we show through an empirical study that considerable effort can be saved when discovering complex mappings by using our prototype tool. Finally, we propose a mapping maintenance plan dealing with schema evolution. Our study provides a set of effective solutions for building sustainable semantic integration systems for XML data.
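The algorithm takes simple attribute correspondences as its input. One toy way to seed such correspondences is lexical name matching, sketched below; this is purely illustrative, since the paper assumes the simple correspondences are supplied by hand or by a schema-matching tool, and its contribution is deriving mapping formulas from them:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Lexical similarity of two attribute names in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def simple_correspondences(schema_attrs, ontology_attrs, threshold=0.6):
    """Seed (XML element attribute -> ontology class attribute) pairs by
    name similarity; a stand-in for the hand-supplied correspondences the
    mapping-discovery algorithm takes as input."""
    pairs = []
    for s in schema_attrs:
        best = max(ontology_attrs, key=lambda o: similarity(s, o))
        if similarity(s, best) >= threshold:
            pairs.append((s, best))
    return pairs
```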

Evaluation of the Tank Model Optimized Parameter for Watershed Modeling (유역 유출량 추정을 위한 TANK 모형의 매개변수 최적화에 따른 적용성 평가)

  • Kim, Kye Ung;Song, Jung Hun;Ahn, Jihyun;Park, Jihoon;Jun, Sang Min;Song, Inhong;Kang, Moon Seong
    • Journal of The Korean Society of Agricultural Engineers / v.56 no.4 / pp.9-19 / 2014
  • The objective of this study was to evaluate the Tank model in simulating runoff discharge from rural watersheds in comparison to the SWAT (Soil and Water Assessment Tool) model. The parameters of the SWAT model were calibrated by the Shuffled Complex Evolution - University of Arizona (SCE-UA) method, while the Tank model was calibrated by a genetic algorithm (GA), and both were validated. Four dam watersheds were selected as the study areas. Hydrological data from the Water Management Information System (WAMIS) and geological data were used as input for the model simulations, and runoff data were used for calibration and validation. The coefficient of determination (R²), root mean square error (RMSE), and Nash-Sutcliffe efficiency index (NSE) were used to evaluate model performance. The results indicated that both the SWAT and Tank models simulated runoff reasonably during the calibration and validation periods. For annual runoff, the Tank model tended to overestimate, especially for small runoff (< 0.2 mm), whereas the SWAT model underestimated runoff compared to the observed data. The statistics indicated that the Tank model simulated runoff more accurately than the SWAT model; the Tank model could therefore be a good tool for runoff simulation, considering its ease of use.
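A single-tank flavor of the Tank model, plus the NSE statistic used for evaluation, can be sketched in a few lines. The outlet coefficients below are illustrative defaults, not the GA-calibrated values from the paper:

```python
def tank_step(storage, rainfall, a=0.2, b=0.05, h=5.0):
    """One daily step of a single-tank runoff model: a side outlet above
    height h (coefficient a) and a bottom infiltration outlet (b).
    Coefficients here are illustrative, not calibrated values."""
    storage += rainfall
    runoff = a * max(storage - h, 0.0)
    infiltration = b * storage
    storage -= runoff + infiltration
    return storage, runoff

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit, below 0 is worse
    than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

Real Tank models stack several tanks vertically; calibration (by GA or otherwise) searches the coefficient space for the best NSE against observed runoff.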

A Study on Implementation of a Robot Vision System for Recognition of Complex 2-D Objects (복잡한 2차원 물체 인식용 로봇 시각장치의 구현에 관한 연구)

  • Kim, Ho-Seong;Kim, Yeong-Seok;Byeon, Jeung-Nam
    • Journal of the Korean Institute of Telematics and Electronics / v.22 no.1 / pp.53-60 / 1985
  • A computer vision system for robots is developed which can recognize a variety of complex two-dimensional objects in gray-level noisy scenes. The system is also capable of determining the position and orientation of the objects for robotic manipulation. The hardware of the vision system is developed and a new edge-tracking technique is proposed. The linked edges are approximated by simple line drawings using a split-and-merge algorithm. The system extracts many features from the line drawings and constructs a relational structure from the concave and convex hulls of the objects. In the matching process, the input objects are compared with the object database, which is built through a learning capability. The learning process is so simple that the system is very flexible. Several examples are shown to demonstrate the usefulness of the system.
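The split stage of a split-and-merge polyline approximation recursively breaks an edge chain at the point farthest from the chord between its endpoints until every deviation is within tolerance. A minimal sketch (the merge stage, which rejoins nearly collinear neighboring segments, is omitted):

```python
def split(points, tol):
    """Split stage of split-and-merge: approximate a chain of (x, y)
    points by a polyline whose maximum deviation is at most tol."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard closed chords
    # Perpendicular distance of every point from the endpoint chord.
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points]
    i = max(range(len(points)), key=dists.__getitem__)
    if dists[i] <= tol or len(points) < 3:
        return [points[0], points[-1]]
    left = split(points[:i + 1], tol)   # recurse on each half,
    right = split(points[i:], tol)      # splitting at the farthest point
    return left[:-1] + right            # drop the duplicated split point
```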


Extreme Learning Machine Approach for Real Time Voltage Stability Monitoring in a Smart Grid System using Synchronized Phasor Measurements

  • Duraipandy, P.;Devaraj, D.
    • Journal of Electrical Engineering and Technology / v.11 no.6 / pp.1527-1534 / 2016
  • Online voltage stability monitoring using real-time measurements is one of the most important tasks in a smart grid system for maintaining grid stability, and the loading margin is a good indicator of the voltage stability level. This paper presents an Extreme Learning Machine (ELM) approach for estimating the voltage stability level under credible contingencies using real-time measurements from Phasor Measurement Units (PMUs). PMUs enable a much higher data sampling rate and provide synchronized measurements of real-time voltage and current phasors. A Depth-First (DF) algorithm is used for optimally placing the PMUs. To make the ELM approach applicable to a large-scale power system problem, Mutual Information (MI)-based feature selection is proposed to achieve dimensionality reduction: it reduces the number of network input features, which shortens the network training time and improves the generalization capability. Voltage magnitudes and phase angles received from the PMUs are fed as inputs to the ELM model. The IEEE 30-bus test system is used to demonstrate the effectiveness of the proposed methodology for estimating the voltage stability level under various loading conditions, considering single-line contingencies. Simulation results validate the suitability of the technique for fast and accurate online voltage stability assessment using PMU data.
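The core of an ELM is simple enough to sketch directly: input weights and biases are drawn at random and only the output weights are solved for by least squares, so there is no iterative training. A minimal regression example on synthetic data (not the paper's PMU inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    """Extreme Learning Machine: random input weights/biases, output
    weights by least squares (pseudo-inverse) -- no iterative training."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)            # random hidden-layer feature map
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a simple nonlinear target to show the one-shot training step.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = X.ravel() ** 2
W, b, beta = elm_train(X, y, hidden=50)
train_mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```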

A Hybrid K-anonymity Data Relocation Technique for Privacy Preserved Data Mining in Cloud Computing

  • S.Aldeen, Yousra Abdul Alsahib;Salleh, Mazleena
    • Journal of Internet Computing and Services / v.17 no.5 / pp.51-58 / 2016
  • The unprecedented power of cloud computing (CC), which enables free sharing of confidential data records for further analysis and mining, has prompted various security threats. Strong cyberspace security and mitigation against adversary attacks during data mining have therefore become indispensable, and privacy-preserving data mining has emerged as a precise and efficient solution, with various algorithms developed to anonymize the data to be mined. Despite the wide use of the generalized K-anonymizing approach, its protection and truthfulness remain limited to a tiny output space with unacceptable utility loss. By combining L-diversity and (α,k)-anonymity, we propose a hybrid K-anonymity data relocation algorithm to surmount this limitation. The data relocation, being a trade-off between truthfulness and utility, acts as a control input parameter, and the performance of each K-anonymity iteration is measured for data relocation. Data rows are changed into small groups of indistinguishable tuples to create anonymizations of finer granularity with an assured privacy standard. Experimental results demonstrate considerable utility enhancement for a relatively small number of group relocations.
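The property the relocation algorithm enforces can be stated compactly: a table is k-anonymous when every combination of quasi-identifier values is shared by at least k records. A minimal checker (the field names below are hypothetical):

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True when every quasi-identifier combination in `records` (a list
    of dicts) occurs at least k times -- the k-anonymity property."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```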

Parameterization Model for Damaging Ultraviolet-B Irradiance

  • Kim, Yoo-Keun;Lee, Hwa-Woon;Moon, Yun-Seob
    • Environmental Sciences Bulletin of The Korean Environmental Sciences Society / v.3 no.1 / pp.41-56 / 1999
  • Since UV-B radiation measuring networks have not been established, numerical models that calculate the flux from other readily available meteorological measurements can play an important role. Such estimates can be obtained with parameterization models such as the two-stream approximation, the delta-Eddington method, the doubling method, and the discrete-ordinate method. However, most UV-B radiative transfer models have not been validated against measurements, because such models are not intended as practical computational schemes for providing surface estimates of UV-B radiation; the main concern so far has been to demonstrate model sensitivity for cloudless skies. In particular, few have been concerned with real cloud information: clouds and aerosols have generally been incorporated as constituents of particular atmospheric layers with specified optical depths and scattering properties. The parameterization model presented here combines a detailed radiative transfer algorithm for the cloudless-sky radiative process with a more approximate scheme to handle cloud effects. As input, the model requires a daily measurement of the total ozone amount plus a daily record of the amount and type of cloud in the atmosphere. Measurements for evaluating the models have been taken at the Department of Atmospheric Sciences, Pusan National University since February 1995. These models can be used to calculate present and future fluxes where measurements have not been taken, and to construct climatologies for the period before ozone depletion began.
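The overall shape of such a parameterization can be sketched as a clear-sky estimate scaled by a cloud modification factor (CMF). The linear CMF and its coefficient below are illustrative only; the paper's scheme accounts for both cloud amount and cloud type in a more detailed form:

```python
def surface_uvb(clear_sky_uvb, cloud_fraction, attenuation=0.56):
    """Clear-sky UV-B irradiance scaled by a simple linear cloud
    modification factor, CMF = 1 - attenuation * cloud_fraction.
    The coefficient 0.56 is purely illustrative; real schemes also
    depend on cloud type and optical depth."""
    return clear_sky_uvb * (1.0 - attenuation * cloud_fraction)
```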
