• Title/Summary/Keyword: Information input algorithm

Search results: 2,444

Propagation Neural Networks for Real-time Recognition of Error Data (에라 정보의 실시간 인식을 위한 전파신경망)

  • Kim, Jong-Man;Hwang, Jong-Sun;Kim, Young-Min
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2001.11b
    • /
    • pp.46-51
    • /
    • 2001
  • For fast real-time recognition of nonlinear error data, a new neural network algorithm that recognizes the map in real time is proposed. The proposed technique performs real-time computation through inter-node diffusion. In the network, a node corresponds to a state in the quantized input space; each node consists of a processing unit and fixed weights from its neighbor nodes as well as its input terminal. The most reliable algorithm derived for real-time map recognition is a dynamic programming algorithm based on sequence-matching techniques that processes the data as they arrive and can therefore provide continuously updated neighbor information estimates. Several simulation experiments demonstrate real-time reconstruction of the nonlinear map information.
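
The inter-node diffusion idea above can be pictured with a short sketch: nodes on a quantized 2-D grid each combine their own input with fixed-weight contributions from their four neighbours. The grid size, uniform weights, and sample data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def diffuse_map(inputs, input_w=0.2, neighbor_w=0.19, steps=30):
    """Inter-node diffusion over a quantized 2-D map: each node is a processing
    unit that keeps a fixed share of its own input and adds fixed-weight
    contributions from its four neighbours (weights assumed uniform here)."""
    state = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        padded = np.pad(state, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        state = input_w * inputs + neighbor_w * neighbors
    return state

# Sparse, noisy "error" samples spread across the map as diffusion runs.
samples = np.zeros((8, 8))
samples[2, 3], samples[6, 5] = 1.0, 0.5
print(np.round(diffuse_map(samples), 3))
```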

Design of VLSI Array Architecture with Optimal Pipeline Period for Fast Fractal Image Compression (고속 프랙탈 영상압축을 위한 최적의 파이프라인 주기를 갖는 VLSI 어레이 구조 설계)

  • 성길영;우종호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5A
    • /
    • pp.702-708
    • /
    • 2000
  • In this paper, we design a one-dimensional VLSI array with an optimal pipeline period for high-speed fractal image compression. An algorithm suitable for the VLSI array is derived from a fixed block-partition algorithm while preserving high image quality and a high compression ratio. The designed VLSI array achieves an optimal pipeline period because the required processing time is distributed as evenly as possible across the PEs. As a result, the processing speed is improved by a factor of about three. The number of input/output pins is reduced by sharing the input/output and arithmetic units of the domain blocks and the range blocks.
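
The per-block work that such an array pipelines across its PEs is the fixed-partition domain/range matching of fractal coding. The sketch below shows that reference computation in software, under assumed block sizes and a brute-force domain search; it says nothing about the paper's actual VLSI arrangement.

```python
import numpy as np

def encode_fixed_partition(img, r=4):
    """Toy fixed-partition fractal encoder: for every r x r range block, find
    the 2r x 2r domain block whose contrast/brightness fit minimizes the
    squared error. This is the per-block work a PE would pipeline."""
    h, w = img.shape
    domains = []
    for y in range(0, h - 2 * r + 1, 2 * r):
        for x in range(0, w - 2 * r + 1, 2 * r):
            d = img[y:y + 2 * r, x:x + 2 * r].astype(float)
            # Average 2x2 cells so the domain matches the range-block size.
            domains.append(((y, x), d.reshape(r, 2, r, 2).mean(axis=(1, 3))))
    codes = []
    for y in range(0, h, r):
        for x in range(0, w, r):
            rng_blk = img[y:y + r, x:x + r].astype(float)
            best = None
            for pos, dom in domains:
                dm, rm = dom - dom.mean(), rng_blk - rng_blk.mean()
                s = (dm * rm).sum() / ((dm * dm).sum() + 1e-9)   # contrast
                o = rng_blk.mean() - s * dom.mean()              # brightness
                err = ((s * dom + o - rng_blk) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            codes.append(((y, x), *best[1:]))
    return codes

print(len(encode_fixed_partition(np.random.rand(16, 16))), "range-block codes")
```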

Estimation of Trifocal Tensor with Corresponding Mesh of Two Frontal Images

  • Tran Duy Dung;Jun Byung Hwan
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.133-136
    • /
    • 2004
  • We produce various views from two frontal images using the trifocal tensor. In previous research [1], we found that warping is effective for synthesizing poses of a face from a small number of mesh points of a given image. For this work, the fundamental matrix is essential to computing the trifocal tensor, so we investigate two existing algorithms: Hartley's [2] and Kanatani's [3]. Experimental results show that Kanatani's algorithm estimates the fundamental matrix more accurately than Hartley's, so we use the fundamental matrix from Kanatani's algorithm to compute the trifocal tensor. From this trifocal tensor we compute a new trifocal tensor with rotation and translation inputs, and we use warping to produce new virtual views.
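
Since the fundamental matrix is the key intermediate quantity here, a compact reference is the classic normalized 8-point estimate sketched below. It is a generic textbook formulation run on synthetic correspondences, not Hartley's or Kanatani's specific method as evaluated in the paper.

```python
import numpy as np

def fundamental_8point(p1, p2):
    """Normalized 8-point estimate of F such that x2^T F x1 = 0 for
    corresponding points p1, p2 (N x 2 arrays, N >= 8)."""
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / (np.linalg.norm(p - c, axis=1).mean() + 1e-12)
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.hstack([p, np.ones((len(p), 1))]) @ T.T, T
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank 2, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1

pts1 = np.random.rand(12, 2) * 100
pts2 = pts1 + [5.0, 2.0] + np.random.rand(12, 2)   # toy correspondences, not real images
print(fundamental_8point(pts1, pts2).shape)
```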

IoT-based systemic lupus erythematosus prediction model using hybrid genetic algorithm integrated with ANN

  • Edison Prabhu K;Surendran D
    • ETRI Journal
    • /
    • v.45 no.4
    • /
    • pp.594-602
    • /
    • 2023
  • Internet of things (IoT) is commonly employed to detect different kinds of diseases in the health sector. Systemic lupus erythematosus (SLE) is an autoimmune illness that occurs when the body's immune system attacks its own connective tissues and organs. Because of the complicated interconnections between illness trigger exposure levels across time, humans have trouble predicting SLE symptom severity levels. To solve this issue, an effective automated machine learning model that takes IoT data as input was created to forecast SLE symptoms. IoT has several advantages in the healthcare industry, including interoperability, information exchange, machine-to-machine networking, and data transmission. An SLE symptom-predicting machine learning model was designed by integrating the hybrid marine predator algorithm and atom search optimization with an artificial neural network. The network is trained on the Gene Expression Omnibus dataset, and patients' data are used as input to predict symptoms. The experimental results demonstrate that the proposed model's accuracy, approximately 99.70%, is higher than that of state-of-the-art prediction models.
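
The overall pattern, a population-based optimizer tuning an artificial neural network, can be sketched with a plain genetic algorithm standing in for the paper's marine predator / atom search hybrid. The synthetic dataset, hyperparameter ranges, and GA settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(genes):
    """Genes encode (hidden units, log10 learning rate); fitness = CV accuracy."""
    hidden, lr = int(genes[0]), 10.0 ** genes[1]
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=200, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Plain GA: truncation selection, blend crossover, Gaussian mutation.
pop = np.column_stack([rng.integers(5, 60, 8), rng.uniform(-4, -1, 8)])
for generation in range(4):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]                  # keep the best half
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(0, 4, 2)]
        child = (a + b) / 2 + rng.normal(0.0, [2.0, 0.2])   # crossover + mutation
        child[0] = np.clip(child[0], 5, 60)
        child[1] = np.clip(child[1], -4, -1)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best hidden units:", int(best[0]), " learning rate:", 10.0 ** best[1])
```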

Synchronization at Input Buffered Switch (입력버퍼 교환기에서의 패킷 동기화 기법)

  • 이상호;신동렬
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.117-120
    • /
    • 1999
  • Input queueing is useful for high-bandwidth switches and routers because it requires lower complexity and fewer circuits than output queueing. An input-queued switch, however, suffers from head-of-line (HOL) blocking, which limits its throughput to 58%. To get around this low throughput, we propose a simple scheduling algorithm called Synchronous Input Port (SIP). This method synchronizes packets and switching without blocking and is shown to perform better than established algorithms.
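
The abstract does not spell out SIP, so the toy simulation below only contrasts a plain FIFO input-queued switch (with HOL blocking) against one assumed form of synchronized, conflict-free scheduling: a rotating input-output pairing over per-output queues. Port count, offered load, and slot count are arbitrary choices, not values from the paper.

```python
import random

def simulate(synchronized, n=8, load=0.9, slots=20000):
    """Toy input-queued switch under uniform Bernoulli traffic.
    synchronized=False: plain FIFO inputs, so head-of-line blocking occurs.
    synchronized=True : per-output queues and a rotating, conflict-free pairing
                        of input i with output (i + t) % n each slot."""
    random.seed(1)
    fifo = [[] for _ in range(n)]                      # one FIFO per input
    voq = [[0] * n for _ in range(n)]                  # per-(input, output) counts
    delivered = 0
    for t in range(slots):
        for i in range(n):                             # Bernoulli arrivals
            if random.random() < load:
                dest = random.randrange(n)
                if synchronized:
                    voq[i][dest] += 1
                else:
                    fifo[i].append(dest)
        if synchronized:
            for i in range(n):                         # conflict-free rotation
                dest = (i + t) % n
                if voq[i][dest] > 0:
                    voq[i][dest] -= 1
                    delivered += 1
        else:
            taken = set()                              # FIFO heads compete
            for i in random.sample(range(n), n):
                if fifo[i] and fifo[i][0] not in taken:
                    taken.add(fifo[i].pop(0))
                    delivered += 1
    return delivered / (slots * n)

print("FIFO with HOL blocking:", round(simulate(False), 2))   # limited by HOL blocking
print("synchronized rotation :", round(simulate(True), 2))    # tracks the offered load
```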

A Study on Variable Step Size LMS Algorithm using estimated correlation (추정상관값을 이용한 가변 스텝사이즈 LMS 알고리듬에 관한 연구)

  • 권순용;오신범;이채욱
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.115-118
    • /
    • 2000
  • We present a new variable step-size LMS algorithm that uses the correlation between the reference input and the error signal of an adaptive filter. The proposed algorithm updates each filter weight with a different step size at the same sample time. We apply this algorithm to an adaptive multiple-notch filter. Simulation results compare the performance of the proposed algorithm with the conventional LMS algorithm and another variable step-size algorithm.
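
A minimal sketch of the idea, assuming a smoothed input-error correlation estimate drives a per-weight step size (the exact update rule, smoothing factor, and bounds below are not from the paper):

```python
import numpy as np

def vss_lms(x, d, taps=8, base_mu=0.01, rho=0.95, mu_min=1e-4, mu_max=0.05):
    """LMS whose step size varies per weight, scaled by a smoothed estimate of
    the correlation between each reference-input sample and the error signal."""
    w = np.zeros(taps)
    corr = np.zeros(taps)                      # running input-error correlation
    e_hist = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u
        corr = rho * corr + (1 - rho) * e * u
        mu = np.clip(base_mu * corr**2 / (np.mean(corr**2) + 1e-12), mu_min, mu_max)
        w += mu * e * u                        # each weight gets its own step size
        e_hist[n] = e
    return w, e_hist

# Toy system identification: recover an unknown 8-tap FIR filter from noisy data.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = rng.standard_normal(8) * 0.5
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = vss_lms(x, d)
print("weight error norm:", round(float(np.linalg.norm(w - h)), 4))
```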

What are the benefits and challenges of multi-purpose dam operation modeling via deep learning : A case study of Seomjin River

  • Eun Mi Lee;Jong Hun Kam
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.246-246
    • /
    • 2023
  • Multi-purpose dams are operated accounting for both physical and socioeconomic factors. This study aims to evaluate the utility of a deep learning algorithm-based model for the operation of three multi-purpose dams (Seomjin River Dam, Juam Dam, and Juam Control Dam) on the Seomjin River. In this study, the Gated Recurrent Unit (GRU) algorithm is applied to predict the hourly water level of the dam reservoirs over 2002-2021. The hyper-parameters are optimized by a Bayesian optimization algorithm to enhance the prediction skill of the GRU model. The GRU models are configured for the following cases: single-dam input - single-dam output (S-S), multi-dam input - single-dam output (M-S), and multi-dam input - multi-dam output (M-M). Results show that the S-S cases with local dam information have the highest accuracy, with NSE above 0.8. Results from the M-S and M-M cases confirm that upstream dam information provides important information for predicting downstream dam operation. The S-S models are then simulated with altered outflows (-40% to +40%) to generate reservoir water levels under alternative operational scenarios. These alternative S-S simulations show physically inconsistent results, indicating that our deep learning algorithm-based model is not explainable for multi-purpose dam operation patterns. To better understand this limitation, we further analyze the relationship between the observed water level and outflow of each dam. Results show that the complexity of the outflow-water level relationship limits the predictability of the GRU algorithm-based model. This study highlights the importance of socioeconomic factors hidden in multi-purpose dam operation processes for not only physical process-based modeling but also artificial intelligence modeling.
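
A single-dam-input, single-dam-output (S-S) setup of the kind described can be sketched with a small GRU regressor. The synthetic hourly series, 48-hour window, and short training loop below are illustrative assumptions; the study itself uses observed records for 2002-2021 and Bayesian hyperparameter optimization.

```python
import numpy as np
import torch
from torch import nn

# Synthetic hourly series standing in for dam outflow and reservoir water level.
rng = np.random.default_rng(0)
t = np.arange(5000)
outflow = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(len(t))
level = np.convolve(1.0 - outflow, np.ones(24) / 24, mode="same") + 5.0

def windows(series_in, series_out, width=48):
    """Build (48-hour input window, next-hour water level) training pairs."""
    X = np.stack([series_in[i:i + width] for i in range(len(series_in) - width)])
    y = series_out[width:]
    return torch.tensor(X, dtype=torch.float32).unsqueeze(-1), \
           torch.tensor(y, dtype=torch.float32)

X, y = windows(outflow, level)

class DamGRU(nn.Module):
    """Single-dam input -> single-dam output: a GRU over the window, then a
    linear head that predicts the next hourly water level."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

model = DamGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                     # short full-batch demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```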

Optimal actuator selection for output variance constrained control

  • 김재훈
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1993.10a
    • /
    • pp.565-569
    • /
    • 1993
  • In this paper, a specified number of actuators are selected from a given set of admissible actuators. The selected set of actuators is likely to use minimum control energy while the required output variance constraints are guaranteed to be satisfied. The actuator selection procedure is an iterative algorithm composed of two parts: an output variance-constrained control algorithm and an input variance-constrained control algorithm. The idea behind this algorithm is that the solution to the first control problem provides the necessary weighting matrix in the objective function of the second optimization problem, and the sensitivity information from the second problem is used to delete one actuator. For the variance-constrained control problems, an efficient algorithm is provided by considering a dual version of each control problem; its convergence properties turn out to be better than those of an existing algorithm. Numerical examples with a simple beam are given for both the input/output variance-constrained control problem and the actuator selection problem.
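
The outer greedy-deletion loop can be sketched with ordinary LQR and a Lyapunov-equation variance check standing in for the paper's dual output/input variance-constrained algorithms. The system matrices, noise intensity, and variance limit below are made up for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Toy 4-state system with 4 candidate actuators (illustrative matrices only).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) - 3 * np.eye(4)        # stable-ish drift
B_full = rng.standard_normal((4, 4))                   # one column per actuator
C = np.array([[1.0, 0.0, 0.0, 0.0]])                   # single output
W = 0.1 * np.eye(4)                                    # process-noise intensity
var_limit = 0.05                                       # output variance constraint

def energy_and_variance(cols):
    """LQR design on the selected actuator columns; return a control-energy proxy
    (trace of K Sigma K^T) and the output variance (C Sigma C^T) under white noise."""
    B = B_full[:, cols]
    P = solve_continuous_are(A, B, C.T @ C, np.eye(len(cols)))
    K = B.T @ P                                        # u = -K x
    Acl = A - B @ K
    Sigma = solve_continuous_lyapunov(Acl, -W)         # closed-loop state covariance
    return float(np.trace(K @ Sigma @ K.T)), (C @ Sigma @ C.T).item()

selected = [0, 1, 2, 3]
while len(selected) > 1:
    # Greedy deletion: drop the actuator whose removal costs the least control
    # energy while the output variance constraint stays satisfied.
    candidates = []
    for a in selected:
        rest = [c for c in selected if c != a]
        energy, var = energy_and_variance(rest)
        if var <= var_limit:
            candidates.append((energy, a))
    if not candidates:
        break
    selected.remove(min(candidates)[1])
print("selected actuators:", selected, "->", energy_and_variance(selected))
```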

Optimization of State-Based Real-Time Speech Endpoint Detection Algorithm (상태변수 기반의 실시간 음성검출 알고리즘의 최적화)

  • Kim, Su-Hwan;Lee, Young-Jae;Kim, Young-Il;Jeong, Sang-Bae
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.137-143
    • /
    • 2010
  • In this paper, a speech endpoint detection algorithm is proposed. The proposed algorithm is a state transition-based method for speech detection. To reject short-duration acoustic pulses that can be considered noise, it utilizes the duration information of all detected pulses. To optimize the parameters related to pulse lengths and the energy threshold for detecting speech intervals, an exhaustive search scheme is adopted, with speech recognition rates used as the performance index. Experimental results show that the proposed algorithm outperforms the baseline state-based endpoint detection algorithm. At 5 dB input SNR with beamformed input, the word recognition accuracies of its outputs were 78.5% for human voice noise and 81.1% for music noise.
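
A stripped-down state-transition detector with pulse-duration rejection might look like the sketch below; the energy threshold and duration parameters are placeholder values of the kind the paper optimizes by exhaustive search, not its actual optimized settings.

```python
import numpy as np

def detect_endpoints(frame_energy, threshold=0.02, min_speech=10, max_gap=8):
    """State-machine endpoint detector over per-frame energies:
    SILENCE -> CANDIDATE when energy exceeds the threshold, CANDIDATE -> SPEECH
    only if the pulse lasts at least min_speech frames (short pulses rejected
    as noise), SPEECH -> SILENCE after max_gap consecutive low-energy frames."""
    state, start, run, gap, segments = "SILENCE", 0, 0, 0, []
    for i, e in enumerate(frame_energy):
        loud = e > threshold
        if state == "SILENCE":
            if loud:
                state, start, run = "CANDIDATE", i, 1
        elif state == "CANDIDATE":
            run = run + 1 if loud else 0
            if run == 0:
                state = "SILENCE"               # short pulse rejected as noise
            elif run >= min_speech:
                state, gap = "SPEECH", 0
        else:  # SPEECH
            gap = 0 if loud else gap + 1
            if gap > max_gap:
                segments.append((start, i - gap))
                state = "SILENCE"
    if state == "SPEECH":
        segments.append((start, len(frame_energy) - 1))
    return segments

# Toy energy contour: noise floor, a short click (rejected), then real speech.
energy = np.r_[np.full(30, 0.005), np.full(3, 0.1), np.full(20, 0.005),
               np.full(60, 0.08), np.full(30, 0.005)]
print(detect_endpoints(energy))   # -> one segment covering the 60-frame burst
```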
