• Title/Summary/Keyword: neural network.

Search results: 11,766 (processing time: 0.04 seconds)

Precise position control of hydraulic driven stenciling robot using neural network (신경회로망을 이용한 유압 스텐슬링 로봇의 정확한 위치 제어)

  • Jung, Seul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.779-782
    • /
    • 1997
  • In this paper, accurate position control of a stenciling robot manipulator is designed. The stenciling robot is required to draw lines and characters on the pavement. Since the robot is large and heavy, its inertia is expected to play a major role in achieving the desired tracking performance. We propose a neural network control scheme based on a computed-torque-like controller for the stenciling robot, with on-line compensation performed by the neural network. Simulation studies with the stenciling robot are carried out to test the performance of the proposed control scheme.

  • PDF

Growing Algorithm of Wavelet Neural Network using F-projection (F-투영법을 이용한 웨이블렛 신경망의 성장 알고리즘)

  • 서재용;김용택;조현찬;김용민;전홍태
    • Proceedings of the IEEK Conference
    • /
    • 2001.06c
    • /
    • pp.15-168
    • /
    • 2001
  • In this paper, we propose a growing algorithm for wavelet neural networks. The algorithm adds hidden nodes using a wavelet frame that approximately preserves orthogonality in a wavelet neural network grounded in wavelet theory. This process reduces the global error and improves the efficiency of the wavelet neural network. We apply the proposed algorithm to an approximation problem and evaluate its effectiveness.

  • PDF
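
A wavelet network of the kind grown above can be sketched as a weighted sum of dilated and translated mother wavelets, with a grow step that appends one hidden node at a time (a minimal sketch; the Mexican-hat wavelet and the residual-based growth criterion are assumptions standing in for the paper's F-projection rule):

```python
import numpy as np

def mexican_hat(t):
    # second-derivative-of-Gaussian mother wavelet
    return (1 - t**2) * np.exp(-t**2 / 2)

class WaveletNet:
    def __init__(self):
        self.nodes = []                     # (weight, translation, dilation)

    def predict(self, x):
        return sum(w * mexican_hat((x - b) / a) for w, b, a in self.nodes)

    def grow(self, xs, ys, scale=0.5):
        # add one hidden node centered on the worst-fit training sample
        res = ys - np.array([self.predict(x) for x in xs])
        i = int(np.argmax(np.abs(res)))
        self.nodes.append((res[i], xs[i], scale))
```

Each call to `grow` cancels the largest residual exactly at its sample, so the global error shrinks as nodes are added.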

Fuzzy Control Method By Automatic Scaling Factor Tuning (자동 양자이득 조정에 의한 퍼지 제어방식)

  • 강성호;임중규;엄기환
    • Proceedings of the IEEK Conference
    • /
    • 2003.07c
    • /
    • pp.2807-2810
    • /
    • 2003
  • In this paper, we propose a fuzzy control method that improves control performance by automatically tuning the scaling factors. The proposed method automatically tunes the input and output scaling factors of the fuzzy logic system through a neural network. The network used is an ADALINE (ADAptive LInear NEuron) with delayed inputs; ADALINE has a simple structure, good learning capacity, and low computation time. To verify the effectiveness of the proposed control method, we performed simulations. The results show that the proposed method improves performance considerably in the presence of disturbances.

  • PDF
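
The ADALINE unit named above is a linear neuron trained with the Widrow-Hoff (LMS) rule on a delayed-input vector; a minimal sketch (the learning rate, delay count, and target signal are assumptions):

```python
import numpy as np

class Adaline:
    """ADAptive LInear NEuron: linear output, Widrow-Hoff (LMS) updates."""

    def __init__(self, n_inputs, lr=0.05):
        self.w = np.zeros(n_inputs + 1)     # weights plus bias
        self.lr = lr

    def predict(self, x):
        return self.w[0] + self.w[1:] @ x   # purely linear, no activation

    def update(self, x, target):
        err = target - self.predict(x)      # LMS: w <- w + lr * err * x
        self.w[0] += self.lr * err
        self.w[1:] += self.lr * err * x
        return err
```

Fed with a tapped-delay line of past samples, the same unit can track a slowly varying gain, which is the role it plays here in tuning the fuzzy controller's input and output scaling factors.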

Channel Allocation Using Gradual Neural Network For Multi-User OFDM Systems (다중 사용자 OFDM시스템에서 Gradual Neural Network를 이용한 채널 할당)

  • Moon, Eun-Jin;Lee, Chang-Wook;Jeon, Gi-J.
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.240-242
    • /
    • 2004
  • A channel allocation algorithm for multi-user OFDM (orthogonal frequency division multiplexing) systems is presented. The proposed algorithm reduces system complexity by using a GNN (gradual neural network) with a gradual expansion scheme, and it attempts to allocate channels with good channel gain to each user. The method has lower computational complexity and requires fewer iterations than other algorithms.

  • PDF

Architectures of the Parallel, Self-Organizing Hierarchical Neural Networks (병렬 자구성 계층 신경망 (PSHINN)의 구조)

  • 윤영우;문태현;홍대식;강창언
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.1
    • /
    • pp.88-98
    • /
    • 1994
  • A new neural network architecture called the Parallel, Self-Organizing Hierarchical Neural Network (PSHNN) is presented. The new architecture involves a number of stages, each of which can be a particular neural network (SNN). Experiments performed in comparison with a multi-layered network trained by backpropagation indicated the superiority of the new architecture in terms of classification accuracy, training time, and parallelism.

  • PDF

APPROXIMATION ORDER TO A FUNCTION IN $C^1$[0, 1] AND ITS DERIVATIVE BY A FEEDFORWARD NEURAL NETWORK

  • Hahm, Nahm-Woo;Hong, Bum-Il
    • Journal of applied mathematics & informatics
    • /
    • v.27 no.1_2
    • /
    • pp.139-147
    • /
    • 2009
  • We study the neural network approximation of a function in $C^1$[0, 1] and of its derivative. In [3], we used even trigonometric polynomials to obtain an approximation order for a function in $L_p$ space. In this paper, we show the simultaneous approximation order for a function in $C^1$[0, 1] using a Bernstein polynomial and a feedforward neural network. Our proofs are constructive.

  • PDF
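
The Bernstein operator used above is classical: $B_n(f)(x) = \sum_{k=0}^{n} f(k/n) \binom{n}{k} x^k (1-x)^{n-k}$. A direct evaluation can be sketched as follows (the degree and test point are illustrative):

```python
from math import comb

def bernstein(f, n, x):
    # Bernstein polynomial B_n(f) evaluated at x in [0, 1]
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

For f(t) = t^2 the identity B_n(f)(x) = x^2 + x(1-x)/n makes the O(1/n) approximation order visible; bernstein(lambda t: t * t, 50, 0.5) evaluates to 0.255.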

The Study On the Effectiveness of Information Retrieval in the Vector Space Model and the Neural Network Inductive Learning Model

  • Kim, Seong-Hee
    • The Journal of Information Technology and Database
    • /
    • v.3 no.2
    • /
    • pp.75-96
    • /
    • 1996
  • This study compares the effectiveness of the neural network inductive learning model with that of a vector space model in information retrieval. Searches responding to incomplete queries in the neural network inductive learning model produced higher precision and recall than searches responding to complete queries in the vector space model. The results show that a hybrid methodology integrating an inductive learning technique with the neural network model can help solve the information retrieval problems caused by inconsistent indexing and incomplete queries, problems that have long plagued retrieval effectiveness.

  • PDF
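
The vector space baseline in the study ranks documents by cosine similarity between term-weight vectors; a minimal sketch (the toy document vectors are assumptions):

```python
import math

def cosine(a, b):
    # cosine similarity between two term-weight vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    # docs: mapping from document id to its term-weight vector
    return sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

An inductive-learning hybrid would instead adjust these weights from relevance feedback, which is how the study copes with incomplete queries and inconsistent indexing.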

A comparative study between the neural network and the Winters' model in forecasting

  • Kim, Wanhee
    • Korean Management Science Review
    • /
    • v.9 no.1
    • /
    • pp.17-30
    • /
    • 1992
  • This paper is organized as follows. Section 2 illustrates several applications of neural networks. Section 3 presents the theoretical aspects of the major neural network paradigms as well as the structure of the back-propagation network used in the study. Section 4 describes the experiment, including data analysis, modeling, and the performance criteria, followed by a detailed discussion of the experimental results. Future research avenues, including the advantages and limitations of neural networks, are presented in the last section.

  • PDF
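
The back-propagation network of Section 3 can be sketched as a one-hidden-layer forecaster trained by gradient descent on the squared one-step-ahead error (the layer sizes, learning rate, and sine series are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 3))    # 3 lagged inputs -> 4 hidden units
W2 = rng.normal(scale=0.5, size=4)         # hidden units -> scalar forecast

def forward(x):
    h = np.tanh(W1 @ x)                    # hidden activations
    return h, float(W2 @ h)                # linear output unit

def train_step(x, target, lr=0.05):
    # one gradient step on E = 0.5 * (y - target)^2
    global W1, W2
    h, y = forward(x)
    err = y - target
    g2 = err * h                             # dE/dW2
    g1 = np.outer(err * W2 * (1 - h**2), x)  # dE/dW1, via tanh' = 1 - h^2
    W2 -= lr * g2
    W1 -= lr * g1
    return err
```

Sweeping a time series with windows of three lagged values as x and the next value as the target yields the one-step-ahead forecaster that the study compares against the Winters' model.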

Design of Multi-Dynamic Neural Network Controller for Improving Transient Performance (과도상태 성능 개선을 위한 다단동적 신경망 제어기 설계)

  • Cho, Hyun-Seob;Oh, Myoung-Kwan
    • Proceedings of the KAIS Fall Conference
    • /
    • 2010.11a
    • /
    • pp.344-348
    • /
    • 2010
  • The intent of this paper is to describe a neural network structure called the multi-dynamic neural network (MDNN) and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the MDNN, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the MDNN.

  • PDF

Design of Multi-Dynamic Neural Network Controller (다단동적 신경망 제어기 설계)

  • Cho, Hyun-Seob;Oh, Myoung-Kwan
    • Proceedings of the KAIS Fall Conference
    • /
    • 2010.11a
    • /
    • pp.332-336
    • /
    • 2010
  • The intent of this paper is to describe a neural network structure called the multi-dynamic neural network (MDNN) and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the MDNN, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the MDNN.

  • PDF