• Title/Summary/Keyword: Artificial Neural Network (인공신경망)

Theory Refinements in Knowledge-based Artificial Neural Networks by Adding Hidden Nodes (지식기반신경망에서 은닉노드삽입을 이용한 영역이론정련화)

  • Sim, Dong-Hui
    • The Transactions of the Korea Information Processing Society / v.3 no.7 / pp.1773-1780 / 1996
  • KBANN (knowledge-based artificial neural network), which combines the symbolic and numerical approaches, has been shown to be more effective than other machine learning models. However, KBANN lacks theory-refinement ability because the topology of the network cannot be altered dynamically. Although TopGen was proposed to extend KBANN in this respect, it had defects of its own, caused by linking hidden nodes to input nodes and by the use of beam search. This paper designs an algorithm that resolves these defects of TopGen by adding hidden nodes linked to next-layer nodes and by using hill-climbing search with backtracking.
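The hill-climbing-with-backtracking search named in this abstract can be sketched generically. Below is a minimal Python illustration under stated assumptions, not the paper's implementation: `evaluate` and `insert_candidates` are hypothetical callbacks standing in for the paper's network scoring and hidden-node insertion steps.

```python
def refine_topology(initial, evaluate, insert_candidates, max_steps=50):
    """Hill-climbing over network topologies, with backtracking.

    initial           -- starting network (e.g. the KBANN built from the theory)
    evaluate          -- hypothetical scorer, e.g. validation accuracy
    insert_candidates -- hypothetical generator yielding copies of a network,
                         each with one new hidden node added
    """
    best, best_score = initial, evaluate(initial)
    current, current_score = initial, best_score
    stack = []  # untried alternatives kept for backtracking
    for _ in range(max_steps):
        ranked = sorted(((evaluate(c), c) for c in insert_candidates(current)),
                        key=lambda t: t[0], reverse=True)
        improving = [sc for sc in ranked if sc[0] > current_score]
        if improving:
            # greedily follow the best improving insertion ...
            (current_score, current), *rest = improving
            stack.append(rest)  # ... but remember the runners-up
        else:
            # dead end: back up to the most recent untried alternative
            while stack and not stack[-1]:
                stack.pop()
            if not stack:
                break
            current_score, current = stack[-1].pop(0)
        if current_score > best_score:
            best_score, best = current_score, current
    return best
```

Unlike beam search, which maintains a fixed-width frontier, this single-path search commits to one insertion at a time and revisits alternatives only when progress stalls.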

RC Circuit Parameter Estimation for DC Electric Traction Substation Using Linear Artificial Neural Network Scheme (선형인공신경망을 이용한 직류 전철변전소의 RC 회로정수 추정)

  • Bae, Chang Han;Kim, Young Guk;Park, Chan Kyoung;Kim, Yong Ki;Han, Moon Seob
    • Journal of the Korean Society for Railway / v.19 no.3 / pp.314-323 / 2016
  • The overhead line voltage of a DC railway traction substation rises and falls with the acceleration and regenerative braking of the subway train loads. Suppressing this irregular fluctuation of the line voltage improves the energy efficiency of both the railway substation and the trains. This paper presents parameter-estimation schemes based on an RC circuit model for the overhead line voltage at a 1500 V DC electric railway traction substation. A linear artificial neural network with a back-propagation learning algorithm was trained using measurement data for the overhead line voltage and four feeder currents. The least-squares estimation method was configured for batch processing of these measurement data. The estimation results are presented, and their performance is analyzed through simulation on the raw data.
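As a rough illustration of the batch least-squares step described above, the following NumPy sketch fits a linear model from a line voltage and four feeder currents. The shapes, synthetic data, and coefficient values are assumptions for demonstration, not the paper's measurements.

```python
import numpy as np

# Synthetic stand-in data: four feeder currents and a 1500 V line voltage.
rng = np.random.default_rng(0)
n = 1000
feeders = rng.normal(size=(n, 4))         # assumed current samples
true_w = np.array([0.8, -0.5, 0.3, 0.1])  # assumed circuit gains
voltage = 1500.0 + feeders @ true_w + 0.01 * rng.normal(size=n)

# Batch least squares: append a bias column for the DC offset, then solve.
X = np.column_stack([feeders, np.ones(n)])
w, *_ = np.linalg.lstsq(X, voltage, rcond=None)
print("estimated gains:", w[:4], "offset:", w[4])
```

A single linear neuron trained by back-propagation on a squared-error loss converges toward this same least-squares solution, which is why the two estimation schemes can be compared directly.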

Performance Improvement Method of Fully Connected Neural Network Using Combined Parametric Activation Functions (결합된 파라메트릭 활성함수를 이용한 완전연결신경망의 성능 향상)

  • Ko, Young Min;Li, Peng Hang;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.1-10 / 2022
  • Deep neural networks are widely used to solve various problems. In a fully connected neural network, the nonlinear activation function nonlinearly transforms its input value and outputs the result. The nonlinear activation function plays an important role in solving nonlinear problems, and various nonlinear activation functions have been studied. In this study, we propose a combined parametric activation function that can improve the performance of a fully connected neural network. Combined parametric activation functions are created simply by adding parametric activation functions together. A parametric activation function is one that can be optimized in the direction of minimizing the loss function, using parameters that adjust the scale and location of the activation function according to the input data. Combining parametric activation functions creates more diverse nonlinear intervals, and their parameters can likewise be optimized to minimize the loss function. The performance of the combined parametric activation function was tested on the MNIST and Fashion MNIST classification problems, and the results confirmed that it performs better than existing nonlinear activation functions and the single parametric activation function.
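One plausible reading of the combined parametric activation function is a sum of base activations, each with trainable scale and location parameters that the loss gradient optimizes together with the network weights. The PyTorch sketch below is illustrative only; the paper's exact parameterization and base function may differ.

```python
import torch
import torch.nn as nn

class CombinedParametricActivation(nn.Module):
    """Sum of parametric activations: sum_k c_k * sigmoid(a_k * x + b_k)."""
    def __init__(self, num_terms=2):
        super().__init__()
        self.a = nn.Parameter(torch.ones(num_terms))   # input scale
        self.b = nn.Parameter(torch.zeros(num_terms))  # input location
        self.c = nn.Parameter(torch.ones(num_terms))   # output scale

    def forward(self, x):
        x = x.unsqueeze(-1)                            # (..., 1) for broadcasting
        terms = self.c * torch.sigmoid(self.a * x + self.b)
        return terms.sum(dim=-1)                       # combine the terms

# Drop-in use inside a fully connected network, e.g. for 28x28 MNIST images:
net = nn.Sequential(nn.Linear(784, 128), CombinedParametricActivation(),
                    nn.Linear(128, 10))
```

Because `a`, `b`, and `c` are `nn.Parameter`s, any optimizer that minimizes the loss also reshapes the nonlinear intervals, which is the mechanism the abstract describes.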

Inductive Learning using Theory-Refinement Knowledge-Based Artificial Neural Network (이론정련 지식기반인공신경망을 이용한 귀납적 학습)

  • Shim, Dong-Hee
    • Journal of Korea Multimedia Society / v.4 no.3 / pp.280-285 / 2001
  • Since KBANN (knowledge-based artificial neural network), which combines the inductive and analytical learning algorithms, was proposed, several methods that modify it, such as TopGen, TR-KBANN, and THRE-KBANN, have been proposed. These methods, however, can be applied only when a domain theory exists. This paper proposes an algorithm that represents a problem as a KBANN from instances alone, without a domain theory. The domain theory represented in the KBANN can then be refined by THRE-KBANN. In experiments on several inductive-learning problem domains, the algorithm performed more efficiently than C4.5.

Theory Refinement using Hidden Nodes Connected from Relevant Input Nodes in Knowledge-based Artificial Neural Network (지식기반인공신경망에서 관련있는 입력노드만 연계된 은닉노드를 이용한 영역이론정련화)

  • Shim, Dong-Hee
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2780-2785 / 1997
  • Although KBANN (knowledge-based artificial neural network) has been shown to be more effective than other machine learning algorithms, it lacks theory-refinement capability because the topology of the network cannot be altered dynamically. Although the TopGen algorithm was proposed to extend KBANN in this respect, it had defects of its own, caused by connecting hidden nodes from all input nodes and by the use of beam search. An algorithm is proposed that resolves these defects of TopGen by adding hidden nodes connected only from the relevant input nodes and by using hill-climbing search with backtracking.

Application of AI technology for various disaster analysis (다양한 재해분석을 위한 AI 기술적용 사례 소개)

  • Giha Lee;Xuan-Hien Le;Van-Giang Nguyen;Van-Linh Nguyen;Sungho Jung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.97-97 / 2023
  • The use of AI techniques such as artificial neural networks (ANN), machine learning (ML), and deep learning (DL) has recently been increasing in the disaster field, and their applicability has been demonstrated in areas such as facility safety management linked to sensing data, disaster monitoring linked to remote sensing (algal blooms, landslides, wildfires, etc.), prediction of hydrological time series (water level, discharge, etc.), correction and prediction of radar and satellite precipitation data, and leak prediction in water supply and sewer networks. This study introduces disaster-analysis cases using ML, DL, and physics-informed neural networks (PINNs) and discusses their applicability and limitations. The main cases presented are: (1) detection of disaster-damaged areas (the Uljin wildfire) using SAR imagery and machine learning, (2) identification of landslide-risk areas (the Inje landslide) using national digital information, (3) correction and prediction of satellite precipitation data and runoff analysis using ML and DL techniques, and (4) evaluation of the applicability of PINNs to numerical analysis for hydraulic computation (solution of the one-dimensional Saint-Venant equations). In particular, a PINN constrained to satisfy the governing (physical) equations, rather than trained only on input-output data, showed better simulation capability than a plain artificial neural network model, and its potential in numerical analysis fields such as complex hydraulic modeling is judged to be very high.
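For item (4), the defining trick of a PINN is a loss term that penalizes the residual of the governing equation at collocation points. A minimal PyTorch sketch of that composite loss is shown below for a toy one-dimensional advection equation u_t + c·u_x = 0 rather than the Saint-Venant system; the network size, wave speed, and equal loss weighting are assumptions.

```python
import torch
import torch.nn as nn

# Small network mapping (x, t) -> u(x, t); input columns assumed ordered [x, t].
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
c = 1.0  # assumed wave speed

def pde_residual(xt):
    """Residual u_t + c * u_x, computed with automatic differentiation."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    return u_t + c * u_x  # ~0 wherever the PDE is satisfied

def pinn_loss(xt_data, u_data, xt_collocation):
    data_loss = ((net(xt_data) - u_data) ** 2).mean()          # fit observations
    physics_loss = (pde_residual(xt_collocation) ** 2).mean()  # obey the PDE
    return data_loss + physics_loss  # relative weighting is a tunable choice
```

The physics term is what distinguishes a PINN from a network trained on input-output pairs alone, which matches the comparison reported in the abstract.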

Airline In-flight Meal Demand Forecasting with Neural Networks and Time Series Models (인공신경망을 이용한 항공기 기내식 수요예측의 예측력 개선 방안에 관한 연구)

  • Lee, Young-Chan;Seo, Chang-Gab
    • The Journal of Information Systems / v.10 no.2 / pp.151-164 / 2001
  • Current airline in-flight meal demand forecasting systems are known to be unable to resolve the losses caused by flight delays and over-ordering. To address this problem, this study proposes three new forecasting techniques for in-flight meal demand and compares their accuracy: a simple neural network model that uses only the in-flight meal time series as input; a hybrid neural network model that adds the forecast of a traditional time-series technique (here, exponential smoothing) as an input variable; and a hyper hybrid neural network model that further adds another highly correlated time series (here, the in-flight meal series of a similar route) as an input variable. The analysis showed the hyper hybrid neural network model to be the most accurate, indicating that adding a highly correlated time series as an input variable can improve the forecasting performance of neural-network-based demand forecasting.
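The three input sets could be assembled along the following lines; the lag length, smoothing constant, and column choices here are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def exp_smooth(y, alpha=0.3):
    """One-step-ahead exponential-smoothing forecast of a series."""
    s = np.empty_like(y, dtype=float)
    s[0] = y[0]
    for t in range(1, len(y)):
        s[t] = alpha * y[t - 1] + (1 - alpha) * s[t - 1]
    return s

def make_inputs(meals, similar_route, lags=7):
    """Build inputs for the simple, hybrid, and hyper hybrid models."""
    rows = range(lags, len(meals))
    lagged = np.array([meals[t - lags:t] for t in rows])  # past meal counts
    es = exp_smooth(meals)[lags:, None]                   # smoothing forecast
    corr = similar_route[lags:, None]                     # correlated series
    simple = lagged                                       # meals only
    hybrid = np.hstack([lagged, es])                      # + ES forecast
    hyper = np.hstack([lagged, es, corr])                 # + similar route
    return simple, hybrid, hyper
```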

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks because of their successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers, which simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem: vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not just through layers but through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
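The three convolutional ideas in this abstract map directly onto standard layers in any modern framework: a convolution implements local receptive fields with shared weights, and pooling summarizes its output. Below is a minimal PyTorch classifier for 28x28 grayscale images, offered as a generic illustration rather than an architecture from any surveyed paper.

```python
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5),  # local 5x5 receptive fields, shared weights
    nn.ReLU(),
    nn.MaxPool2d(2),                  # pooling: condense the feature maps
    nn.Conv2d(16, 32, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 10),        # a 28x28 input shrinks to 4x4 here
)
```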

Study on Q-value prediction ahead of tunnel excavation face using recurrent neural network (순환인공신경망을 활용한 터널굴착면 전방 Q값 예측에 관한 연구)

  • Hong, Chang-Ho;Kim, Jin;Ryu, Hee-Hwan;Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association / v.22 no.3 / pp.239-248 / 2020
  • Accurate rock classification helps in installing suitable support patterns. Face mapping is usually conducted to classify the rock mass using RMR (Rock Mass Rating) or Q values. There have been several attempts to predict the grade of the rock mass with deep learning, using the mechanical data of jumbo or probe drills and photographs of excavation faces, but these approaches took a long time or could not assess the rock grade ahead of the tunnel face. In this study, a method to predict the Q value ahead of the excavation face is developed using a recurrent neural network (RNN) and is verified against the Q values obtained from face mapping. Among the Q values from over 4,600 tunnel faces, 70% of the data was used for training and the rest for verification. Training was repeated while varying the number of training iterations and the number of previous excavation faces used as input. Agreement between the predicted and actual Q values was evaluated by the root mean square error (RMSE), which was lowest with 600 training iterations and two prior excavation faces. Although the results may vary with the input data set, they help in understanding how past ground conditions affect those ahead and in predicting the Q value ahead of the tunnel excavation face.
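The described setup, regressing the next face's Q value from the k most recent faces, can be sketched as follows. The window size k=2 mirrors the best case reported in the abstract, while the hidden size and the rest of the configuration are assumptions.

```python
import torch
import torch.nn as nn

class QPredictor(nn.Module):
    """RNN regressor: a window of past Q values -> Q at the next face."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, q_window):          # shape (batch, k, 1)
        out, _ = self.rnn(q_window)
        return self.head(out[:, -1])      # predict from the last time step

def windows(q_values, k=2):
    """Turn a 1-D tensor of face-mapped Q values into (window, target) pairs."""
    xs = torch.stack([q_values[i:i + k] for i in range(len(q_values) - k)])
    ys = q_values[k:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)
```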

Evaluation of Thermal Embrittlement Susceptibility in Cast Austenitic Stainless Steel Using Artificial Neural Network (인공신경망을 이용한 주조 스테인리스강의 열취화 민감도 평가)

  • Kim, Cheol;Park, Heung-Bae;Jin, Tae-Eun;Jeong, Ill-Seok
    • Transactions of the Korean Society of Mechanical Engineers A / v.28 no.4 / pp.460-466 / 2004
  • Cast austenitic stainless steel is used for several components of light water reactors, such as primary coolant piping, elbows, pump casings, and valve bodies. These components are subject to thermal aging at the reactor operating temperature. Thermal aging results in spinodal decomposition of the delta ferrite, leading to increased strength and decreased toughness. This study shows that ferrite content can be predicted with an artificial neural network. The network was trained on data of chemical composition and ferrite content using the backpropagation learning process. The ferrite contents predicted by the trained network are in good agreement with experimental values.
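The regression described here is a small supervised network trained by backpropagation; a minimal PyTorch sketch follows. The number of composition inputs, layer sizes, and optimizer settings are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

n_elements = 8  # assumed count of chemical-composition inputs (Cr, Ni, Mo, ...)
model = nn.Sequential(nn.Linear(n_elements, 16), nn.Tanh(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def train_step(composition, ferrite):
    """One backpropagation step on a batch of (composition, ferrite) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(composition), ferrite)  # squared prediction error
    loss.backward()                              # backpropagate the error
    optimizer.step()                             # gradient-descent update
    return loss.item()
```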