• Title/Summary/Keyword: Learning/Training Algorithms


Intelligent Control Algorithm for the Adjustment Process During Electronics Production (전자제품생산의 조정고정을 위한 지능형 제어알고리즘)

  • 장석호;구영모;고택범;우광방
    • Journal of Institute of Control, Robotics and Systems / v.4 no.4 / pp.448-457 / 1998
  • A neural network based control algorithm with fuzzy compensation is proposed for automated adjustment in the production of electronic end-products. The adjustment process tunes variable devices so that products meet their specified performance criteria before packing. A camcorder is considered as the target product, and the required test and adjustment system is developed. The system consists of a neural network controller (NNC), a sub-NNC, and an auxiliary algorithm based on fuzzy logic. The network is trained on the errors between the outputs of the real system and the network, as well as on the errors between their rates of change (a minimal training sketch follows this entry). The control algorithm is derived to speed up the learning dynamics, avoid local minima at higher energy levels, and converge to the global minimum at a lower energy level. Many unexpected problems arising in the application to the real system are resolved by the auxiliary algorithms. The adjustments of multiple items are related to each other, but no single item dominates overall performance. The experimental results show that the proposed method performs effectively and offers several advantages: a simple architecture, easy extraction of training data without expert knowledge, adaptation to an unstable system in which the input-output properties of individual products differ slightly, and wide applicability to other similar adjustment processes.

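Below is a minimal sketch, in the spirit of the training signal the abstract describes: a small network updated from the output error and the error's rate of change. The plant model, layer sizes, and gains are illustrative assumptions, not the paper's NNC/sub-NNC architecture, and the fuzzy compensation is omitted.

```python
# Sketch only: delta-rule-style updates driven by the output error and its
# rate of change, as the abstract describes. Everything numeric here is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (8, 2))   # input: [setpoint, previous error]
W2 = rng.normal(0.0, 0.1, (1, 8))
lr, target, prev_err = 0.05, 0.5, 0.0

def plant(u):
    # Stand-in for the device under adjustment: an unknown nonlinearity.
    return np.tanh(0.8 * u) + 0.1 * u

for step in range(300):
    x = np.array([target, prev_err])
    h = np.tanh(W1 @ x)                 # hidden layer
    u = float(W2 @ h)                   # control action
    err = target - plant(u)             # error vs. the real system's output
    d_err = err - prev_err              # changing rate of the error
    signal = err + 0.5 * d_err          # combined training signal
    W2 += lr * signal * h[None, :]
    W1 += lr * signal * (W2.T * (1.0 - h[:, None] ** 2)) @ x[None, :]
    prev_err = err

print(round(err, 4))                    # small residual error after training
```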

Current Status and Future Direction of Artificial Intelligence in Healthcare and Medical Education (의료분야에서 인공지능 현황 및 의학교육의 방향)

  • Jung, Jin Sup
    • Korean Medical Education Review / v.22 no.2 / pp.99-114 / 2020
  • The rapid development of artificial intelligence (AI), including deep learning, has led to technologies that may assist in the diagnosis and treatment of diseases, prediction of disease risk and prognosis, health index monitoring, drug development, and healthcare management and administration. However, for AI technology to improve the quality of medical care, technical problems and the efficacy of algorithms should be evaluated in real clinical environments rather than in the environments in which the algorithms were developed. Further consideration should be given to whether these models can improve the quality of medical care and the clinical outcomes of patients. In addition, regulatory systems to secure the safety of AI medical technology, the ethical and legal issues raised by the proliferation of AI, and the impact on the relationship with patients also need to be addressed. Systematic training of healthcare personnel is needed to enable adaptation to the rapid changes in the healthcare environment. An overall review and revision of the undergraduate medical curriculum is required to cultivate the extraction of significant information from rapidly expanding medical data, data science literacy, empathy and compassion for patients, and communication among various healthcare providers. Specialized postgraduate AI education programs for each medical specialty are needed to ensure proper use of AI models in clinical practice.

A Classification Analysis using Bayesian Neural Network (베이지안 신경망을 이용한 분류분석)

  • Hwang, Jin-Soo;Choi, Seong-Yong;Jun, Hong-Suk
    • Journal of the Korean Data and Information Science Society / v.12 no.2 / pp.11-25 / 2001
  • There are several algorithms for classifying the relations, patterns, and rules that exist in data. We learn to classify objects on the basis of instances presented to us, not by being given a set of classification rules. Bayesian learning uses a probability distribution to express knowledge about unknown parameters and updates that knowledge by the laws of probability as evidence is gathered from data. Neural network models, in turn, are designed to predict an unknown category or quantity from known attributes through training. In this paper, we compare the misclassification error rates of the Bayesian neural network method with those of other classification algorithms, CHAID, CART, and QUEST, on several data sets (a minimal sketch of such a comparison follows this entry).

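A minimal sketch of the kind of comparison the paper reports: estimating misclassification error rates of a neural-network classifier against a tree-based method. scikit-learn's DecisionTreeClassifier stands in for CART, CHAID and QUEST have no scikit-learn implementation, and the Bayesian treatment of the network weights is omitted, so this only illustrates the evaluation protocol.

```python
# Compare cross-validated misclassification error rates of two classifiers.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for name, clf in [
    ("neural net", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0)),
    ("CART-style tree", DecisionTreeClassifier(random_state=0)),
]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: misclassification error = {1 - acc:.3f}")
```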

Automatic Dataset Generation of Object Detection and Instance Segmentation using Mask R-CNN (Mask R-CNN을 이용한 물체인식 및 개체분할의 학습 데이터셋 자동 생성)

  • Jo, HyunJun;Kim, Dawit;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.31-39 / 2019
  • A robot usually adopts artificial neural network (ANN)-based object detection and instance segmentation algorithms to recognize objects, but creating datasets for these algorithms incurs high labeling costs because the datasets must be labeled manually. To lower this cost, a new scheme is proposed that automatically generates training images and labels them for specific objects. The scheme uses an instance segmentation algorithm trained to produce masks of unknown objects, so that the masks can be obtained in a simple environment. RGB images of the objects are extracted using these masks, and only the object classes need to be labeled under human supervision. The object images are then synthesized with various background images to create new images, and the synthesized images are labeled automatically using the masks and the previously entered object classes (see the compositing sketch below). In addition, human intervention is further reduced by using a robot arm to collect the object images. Experiments show that instance segmentation trained with the proposed method performs on par with training on a real dataset, while the time required to generate the dataset is significantly reduced.
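
A minimal sketch of the compositing step described above: paste a masked object crop onto a background image and derive its label automatically from the mask. The single-object case, array shapes, and label format are assumptions for illustration.

```python
# Composite a masked object onto a background and emit its label for free.
import numpy as np

def composite(background, obj_rgb, obj_mask, top, left, class_id):
    """Paste obj_rgb (H,W,3) where obj_mask (H,W) is True; return image+label."""
    out = background.copy()
    h, w = obj_mask.shape
    region = out[top:top + h, left:left + w]   # view into the output image
    region[obj_mask] = obj_rgb[obj_mask]
    ys, xs = np.nonzero(obj_mask)
    # Label derived from the mask itself: (class, x_min, y_min, x_max, y_max).
    label = (class_id, int(left + xs.min()), int(top + ys.min()),
             int(left + xs.max()), int(top + ys.max()))
    return out, label

bg = np.zeros((480, 640, 3), dtype=np.uint8)
obj = np.full((64, 64, 3), 200, dtype=np.uint8)
mask = np.ones((64, 64), dtype=bool)
img, label = composite(bg, obj, mask, top=100, left=200, class_id=3)
print(label)   # (3, 200, 100, 263, 163)
```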

A Deep-Learning Based Automatic Detection of Craters on Lunar Surface for Lunar Construction (달기지 건설을 위한 딥러닝 기반 달표면 크레이터 자동 탐지)

  • Shin, Hyu Soung;Hong, Sung Chul
    • KSCE Journal of Civil and Environmental Engineering Research / v.38 no.6 / pp.859-865 / 2018
  • Construction of infrastructure and a base station on the moon should be linked with regions where construction materials and energy can be supplied on site. It is therefore necessary to detect craters on the lunar surface and to gather their topographic information in advance, since craters form permanently shadowed regions (PSR) in which rich ice deposits may be available. In this study, an effective method for the automatic detection of lunar craters is developed using a recent deep-learning algorithm. Training uses 90,000 still images taken by NASA's LRO orbiter, together with label data giving the position and size of some of the craters in each image. Faster R-CNN, a recent deep-learning detection algorithm, is applied for training (a fine-tuning sketch follows below). The trained model was then used to detect craters it had not been trained on. The results show that much of the erroneous position and size information in the NASA labels is automatically revised and that many unlabeled craters are detected. It should therefore be possible to automatically produce regional maps of crater density and topographic information on the moon, which may change over time and are highly valuable in engineering considerations for lunar construction.
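
A sketch of fine-tuning a Faster R-CNN detector for a single crater class with torchvision; the LRO image set and NASA label wiring are omitted, and the box coordinates below are dummy values.

```python
# Fine-tune torchvision's Faster R-CNN for background + one "crater" class.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# One dummy training example; real training would loop over the LRO images.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)   # dict of detection losses
loss = sum(losses.values())
loss.backward()
```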

Research on Optimization Strategies for Random Forest Algorithms in Federated Learning Environments (연합 학습 환경에서의 랜덤 포레스트 알고리즘 최적화 전략 연구)

  • InSeo Song;KangYoon Lee
    • The Journal of Bigdata / v.9 no.1 / pp.101-113 / 2024
  • Federated learning has garnered attention as an efficient way to train machine learning models in a distributed environment while maintaining data privacy and security. This study proposes a novel FedRFBagging algorithm to optimize the performance of random forest models in such environments. By dynamically adjusting the trees of local random forest models based on client-specific data characteristics, the proposed approach reduces communication costs and achieves high prediction accuracy even with numerous clients, adapting to various data conditions and significantly enhancing model stability and training speed. A random forest consists of many decision trees, and transmitting all of them to the server in a federated setting makes communication overhead grow rapidly with the number of clients, which is impractical; differences in data distribution among clients can also lead to quality imbalances among the trees. To address this, the FedRFBagging algorithm selects only the highest-performing trees from each client for transmission to the server, which then reselects trees based on impurity values to construct the optimal global model (a minimal sketch of this selection follows below). This reduces communication overhead while maintaining high prediction performance across diverse data distributions. Because the data characteristics of individual clients may still differ from the global model, clients further train additional trees on top of the global model to perform local optimizations tailored to their data, improving overall prediction accuracy and adapting to changing data distributions. Our experiments demonstrate that FedRFBagging effectively addresses the communication-cost and performance issues of random forest models in federated learning environments.
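
A minimal sketch of the tree-selection idea: each client uploads only its best-scoring trees, and the server ranks the pooled trees by mean node impurity to form the global ensemble. The scoring rules, counts, and data below are illustrative assumptions, not the paper's exact FedRFBagging procedure.

```python
# Client-side top-k tree selection, then server-side reselection by impurity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def top_trees(forest, X_val, y_val, k):
    # Client side: keep the k trees with the best validation accuracy
    # (training data is reused as validation here for brevity).
    scored = sorted(forest.estimators_,
                    key=lambda t: t.score(X_val, y_val), reverse=True)
    return scored[:k]

def mean_impurity(tree):
    # Server side: rank trees by average node impurity (lower is better).
    return float(tree.tree_.impurity.mean())

uploaded = []
for seed in range(3):                       # three simulated clients
    X, y = make_classification(n_samples=300, random_state=seed)
    rf = RandomForestClassifier(n_estimators=20, random_state=seed).fit(X, y)
    uploaded.extend(top_trees(rf, X, y, k=5))   # each uploads 5 of 20 trees

global_model = sorted(uploaded, key=mean_impurity)[:10]
print(len(global_model), "trees in the global ensemble")
```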

Multi-classification Sensitive Image Detection Method Based on Lightweight Convolutional Neural Network

  • Yueheng Mao;Bin Song;Zhiyong Zhang;Wenhou Yang;Yu Lan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1433-1449 / 2023
  • In recent years, the rapid development of social networks has led to a rapid increase in the amount of information on the Internet, including a large amount of sensitive content related to pornography, politics, and terrorism. For sensitive-image detection, existing machine learning algorithms suffer from large model size, long training time, and slow detection speed when used for auditing and supervision. To detect sensitive images more accurately and quickly, this paper proposes a multi-classification sensitive image detection method based on a lightweight convolutional neural network. Starting from the EfficientNet model, the method adopts the Ghost Module idea of GhostNet and adds an SE channel-attention mechanism inside the Ghost Module for feature extraction (a sketch of such a block follows below). Experiments on the sensitive-image dataset constructed in this paper show that the proposed method reaches an accuracy of 94.46% in sensitive-information detection, higher than that of similar methods. The model is then pruned through an ablation experiment and the activation function is replaced by Hard-Swish, reducing the parameters of the original model by 54.67%; with accuracy preserved, the detection time for a single image drops from 8.88 ms to 6.37 ms. The experiments demonstrate that the proposed method improves the precision of multi-class sensitive-image identification and achieves higher accuracy than comparable algorithms with a substantially lighter model.
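
A sketch of a Ghost module with an SE channel-attention block inserted, in the spirit of the modification described above; the channel counts and reduction ratio are illustrative, not the authors' exact EfficientNet-based architecture.

```python
# Ghost module (few "real" convolutions + cheap depthwise ops) with SE attention.
import torch
import torch.nn as nn

class GhostSE(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, se_reduction=4):
        super().__init__()
        primary = out_ch // ratio
        cheap = out_ch - primary
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        # Cheap operations: depthwise 3x3 over the primary features.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, cheap, 3, padding=1, groups=primary, bias=False),
            nn.BatchNorm2d(cheap), nn.ReLU(inplace=True))
        # SE channel attention over the concatenated feature map.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // se_reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // se_reduction, out_ch, 1),
            nn.Sigmoid())

    def forward(self, x):
        y1 = self.primary(x)
        y = torch.cat([y1, self.cheap(y1)], dim=1)
        return y * self.se(y)

out = GhostSE(16, 32)(torch.rand(1, 16, 24, 24))
print(out.shape)   # torch.Size([1, 32, 24, 24])
```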

Training an Artificial Neural Network (ANN) to Control the Tap Changer of Parallel Transformers for a Closed Primary Bus

  • Sedaghati, Alireza
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.1042-1047 / 2004
  • Voltage control is an essential part of the electric energy transmission and distribution system, maintaining proper voltage limits at the consumer's terminal. Besides the generating units that provide basic voltage control, there are many additional voltage-controlling agents, e.g., shunt capacitors, shunt reactors, static VAr compensators, and regulating transformers [1], [2]. The most popular of these agents for controlling voltage levels in distribution and transmission systems is the on-load tap changer transformer, which serves two functions: energy transformation between voltage levels and voltage control. The artificial neural network (ANN) has come to be recognized as a convenient tool for controlling the on-load tap changer in distribution transformers, but its use in this area requires suitable training and testing data for performance analysis before practical application. This paper briefly describes a procedure for processing the data used to train an ANN to control the tap-changer operating decisions of parallel transformers for a closed primary bus. The data set is used to train a two-layer ANN with three different neural-network learning algorithms: standard backpropagation [3], Bayesian regularization [4], and scaled conjugate gradient [5]. Experimental results, including a performance analysis, are presented (a training sketch follows this entry).

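A sketch of training a two-layer network on tap-changer operating decisions. scikit-learn does not ship Bayesian Regularization or Scaled Conjugate Gradient, so plain SGD (standard backpropagation) and L-BFGS (a batch second-order method) stand in here, and the voltage/current features and decision labels are synthetic assumptions.

```python
# Train a small network to map bus measurements to a tap-change decision.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Assumed inputs: per-unit bus voltage and load current.
X = rng.uniform([0.90, 0.2], [1.10, 1.0], size=(500, 2))
# Assumed decision rule for the synthetic labels: 0 = raise, 1 = hold, 2 = lower.
y = np.where(X[:, 0] < 0.97, 0, np.where(X[:, 0] > 1.03, 2, 1))

for solver in ("sgd", "lbfgs"):
    net = MLPClassifier(hidden_layer_sizes=(10,), solver=solver,
                        max_iter=3000, random_state=0).fit(X, y)
    print(solver, "training accuracy:", round(net.score(X, y), 3))
```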

Nonlinear mappings of interval vectors by neural networks (신경회로망에 의한 구간 벡터의 비선형 사상)

  • 권기택;배철수
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.8 / pp.2119-2132 / 1996
  • This paper proposes four approaches for approximately realizing nonlinear mappings of interval vectors by neural networks. In the proposed approaches, the training data for the neural networks are pairs of interval input vectors and interval target output vectors. The first approach is a direct application of the standard back-propagation (BP) algorithm to pre-processed training data (a sketch follows this entry). The second approach applies two BP algorithms. The third approach extends the BP algorithm to interval input-output data, and the last approach extends the third to neural networks with interval weights and interval biases. The approaches are compared with one another by computer simulation.

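A minimal sketch of the first approach: pre-process each interval into a fixed-length vector (here its lower and upper bounds) so a standard network can be trained on interval input/target pairs. The bound encoding and the target mapping are assumptions; the paper's interval-weight extensions are not shown.

```python
# Standard BP on intervals encoded as [lower, upper] bound vectors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
lo = rng.uniform(-1.0, 1.0, size=(200, 1))
hi = lo + rng.uniform(0.0, 0.5, size=(200, 1))
X = np.hstack([lo, hi])                    # interval input as [lower, upper]
# sin is increasing on [-1, 1.5], so [sin(lo), sin(hi)] is a valid interval.
Y = np.hstack([np.sin(lo), np.sin(hi)])

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                   random_state=0).fit(X, Y)
print(net.predict([[0.1, 0.4]]))           # ≈ [[0.10, 0.39]]
```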

I-QANet: Improved Machine Reading Comprehension using Graph Convolutional Networks (I-QANet: 그래프 컨볼루션 네트워크를 활용한 향상된 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Journal of Korea Multimedia Society / v.25 no.11 / pp.1643-1652 / 2022
  • Most existing machine reading comprehension research has used recurrent neural network (RNN) and convolutional neural network (CNN) algorithms. Among them, RNNs are slow to train, and the Question Answering Network (QANet) was introduced to improve training speed. QANet is composed of CNNs and self-attention. CNNs extract semantic and syntactic information well from local context but are limited in extracting such information from global context, whereas graph convolutional networks (GCNs) extract it relatively well globally. In this paper, to take advantage of this strength, we propose I-QANet, which replaces the CNN of QANet with a GCN (a sketch of the graph-convolution building block follows below). The proposed model trained 1.2 times faster than the baseline on the Stanford Question Answering Dataset (SQuAD) and showed 0.2% higher Exact Match (EM) and 0.7% higher F1. Furthermore, on the Korean Question Answering Dataset (KorQuAD), which consists only of Korean, training was 1.1 times faster than the baseline, and EM and F1 were 0.9% and 0.7% higher, respectively.
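
A sketch of the graph-convolution building block that replaces QANet's CNN in this design: each layer propagates token features over a symmetrically normalized adjacency matrix. The token graph and sizes below are illustrative assumptions; the paper's graph construction over the corpus is omitted.

```python
# One GCN layer: D^-1/2 (A + I) D^-1/2 propagation, then linear + ReLU.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))        # add self-loops
        d = a.sum(dim=1).pow(-0.5)              # inverse sqrt of node degrees
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ x))

x = torch.rand(5, 16)                           # 5 tokens, 16-dim features
adj = (torch.rand(5, 5) > 0.5).float()          # assumed token graph
adj = ((adj + adj.t()) > 0).float()             # symmetrize
print(GCNLayer(16, 32)(x, adj).shape)           # torch.Size([5, 32])
```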