• Title/Summary/Keyword: optimal learning

Search results: 1,242

Single nucleotide polymorphism marker combinations for classifying Yeonsan Ogye chicken using a machine learning approach

  • Cho, Eunjin; Cho, Sunghyun; Kim, Minjun; Ediriweera, Thisarani Kalhari; Seo, Dongwon; Lee, Seung-Sook; Cha, Jihye; Jin, Daehyeok; Kim, Young-Kuk; Lee, Jun Heon
    • Journal of Animal Science and Technology / v.64 no.5 / pp.830-841 / 2022
  • Genetic analysis has great potential as a tool to differentiate between species and breeds of livestock. In this study, optimal combinations of single nucleotide polymorphism (SNP) markers for discriminating the Yeonsan Ogye chicken (Gallus gallus domesticus) breed were identified using high-density 600K SNP array data. In 3,904 individuals from 198 chicken breeds, SNP markers specific to the target population were discovered through a case-control genome-wide association study (GWAS) and filtered based on linkage disequilibrium blocks. Significant SNP markers were then selected by feature selection with two machine learning algorithms: Random Forest (RF) and AdaBoost (AB). The resulting optimal marker combinations of 38 (RF) and 43 (AB) SNPs discriminated the Yeonsan Ogye population with 100% accuracy. Hence, the GWAS and machine learning models used in this study can be efficiently applied to identify optimal combinations of multiple SNP markers for discriminating target populations.
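
As a rough illustration of the marker-selection step, the sketch below ranks SNPs by Random Forest feature importance on synthetic genotypes; the data, marker counts, and cutoff are hypothetical stand-ins, not the study's actual pipeline.

```python
# Sketch: rank SNP markers by Random Forest feature importance.
# Genotypes are coded 0/1/2 (minor-allele counts); all data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 50
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
y = rng.integers(0, 2, size=n_samples)      # 1 = target breed (case), 0 = control
X[y == 1, :5] += 1.0                        # make the first 5 SNPs informative

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
top_markers = ranked[:10]                   # candidate marker combination
```

In the study's setting, such a ranked subset would then be validated as a breed-discriminating marker combination.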

A Study on DRL-based Efficient Asset Allocation Model for Economic Cycle-based Portfolio Optimization

  • Jung, Nak Hyun; Oh, Taeyeon; Kim, Kang Hee
    • Journal of Korean Society for Quality Management / v.51 no.4 / pp.573-588 / 2023
  • Purpose: This study presents a research approach that utilizes deep reinforcement learning to construct optimal portfolios based on the business cycle for stocks and other assets. The objective is to develop effective investment strategies that adapt to the varying returns of assets in accordance with the business cycle. Methods: In this study, a diverse set of time series data, including stocks, is collected and utilized to train a deep reinforcement learning model. The proposed approach optimizes asset allocation based on the business cycle, particularly by gathering data for different states such as prosperity, recession, depression, and recovery and constructing portfolios optimized for each phase. Results: Experimental results confirm the effectiveness of the proposed deep reinforcement learning-based approach in constructing optimal portfolios tailored to the business cycle. The utility of optimizing portfolio investment strategies for each phase of the business cycle is demonstrated. Conclusion: This paper contributes to the construction of optimal portfolios based on the business cycle using a deep reinforcement learning approach, providing investors with effective investment strategies that simultaneously seek stability and profitability. As a result, investors can adopt stable and profitable investment strategies that adapt to business cycle volatility.
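
As a toy illustration of phase-conditioned allocation, the sketch below replaces the paper's deep network with a bandit-style Q-table over four cycle phases; the portfolios and mean returns are made-up assumptions, not the study's data.

```python
# Sketch: learn which portfolio works best in each business-cycle phase.
# A Q-table stands in for the deep RL model; returns are synthetic.
import numpy as np

phases = ["prosperity", "recession", "depression", "recovery"]
portfolios = ["stock-heavy", "bond-heavy", "balanced"]
# Hypothetical mean returns of each portfolio in each phase.
mean_ret = np.array([[0.08, 0.02, 0.05],    # prosperity
                     [-0.04, 0.03, 0.00],   # recession
                     [-0.08, 0.04, -0.02],  # depression
                     [0.06, 0.02, 0.05]])   # recovery

rng = np.random.default_rng(1)
Q = np.zeros((4, 3))
alpha = 0.1
for _ in range(5000):
    s = rng.integers(0, 4)                  # observe the current phase
    a = rng.integers(0, 3)                  # explore allocations uniformly
    r = rng.normal(mean_ret[s, a], 0.01)    # noisy realized return
    Q[s, a] += alpha * (r - Q[s, a])        # running estimate of phase return

best = {phases[s]: portfolios[int(np.argmax(Q[s]))] for s in range(4)}
```

The learned table maps each phase to the allocation with the highest estimated return, mirroring the paper's idea of phase-specific optimal portfolios.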

Search of Optimal Path and Implementation Using a Network-Based Reinforcement Learning Algorithm and Sharing of System Information

  • Min, Seong-Joon; Oh, Kyung-Seok; Ahn, June-Young; Heo, Hoon
    • Proceedings of the KIEE Conference / 2005.10b / pp.174-176 / 2005
  • This paper studies a process in which information learned through interaction between each system and its environment is renewed by sharing it among individuals over a network. In a previous study, map information about free space was learned using a reinforcement learning algorithm, enabling each individual to construct an optimal action policy and thereby obtain an optimal path. Here, a decision process is added that selects the best path by comparing the paths found by the individuals connected to the network, and the information about the finally chosen path is updated. A method is proposed in which each system renews its own information by sharing individual data via the network, and the data are enriched by sharing information across many maps rather than a single map. Numerical simulation is conducted to confirm the proposed concept, and experiments with micro-mouse robots that integrate and compare information between individuals are carried out on various types of maps, with successful results.
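
The path-learning step can be sketched as plain Q-learning on a small grid; the 4x4 obstacle-free maze, rewards, and learning rates below are illustrative assumptions, not the paper's setup.

```python
# Sketch: Q-learning for shortest-path search on a 4x4 grid,
# start (0,0), goal (3,3); a small step cost drives short paths.
import numpy as np

N = 4
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, 4))
goal = (N - 1, N - 1)
rng = np.random.default_rng(2)
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):
    s = (0, 0)
    while s != goal:
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        ns = (min(max(s[0] + actions[a][0], 0), N - 1),
              min(max(s[1] + actions[a][1], 0), N - 1))
        r = 1.0 if ns == goal else -0.01       # reward at goal, cost per step
        Q[s][a] += alpha * (r + gamma * np.max(Q[ns]) * (ns != goal) - Q[s][a])
        s = ns

# Greedy rollout of the learned policy yields the optimal path.
path, s = [(0, 0)], (0, 0)
while s != goal and len(path) < 50:
    a = int(np.argmax(Q[s]))
    s = (min(max(s[0] + actions[a][0], 0), N - 1),
         min(max(s[1] + actions[a][1], 0), N - 1))
    path.append(s)
```

Several such agents could then exchange their greedy paths over a network and keep the shortest, which is the sharing step the paper adds.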


Optimal EEG Locations for EEG Feature Extraction with Application to User's Intention Using a Robust Neuro-Fuzzy System in BCI

  • Lee, Chang Young; Aliyu, Ibrahim; Lim, Chang Gyoon
    • Journal of Integrative Natural Science / v.11 no.4 / pp.167-183 / 2018
  • Electroencephalogram (EEG) recording provides a new way to support human-machine communication and an opportunity to analyze the neuro-dynamics of human cognition. Machine learning is a powerful tool for EEG classification and can compensate for the high variability of EEG when analyzing data in real time. However, the optimal EEG electrode locations must be prioritized in order to extract the most relevant features from brain wave data. In this paper, we propose an intelligent system model that extracts EEG features by learning the optimal electrode locations for a specific problem. The proposed system is basically a fuzzy system that structurally uses a neural network. A fuzzy clustering method is used to determine the optimal number of fuzzy rules from the features extracted from the EEG data. The parameters and weight values found while determining the number of rules must then be tuned for optimization during learning, and genetic algorithms are used to obtain the optimized parameters. Through various experiments on EEG data for four right-arm movements, we present useful results obtained with the optimal number of rules and non-symmetric membership functions.
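
The rule-determination step relies on fuzzy clustering; below is a minimal fuzzy c-means sketch in NumPy, with 2-D toy points standing in for EEG features (the cluster count and data are assumptions, not the paper's).

```python
# Sketch: fuzzy c-means clustering, the kind of step used to set the
# number of fuzzy rules; each cluster center seeds one rule.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # closer center => larger membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),      # synthetic feature cluster 1
               rng.normal(3, 0.3, (50, 2))])     # synthetic feature cluster 2
centers, U = fuzzy_cmeans(X, c=2)                # two clusters -> two rules
```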

Deep reinforcement learning for optimal life-cycle management of deteriorating regional bridges using double-deep Q-networks

  • Lei, Xiaoming; Dong, You
    • Smart Structures and Systems / v.30 no.6 / pp.571-582 / 2022
  • Optimal life-cycle management is a challenging issue for deteriorating regional bridges. Due to the complexity of regional bridge structural conditions and the large number of possible inspection and maintenance actions, decision-makers generally choose traditional passive management strategies, which are less efficient and less cost-effective. This paper suggests a deep reinforcement learning framework employing double-deep Q-networks (DDQNs) to improve the life-cycle management of deteriorating regional bridges. It produces optimal maintenance plans under restrictions so as to maximize maintenance cost-effectiveness to the greatest extent possible, and the DDQN method handles the overestimation of Q-values seen in the original (Nature) DQN. This study also identifies regional bridge deterioration characteristics and the consequences of scheduled maintenance from years of inspection data. To validate the proposed method, a case study containing hundreds of bridges is used to develop optimal life-cycle management strategies. The optimized solutions recommend fewer replacement actions and prefer preventative repair actions when bridges are damaged or expected to be damaged. By employing the optimal regional life-cycle maintenance strategies, bridge conditions can be kept at a good level. Compared to the Nature DQN, DDQNs offer a scheme with fewer low-condition bridges and a more cost-effective life-cycle management plan.
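
The double-DQN target that mitigates Q-value overestimation can be shown in a few lines: the online network chooses the next action and the target network scores it. The Q-value arrays below are illustrative stand-ins for the two networks.

```python
# Sketch: vanilla DQN target vs. double-DQN target for one transition.
import numpy as np

q_online = np.array([[1.0, 5.0, 2.0]])    # online net's Q(s', a) estimates
q_target = np.array([[1.5, 2.0, 6.0]])    # target net's Q(s', a) estimates
reward, gamma = 1.0, 0.99

# Vanilla DQN: max over the target net (prone to overestimation).
y_dqn = reward + gamma * q_target.max(axis=1)

# Double DQN: action chosen by the online net, valued by the target net.
a_star = q_online.argmax(axis=1)
y_ddqn = reward + gamma * q_target[np.arange(1), a_star]
```

Because the selecting and evaluating networks disagree here, the DDQN target (2.98) is far below the vanilla target (6.94), which is exactly the overestimation the paper's framework avoids.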

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk; Kim, In-Cheol
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.149.5-149 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed rewards, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...
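
A minimal sketch of the modular idea, assuming two hypothetical modules (attack and defense) whose Q-values are combined by a weighted sum for action selection; the states, rewards, and weights are illustrative, not the paper's design.

```python
# Sketch: modular Q-learning where each module keeps its own Q-table
# and positioning actions are chosen from a weighted sum of module values.
import numpy as np

n_states, n_actions = 5, 3
rng = np.random.default_rng(4)
Q_attack = np.zeros((n_states, n_actions))
Q_defense = np.zeros((n_states, n_actions))
alpha, gamma = 0.2, 0.9

def update(Q, s, a, r, ns):
    # Standard model-free Q-learning update; no environment model needed.
    Q[s, a] += alpha * (r + gamma * Q[ns].max() - Q[s, a])

for _ in range(3000):
    s, a = int(rng.integers(n_states)), int(rng.integers(n_actions))
    ns = int(rng.integers(n_states))
    update(Q_attack, s, a, 1.0 if a == 0 else 0.0, ns)   # attack favors action 0
    update(Q_defense, s, a, 1.0 if a == 1 else 0.0, ns)  # defense favors action 1

def act(s, w_attack, w_defense):
    # Combine module opinions; weights shift with the game situation.
    return int(np.argmax(w_attack * Q_attack[s] + w_defense * Q_defense[s]))
```

Shifting the weights between modules is one way an agent's positioning policy could adapt to the changing game state.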


A Study on Virtual Tooth Image Generation Using Deep Learning - Based on the Number of Training Epochs

  • Bae, EunJeong; Jeong, Junho; Son, Yunsik; Lim, JoonYeon
    • Journal of Technologic Dentistry / v.42 no.1 / pp.1-8 / 2020
  • Purpose: Among virtual teeth generated by a Deep Convolutional Generative Adversarial Network (DCGAN), the optimal data were analyzed with respect to the number of training epochs. Methods: We extracted 50 mandibular first molar occlusal surfaces and trained a DCGAN for 4,000 epochs. The generated images were saved every 50 epochs and evaluated on a 5-point Likert scale according to five classification criteria. Results were analyzed by one-way ANOVA and Tukey HSD post hoc analysis (α = 0.05). Results: Scores were highest (83.90±6.32) in group 3 (epochs 2,050-3,000), differing statistically significantly from group 1 (epochs 50-1,000) and group 2 (epochs 1,050-2,000). Conclusion: Since the quality of the generated virtual teeth differs with the number of training epochs, the training-epoch range needs to be analyzed in various ways.
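
The one-way ANOVA used to compare the score groups can be computed directly; the three samples below are made-up stand-ins for the Likert scores, not the study's data.

```python
# Sketch: one-way ANOVA F statistic across three epoch groups, in NumPy.
import numpy as np

rng = np.random.default_rng(5)
group1 = rng.normal(70, 5, 30)   # e.g. epochs 50-1,000 (synthetic scores)
group2 = rng.normal(75, 5, 30)   # e.g. epochs 1,050-2,000
group3 = rng.normal(84, 5, 30)   # e.g. epochs 2,050-3,000

groups = [group1, group2, group3]
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)   # large F => group effect
```

A large F relative to the critical value (about 3.1 at α = 0.05 here) indicates the group means differ, after which a post hoc test such as Tukey HSD locates the differing pairs.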

IRSML: An intelligent routing algorithm based on machine learning in software defined wireless networking

  • Duong, Thuy-Van T.; Binh, Le Huu
    • ETRI Journal / v.44 no.5 / pp.733-745 / 2022
  • In software-defined wireless networking (SDWN), optimal routing is one of the effective solutions for improving performance. Routing is done by many different methods, most commonly by building optimal routing metrics through an integer linear programming (ILP) problem. These methods often focus on only one routing objective, such as minimizing the packet blocking probability (PBP), minimizing end-to-end delay (EED), or maximizing network throughput; it is difficult to consider multiple objectives concurrently in one routing algorithm. In this paper, we investigate the application of machine learning to routing control in SDWN and propose an intelligent routing algorithm that improves network performance by optimizing multiple routing objectives. Our idea is to combine supervised learning (SL) and reinforcement learning (RL) methods to discover new routes. SL is used to predict the performance metrics of the links, including EED, quality of transmission (QoT), and PBP. Routing itself is done by the RL method: we use the Q-value in the fundamental equation of RL to store the PBP, which is then used for route selection. Concurrently, the learning rate coefficient is flexibly changed to enforce the routing constraints, namely QoT and EED, during learning. Our performance evaluations based on OMNeT++ show that the proposed algorithm significantly improves network performance in terms of QoT, EED, packet delivery ratio, and network throughput compared with other well-known routing algorithms.
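
The idea of storing the PBP in the Q-value and selecting the route that minimizes it can be sketched as follows; the routes, true blocking probabilities, and decaying learning rate are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: Q-values hold a running estimate of each route's packet
# blocking probability (PBP); route selection minimizes the estimate.
import numpy as np

routes = ["A-B-D", "A-C-D", "A-B-C-D"]
true_pbp = np.array([0.10, 0.03, 0.20])    # hypothetical per-route PBP
Q = np.zeros(3)                            # Q-value = estimated PBP
counts = np.zeros(3)
rng = np.random.default_rng(6)

for _ in range(3000):
    r = int(rng.integers(3))               # probe a route
    blocked = rng.random() < true_pbp[r]   # observe one blocking event
    counts[r] += 1
    lr = 1.0 / counts[r]                   # learning rate decays with visits
    Q[r] += lr * (float(blocked) - Q[r])   # running PBP estimate

best_route = routes[int(np.argmin(Q))]
```

In the full algorithm the learning rate would additionally be modulated to enforce the QoT and EED constraints during learning.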

Optimal Learning of Fuzzy Neural Network Using Particle Swarm Optimization Algorithm

  • Kim, Dong-Hwa; Cho, Jae-Hoon
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.421-426 / 2005
  • Fuzzy logic, neural networks, and fuzzy-neural networks play an important role as key technologies of linguistic modeling for intelligent control and decision making in complex systems. Fuzzy-neural network (FNN) learning represents one of the most effective algorithms for building such linguistic models. This paper proposes a particle swarm optimization algorithm-based optimal learning fuzzy-neural network (PSOA-FNN). The proposed scheme is a fuzzy-neural network structure that can handle linguistic knowledge by tuning the membership functions of the fuzzy logic with a particle swarm optimization algorithm. The learning algorithm of the PSOA-FNN is composed of two phases: the first finds the initial membership functions of the fuzzy neural network model, and the second uses the particle swarm optimization algorithm to tune the membership functions of the proposed model.
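
A minimal PSO sketch of the membership-function tuning phase, fitting the (center, width) of a single Gaussian membership function to a target; the swarm settings and target function are assumptions, not the paper's FNN.

```python
# Sketch: particle swarm optimization tuning a Gaussian membership
# function's center c and width s to match target membership values.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-5, 5, 101)
target = np.exp(-((x - 1.0) ** 2) / (2 * 0.8 ** 2))   # "true" membership

def loss(p):
    c, s = p
    mu = np.exp(-((x - c) ** 2) / (2 * s ** 2))
    return ((mu - target) ** 2).mean()

n, w, c1, c2 = 20, 0.7, 1.5, 1.5                 # swarm size, inertia, pulls
pos = rng.uniform([-4, 0.1], [4, 3], (n, 2))     # particles = (center, width)
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
g = pbest[np.argmin(pbest_val)].copy()           # global best particle

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()
```

In the FNN setting, each particle would instead encode the parameters of all membership functions, with the network's output error as the loss.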


Optimal Learning of Neo-Fuzzy Structure Using Bacteria Foraging Optimization

  • Kim, Dong-Hwa
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.1716-1722 / 2005
  • Fuzzy logic, neural networks, and fuzzy-neural networks play an important role as key technologies of linguistic modeling for intelligent control and decision making in complex systems. Fuzzy-neural network (FNN) learning represents one of the most effective algorithms for building such linguistic models. This paper proposes a bacteria foraging algorithm-based optimal learning fuzzy-neural network (BA-FNN). The proposed scheme is a fuzzy-neural network structure that can handle linguistic knowledge by tuning the membership functions of the fuzzy logic with a bacteria foraging algorithm. The learning algorithm of the BA-FNN is composed of two phases: the first finds the initial membership functions of the fuzzy neural network model, and the second uses the bacteria foraging algorithm to tune the membership functions of the proposed model.
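
A minimal bacterial foraging sketch (chemotaxis only: tumble to a random direction, then swim while the objective keeps improving) applied to a hypothetical two-parameter tuning error; the population size, step length, and error surface are illustrative assumptions.

```python
# Sketch: bacterial foraging chemotaxis minimizing a toy tuning error
# with optimum at (1.0, 0.8), standing in for membership-function tuning.
import numpy as np

def objective(p):
    # Hypothetical tuning-error surface (e.g. membership center and width).
    return (p[0] - 1.0) ** 2 + (p[1] - 0.8) ** 2

rng = np.random.default_rng(8)
n_bacteria, steps, swim_len, C = 10, 100, 4, 0.1
pop = rng.uniform(-3, 3, (n_bacteria, 2))

for _ in range(steps):                    # chemotactic loop
    for i in range(n_bacteria):
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)            # tumble: pick a random unit direction
        for _ in range(swim_len):         # swim while the cost keeps dropping
            trial = pop[i] + C * d
            if objective(trial) < objective(pop[i]):
                pop[i] = trial
            else:
                break

best = pop[np.argmin([objective(p) for p in pop])]
```

The full algorithm adds reproduction and elimination-dispersal steps on top of this chemotactic loop.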
