• Title/Summary/Keyword: machine learning applications

Search Result 538

A Study on AI-based MAC Scheduler in Beyond 5G Communication (5G 통신 MAC 스케줄러에 관한 연구)

  • Muhammad Muneeb;Kwang-Man Ko
    • Annual Conference of KIPS
    • /
    • 2024.05a
    • /
    • pp.891-894
    • /
    • 2024
  • The quest for reliability in Artificial Intelligence (AI) is increasingly urgent, especially in next-generation wireless networks. Future Beyond 5G (B5G)/6G networks will connect a huge number of devices and offer innovative services built on AI and Machine Learning tools. Wireless communications in general, and medium access control (MAC) techniques in particular, have been heavily affected by this development. This study presents the applications and services of future communication networks, details the Medium Access Control (MAC) scheduler of Beyond-5G/6G as specified by the 3rd Generation Partnership Project (3GPP), and highlights open research issues that have yet to be addressed. It also provides an overview of how AI can improve next-generation communication by solving MAC-layer issues such as resource scheduling and queueing. We select C-V2X as the use case for implementing our proposed MAC scheduling model.
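As an illustration of the MAC-layer scheduling problem the abstract refers to, the following is a minimal sketch of a score-based downlink scheduler in Python; the proportional-fair scoring, the `User` fields, and the single-carrier resource-block loop are illustrative assumptions, not the scheduling model proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    channel_rate: float   # instantaneous achievable rate (Mbps), assumed known
    avg_rate: float       # long-term average served rate (Mbps)
    queue_bits: int       # backlog waiting in the MAC queue

def schedule(users, num_resource_blocks):
    """Assign each resource block to the user with the best
    proportional-fair score among users that still have data queued."""
    allocation = {u.name: 0 for u in users}
    for _ in range(num_resource_blocks):
        backlogged = [u for u in users if u.queue_bits > 0]
        if not backlogged:
            break
        # Proportional-fair metric: instantaneous rate / average served rate.
        best = max(backlogged, key=lambda u: u.channel_rate / max(u.avg_rate, 1e-9))
        allocation[best.name] += 1
        best.queue_bits = max(0, best.queue_bits - int(best.channel_rate * 1e3))
        best.avg_rate = 0.9 * best.avg_rate + 0.1 * best.channel_rate
    return allocation

if __name__ == "__main__":
    vehicles = [User("v1", 12.0, 4.0, 50_000), User("v2", 6.0, 1.0, 50_000)]
    print(schedule(vehicles, num_resource_blocks=10))
```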

Research on Hot-Threshold based dynamic resource management in the cloud

  • Gun-Woo Kim;Seok-Jae Moon;Byung-Joon Park
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.3
    • /
    • pp.471-479
    • /
    • 2024
  • Recent advancements in cloud computing have significantly increased its importance across various sectors. As sensors, devices, and customer demands have become more diverse, workloads have become increasingly variable and difficult to predict. Cloud providers, connected to multiple physical servers to support a range of applications, often over-provision resources to handle peak workloads. This approach results in inconsistent services, imbalanced energy usage, waste, and potential violations of service level agreements. In this paper, we propose a novel engine equipped with a scheduler based on the Hot-Threshold concept, aimed at optimizing resource usage and improving energy efficiency in cloud environments. We developed this engine to employ both proactive and reactive methods. The proactive method leverages workload estimate-based provisioning, while the reactive Hot-Cold Scheduler consists of a Predictor, Solver, and Processor, which together suggest an intelligent migration flow. We demonstrate that our approach effectively addresses existing challenges in terms of cost and energy consumption. By intelligently managing resources based on past user statistics, we provide significant improvements in both energy efficiency and service consistency.
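To make the proactive/reactive split concrete, here is a minimal sketch of a Hot-Threshold style scheduler in Python; the utilization thresholds, the moving-average predictor, and the Predictor/Solver/Processor interfaces are assumptions for illustration rather than the engine described in the paper.

```python
HOT_THRESHOLD = 0.80   # assumed CPU utilization ratio marking a "hot" host
COLD_THRESHOLD = 0.30  # assumed ratio under which a host can absorb migrations

def predict_utilization(history, window=3):
    """Predictor: naive moving-average forecast of the next utilization sample."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_migrations(hosts):
    """Solver: propose VM moves from predicted-hot hosts to predicted-cold hosts.
    `hosts` maps host name -> {"history": [...], "vms": {vm: cpu_share}}."""
    forecast = {h: predict_utilization(d["history"]) for h, d in hosts.items()}
    hot = [h for h, u in forecast.items() if u > HOT_THRESHOLD]
    cold = [h for h, u in forecast.items() if u < COLD_THRESHOLD]
    plan = []
    for h in hot:
        if not cold:
            break
        # Move the smallest VM first to reduce migration cost.
        vm, share = min(hosts[h]["vms"].items(), key=lambda kv: kv[1])
        target = min(cold, key=lambda c: forecast[c])
        plan.append((vm, h, target))
        forecast[h] -= share
        forecast[target] += share
    return plan

if __name__ == "__main__":
    hosts = {
        "host-a": {"history": [0.70, 0.85, 0.90], "vms": {"vm1": 0.5, "vm2": 0.35}},
        "host-b": {"history": [0.20, 0.25, 0.20], "vms": {"vm3": 0.2}},
    }
    print(plan_migrations(hosts))  # the Processor would execute these moves
```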

Automatic Generation of Information Extraction Rules Through User-interface Agents (사용자 인터페이스 에이전트를 통한 정보추출 규칙의 자동 생성)

  • 김용기;양재영;최중민
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.447-456
    • /
    • 2004
  • Information extraction is the process of recognizing and fetching particular information fragments from a document. In order to extract information uniformly from many heterogeneous information sources, it is necessary to produce a set of information extraction rules, called a wrapper, for each source. Previous methods of information extraction can be categorized into manual and automatic wrapper generation. In the manual method, the wrapper is generated by a human expert who analyzes documents and writes rules, so its precision is very high, but it suffers from poor scalability and efficiency. In the automatic method, an agent program analyzes a set of example documents and produces a wrapper through learning. Although very scalable, this method has difficulty generating correct rules, and the generated rules are sometimes unreliable. This paper combines the manual and automatic approaches by proposing a new method of learning information extraction rules. We adopt a supervised learning scheme in which a user-interface agent obtains from the user what to extract from a document, and XML-based information extraction rules are then generated by learning from these inputs. The interface agent is used not only to generate new extraction rules but also to modify and extend existing ones to improve the precision and recall of the extraction system. A series of experiments on the system produced very promising results, and we expect the system to be applicable to practical systems such as information-mediator agents.
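As an illustration of wrapper-style extraction rules learned from user-marked examples, here is a minimal sketch in Python; the delimiter-based rule form, the XML serialization, and the function names are assumptions, not the paper's actual rule language.

```python
import re
from xml.etree.ElementTree import Element, tostring

def learn_rule(document, labeled_value, context=12):
    """Learn a simple delimiter rule: the text immediately before and after
    the value the user highlighted in the document."""
    start = document.index(labeled_value)
    prefix = document[max(0, start - context):start]
    suffix = document[start + len(labeled_value):start + len(labeled_value) + context]
    return {"prefix": prefix, "suffix": suffix}

def rule_to_xml(name, rule):
    """Serialize the learned rule in a simple XML form (illustrative schema)."""
    node = Element("rule", attrib={"field": name})
    for key, value in rule.items():
        child = Element(key)
        child.text = value
        node.append(child)
    return tostring(node, encoding="unicode")

def apply_rule(document, rule):
    """Extract every fragment enclosed by the learned delimiters."""
    pattern = re.escape(rule["prefix"]) + r"(.*?)" + re.escape(rule["suffix"])
    return re.findall(pattern, document)

if __name__ == "__main__":
    train = "<li>Price: 12,000 KRW</li>"
    rule = learn_rule(train, "12,000")
    print(rule_to_xml("price", rule))
    print(apply_rule("<li>Price: 8,500 KRW</li>", rule))
```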

Hypernetwork Classifiers for Microarray-Based miRNA Module Analysis (마이크로어레이 기반 miRNA 모듈 분석을 위한 하이퍼망 분류 기법)

  • Kim, Sun;Kim, Soo-Jin;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.6
    • /
    • pp.347-356
    • /
    • 2008
  • High-throughput microarray is one of the most popular tools in molecular biology, and various computational methods have been developed for microarray data analysis. While these computational methods easily extract significant features, they have difficulty inferring modules of multiple co-regulated genes. Hypernetworks are motivated by biological networks, which handle all elements based on their combinatorial processes; hence, hypernetworks can naturally analyze the biological effects of gene combinations. In this paper, we introduce a hypernetwork classifier for microRNA (miRNA) profile analysis based on microarray data. The hypernetwork classifier uses miRNA pairs as elements, and evolutionary learning is performed to model the microarray profiles. miRNA modules are easily extracted from the hypernetworks, and users can directly evaluate whether the miRNA modules are significant. In our experiments, the hypernetwork classifier achieved 91.46% accuracy on miRNA expression profiles from multiple human cancers, outperforming other machine learning methods. The hypernetwork-based analysis showed that our approach can find biologically significant miRNA modules.
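The following toy sketch in Python illustrates the general idea of a hypernetwork classifier whose hyperedges are feature pairs and which is refined by a simple evolutionary replacement step; the scoring rule, the replacement heuristic, and the random data are illustrative assumptions, not the paper's learning procedure.

```python
import random

def make_edges(samples, labels, n_edges):
    """Sample hyperedges: (feature i, feature j, required values, class label)."""
    edges = []
    for _ in range(n_edges):
        k = random.randrange(len(samples))
        i, j = random.sample(range(len(samples[k])), 2)
        edges.append((i, j, samples[k][i], samples[k][j], labels[k]))
    return edges

def classify(edges, x):
    """Vote: each hyperedge that matches the (binarized) sample votes for its class."""
    votes = {}
    for i, j, vi, vj, label in edges:
        if x[i] == vi and x[j] == vj:
            votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get) if votes else None

def evolve(edges, samples, labels, rounds=20, drop=0.2):
    """Evolutionary refinement: repeatedly discard the lowest-scoring hyperedges
    and resample new ones from the training data."""
    for _ in range(rounds):
        def score(e):
            i, j, vi, vj, lab = e
            hits = sum(1 for x, y in zip(samples, labels)
                       if x[i] == vi and x[j] == vj and y == lab)
            miss = sum(1 for x, y in zip(samples, labels)
                       if x[i] == vi and x[j] == vj and y != lab)
            return hits - miss
        edges.sort(key=score)
        n_drop = int(len(edges) * drop)
        edges = edges[n_drop:] + make_edges(samples, labels, n_drop)
    return edges

if __name__ == "__main__":
    random.seed(0)
    # Toy binarized "expression" data: class 1 tends to have features 0 and 1 on.
    samples = [[1, 1, random.randint(0, 1), random.randint(0, 1)] for _ in range(20)] \
            + [[0, 0, random.randint(0, 1), random.randint(0, 1)] for _ in range(20)]
    labels = [1] * 20 + [0] * 20
    edges = evolve(make_edges(samples, labels, 40), samples, labels)
    print("prediction for [1, 1, 0, 1]:", classify(edges, [1, 1, 0, 1]))
```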

Experimental Study on Cooperative Coalition in N-person Iterated Prisoner's Dilemma Game using an Evolutionary Method (진화방식을 이용한 N명 반복적 죄수 딜레마 게임의 협동연합에 관한 실험적 연구)

  • Seo, Yeon-Gyu;Cho, Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.3
    • /
    • pp.257-265
    • /
    • 2000
  • There is much selective conflict in nature where selfish and rational individuals exist. The Iterated Prisoner's Dilemma (IPD) game deals with this problem and has been used to study the evolution of cooperation in social, economic and biological systems. So far, there has been much work on the relationship between the number of players and cooperation, on strategy learning as a machine learning problem, and on the effect of payoff functions on cooperation. In this paper, we investigate the cooperative coalition size according to the payoff functions and observe the relationship between localization and the evolution of cooperation in the NIPD (N-player IPD) game. Experimental results indicate that the cooperative coalition size increases as the gradient of the cooperators' payoff function becomes steeper than that of the defectors' payoff function, or as the minimum coalition size gets smaller. Moreover, the smaller the neighborhood of interaction, the more readily a cooperative coalition emerges through the evolution of the population.
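To make the payoff-gradient comparison concrete, the following is a minimal sketch of N-player IPD payoffs in Python; the linear payoff form, the slope and temptation parameters, and the minimum-coalition rule are illustrative assumptions rather than the payoff functions used in the paper.

```python
def cooperator_payoff(n_cooperators, n_players, slope=2.0):
    """Payoff to a cooperator when n_cooperators (including itself) cooperate.
    Assumed linear in the fraction of cooperators."""
    return slope * n_cooperators / n_players

def defector_payoff(n_cooperators, n_players, slope=1.0, temptation=0.5):
    """Payoff to a defector; defectors free-ride on the cooperators."""
    return temptation + slope * n_cooperators / n_players

def coalition_is_stable(n_cooperators, n_players, min_coalition):
    """A cooperative coalition is viable only if it reaches the minimum size
    and a member would not gain by defecting."""
    if n_cooperators < min_coalition:
        return False
    stay = cooperator_payoff(n_cooperators, n_players)
    leave = defector_payoff(n_cooperators - 1, n_players)
    return stay >= leave

if __name__ == "__main__":
    N = 10
    # Because the cooperator slope (2.0) is steeper than the defector slope (1.0),
    # coalitions become stable once they are large enough.
    for k in range(1, N + 1):
        print(k, cooperator_payoff(k, N), defector_payoff(k, N),
              coalition_is_stable(k, N, min_coalition=3))
```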

Ranking by Inductive Inference in Collaborative Filtering Systems (협력적 여과 시스템에서 귀납 추리를 이용한 순위 결정)

  • Ko, Su-Jeong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.9
    • /
    • pp.659-668
    • /
    • 2010
  • Collaborative filtering systems must understand the behavior of a new user and acquire new information about that user in order to recommend items of interest. To acquire this information, collaborative filtering systems learn user behavior from previous data and derive new information from the results. In this paper, we propose an inductive inference method for obtaining new information about users and use it to rank items. The proposed method clusters users into groups by applying NMF, an inductive machine learning method, and selects group features using the chi-square statistic. It then classifies a new user into a group with a Bayesian probability model, an inductive inference method, based on the new user's rating values and the group features. Finally, it ranks items by applying the Rocchio algorithm to items with missing values.
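The pipeline in the abstract (NMF grouping, group assignment, Rocchio-style ranking) can be compressed into the following Python sketch; the tiny rating matrix, the use of scikit-learn's NMF, and the replacement of the chi-square and Bayesian steps with a simple nearest-centroid assignment are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy user-item rating matrix (0 = unrated); rows are users, columns are items.
R = np.array([[5, 4, 0, 1, 0],
              [4, 5, 1, 0, 1],
              [1, 0, 4, 5, 4],
              [0, 1, 5, 4, 5]], dtype=float)

# Step 1: group users with NMF (two latent groups assumed for the toy data).
W = NMF(n_components=2, init="nndsvd", max_iter=500).fit_transform(R)
groups = W.argmax(axis=1)

# Step 2 (simplified): represent each group by the mean ratings of its members.
centroids = np.vstack([R[groups == g].mean(axis=0) for g in range(2)])

# Step 3 (simplified stand-in for the Bayesian step): assign a new user to the
# closest group on the items the user has already rated.
new_user = np.array([5, 0, 0, 1, 0], dtype=float)
rated = new_user > 0
g = np.argmin(((centroids[:, rated] - new_user[rated]) ** 2).sum(axis=1))

# Step 4: Rocchio-style ranking of the unrated items using the group centroid.
scores = centroids[g] - R[R > 0].mean()          # boost items the group likes
ranking = [i for i in np.argsort(-scores) if not rated[i]]
print("recommend items in order:", ranking)
```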

A Language Model and Clue based Machine Learning Method for Discovering Technology Trends from Patent Text (특허 문서 텍스트로부터의 기술 트렌드 탐지를 위한 언어 모델 및 단서 기반 기계학습 방법)

  • Tian, Yingshi;Kim, Young-Ho;Jeong, Yoon-Jae;Ryu, Ji-Hee;Myaeng, Sung-Hyon
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.5
    • /
    • pp.420-429
    • /
    • 2009
  • Patent text is a rich source for discovering technological trends. In order to automate the discovery process, we attempt to identify phrases corresponding to a problem and its solution method, which together form a technology. Problem and solution phrases are identified by an SVM classifier using features based on a combination of a language modeling approach and linguistic clues. Based on the occurrence statistics of the phrases, we identify the time span of each problem and solution and finally generate a trend. Our experiments show that the proposed semantic phrase identification method is promising, achieving 77% in R-precision, and that the unsupervised method for discovering technological trends is meaningful.
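A minimal sketch of the classification step in Python, assuming a unigram language-model score plus a hand-picked clue-word indicator as the two features and scikit-learn's LinearSVC as the classifier; the toy phrases and the clue list are invented for illustration and are not the paper's feature set.

```python
import math
from collections import Counter
from sklearn.svm import LinearSVC

CLUE_WORDS = {"problem", "difficulty", "drawback", "method", "apparatus", "improve"}

def features(phrase, unigram_counts, total):
    """Two features: average unigram log-probability (smoothed) and clue-word count."""
    tokens = phrase.lower().split()
    lm = sum(math.log((unigram_counts[t] + 1) / (total + len(unigram_counts)))
             for t in tokens) / len(tokens)
    clues = sum(1 for t in tokens if t in CLUE_WORDS)
    return [lm, clues]

# Toy training phrases labeled 1 = problem/solution phrase, 0 = other.
train = [("the drawback of conventional antennas", 1),
         ("a method for reducing interference", 1),
         ("difficulty of aligning optical fibers", 1),
         ("background of the invention", 0),
         ("figure 2 shows the housing", 0),
         ("claims priority to application", 0)]

corpus = Counter(t for phrase, _ in train for t in phrase.lower().split())
total = sum(corpus.values())

X = [features(p, corpus, total) for p, _ in train]
y = [label for _, label in train]
clf = LinearSVC().fit(X, y)

print(clf.predict([features("a method for improving signal quality", corpus, total)]))
```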

An efficient hybrid TLBO-PSO-ANN for fast damage identification in steel beam structures using IGA

  • Khatir, S.;Khatir, T.;Boutchicha, D.;Le Thanh, C.;Tran-Ngoc, H.;Bui, T.Q.;Capozucca, R.;Abdel-Wahab, M.
    • Smart Structures and Systems
    • /
    • v.25 no.5
    • /
    • pp.605-617
    • /
    • 2020
  • The existence of damage in structures changes their physical properties by reducing the modal parameters. In this paper, we develop a two-stage approach based on a normalized Modal Strain Energy Damage Indicator (nMSEDI) for quick prediction of damage locations. A two-dimensional IsoGeometric Analysis (2D-IGA), a Machine Learning Algorithm (MLA) and optimization techniques are combined to create a new tool. In the first stage, we introduce a modified, frequency-based damage identification technique that uses nMSEDI to locate potentially damaged elements. In the second stage, after eliminating the healthy elements, the nMSEDI damage index values are used as input to the damage quantification algorithm. A hybrid of Teaching-Learning-Based Optimization (TLBO) with an Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) is used along with nMSEDI; TLBO estimates the parameters of the PSO-ANN so that the estimated damage matches the actual damage. The IGA model is updated with experimental results, adjusting the stiffness and mass matrices with the difference between calculated and measured frequencies as the objective function. The feasibility and efficiency of nMSEDI-PSO-ANN, after the best parameters are found by TLBO, are demonstrated through comparison with nMSEDI-IGA for different scenarios. The results of the analyses indicate that the proposed approach can correctly determine the severity of damage in beam structures.
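For the first stage, a generic modal strain energy damage indicator can be sketched as follows in Python; the curvature-based strain energy estimate for a beam element and the z-score normalization are standard textbook assumptions and may differ from the nMSEDI definition used in the paper.

```python
import numpy as np

def element_strain_energy(mode_shape, elem_len):
    """Approximate modal strain energy per beam element from the mode-shape
    curvature (second finite difference), up to a constant EI factor."""
    curvature = np.diff(mode_shape, 2) / elem_len**2
    return curvature**2 * elem_len

def damage_indicator(phi_undamaged, phi_damaged, elem_len=1.0):
    """Normalized change in modal strain energy per element (z-score form)."""
    mse_u = element_strain_energy(phi_undamaged, elem_len)
    mse_d = element_strain_energy(phi_damaged, elem_len)
    beta = (mse_d + 1e-12) / (mse_u + 1e-12)      # strain-energy ratio
    return (beta - beta.mean()) / beta.std()

if __name__ == "__main__":
    x = np.linspace(0, np.pi, 21)
    phi_u = np.sin(x)                  # first bending mode of a healthy beam
    phi_d = phi_u.copy()
    phi_d[9:12] *= 1.05                # local stiffness loss distorts the mode shape
    z = damage_indicator(phi_u, phi_d)
    print("suspect elements:", np.where(z > 2.0)[0])   # threshold is an assumption
```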

An Incremental Method Using Sample Split Points for Global Discretization (전역적 범주화를 위한 샘플 분할 포인트를 이용한 점진적 기법)

  • 한경식;이수원
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.7
    • /
    • pp.849-858
    • /
    • 2004
  • Most supervised learning algorithms require continuous variables to be transformed into categorical ones at the preprocessing stage in order to avoid the difficulty of processing continuous values. This preprocessing stage, called global discretization, uses class distribution lists called bins. However, when the data are large and the range of the variable to be discretized is very wide, many sorting and merging operations must be performed to produce the single bin that most global discretization methods require. In addition, because existing methods perform discretization in batch mode, whenever new data are added they must discretize from scratch to construct categories that reflect the new data. This paper proposes a method that extracts sample split points and performs discretization from these sample points in order to solve these problems. Because our approach does not require merging to produce a single bin, it is efficient when large data sets need to be discretized. We compare the proposed method with an existing one in experiments on real and synthetic datasets.
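To illustrate the idea of discretizing from sample split points rather than from a fully merged bin, here is a minimal sketch in Python; taking class-boundary midpoints from a random sample and the evenly spaced subset fallback are assumptions for illustration, not the paper's incremental procedure.

```python
import random

def candidate_split_points(values, labels):
    """Split points are midpoints between consecutive sorted values whose
    class labels differ (classic boundary-point heuristic)."""
    pairs = sorted(zip(values, labels))
    points = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:
            points.append((v1 + v2) / 2)
    return points

def sample_split_points(values, labels, sample_size=200, max_bins=5):
    """Work on a random sample instead of the full data, then keep an evenly
    spaced subset of the candidate points so at most max_bins bins result."""
    idx = random.sample(range(len(values)), min(sample_size, len(values)))
    points = candidate_split_points([values[i] for i in idx], [labels[i] for i in idx])
    step = max(1, len(points) // (max_bins - 1))
    return sorted(points)[::step][:max_bins - 1]

def discretize(value, split_points):
    """Map a continuous value to the index of its bin."""
    return sum(value > p for p in split_points)

if __name__ == "__main__":
    random.seed(1)
    values = [random.gauss(0, 1) for _ in range(1000)]
    labels = [int(abs(v) > 1.0) for v in values]      # toy class structure
    cuts = sample_split_points(values, labels)
    print("split points:", cuts)
    print("bin of 0.5:", discretize(0.5, cuts))
```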

A Memory-based Reasoning Algorithm using Adaptive Recursive Partition Averaging Method (적응형 재귀 분할 평균법을 이용한 메모리기반 추론 알고리즘)

  • 이형일;최학윤
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.478-487
    • /
    • 2004
  • We previously proposed the RPA (Recursive Partition Averaging) method to improve the storage requirement and classification rate of Memory-Based Reasoning. That algorithm performed reasonably well in many areas; however, the major drawbacks of RPA are its partitioning condition and its way of extracting major patterns. We propose an adaptive RPA (ARPA) algorithm which uses an FPD (feature-based population densimeter) to stop the partitioning process and which produces optimized hyperrectangles instead of RPA's averaged major patterns. The proposed algorithm requires only about 40% of the memory space needed by a k-NN classifier and shows classification performance superior to RPA. By reducing the number of stored patterns, it also achieves excellent classification results compared to k-NN.
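As a rough illustration of memory-based reasoning with recursively partitioned hyperrectangles, the following Python sketch partitions the training data until each region is nearly single-class, stores each region's bounding box and majority class, and classifies by distance to the nearest box; the purity-based stopping rule stands in for the paper's FPD criterion, and the rest is likewise an assumption rather than the ARPA algorithm itself.

```python
import numpy as np
from collections import Counter

def build_rectangles(X, y, min_purity=0.95, min_size=4):
    """Recursively split along the widest feature at its median until a region
    is nearly pure or too small, then keep its bounding box + majority class."""
    majority, count = Counter(y).most_common(1)[0]
    if count / len(y) >= min_purity or len(y) <= min_size:
        return [(X.min(axis=0), X.max(axis=0), majority)]
    dim = np.argmax(X.max(axis=0) - X.min(axis=0))
    cut = np.median(X[:, dim])
    left = X[:, dim] <= cut
    if left.all() or (~left).all():              # degenerate split: stop here
        return [(X.min(axis=0), X.max(axis=0), majority)]
    return (build_rectangles(X[left], [c for c, m in zip(y, left) if m]) +
            build_rectangles(X[~left], [c for c, m in zip(y, left) if not m]))

def classify(rects, x):
    """Predict the class of the hyperrectangle nearest to x (distance 0 if inside)."""
    def dist(lo, hi):
        gap = np.maximum(lo - x, 0) + np.maximum(x - hi, 0)
        return np.linalg.norm(gap)
    lo, hi, label = min(rects, key=lambda r: dist(r[0], r[1]))
    return label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = [0] * 50 + [1] * 50
    rects = build_rectangles(X, y)
    print(len(rects), "stored hyperrectangles vs", len(X), "training patterns")
    print("prediction for [4, 4]:", classify(rects, np.array([4.0, 4.0])))
```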