• Title/Summary/Keyword: Machine learning in communications


A Study on AI-based MAC Scheduler in Beyond 5G Communication (5G 통신 MAC 스케줄러에 관한 연구)

  • Muhammad Muneeb;Kwang-Man Ko
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.891-894 / 2024
  • The quest for reliability in Artificial Intelligence (AI) is increasingly urgent, especially in the field of next-generation wireless networks. Future Beyond 5G (B5G)/6G networks will connect a huge number of devices and will offer innovative services powered by AI and machine learning tools. Wireless communications in general, and medium access control (MAC) techniques in particular, are among the fields most heavily affected by this development. This study presents the applications and services of future communication networks, details the Medium Access Control (MAC) scheduler of Beyond-5G/6G as specified by the 3rd Generation Partnership Project (3GPP), and highlights current open research issues that have yet to be addressed. It provides an overview of how AI can improve next-generation communication by solving MAC-layer issues such as resource scheduling and queueing. We select C-V2X as the use case for implementing our proposed MAC scheduling model.
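As a rough illustration of the MAC-layer scheduling problem the abstract mentions, the sketch below pairs a learned per-user rate predictor with a proportional-fair assignment rule. It is a toy example on synthetic data; the model choice (gradient boosting), the feature set, and all parameters are assumptions, not the scheduler proposed in the paper.

```python
# Toy ML-assisted MAC scheduler: a regressor predicts each user's achievable
# rate per resource block from channel-quality features, and blocks are
# assigned greedily with a proportional-fair metric. Everything synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_users, n_blocks, n_feats = 8, 25, 4

# Hypothetical training data: channel features (e.g., CQI, SINR, speed, queue
# length) mapped to observed throughput; in practice these come from traces.
X_train = rng.normal(size=(5000, n_feats))
y_train = np.maximum(0.0, X_train @ np.array([1.0, 0.6, -0.3, 0.2])
                     + rng.normal(scale=0.1, size=5000))
rate_model = GradientBoostingRegressor().fit(X_train, y_train)

# One scheduling interval: predict rates for every (user, block) pair.
features = rng.normal(size=(n_users, n_blocks, n_feats))
pred_rates = rate_model.predict(features.reshape(-1, n_feats)).reshape(n_users, n_blocks)

avg_rate = np.full(n_users, 1e-3)       # running average throughput per user
for b in range(n_blocks):               # proportional-fair assignment per block
    pf_metric = pred_rates[:, b] / avg_rate
    u = int(np.argmax(pf_metric))
    avg_rate[u] = 0.9 * avg_rate[u] + 0.1 * pred_rates[u, b]
    print(f"block {b:2d} -> user {u}")
```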

The Fourth Industrial Revolution and College Mathematics Education - Case study of Linear Algebra approach - (4차 산업혁명과 대학수학교육 - 산업수학 프로그램 소개 및 관련 수학강좌 사례 -)

  • Lee, Sang-Gu;Lee, Jae Hwa;Kim, Young Rock;Ham, Yoonmee
    • Communications of Mathematical Education / v.32 no.3 / pp.245-255 / 2018
  • In this paper, we discuss efforts that have been made by mathematics departments in Korea to meet the needs of the Fourth Industrial Revolution era. First, we introduce various industrial mathematics programs that some universities in Korea have started to provide in order to prepare mathematics and mathematics education graduates for the demands of society. We also introduce a Mathematics for Big Data course that we recently offered and that can be shared.

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder;Bennis, Mehdi;Zeydan, Engin;Kader, Manhal Abdel;Karatepe, Ilyas Alper;Er, Ahmet Salih;Debbah, Merouane
    • Journal of Communications and Networks / v.17 no.6 / pp.549-557 / 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and that fall into the framework of big data. However, big data is itself a complex phenomenon to handle and comes with its notorious four Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of available data for content popularity estimation. To estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over hourly time intervals. An analysis is then carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
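The caching gains described above can be illustrated with a small simulation: estimate content popularity from a fraction of the requests, fill a storage budget with the most popular items, and measure request satisfaction and backhaul offload. This is a minimal sketch under synthetic Zipf-like demand, not the paper's big-data pipeline; the catalog size, storage budget, and popularity model are assumptions.

```python
# Proactive-caching toy simulation with estimated content popularity.
import numpy as np

rng = np.random.default_rng(1)
n_contents = 1000
content_size = rng.uniform(10, 100, size=n_contents)       # MB, hypothetical
requests = rng.zipf(a=1.8, size=50000) % n_contents        # Zipf-like demand

# "Estimated" popularity from a 10% sample of observed requests.
sample = requests[: len(requests) // 10]
est_counts = np.bincount(sample, minlength=n_contents)

storage_budget = 0.15 * content_size.sum()                 # 15% of the catalog
order = np.argsort(-est_counts)                            # most popular first
cached, used = set(), 0.0
for c in order:
    if used + content_size[c] <= storage_budget:
        cached.add(int(c))
        used += content_size[c]

hits = np.array([c in cached for c in requests])
satisfaction = hits.mean()                                 # requests served locally
offload = content_size[requests[hits]].sum() / content_size[requests].sum()
print(f"request satisfaction: {satisfaction:.2%}, backhaul offload: {offload:.2%}")
```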

Improvement of EEG-Based Drowsiness Detection System Using Discrete Wavelet Transform (DWT를 적용한 EEG 기반 졸음 감지 시스템의 성능 향상)

  • Han, Hyungseob;Song, Kyoung-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.9 / pp.1731-1733 / 2015
  • Since the electroencephalogram (EEG) has non-linear and non-stationary properties, it is more effective to analyze the characteristics of EEG with a time-frequency method than with a spectral method. In this letter, we propose a modified drowsiness detection system using the discrete wavelet transform combined with errors-in-variables and multilayer perceptron methods. For comparison of the proposed scheme with the previous one, the state 'others' is added to the previous driver states: 'alertness,' 'transition,' and 'drowsiness.' From computer simulations using machine learning, we confirm that the proposed scheme outperforms the previous one under some conditions.
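For readers unfamiliar with the kind of pipeline the abstract sketches, the following is a rough, self-contained example of DWT band-energy features feeding a multilayer perceptron. The synthetic EEG segments, the db4 wavelet, the features, and the labels are all assumptions for illustration; they do not reproduce the authors' errors-in-variables formulation or their data.

```python
# DWT features + MLP classifier on synthetic EEG-like segments.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
fs, seconds = 128, 2
t = np.arange(fs * seconds) / fs

def dwt_band_energies(segment, wavelet="db4", level=5):
    """Relative energy of each DWT sub-band of a 1-D segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def make_segment(alpha_amp):
    # Synthetic stand-in: 10 Hz "alpha" activity plus noise.
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)

amps = rng.uniform(0.2, 3.0, size=400)
X = np.array([dwt_band_energies(make_segment(a)) for a in amps])
y = (amps > 1.5).astype(int)         # placeholder drowsiness proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```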

Counterfactual image generation by disentangling data attributes with deep generative models

  • Jieon Lim;Weonyoung Joo
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.589-603 / 2023
  • Deep generative models aim to infer the underlying true data distribution, which has led to great success in generating realistic synthetic data. From this perspective, data attributes can be a crucial factor in the data generation process, since non-existent counterfactual samples can be generated by altering certain factors. For example, we can generate new portrait images by flipping the gender attribute or altering the hair color attribute. This paper proposes counterfactual disentangled variational autoencoder generative adversarial networks (CDVAE-GAN), specialized for attribute-level counterfactual data generation. The structure of the proposed CDVAE-GAN consists of variational autoencoders and generative adversarial networks. Specifically, we adopt a Gaussian variational autoencoder to extract low-dimensional disentangled data features, together with auxiliary Bernoulli latent variables that model the data attributes separately. We also utilize a generative adversarial network to generate data with high fidelity. By combining the benefits of the variational autoencoder with the additional Bernoulli latent variables and the generative adversarial network, the proposed CDVAE-GAN can control the data attributes, which enables producing counterfactual data. Our experimental results on the CelebA dataset qualitatively show that the samples generated by CDVAE-GAN are realistic. The quantitative results further support that the proposed model can produce data that deceives other machine learning classifiers when the data attributes are altered.
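To make the architecture concrete, here is a schematic PyTorch sketch of the component family the abstract describes: an encoder with a Gaussian latent plus Bernoulli attribute variables, a decoder conditioned on both, and a GAN discriminator. Layer sizes, the relaxed-Bernoulli sampling, and the attribute-flipping step are illustrative assumptions, not the authors' CDVAE-GAN implementation.

```python
# Schematic VAE-with-attribute-latents + GAN critic, and counterfactual decoding.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, a_dim=4, h=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.attr_logits = nn.Linear(h, a_dim)        # Bernoulli attribute latents

    def forward(self, x, temperature=0.5):
        hid = self.body(x)
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        a = torch.distributions.RelaxedBernoulli(
            temperature, logits=self.attr_logits(hid)).rsample()  # differentiable sample
        return z, a, mu, logvar

class Decoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, a_dim=4, h=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + a_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim), nn.Sigmoid())

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

class Discriminator(nn.Module):                        # GAN critic for fidelity
    def __init__(self, x_dim=784, h=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, h), nn.LeakyReLU(0.2),
                                 nn.Linear(h, 1))

    def forward(self, x):
        return self.net(x)

# Counterfactual generation: encode, flip one attribute, decode, score fidelity.
enc, dec, disc = Encoder(), Decoder(), Discriminator()
x = torch.rand(8, 784)
z, a, _, _ = enc(x)
a_cf = a.detach().clone()
a_cf[:, 0] = 1.0 - a_cf[:, 0]                          # e.g., flip a binary attribute
x_cf = dec(z, a_cf)
print(x_cf.shape, disc(x_cf).shape)
```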

Idea proposal of InfograaS for Visualization of Public Big-data (공공 빅데이터의 시각화를 위한 InfograaS의 아이디어 제안)

  • Cha, Byung-Rae;Lee, Hyung-Ho;Sim, Su-Jeong;Kim, Jong-Won
    • Journal of Advanced Navigation Technology / v.18 no.5 / pp.524-531 / 2014
  • In this paper, we propose processing and analyzing linked open data (LOD), a kind of big data, using cloud computing resources. LOD is web-based open data intended for sharing and reusing public data. Specifically, we define InfograaS (Infographics as a Service), a new business area of SaaS (Software as a Service), to support visualization techniques for business analytics (BA) and infographics. The goal of this study is to make these capabilities easy to use for non-specialists and beginners, without requiring experts in visualization or business analysis. Data visualization is the process of representing data visually so that the results of analysis are easy to understand; its purpose is to deliver information clearly and effectively through charts and figures. Public big data are shared and presented as easily understood charts and graphics, produced from various processing results using open-source tools such as Hadoop, R, machine learning, and data mining on cloud computing resources.
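As a small illustration of the chart-based visualization workflow described above, the snippet below aggregates a stand-in for a public open-data export and renders a summary bar chart with open-source tools; the region names and values are made up for the example.

```python
# Minimal public-data visualization: aggregate and plot with pandas/matplotlib.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical stand-in for a public open-data (LOD) export.
df = pd.DataFrame({
    "region": ["Seoul", "Busan", "Daegu", "Gwangju", "Seoul", "Busan"],
    "value":  [120, 95, 80, 60, 140, 105],
})
summary = df.groupby("region", as_index=False)["value"].mean()

ax = summary.plot.bar(x="region", y="value", legend=False)
ax.set_ylabel("average value")
ax.set_title("Public open-data summary (illustrative)")
plt.tight_layout()
plt.savefig("infographic.png", dpi=150)   # simple infographic-style output
```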

Health and Wellness Platforms: A Study on Services and Available Technologies (헬스 및 웰니스 플랫폼: 서비스 및 가용 기술에 관한 연구)

  • Amin, Muhammad Bilal;Khan, Wajahat Ali;Rizvi, Bilal Ali;Bang, Jae-Hun;Ali, Taqdir;Heo, Tae-Ho;Hussain, Shujaat;Ali, Imran;Kim, Do-Hyeong;Lee, Seung-Ryong
    • Communications of the Korean Institute of Information Scientists and Engineers / v.35 no.7 / pp.9-25 / 2017
  • In this paper, we survey state-of-the-art health and wellness platforms. The motivation is to review these platforms and their maturity with respect to the adoption of the latest enabling technologies. The review is classified into four categories: healthcare systems, AI-assisted healthcare, wellness platforms, and open-source health and wellness initiatives. From this comprehensive review, it can be stated that contemporary healthcare systems are readily adopting wellness because of the shift in focus towards prevention. Thus, the gap between health and wellness is slowly but steadily becoming a gray area in which both domains can freely invoke each other's services and supporting enabling technologies. Furthermore, biomedical researchers and physicians no longer hold the myopic view of relying solely on their own knowledge for diagnosis: AI-assisted technologies based on machine learning and big data are influencing today's prognoses with trust and confidence.


Prediction of spatio-temporal AQI data

  • KyeongEun Kim;MiRu Ma;KyeongWon Lee
    • Communications for Statistical Applications and Methods / v.30 no.2 / pp.119-133 / 2023
  • With the rapid growth of the economy and of fossil fuel consumption, the concentration of air pollutants has increased significantly, and the air pollution problem is no longer limited to small areas. We conduct statistical analysis of actual air quality data covering the whole of South Korea using R and Python. Factors such as SO2, CO, O3, NO2, PM10, precipitation, wind speed, wind direction, vapor pressure, local pressure, sea level pressure, temperature, humidity, and other variables are used as covariates. The main goal of this paper is to predict air quality index (AQI) spatio-temporal data. Observations in spatio-temporal big datasets such as AQI data are correlated both spatially and temporally, and computing predictions or forecasts under the full dependence structure is often infeasible. The likelihood function based on a spatio-temporal model may therefore be complicated, and special modeling approaches are useful for statistically reliable predictions. In this paper, we propose several methods for this big spatio-temporal AQI dataset. First, a random-effects model with spatio-temporal basis functions, a classical statistical approach, is proposed. Next, a neural network model, a deep learning method based on artificial neural networks, is applied. Finally, a random forest model, a machine learning method closer to computational science, is introduced. We then compare their forecasting performance in terms of predictive diagnostics. As a result of the analysis, all three methods predicted normal levels of PM2.5 well, but performance appears to be poor at extreme values.
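The model comparison outlined in the abstract can be sketched with scikit-learn as follows, using synthetic covariates in place of the Korean air-quality measurements; the feature construction, model settings, and RMSE comparison are illustrative assumptions only.

```python
# Compare a random forest and a neural network on synthetic AQI-like data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 3000
# Covariates loosely named after those listed in the abstract.
X = rng.normal(size=(n, 6))      # e.g., SO2, CO, O3, NO2, wind speed, humidity
y = 20 + 5 * X[:, 0] + 3 * X[:, 3] - 2 * X[:, 4] + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural network": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f}")
```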

Artificial Intelligence and College Mathematics Education (인공지능(Artificial Intelligence)과 대학수학교육)

  • Lee, Sang-Gu;Lee, Jae Hwa;Ham, Yoonmee
    • Communications of Mathematical Education / v.34 no.1 / pp.1-15 / 2020
  • Today's healthcare, intelligent robots, smart home systems, and car sharing are already being transformed by cutting-edge information and communication technologies such as Artificial Intelligence (AI), the Internet of Things, the Internet of Intelligent Things, and Big Data, and these technologies deeply affect our lives. In factories, robots have been working for humans for several decades (FA, OA); AI doctors are working in hospitals (Dr. Watson); and AI speakers (Giga Genie) and AI assistants (Siri, Bixby, Google Assistant) keep improving natural language processing. Now, in order to understand AI, knowledge of mathematics has become essential rather than optional, and mathematicians have been given the role of explaining the mathematics that makes AI possible. Therefore, the authors wrote the textbook 'Basic Mathematics for Artificial Intelligence' by arranging the mathematical concepts and tools needed to understand AI and machine learning into one or two semesters, and organized lectures for undergraduate and graduate students of various majors to explore careers in artificial intelligence. In this paper, we share our experience of conducting this class; the full contents are available at http://matrix.skku.ac.kr/math4ai/.

Construction of a Standard Dataset for Liver Tumors for Testing the Performance and Safety of Artificial Intelligence-Based Clinical Decision Support Systems (인공지능 기반 임상의학 결정 지원 시스템 의료기기의 성능 및 안전성 검증을 위한 간 종양 표준 데이터셋 구축)

  • Seung-seob Kim;Dong Ho Lee;Min Woo Lee;So Yeon Kim;Jaeseung Shin;Jin‑Young Choi;Byoung Wook Choi
    • Journal of the Korean Society of Radiology / v.82 no.5 / pp.1196-1206 / 2021
  • Purpose: To construct a standard dataset of contrast-enhanced CT images of liver tumors to test the performance and safety of artificial intelligence (AI)-based algorithms for clinical decision support systems (CDSSs). Materials and Methods: A consensus group of medical experts in gastrointestinal radiology from four national tertiary institutions discussed the conditions to be included in a standard dataset. Seventy-five cases of hepatocellular carcinoma, 75 cases of metastasis, and 30-50 cases of benign lesions were retrieved from each institution, and the final dataset consisted of 300 cases of hepatocellular carcinoma, 300 cases of metastasis, and 183 cases of benign lesions. Only pathologically confirmed cases of hepatocellular carcinoma and metastasis were enrolled. The medical experts retrieved the medical records of the patients and manually labeled the CT images, which were saved as Digital Imaging and Communications in Medicine (DICOM) files. Results: The medical experts constructed the standard dataset of contrast-enhanced CT images for 783 cases of liver tumors. The performance and safety of an AI algorithm can be evaluated by calculating its sensitivity and specificity for detecting and characterizing the lesions. Conclusion: The constructed standard dataset can be utilized for evaluating machine-learning-based AI algorithms for CDSSs.
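The evaluation mentioned in the Results can be made concrete with a short sensitivity/specificity calculation from a confusion matrix; the counts below are hypothetical and are not results obtained from the dataset.

```python
# Sensitivity and specificity of lesion detection from confusion-matrix counts.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate: lesions correctly detected
    specificity = tn / (tn + fp)   # true negative rate: non-lesions correctly ruled out
    return sensitivity, specificity

# Example counts only; real values would come from running a CDSS algorithm
# against the expert labels of the 783-case standard dataset.
sens, spec = sensitivity_specificity(tp=270, fn=30, tn=160, fp=23)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```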