• Title/Summary/Keyword: Machine learning in communications


A Review of Intelligent Self-Driving Vehicle Software Research

  • Gwak, Jeonghwan; Jung, Juho; Oh, RyumDuck; Park, Manbok; Rakhimov, Mukhammad Abdu Kayumbek; Ahn, Junho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5299-5320 / 2019
  • Interest in self-driving vehicle research has been rapidly increasing, and related work continues to appear. In such a fast-paced research area, the development of advanced technology for better convenience, safety, and efficiency in road and transportation systems is expected. Here, we investigate research in self-driving vehicles and analyze the main technologies of driverless-car software, including: technical aspects of autonomous vehicles, traffic infrastructure and its communications, vision recognition techniques, deep learning algorithms, localization methods, existing problems, and future development directions. First, we introduce intelligent self-driving car and road infrastructure algorithms such as machine learning, image processing methods, and localization. Second, we examine the intelligent technologies used in self-driving car projects, autonomous vehicles equipped with multiple sensors, and interactions with transport infrastructure. Finally, we highlight the future directions and challenges of self-driving vehicle transportation systems.

FPGA Design of SVM Classifier for Real Time Image Processing

  • Na, Won-Seob; Han, Sung-Woo; Jeong, Yong-Jin
    • Journal of IKEEE / v.20 no.3 / pp.209-219 / 2016
  • SVM is a machine learning method used in image processing and is well known for its high classification performance. Using SVM for image classification requires many multiply-accumulate (MAC) operations, however, so as the resolution of the target image or the number of classification cases increases, the execution time of SVM also increases, making real-time operation difficult. In this paper, we propose a hardware architecture that enables real-time SVM classification. We use a parallel architecture to compute the MAC operations simultaneously, and design the system to be compatible with several feature extractors. An RBF kernel is used for the hardware implementation, and the exponent calculation formula in the kernel is modified to enable fixed-point modeling. Experimental results show that the system, implemented on a Xilinx ZC-706 evaluation board, processes 60.46 fps at 1360×800 resolution with a 100 MHz clock.
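
As a rough illustration of this approach, the sketch below evaluates an RBF-SVM decision function with integer fixed-point MACs and an exponential lookup table in place of a floating-point exp(), a common FPGA pattern; the Q-format word lengths, LUT resolution, and degree of parallelism here are assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical Q-format: the paper's actual word lengths are not given here.
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize floats to integer fixed-point with FRAC_BITS fractional bits."""
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

# exp(-t) lookup table sampled on the fixed-point grid, replacing a
# hardware-unfriendly exponential with a single memory read.
T_MAX = 16
EXP_LUT = to_fixed(np.exp(-np.arange(T_MAX * SCALE) / SCALE))

def rbf_svm_decide(x, support_vectors, alpha_y, bias, gamma):
    """Sign of sum_i alpha_i*y_i*exp(-gamma*||x - sv_i||^2) + b in fixed point."""
    xq = to_fixed(x)
    acc = int(to_fixed(bias))                    # accumulator, like a MAC result register
    gq = int(to_fixed(gamma))
    for sv, ay in zip(to_fixed(support_vectors), to_fixed(alpha_y)):
        d = xq - sv
        sq = int((d * d).sum() >> FRAC_BITS)     # ||x - sv||^2: one MAC per feature,
                                                 # computed in parallel in hardware
        t = min((gq * sq) >> FRAC_BITS, len(EXP_LUT) - 1)
        acc += (ay * int(EXP_LUT[t])) >> FRAC_BITS
    return 1 if acc >= 0 else -1

# Toy usage with two support vectors.
print(rbf_svm_decide([0.5, 0.5], [[0.4, 0.6], [2.0, 2.0]], [1.0, -1.0], 0.0, 0.5))
```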

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong; Bui-Tien, T.; Roeck, Guido De; Wahab, Magd Abdel
    • Structural Engineering and Mechanics / v.77 no.1 / pp.47-56 / 2021
  • This paper deals with damage detection using a Gapped Smoothing Method (GSM) combined with deep learning. A Convolutional Neural Network (CNN), a deep learning model, has an input layer, an output layer, and a number of hidden layers including convolutional layers. The input is a tensor with shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied to the tensor as it passes through each hidden layer, the last of which is a fully connected layer; the output layer then produces the prediction. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are contour plots of the curvature gapped-smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data, which were divided into two parts: 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and validated on the remaining data. Furthermore, a vibration experiment on a damaged steel beam in free-free support conditions was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and compute the curvature gapped smoothing of the damaged beam. Two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
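
For readers unfamiliar with the tensor shapes the abstract describes, here is a minimal Keras sketch of such a CNN classifier with a 70/30 train/validation split; the image size, channel count, class count, and layer sizes are placeholders, not the paper's architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data standing in for the paper's FE-generated contour plots
# (image size, channels, and the 5 damage classes are assumptions).
X = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 5, size=200)

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # (width, height, depth) per image
    layers.Conv2D(16, 3, activation="relu"),   # convolutional hidden layers
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # fully connected layer
    layers.Dense(5, activation="softmax"),     # output layer: class prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# 70% training / 30% validation, as in the paper.
model.fit(X, y, epochs=3, validation_split=0.3)
```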

Web-based synthetic-aperture radar data management system and land cover classification

  • Dalwon Jang; Jaewon Lee; Jong-Seol Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1858-1872 / 2023
  • With the advance of radar technologies, the availability of synthetic aperture radar (SAR) images has increased. To improve the use of SAR images, this paper proposes a management system for them. The system provides a trainable land cover classification module and displays SAR images on a map. Users can create their own classifier with their data and obtain classification results for newly captured SAR images by applying that classifier. The classifier is based on a convolutional neural network. Since SAR images differ depending on the capturing method and device, a single fixed classifier cannot cover all SAR land cover classification problems; the system therefore lets each user create a dedicated classifier. Our experiments show that the module works well with two different SAR datasets. With this system, SAR data and land cover classification results are managed and easily displayed.
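
A minimal sketch of the per-user classifier idea, assuming a small Keras CNN over single-channel SAR patches; the patch size, architecture, and registry API below are illustrative only, not the system's actual interface.

```python
from tensorflow.keras import layers, models

def build_sar_classifier(n_classes, patch=32):
    """Fresh CNN for one user's SAR land-cover data (architecture assumed)."""
    m = models.Sequential([
        layers.Input(shape=(patch, patch, 1)),   # single-channel SAR patch
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    m.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

classifiers = {}   # user id -> trained model

def train_for_user(user_id, X, y, n_classes):
    # Each user trains on their own sensor's data, since SAR statistics
    # differ across capturing methods and devices.
    model = build_sar_classifier(n_classes)
    model.fit(X, y, epochs=10, verbose=0)
    classifiers[user_id] = model
    return model
```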

Traffic Sign Recognition using SVM and Decision Tree for Poor Driving Environment

  • Jo, Young-Bae; Na, Won-Seob; Eom, Sung-Je; Jeong, Yong-Jin
    • Journal of IKEEE / v.18 no.4 / pp.485-494 / 2014
  • Traffic Sign Recognition (TSR) is an important element of an Advanced Driver Assistance System (ADAS). However, many TSR studies address only normal daytime conditions, because a sign's distinctive colors are not visible in poor environments such as nighttime, snow, rain, or fog. In this paper, we propose a new machine-learning-based TSR algorithm for both daytime and poor environments. In poor environments, traditional methods based on RGB color regions do not perform well, so we extract sign features with Histogram of Oriented Gradients (HOG) descriptors and detect signs with a Support Vector Machine (SVM). Each detected sign is then recognized by a decision tree based on 25 reference points in a normalized RGB space. The proposed system achieves a detection rate of 96.4% and a recognition rate of 94% in poor environments. Testing was performed on an Intel i5 processor at 3.4 GHz with Full HD resolution images. The results show that machine-learning-based detection and recognition can be used efficiently for TSR even in poor driving environments.
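
The two-stage pipeline described here (HOG + SVM for detection, then a decision tree over normalized-RGB reference points for recognition) can be sketched as follows; the HOG parameters, 64×64 patch size, reference-point placement, and placeholder training data are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def hog_features(patch):
    """HOG descriptor of a 64x64 candidate region (parameters assumed)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def normalized_rgb_points(img_rgb, points):
    """Sample 25 reference pixels and normalize each: c / (R+G+B)."""
    px = img_rgb[points[:, 0], points[:, 1]].astype(float)        # (25, 3)
    return (px / px.sum(axis=1, keepdims=True).clip(min=1e-9)).ravel()

# Stage 1: SVM decides sign / not-sign from HOG features (1764 dims for 64x64).
detector = SVC(kernel="rbf")
Xd, yd = np.random.rand(100, 1764), np.random.randint(0, 2, 100)  # placeholder data
detector.fit(Xd, yd)

# Stage 2: decision tree classifies sign type from 25 points x 3 channels = 75 dims.
recognizer = DecisionTreeClassifier()
Xr, yr = np.random.rand(100, 75), np.random.randint(0, 10, 100)   # placeholder data
recognizer.fit(Xr, yr)

# Toy usage on random patches standing in for real camera crops.
is_sign = detector.predict([hog_features(np.random.rand(64, 64))])[0]
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
pts = np.random.randint(0, 64, size=(25, 2))
sign_class = recognizer.predict([normalized_rgb_points(img, pts)])[0]
```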

Prediction of high turbidity in rivers using LSTM algorithm

  • Park, Jungsu; Lee, Hyunho
    • Journal of Korean Society of Water and Wastewater / v.34 no.1 / pp.35-43 / 2020
  • Turbidity has various effects on the water quality and ecosystem of a river, and high turbidity during floods increases the operating cost of a drinking water supply system. Managing turbidity is therefore essential for providing safe water to the public, and there have been various efforts to estimate turbidity in river systems for proper management and early warning in the water supply process. Advanced data analysis based on machine learning is increasingly used in water quality management. Artificial neural networks (ANNs) were among the first algorithms applied, but overfitting to observed data and vanishing gradients during backpropagation limit their wide application in practice. In recent years, deep learning methods that overcome these limitations have been applied to water quality management. Long Short-Term Memory (LSTM) is a deep learning algorithm widely used for time series analysis. In this study, LSTM is used to predict high turbidity (>30 NTU) in a river from the relationship between turbidity and discharge, enabling early warning of high turbidity in a drinking water supply system. Using data at 2-hour intervals, the model achieved a precision, recall, F1-score, and accuracy of 0.98, 0.99, 0.98, and 0.99, respectively, for predicting high turbidity. The model's sensitivity to the observation interval is also compared across intervals of 2 hours, 8 hours, 1 day, and 2 days: precision is higher with shorter intervals, which underscores the importance of collecting high-frequency data for better management of water resources in the future.
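
A minimal Keras sketch of the kind of LSTM classifier described, flagging turbidity above a threshold from windows of (discharge, turbidity) history; the window length, network size, and synthetic data are assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder series: columns = (discharge, turbidity) at 2-hour intervals.
series = np.random.rand(1000, 2).astype("float32")
WINDOW = 12                                  # 24 h of history (assumption)

X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
# Label = 1 if turbidity at the next step is "high"; 0.9 here is a scaled
# stand-in for the study's 30 NTU threshold on real measurements.
y = (series[WINDOW:, 1] > 0.9).astype("float32")

model = models.Sequential([
    layers.Input(shape=(WINDOW, 2)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),   # P(high turbidity)
])
model.compile("adam", "binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(X, y, epochs=3, validation_split=0.3)
```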

Network Forensics and Intrusion Detection in MQTT-Based Smart Homes

  • Lama AlNabulsi; Sireen AlGhamdi; Ghala AlMuhawis; Ghada AlSaif; Fouz AlKhaldi; Maryam AlDossary; Hussian AlAttas; Abdullah AlMuhaideb
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.95-102 / 2023
  • The Internet of Things (IoT) has rapidly entered our daily lives. It has been integrated into our homes, cars, and cities, increasing the intelligence of the devices involved in communications. Enormous amounts of data are exchanged among smart devices over the internet, which raises security concerns regarding privacy invasion. This paper focuses on forensics and intrusion detection for one of the most common protocols in IoT environments, especially smart homes: the Message Queuing Telemetry Transport (MQTT) protocol. The paper covers general IoT infrastructure, the MQTT protocol and attacks against it, and several network forensics frameworks for smart homes. Furthermore, a machine learning model is developed and tested to detect several types of attacks in an IoT network, and a forensics tool (MQTTracker) is proposed to support investigation of the MQTT protocol and provide a safer technological future in the warmth of people's homes. The MQTT-IOT-IDS2020 dataset is used to train the machine learning model, and different attack detection algorithms are compared to choose a suitable algorithm for accurate classification of attacks within MQTT traffic.
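
As a hedged sketch of the algorithm-comparison step, the snippet below cross-validates a few common classifiers on placeholder features; the paper's actual feature extraction from MQTT-IOT-IDS2020 and its candidate algorithm list are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for flow statistics extracted from
# MQTT traffic; labels = attack class (benign, scan, brute force, ...).
X = np.random.rand(500, 20)
y = np.random.randint(0, 5, 500)

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=100),
    "decision_tree": DecisionTreeClassifier(),
    "knn": KNeighborsClassifier(),
}
for name, clf in candidates.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")   # pick the best-scoring detector
```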

Machine Learning-Based Signal Prediction Method for Power Line Communication Systems

  • Sun, Young Ghyu; Sim, Issac; Hong, Seung Gwan; Kim, Jin Young
    • Journal of Satellite, Information and Communications / v.12 no.3 / pp.74-79 / 2017
  • In this paper, we propose a system model that predicts the original signal sent by the transmitter from the signal received in a power line communication system, based on the multi-layer perceptron, one of the machine learning algorithms. Because power line communication uses the power network as its transmission medium, it suffers from more noise than communication over dedicated lines, which degrades system performance. To address this problem, the proposed communication system model minimizes the influence of noise through original-signal prediction and thereby mitigates the performance degradation of the power line communication system. We demonstrate that the original signal can be predicted by applying the proposed model in a white-noise environment.
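
A minimal sketch of the idea, assuming a sliding window of received samples and scikit-learn's MLPRegressor in place of the paper's network; the window width, noise level, and BPSK-like symbols are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, width = 2000, 8                       # window width is an assumption

# Transmitted symbols and a noisy received version (white Gaussian noise
# standing in for power-line channel noise).
tx = rng.choice([-1.0, 1.0], size=n + width)
rx = tx + 0.5 * rng.standard_normal(n + width)

# Predict the original sample at the window center from nearby received samples.
X = np.stack([rx[i:i + width] for i in range(n)])
y = tx[np.arange(n) + width // 2]

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
mlp.fit(X[:1500], y[:1500])
print("test MSE:", np.mean((mlp.predict(X[1500:]) - y[1500:]) ** 2))
```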

Lightweight Named Entity Extraction for Korean Short Message Service Text

  • Seon, Choong-Nyoung; Yoo, Jin-Hwan; Kim, Hark-Soo; Kim, Ji-Hwan; Seo, Jung-Yun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.3 / pp.560-574 / 2011
  • In this paper, we propose a hybrid of a Machine Learning (ML) algorithm and a rule-based algorithm to implement a lightweight Named Entity (NE) extraction system for Korean SMS text. NE extraction from Korean SMS text is challenging due to the resource limitations of a mobile phone, corruption in the input text, the need to incorporate personal information stored on the phone, and the sparsity of training data. The proposed hybrid method, retaining the advantages of both statistical ML and rule-based algorithms, provides a fully automated procedure for combining ML approaches with correction rules through a threshold-based soft decision function. The method is applied to Korean SMS text to extract person names and location names, which are key information in a personal appointment management system. Our system achieved an F-measure of 80.53% in this domain, superior to conventional ML approaches.
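
The threshold-based soft decision can be sketched as below: trust the statistical tagger when its confidence clears a threshold, otherwise fall back to correction rules. The tagger, rules, and threshold value are toy stand-ins, not the paper's models.

```python
def extract_entities(tokens, ml_tagger, rules, threshold=0.7):
    """Hybrid NE extraction sketch (threshold-based soft decision; the
    paper's actual features, tagset, and threshold are not reproduced)."""
    entities = []
    for tok in tokens:
        label, confidence = ml_tagger(tok)       # statistical ML prediction
        if confidence >= threshold:
            entities.append((tok, label))        # trust the ML decision
        else:
            for pattern, rule_label in rules:    # cheap rule-based fallback
                if pattern(tok):
                    entities.append((tok, rule_label))
                    break
    return entities

# Toy usage: a stub tagger and one rule for Korean location suffixes.
stub_tagger = lambda tok: ("PERSON", 0.9) if tok.endswith("씨") else ("O", 0.3)
rules = [(lambda t: t.endswith("역"), "LOCATION")]   # e.g. station names
print(extract_entities(["김민수씨", "서울역", "내일"], stub_tagger, rules))
```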

A Dynamic Channel Switching Policy Through P-learning for Wireless Mesh Networks

  • Hossain, Md. Kamal; Tan, Chee Keong; Lee, Ching Kwang; Yeoh, Chun Yeow
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.2 / pp.608-627 / 2016
  • Wireless mesh networks (WMNs) based on IEEE 802.11s have emerged as one of the prominent technologies for multi-hop communications. However, the deployment of WMNs suffers from a serious interference problem that severely limits system capacity. By equipping each mesh router with multiple radios operating over multiple channels, interference can be reduced and system capacity improved; nevertheless, interference cannot be completely eliminated because the number of available channels is limited. An effective approach to mitigating interference is dynamic channel switching (DCS). Conventional DCS schemes trigger channel switching when interference is detected or exceeds a predefined threshold, which can cause unnecessary switching and long protocol overheads. In this paper, a P-learning-based dynamic switching algorithm, the learning automaton (LA)-based DCS algorithm, is proposed. An optimal channel for each communicating node pair is first determined through the learning process; a novel switching metric is then introduced to avoid unnecessary channel switching. The proposed LA-based DCS algorithm thus enables each pair of communicating mesh nodes to operate on the least loaded channels and consequently improves network performance.
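
A minimal sketch of a learning automaton for channel selection, using a linear reward-inaction update as a stand-in for the paper's P-learning scheme; the load model, learning rate, and reward condition are assumptions.

```python
import numpy as np

class ChannelAutomaton:
    """Learning-automaton channel selector (linear reward-inaction sketch;
    the paper's exact update scheme and switching metric are assumptions)."""
    def __init__(self, n_channels, lr=0.1):
        self.p = np.full(n_channels, 1.0 / n_channels)  # channel probabilities
        self.lr = lr

    def choose(self):
        return np.random.choice(len(self.p), p=self.p)

    def reward(self, ch):
        # Successful, low-load transmission: reinforce the chosen channel,
        # proportionally shrinking the others (probabilities stay summed to 1).
        self.p *= (1 - self.lr)
        self.p[ch] += self.lr
        self.p /= self.p.sum()

# Toy run: channel 2 is least loaded, so transmissions on it succeed most
# often and the automaton converges toward it without threshold-triggered switching.
rng = np.random.default_rng(1)
load = np.array([0.8, 0.6, 0.2, 0.7])
la = ChannelAutomaton(4)
for _ in range(500):
    ch = la.choose()
    if rng.random() > load[ch]:
        la.reward(ch)
print("converged probabilities:", la.p.round(2))
```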