• Title/Abstract/Keyword: Campus Network

Search results: 295 (processing time: 0.079 seconds)

A Supervised Feature Selection Method for Malicious Intrusions Detection in IoT Based on Genetic Algorithm

  • Saman Iftikhar;Daniah Al-Madani;Saima Abdullah;Ammar Saeed;Kiran Fatima
    • International Journal of Computer Science & Network Security / Vol. 23, No. 3 / pp. 49-56 / 2023
  • Machine learning methods applied across the Internet of Things (IoT) field have been successful thanks to growing computer processing power, and their high-level feature extraction capabilities make them an effective way of detecting malicious intrusions in IoT. In this paper, we propose a novel feature selection method for malicious intrusion detection in IoT that combines an evolutionary technique, the Genetic Algorithm (GA), with Machine Learning (ML) algorithms. The proposed model classifies the BoT-IoT dataset to evaluate its quality through training and testing with classifiers. The data are reduced and several preprocessing steps are applied: removal of unnecessary information, null-value checking, label encoding, standard scaling, and data balancing. The GA is then applied to the preprocessed data to select the most relevant features and keep the model optimized. The features selected by the GA are given to ML classifiers, namely Logistic Regression (LR) and Support Vector Machine (SVM), and the results are evaluated using performance measures including recall, precision, and F1-score. Two sets of experiments are conducted, and it is concluded that hyperparameter tuning has a significant effect on the performance of both ML classifiers. Overall, SVM remained the best model in both cases, and the overall results improved.
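
As a rough illustration of the pipeline the abstract describes, the hedged Python sketch below evolves binary feature masks with a small genetic algorithm and scores each mask by the cross-validated F1 of an SVM. The synthetic data, population size, and mutation rate are assumptions standing in for the BoT-IoT setup, not the authors' implementation.

```python
# Hedged sketch of GA-based feature selection feeding an SVM classifier; synthetic
# data stands in for the BoT-IoT dataset, and the GA settings are assumed values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=800, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    """Cross-validated macro F1 of an SVM trained on the columns kept by a binary mask."""
    if mask.sum() == 0:
        return 0.0
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3, scoring="f1_macro").mean()

pop = rng.integers(0, 2, size=(16, X.shape[1]))            # random initial population of masks
for _ in range(10):                                        # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-8:]]                 # keep the fittest half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05               # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected feature indices:", np.flatnonzero(best))
```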

Predicting concrete's compressive strength through three hybrid swarm intelligent methods

  • Zhang Chengquan;Hamidreza Aghajanirefah;Kseniya I. Zykova;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete / Vol. 32, No. 2 / pp. 149-163 / 2023
  • The uniaxial compressive strength is one of the main design parameters traditionally used in geotechnical engineering projects. The present paper employed three artificial intelligence methods, i.e., the stochastic fractal search (SFS), the multi-verse optimization (MVO), and the vortex search algorithm (VSA), to determine the compressive strength of concrete (CSC). To this end, 1030 concrete specimens were subjected to compressive strength tests. According to the obtained laboratory results, fly ash, cement, water, slag, coarse aggregates, fine aggregates, and superplasticizer (SP) were taken as the input parameters of the model in order to determine the optimum input configuration for estimating the compressive strength. The performance was evaluated using three criteria, i.e., the root mean square error (RMSE), the mean absolute error (MAE), and the determination coefficient (R2). The evaluation of the error criteria and the determination coefficient obtained from the above three techniques indicates that the SFS-MLP technique outperformed the MVO-MLP and VSA-MLP methods. On their own, the developed artificial neural network models exhibit larger errors and lower correlation coefficients than the other models; the use of the stochastic fractal search algorithm, however, considerably enhances the precision and accuracy of the evaluations conducted through the artificial neural network and improves its performance. According to the results, the SFS-MLP technique showed a better performance in the estimation of the compressive strength of concrete (R2 = 0.99932 and 0.99942, and RMSE = 0.32611 and 0.24922). The novelty of our study lies in the use of a large dataset of 1030 entries and in optimizing the learning scheme of the neural prediction model via a 20:80 testing-to-training data split.
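
The sketch below is only a minimal stand-in for the hybrid models compared in the abstract: it trains a plain MLP regressor on an 80:20 training-to-testing split and reports the same RMSE/MAE/R2 criteria. The synthetic mix data and network size are assumptions, and ordinary gradient training replaces the stochastic fractal search optimizer.

```python
# Illustrative sketch only: an MLP regressor evaluated with RMSE, MAE, and R2 on an
# 80:20 training-to-testing split. The SFS weight optimization from the abstract is
# replaced by ordinary training; the data are synthetic stand-ins for the 1030 mixes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(1030, 7))         # cement, slag, fly ash, water, SP, coarse agg, fine agg
y = X @ rng.uniform(10, 60, size=7) + rng.normal(scale=2.0, size=1030)  # synthetic CSC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
mlp = MLPRegressor(hidden_layer_sizes=(12,), max_iter=3000, random_state=1).fit(X_tr, y_tr)

for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = mlp.predict(Xs)
    rmse = mean_squared_error(ys, pred) ** 0.5
    print(name, "RMSE=%.3f MAE=%.3f R2=%.4f" % (rmse, mean_absolute_error(ys, pred), r2_score(ys, pred)))
```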

Computational intelligence models for predicting the frictional resistance of driven pile foundations in cold regions

  • Shiguan Chen;Huimei Zhang;Kseniya I. Zykova;Hamed Gholizadeh Touchaei;Chao Yuan;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete / Vol. 32, No. 2 / pp. 217-232 / 2023
  • Numerous studies have been performed on the behavior of pile foundations in cold regions. This study first attempted to employ artificial neural networks (ANN) to predict pile bearing capacity, focusing on pile data recorded primarily in cold regions. As the ANN technique has disadvantages such as difficulty in finding the global minimum and slower convergence rates, the second phase of this study develops ANN-based predictive models improved with the Elephant Herding Optimizer (EHO), Dragonfly Algorithm (DA), Genetic Algorithm (GA), and Evolution Strategy (ES) for predicting the piles' bearing capacity. The network inputs included the pile geometrical features: pile area (m2), pile length (m), the internal friction angle along the pile body and at the pile tip (Ø°), and the effective vertical stress. The output of the MLP model was the pile's ultimate bearing capacity. A sensitivity analysis was performed to determine the optimum parameters and select the best predictive model, and a trial-and-error technique was used to find the optimum network architecture and number of hidden nodes. According to the results, there is good consistency between the pile bearing capacities predicted by the DA-MLP model and the measured bearing capacities. Based on determination coefficients (R2) of 0.90364 and 0.8643 for the testing and training datasets, respectively, it is suggested that the DA-MLP model can be effectively implemented with higher reliability, efficiency, and practicability to predict the bearing capacity of piles.
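
For illustration only, the following sketch mimics the trial-and-error search over hidden-node counts mentioned in the abstract, scoring each candidate MLP by R2 on held-out data. The synthetic pile records and the candidate range are assumptions, and the dragonfly-algorithm weight tuning is not reproduced.

```python
# Minimal sketch of the trial-and-error search for the number of hidden nodes,
# scored by R2 on held-out data; synthetic inputs stand in for the pile records
# (pile area, length, friction angles, effective vertical stress).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 5))          # pile area, length, phi_body, phi_tip, sigma_v
y = 50 * X[:, 0] + 20 * X[:, 1] + 5 * X[:, 2:].sum(axis=1) + rng.normal(scale=1.0, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

best = None
for n_hidden in range(2, 21, 2):        # try 2, 4, ..., 20 hidden nodes
    mlp = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000, random_state=2).fit(X_tr, y_tr)
    r2 = r2_score(y_te, mlp.predict(X_te))
    if best is None or r2 > best[1]:
        best = (n_hidden, r2)
print("best hidden nodes: %d (test R2=%.4f)" % best)
```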

Text Network Analysis on Stalking-Related News Articles

  • 지은선;정상희
    • 문화기술의 융합 / Vol. 9, No. 3 / pp. 579-585 / 2023
  • The purpose of this study is to explore the key words in politically oriented news articles on stalking through text network analysis and to examine their underlying intent. A total of 1,607 articles published between January 1, 2018 and December 31, 2022 were selected, consisting of 824 articles from conservative outlets (Chosun Ilbo, JoongAng Ilbo) and 783 from progressive outlets (The Hankyoreh, Kyunghyang Shinmun), and the topic categories derived through LDA (Latent Dirichlet Allocation)-based topic modeling were explored. The results show that the topics common to the conservative and progressive media were improving awareness of gender-based violence, personal protection and the severity of punishment, and disclosure of stalkers' personal information. The topics that differed were the stalkers' acts of harm and an overview of the 'Sindang Station murder case' in the conservative media, and demands for aggravated punishment in the 'Sindang Station murder case' and the eradication of (cyberspace) sexual exploitation crimes in the progressive media. This study suggests that the form of reporting on stalking varies with the ideological stance of the news outlet.
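
A minimal sketch of the LDA topic-modeling step is given below, assuming scikit-learn's LatentDirichletAllocation; the placeholder English documents stand in for the 1,607 Korean news articles and their tokenization, which are not reproduced here.

```python
# Hedged sketch of LDA-based topic modeling; the tiny placeholder corpus below
# replaces the collected news articles and any Korean-language preprocessing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stalking victim protection police punishment",
    "stalker identity disclosure gender violence awareness",
    "subway station murder case aggravated punishment",
    "cyber sexual exploitation crime eradication",
]

vec = CountVectorizer()
dtm = vec.fit_transform(docs)                         # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]                 # five highest-weight terms per topic
    print("topic %d:" % k, ", ".join(terms[i] for i in top))
```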

A Deep Learning Approach for Covid-19 Detection in Chest X-Rays

  • Sk. Shalauddin Kabir;Syed Galib;Hazrat Ali;Fee Faysal Ahmed;Mohammad Farhad Bulbul
    • International Journal of Computer Science & Network Security / Vol. 24, No. 3 / pp. 125-134 / 2024
  • The novel coronavirus disease 2019, COVID-19, has spread swiftly worldwide, and early diagnosis is important to control its rapid spread. Medical imaging modalities, chest computed tomography and chest X-ray, play a vital role in the identification and testing of COVID-19 in the present epidemic. Chest X-ray is a cost-effective method for COVID-19 detection, but manual X-ray analysis is time consuming given that the number of infected individuals keeps growing rapidly. For this reason, it is very important to develop an automated COVID-19 detection process to help control the pandemic. In this study, we address the task of automatic detection of COVID-19 using a popular deep learning model, the VGG19 model. We used 1300 healthy and 1300 confirmed COVID-19 chest X-ray images in this experiment. We performed three experiments by freezing different blocks and layers of VGG19 and, finally, used a machine learning classifier, SVM, for detecting COVID-19. In every experiment, we used five-fold cross-validation to train and validate the model, and achieved 98.1% overall classification accuracy. Experimental results show that the proposed method using the deep learning-based VGG19 model can be used as a tool to aid radiologists and play a crucial role in the timely diagnosis of COVID-19.
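
The hedged sketch below mirrors the frozen-VGG19-features-plus-SVM idea from the abstract: random arrays stand in for the chest X-ray images, weights=None avoids the ImageNet download, and the block-freezing variations of the three experiments are collapsed into freezing the whole backbone.

```python
# Hedged sketch only: frozen VGG19 as a feature extractor followed by an SVM with
# five-fold cross-validation. Random arrays replace the chest X-ray images and
# weights=None replaces the pretrained ImageNet weights so the snippet runs offline.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
images = rng.uniform(0, 255, size=(20, 224, 224, 3)).astype("float32")  # stand-in X-rays
labels = np.tile([0, 1], 10)                                            # 0 = healthy, 1 = COVID-19

backbone = VGG19(weights=None, include_top=False, pooling="avg", input_shape=(224, 224, 3))
backbone.trainable = False                                              # freeze the convolutional blocks
features = backbone.predict(preprocess_input(images), verbose=0)        # one 512-d vector per image

scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)  # five-fold cross-validation
print("mean CV accuracy: %.3f" % scores.mean())
```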

Wine Quality Prediction by Using Backward Elimination Based on XGBoosting Algorithm

  • Umer Zukaib;Mir Hassan;Tariq Khan;Shoaib Ali
    • International Journal of Computer Science & Network Security / Vol. 24, No. 2 / pp. 31-42 / 2024
  • Different industries rely heavily on quality certification for promoting their products or brands, yet obtaining quality certification, specifically from human experts, is a difficult job. Machine learning plays a vital role in every aspect of life, and with regard to quality certification it has many applications for assigning and assessing quality certifications of different products on a macro level. Like other products, wine comes in different brands, and machine learning can play an important role in ensuring wine quality. In this research, we use two datasets that are publicly available in the UC Irvine Machine Learning Repository for predicting wine quality. The datasets chosen for our experimental study comprise white wine and red wine data, with 1599 records for red wine and 4898 records for white wine. The research study is twofold. First, we use a technique called backward elimination to determine the dependency of the dependent variable on the independent variables and to predict the dependent variable; the technique is useful for identifying which independent variables have the highest probability of improving wine quality. Second, we use a robust machine learning algorithm known as XGBoost for efficient prediction of wine quality. We evaluate our model on the basis of error measures: root mean square error, mean absolute error, R2, and mean square error. We compared the results generated by XGBoost with other state-of-the-art machine learning techniques, and the experimental results show that XGBoost outperforms the other state-of-the-art techniques.
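
As a loose sketch of the two-step procedure, the code below performs p-value-based backward elimination with statsmodels and then fits an XGBoost regressor, reporting RMSE, MAE, and R2. The synthetic table, feature names, and 0.05 threshold are assumptions rather than the authors' setup.

```python
# Illustrative sketch, not the authors' code: p-value-based backward elimination with
# statsmodels followed by an XGBoost regressor. Synthetic data stands in for the UCI
# red/white wine tables, and the feature names are only placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(4)
cols = ["fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar",
        "chlorides", "sulphates", "alcohol", "ph"]
X = pd.DataFrame(rng.uniform(size=(1599, len(cols))), columns=cols)
y = 3 + 2 * X["alcohol"] - 1.5 * X["volatile_acidity"] + rng.normal(scale=0.3, size=len(X))

# Backward elimination: repeatedly drop the predictor with the largest p-value above 0.05.
keep = list(cols)
while True:
    pvals = sm.OLS(y, sm.add_constant(X[keep])).fit().pvalues.drop("const")
    if pvals.max() <= 0.05:
        break
    keep.remove(pvals.idxmax())

X_tr, X_te, y_tr, y_te = train_test_split(X[keep], y, test_size=0.2, random_state=4)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("kept predictors:", keep)
print("RMSE=%.3f MAE=%.3f R2=%.4f" % (mean_squared_error(y_te, pred) ** 0.5,
                                      mean_absolute_error(y_te, pred), r2_score(y_te, pred)))
```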

Energy Efficient Cell Management by Flow Scheduling in Ultra Dense Networks

  • Sun, Guolin;Addo, Prince Clement;Wang, Guohui;Liu, Guisong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 9 / pp. 4108-4122 / 2016
  • To address the challenges of unprecedented growth in mobile data traffic, ultra-dense network deployment is a cost-efficient solution that offloads traffic onto small cells. However, real traffic is often much lower than peak-hour traffic, so certain small cells are superfluous, which not only introduces extra energy consumption but also imposes extra interference on the radio environment. In this paper, an elastic energy-efficient cell management scheme is proposed based on flow scheduling among multi-layer ultra-dense cells by an SDN controller, and a significant power saving is achieved by a cell-level energy manager. The scheme is elastic for energy saving and adaptive to the dynamic traffic distribution in office or campus environments. Finally, the performance is evaluated and demonstrated. The results show substantial improvements over the conventional method in terms of the number of active BSs, the number of handovers, and the number of BS switches.
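
The toy sketch below is not the paper's algorithm; it only illustrates the kind of greedy decision a cell-level energy manager might make, migrating flows from lightly loaded small cells onto a macro cell and switching the emptied cells off. The capacities and loads are invented.

```python
# Toy illustration, not the paper's scheme: a greedy pass that moves flows off lightly
# loaded small cells onto the macro cell and switches the emptied cells off.
MACRO_CAPACITY = 200.0          # Mbps the macro cell can still serve (assumed)
macro_load = 120.0
small_cells = {                 # cell id -> per-flow demands in Mbps (invented)
    "sc1": [5.0, 3.0],          # lightly loaded: candidate for switch-off
    "sc2": [40.0, 35.0, 20.0],  # heavily loaded: stays active
    "sc3": [2.0],
}

switched_off = []
for cell, flows in sorted(small_cells.items(), key=lambda kv: sum(kv[1])):
    demand = sum(flows)
    if macro_load + demand <= MACRO_CAPACITY:   # macro cell can absorb these flows
        macro_load += demand
        switched_off.append(cell)               # schedule the cell for sleep mode

active = [c for c in small_cells if c not in switched_off]
print("switched off:", switched_off, "| active:", active, "| macro load:", macro_load)
```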

Performance Improvement of Signature-based Traffic Classification System by Optimizing the Search Space

  • 박준상;윤성호;김명섭
    • 인터넷정보학회논문지 / Vol. 12, No. 3 / pp. 89-99 / 2011
  • As the variety of Internet-based applications and network bandwidth grow, the volume of data processed by payload signature-based traffic classification systems is increasing rapidly. Various pattern-matching algorithms have been proposed to improve the processing speed for such large volumes of traffic data. However, compared with the explosive growth in the number of signatures and the amount of traffic, the performance gains of pattern-matching algorithms are limited, and their performance depends on the characteristics of the input data. Therefore, in this paper, we propose a classification system architecture that optimizes the search space of both the traffic data and the signatures provided as input to the classification system. We also demonstrate its feasibility by applying the proposed classification system in real time to the large volume of traffic generated on a campus network.
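
The following hedged sketch illustrates the search-space idea in miniature: signatures are pre-grouped (here by server port, an assumed key) so that each flow's payload is matched only against a small candidate set instead of every signature. The signatures and payloads are made up for illustration.

```python
# Hedged sketch of search-space reduction for signature-based classification:
# signatures are indexed by port so each flow is compared only with its candidates.
# The signatures and payloads below are invented for illustration.
import re

signatures = [
    {"app": "http-download", "port": 80, "pattern": rb"^GET /.*\.(zip|exe)"},
    {"app": "http-web",      "port": 80, "pattern": rb"^GET /"},
    {"app": "smtp",          "port": 25, "pattern": rb"^EHLO "},
]

# Build the optimized search space: port -> compiled candidate signatures.
index = {}
for sig in signatures:
    index.setdefault(sig["port"], []).append((sig["app"], re.compile(sig["pattern"])))

def classify(port, payload):
    """Return the first matching application label, searching only the port's candidates."""
    for app, rx in index.get(port, []):
        if rx.search(payload):
            return app
    return "unknown"

print(classify(80, b"GET /campus/setup.exe HTTP/1.1"))   # -> http-download
print(classify(25, b"EHLO mail.example.ac.kr"))          # -> smtp
```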

Twitter Crawling System

  • Ganiev, Saydiolim;Nasridinov, Aziz;Byun, Jeong-Yong
    • Journal of Multimedia Information System / Vol. 2, No. 3 / pp. 287-294 / 2015
  • We are living in an epoch of information in which the Internet touches all aspects of our lives and provides a wealth of services, each of which benefits people in different ways. Electronic Mail (E-mail), File Transfer Protocol (FTP), voice/video communication, and search engines are clear examples of Internet services. Among them, Social Network Services (SNS) have continuously gained popularity over the past years. The most popular SNSs, such as Facebook, Weibo, and Twitter, generate millions of data items every minute. Twitter is an SNS that allows its users to post short instant messages; its roughly 100 million users posted 340 million tweets per day in 2012 [1]. Such a large amount of data often contains much noisy data, which can be defined as uninteresting and unclassifiable data. However, researchers can take advantage of this huge body of information to analyze and extract meaningful and interesting features. Collecting SNS data, including tweets, is handled by crawlers, and the Twitter crawler has recently emerged as a great tool for crawling Twitter data. In this project, we develop a Twitter Crawler system that enables us to extract Twitter data. We implemented our system in Java along with MySQL and use Twitter4J, a Java library for communicating with the Twitter API. The application first connects to the Twitter API, then retrieves tweets, and stores them in a database. We also develop crawling strategies to efficiently extract tweets in terms of time and amount.
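
A minimal Python sketch of the crawl-and-store loop is shown below; the original system is Java with Twitter4J and MySQL, so fetch_recent_tweets is a hypothetical placeholder for the real Twitter API call and sqlite3 stands in for MySQL.

```python
# Hedged Python sketch of the crawl-and-store loop described in the abstract (the
# original system is Java + Twitter4J + MySQL). fetch_recent_tweets is a placeholder
# for the real API call; sqlite3 stands in for MySQL so the sketch runs anywhere.
import sqlite3

def fetch_recent_tweets(keyword):
    """Placeholder for a Twitter API call; returns (tweet_id, user, text) tuples."""
    return [(1, "alice", f"campus network outage near the library #{keyword}"),
            (2, "bob",   f"new wifi on the {keyword} quad is fast")]

conn = sqlite3.connect("tweets.db")
conn.execute("CREATE TABLE IF NOT EXISTS tweets (id INTEGER PRIMARY KEY, user TEXT, text TEXT)")

for tweet_id, user, text in fetch_recent_tweets("campus"):
    # INSERT OR IGNORE keeps the crawler idempotent when the same tweet is seen twice.
    conn.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?, ?)", (tweet_id, user, text))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0], "tweets stored")
conn.close()
```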

Concurrent Channel Time Allocation for Resource Management in WPANs

  • Park, Hyunhee;Piamrat, Kandaraj;Singh, Kamal Deep
    • Journal of information and communication convergence engineering / Vol. 12, No. 2 / pp. 109-115 / 2014
  • This paper presents a concurrent channel time allocation scheme used in the reservation period for concurrent transmissions in 60-GHz wireless personal area networks (WPANs). To this end, the proposed resource allocation scheme includes an efficient method for creating concurrent transmission groups by using a table that indicates whether individual streams experience interference from other streams. The coordinator device calculates the number of streams that can be transmitted concurrently with each stream and groups them together on the basis of the calculation result. Then, the coordinator device allocates resources to each group such that the streams belonging to the same group can transmit data concurrently. Therefore, when the piconet coordinator (PNC) allocates channel time to the individual groups, it should do so in a way that maximizes the overall capacity. The performance evaluation demonstrates that the proposed scheme outperforms a random grouping scheme in terms of overall capacity when the beamwidth is $30^{\circ}$ and the radiation efficiency is 0.9.
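
As a toy illustration of the grouping step, the sketch below greedily packs streams that do not interfere with one another into the same concurrent-transmission group, using an invented interference table; it is not the scheme's actual capacity-maximizing allocation.

```python
# Toy sketch of interference-aware grouping: streams that do not interfere are packed
# into the same group so they can be scheduled concurrently. The interference table
# below is invented for illustration.
streams = ["s1", "s2", "s3", "s4"]
interferes = {                      # symmetric table: True means the pair cannot share a slot
    ("s1", "s2"): True, ("s1", "s3"): False, ("s1", "s4"): False,
    ("s2", "s3"): False, ("s2", "s4"): True, ("s3", "s4"): True,
}

def conflict(a, b):
    return interferes.get((a, b), interferes.get((b, a), False))

groups = []
for s in streams:
    for g in groups:
        if not any(conflict(s, member) for member in g):
            g.append(s)             # joins the first group it does not interfere with
            break
    else:
        groups.append([s])          # no compatible group: start a new one

print(groups)                       # -> [['s1', 's3'], ['s2'], ['s4']]
```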