• Title/Summary/Keyword: Security Techniques


High Speed AES Implementation on 64 bits Processors (64-비트 프로세서에서 AES 고속 구현)

  • Jung, Chang-Ho;Park, Il-Hwan
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.6A, pp.51-61, 2008
  • This paper suggests a new way to implement high-speed AES on Intel Core2 and AMD Athlon64 processors, which are widely used today. First, on Core2 processors of the EM64T architecture, memory-access instructions are processed less efficiently than arithmetic instructions, so previous AES implementation techniques, which had a high proportion of memory-access instructions, could suffer from memory bottlenecks. To address this problem, we present a partial round key technique that reduces the proportion of memory-access instructions. On an Intel Core2 Duo 3.0 GHz processor, the result is 185 cycles/block and 2.0 Gbps throughput in ECB mode, which is 35 cycles/block faster than Bernstein's software, known as the fastest implementation so far. On the other hand, on processors of the AMD64 architecture, we improved the speed by removing bottlenecks that occur in the decoding process, with the result that the Athlon64 processor reached 170 cycles/block. The presented results match the performance of Matsui's unpublished software.
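
A minimal throughput sketch in Python, assuming the pycryptodome library is available; it only illustrates how cycles/block and Gbps figures like those above are derived from a timed AES-ECB run, not the paper's optimized 64-bit assembly (the CPU_HZ constant is an assumed clock rate).

```python
# Rough throughput measurement for AES-128 in ECB mode using pycryptodome.
# CPU_HZ is an assumed clock rate used only to convert seconds into cycles/block.
import os
import time
from Crypto.Cipher import AES  # pip install pycryptodome

CPU_HZ = 3.0e9        # assumed 3.0 GHz, matching the paper's Core2 Duo test machine
BLOCK_SIZE = 16       # AES block size in bytes
N_BLOCKS = 1_000_000

key = os.urandom(16)
data = os.urandom(BLOCK_SIZE * N_BLOCKS)
cipher = AES.new(key, AES.MODE_ECB)

start = time.perf_counter()
cipher.encrypt(data)
elapsed = time.perf_counter() - start

cycles_per_block = CPU_HZ * elapsed / N_BLOCKS
throughput_gbps = (len(data) * 8) / elapsed / 1e9
print(f"~{cycles_per_block:.0f} cycles/block, ~{throughput_gbps:.2f} Gbps")
```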

A Behavior based Detection for Malicious Code Using Obfuscation Technique (우회기법을 이용하는 악성코드 행위기반 탐지 방법)

  • Park Nam-Youl;Kim Yong-Min;Noh Bong-Nam
    • Journal of the Korea Institute of Information Security & Cryptology, v.16 no.3, pp.17-28, 2006
  • The appearance of variant malicious code that uses obfuscation techniques is accelerating the spread of malicious code while evading detection by anti-virus software. If detection patterns for vulnerabilities and worms have not been patched into the anti-virus software, a system can be infected, and malicious code can spread rapidly to other systems and networks within a few minutes. Moreover, conventional pattern-based detection and treatment are limited against variants or new malicious code. In this paper, we propose a behavior-based detection method that combines static analysis, dynamic analysis, and dynamic monitoring to detect malicious code that uses obfuscation techniques such as PE compression. We also show that, with the proposed similarity measure, dynamic monitoring can detect PE-compressed worms that access important resources such as the registry, the CPU, memory, and files.
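
A toy sketch of the behavior-similarity idea only, not the authors' system: each monitored process is summarized as a vector of counts of security-relevant actions and compared against known worm behavior profiles (the feature names, profiles, and threshold are illustrative assumptions).

```python
# Toy behavior-similarity check: summarize a process as a vector of counts of
# security-relevant actions and flag it when the vector is close to a known
# worm profile. The observed counts are assumed to come from a monitoring hook
# that is not shown here.
import math

FEATURES = ["registry_writes", "file_writes", "process_creations", "net_connections"]

KNOWN_WORM_PROFILES = {
    "mass_mailer": [40, 120, 5, 300],
    "pe_packed_dropper": [25, 200, 15, 10],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify(observed, threshold=0.9):
    """Return the best-matching worm profile if its similarity exceeds the threshold."""
    name, profile = max(KNOWN_WORM_PROFILES.items(),
                        key=lambda kv: cosine_similarity(observed, kv[1]))
    score = cosine_similarity(observed, profile)
    return (name, score) if score >= threshold else (None, score)

print(classify([30, 180, 12, 8]))   # close to "pe_packed_dropper" -> flagged
print(classify([0, 3, 1, 2]))       # low similarity -> not flagged
```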

Polymorphic Worm Detection Using A Fast Static Analysis Approach (고속 정적 분석 방법을 이용한 폴리모픽 웜 탐지)

  • Oh, Jin-Tae;Kim, Dae-Won;Kim, Ik-Kyun;Jang, Jong-Soo;Jeon, Yong-Hee
    • Journal of the Korea Institute of Information Security & Cryptology, v.19 no.4, pp.29-39, 2009
  • To respond to worms, which are malicious programs that spread automatically across communication networks, the most widely used detection approach is to generate signatures by analyzing worm-related packets. However, to evade such signature-based detection techniques, exploits employing mutated polymorphic forms are becoming more prevalent. In this paper, we propose a novel static analysis approach for detecting the decryption routine of polymorphic exploit code. Our approach detects, within network flows, the code routine that decrypts the encrypted original code contained in the polymorphic exploit code. The experimental results show that our approach can detect polymorphic exploit code even when static-analysis-resistant techniques are used. They also show that our approach is more efficient than the emulation-based approach in processing performance.
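
A heavily simplified static heuristic in the spirit of this approach, not the paper's detector: it flags byte sequences in which an XOR on memory is followed shortly by a backward short jump or LOOP, the typical shape of a decryption loop (the opcode choices and window size are illustrative assumptions; real exploit code requires proper disassembly).

```python
# Toy static heuristic: an XOR opcode (0x30/0x31) followed within a short window
# by a backward short jump (0xEB) or LOOP (0xE2) suggests a decryption loop.
def looks_like_decryption_loop(payload: bytes, window: int = 16) -> bool:
    for i, b in enumerate(payload):
        if b in (0x30, 0x31):                             # candidate XOR instruction
            for j in range(i + 1, min(i + window, len(payload) - 1)):
                op, disp = payload[j], payload[j + 1]
                if op in (0xEB, 0xE2) and disp >= 0x80:   # backward displacement
                    return True
    return False

# Example: a hand-made byte string containing an XOR followed by a backward LOOP.
suspicious = bytes([0x31, 0x06, 0x46, 0x49, 0xE2, 0xF6])
benign = b"GET /index.html HTTP/1.1\r\n"
print(looks_like_decryption_loop(suspicious))  # True
print(looks_like_decryption_loop(benign))      # False
```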

The Traffic Analysis of P2P-based Storm Botnet using Honeynet (허니넷을 이용한 P2P 기반 Storm 봇넷의 트래픽 분석)

  • Han, Kyoung-Soo;Lim, Kwang-Hyuk;Im, Eul-Gyu
    • Journal of the Korea Institute of Information Security & Cryptology, v.19 no.4, pp.51-61, 2009
  • Recently, cyber-attacks using botnets have been increasing, and because these attacks pursue money, their criminal aspect is also increasing. Cyber-attacks that use botnets include the spread of spam mail, DDoS (Distributed Denial of Service) attacks, propagation of malicious code and malware, phishing, and leaks of sensitive information. There are many studies on detection and mitigation techniques against centralized botnets, namely IRC and HTTP botnets; however, studies on P2P botnets are still at an early stage. In this paper, we analyzed the traffic of the Peacomm bot, one of the P2P-based Storm bots, using a honeynet, which is widely employed in the active analysis of network attacks. As a result, we could see that the Peacomm bot sends a large number of UDP packets to zombies across a wide network through P2P, and that through this behavior the Peacomm bot maintains and extends the scale of the botnet. We expect these results to serve as a basis for detection and mitigation techniques against P2P botnets.
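
A minimal traffic-analysis sketch with scapy, assuming a honeynet packet capture saved as a pcap file (the file name is hypothetical); it counts UDP packets per destination to surface the fan-out behavior described above, not the paper's full analysis.

```python
# Count UDP packets per destination address in a capture to expose P2P fan-out.
from collections import Counter
from scapy.all import rdpcap, IP, UDP   # pip install scapy

packets = rdpcap("peacomm.pcap")        # hypothetical honeynet capture file
udp_per_dst = Counter(
    pkt[IP].dst for pkt in packets if IP in pkt and UDP in pkt
)

print("Top UDP destinations (candidate P2P peers / zombies):")
for dst, count in udp_per_dst.most_common(10):
    print(f"{dst:>15}  {count} packets")
```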

A Novel RGB Image Steganography Using Simulated Annealing and LCG via LSB

  • Bawaneh, Mohammed J.;Al-Shalabi, Emad Fawzi;Al-Hazaimeh, Obaida M.
    • International Journal of Computer Science & Network Security, v.21 no.1, pp.143-151, 2021
  • The enormous prevalence of transferring official confidential digital documents via the Internet shows the urgent need to deliver confidential messages to the recipient without letting any unauthorized person learn the contents of the secret messages or detect their existence. Several steganography techniques, such as Least Significant Bit (LSB), Secure Cover Selection (SCS), Discrete Cosine Transform (DCT), and Palette Based (PB), have been applied to prevent intruders from analyzing and recovering the secret transferred message. The steganography method used should withstand the challenges of steganalysis techniques in terms of analysis and detection. This paper presents a novel and robust framework for color image steganography that combines a Linear Congruential Generator (LCG), simulated annealing (SA), Caesar cryptography, and LSB substitution in one system, in order to resist steganalysis and deliver data securely to its destination. SA, with the support of the LCG, finds an optimal minimum sniffing path inside a cover color (RGB) image; the confidential message is then encrypted and embedded along that path in the RGB host image using the Caesar and LSB procedures. Embedding and extraction of the secret message require common knowledge shared between sender and receiver; that knowledge consists of the SA initialization parameters, the LCG seed, the Caesar key agreement, and the secret message length. A steganalysis intruder cannot understand or detect the secret message inside the host image without correct knowledge of the manipulation process. The constructed system satisfies the main requirements of image steganography in terms of robustness against confidential message extraction, high-quality visual appearance, low mean square error (MSE), and high peak signal-to-noise ratio (PSNR).
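
A minimal sketch of the LCG + Caesar + LSB combination, with the simulated-annealing path search omitted for brevity: an LCG seeded with the shared seed selects the channel order, the message is Caesar-shifted, and its bits overwrite the least significant bits (parameter values and file names are illustrative assumptions, not the authors' settings).

```python
# LCG picks the embedding positions, Caesar shifts the message, LSB hides the bits.
import numpy as np
from PIL import Image  # pip install pillow numpy

def lcg(seed, n, m, a=1103515245, c=12345):
    """Return n distinct LCG-driven indices into a space of size m (illustrative)."""
    seen, x = [], seed
    while len(seen) < n:
        x = (a * x + c) % m
        if x not in seen:
            seen.append(x)
    return seen

def caesar(text, key):
    return "".join(chr((ord(ch) + key) % 256) for ch in text)

def embed(cover_path, stego_path, message, seed=42, key=3):
    img = np.array(Image.open(cover_path).convert("RGB"))
    flat = img.reshape(-1)                       # every R/G/B channel value
    bits = [int(b) for ch in caesar(message, key)
            for b in format(ord(ch), "08b")]
    for pos, bit in zip(lcg(seed, len(bits), flat.size), bits):
        flat[pos] = (flat[pos] & 0xFE) | bit     # overwrite the LSB
    Image.fromarray(flat.reshape(img.shape)).save(stego_path)

def extract(stego_path, length, seed=42, key=3):
    flat = np.array(Image.open(stego_path).convert("RGB")).reshape(-1)
    positions = lcg(seed, length * 8, flat.size)
    bits = [str(flat[pos] & 1) for pos in positions]
    chars = [chr(int("".join(bits[i:i + 8]), 2)) for i in range(0, len(bits), 8)]
    return caesar("".join(chars), -key)

# embed("cover.png", "stego.png", "top secret")      # hypothetical file names
# print(extract("stego.png", len("top secret")))
```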

Accuracy of Phishing Websites Detection Algorithms by Using Three Ranking Techniques

  • Mohammed, Badiea Abdulkarem;Al-Mekhlafi, Zeyad Ghaleb
    • International Journal of Computer Science & Network Security, v.22 no.2, pp.272-282, 2022
  • Between 2014 and 2019, the US lost more than 2.1 billion USD to phishing attacks, according to the FBI's Internet Crime Complaint Center, and COVID-19 scam complaints totaled more than 1,200. These figures reflect the damaging effects of phishing attacks, and phishing website (PW) detection has accordingly received attention in the literature. Earlier methods maintained a centralized, manually updated blacklist, but such lists cannot detect newly created sites. Several recent studies have applied supervised machine learning (SML) algorithms and schemes, typically based on features extracted from URLs, to the PW detection problem. These studies show that some classification algorithms are more effective on some data sets than others; however, no single classifier has become the standard for the phishing site detection problem. This study aims to identify the SML features and schemes that work best against PWs across all publicly available phishing data sets. Eight widely used classification algorithms from the scikit-learn library were configured and tested on the public phishing datasets. The classification algorithms were then evaluated for accuracy on three different datasets, and the Welch t-test was used to check for statistically significant differences. Ensemble methods and neural networks outperform the classical algorithms in this study. On the three publicly accessible phishing datasets, the eight traditional SML algorithms were evaluated, and the results are reported in terms of classification accuracy and classifier ranking, as shown in Tables 4 and 8. On severely unbalanced datasets, some classifiers obtained classification accuracy above 99.0 percent. Finally, the results show that this approach can be adapted and outperforms conventional techniques with good precision.
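
A sketch of the evaluation setup described above, using scikit-learn classifiers, cross-validated accuracy, and Welch's t-test on a synthetic stand-in dataset (the public phishing datasets and the full set of eight classifiers are not reproduced here; classifier choice and parameters are illustrative).

```python
# Compare classifiers by cross-validated accuracy, then test for a significant
# difference with Welch's t-test.
from scipy.stats import ttest_ind
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

scores = {name: cross_val_score(clf, X, y, cv=10, scoring="accuracy")
          for name, clf in classifiers.items()}

for name, s in scores.items():
    print(f"{name:20s} mean accuracy = {s.mean():.4f}")

# Are the ensemble's fold accuracies significantly different from the linear baseline's?
t, p = ttest_ind(scores["RandomForest"], scores["LogisticRegression"], equal_var=False)
print(f"Welch t = {t:.3f}, p = {p:.4f}")
```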

Machine Learning-based Classification of Hyperspectral Imagery

  • Haq, Mohd Anul;Rehman, Ziaur;Ahmed, Ahsan;Khan, Mohd Abdul Rahim
    • International Journal of Computer Science & Network Security, v.22 no.4, pp.193-202, 2022
  • The classification of hyperspectral imagery (HSI) is essential in Earth surface observation. Owing to its large number of contiguous bands, HSI data provide rich information about the object of study; however, they suffer from the curse of dimensionality. Dimensionality reduction is an essential aspect of machine-learning classification: algorithms based on feature extraction can overcome the dimensionality issue, allowing classifiers to use comprehensive models at reduced computational cost. This paper assesses and compares three HSI classification techniques: the first is based on the Joint Spatial-Spectral Stacked Autoencoder (JSSSA) method, the second on a shallow Artificial Neural Network (SNN), and the third on an SVM model. The JSSSA technique performs better than the SNN technique in terms of overall accuracy and Kappa coefficient: the JSSSA-based method surpasses the SNN technique with an overall accuracy of 96.13% and a Kappa coefficient of 0.95, while the SNN achieves a good accuracy of 92.40% and a Kappa coefficient of 0.90, and the SVM achieves an accuracy of 82.87%. The current study suggests that both the JSSSA- and SNN-based techniques are efficient methods for hyperspectral classification of snow features. This work classified labeled/ground-truth datasets of snow into multiple classes. The labeled/ground-truth data can be valuable for applying deep neural networks such as CNNs, hybrid CNNs, and RNNs to glaciology and snow-related hazard applications.
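
A sketch of the SNN-versus-SVM comparison with overall accuracy and the Kappa coefficient, using a synthetic stand-in for the hyperspectral pixels; the JSSSA autoencoder and the real snow dataset are not reproduced here, and all shapes and parameters are illustrative.

```python
# Compare a shallow neural network and an SVM on synthetic "spectral" samples,
# reporting overall accuracy and Cohen's kappa.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Pretend each sample is a pixel with 100 spectral bands and one of 4 snow classes.
X, y = make_classification(n_samples=3000, n_features=100, n_informative=40,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SNN (1 hidden layer)": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                          random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10.0, gamma="scale"),
}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:22s} accuracy={accuracy_score(y_te, pred):.4f} "
          f"kappa={cohen_kappa_score(y_te, pred):.4f}")
```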

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security, v.22 no.6, pp.230-240, 2022
  • Sharing videos online via the Internet is an emerging and important concept in applications such as surveillance and mobile video search. There is therefore a need for a personalized web video retrieval system that can explore relevant videos and help people searching for videos related to specific big-data content. To support this process, features are computed from videos, with dimensionality reduction, to capture discriminative aspects of the scene based on shape, histogram, texture, object annotation, coordinates, color, and contour data. Dimensionality reduction mainly depends on feature extraction and feature selection in multi-label retrieval from multimedia data. Many researchers have implemented techniques to reduce dimensionality based on the visual features of video data, but each technique has advantages and disadvantages for dimensionality reduction with advanced features in video retrieval. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns the projection matrix by increasing the dependence between the enlarged data and the projected feature space. The proposed approach also addresses the aforementioned issue (i.e., segmentation of video with frame selection using low-level and high-level features) with efficient object annotation for video representation. Experiments performed on a synthetic dataset demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methodologies.
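
A generic sketch of the retrieval pipeline shape only (dimension reduction of visual features followed by nearest-neighbor search); PCA stands in for the learned projection matrix, NIDRSLA itself is not reproduced, and the feature vectors are random placeholders for real shape/texture/color descriptors.

```python
# Reduce high-dimensional visual descriptors, then retrieve the nearest videos.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
video_features = rng.normal(size=(500, 512))   # 500 videos, 512-D visual descriptors

# Project the descriptors into a compact space (stand-in for the learned projection).
pca = PCA(n_components=32).fit(video_features)
reduced = pca.transform(video_features)

# Index the reduced features and retrieve the most similar videos for a query.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(reduced)
query = pca.transform(rng.normal(size=(1, 512)))
distances, neighbors = index.kneighbors(query)
print("Top-5 retrieved video ids:", neighbors[0])
```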

A Detecting Technique for the Climatic Factors that Aided the Spread of COVID-19 using Deep and Machine Learning Algorithms

  • Al-Sharari, Waad;Mahmood, Mahmood A.;Abd El-Aziz, A.A.;Azim, Nesrine A.
    • International Journal of Computer Science & Network Security, v.22 no.6, pp.131-138, 2022
  • The novel coronavirus (COVID-19) is viewed as one of the main public health threats worldwide. Because of the sudden nature of the outbreak and the infectious power of the virus, it causes people anxiety, depression, and other stress reactions. Prevention and control of the novel coronavirus pneumonia have entered a critical stage, and it is essential to predict and forecast disease outbreaks early during this difficult time in order to control morbidity and mortality. The entire world is investing enormous effort to fight the spread of this lethal infection. In this paper, we used machine learning and deep learning techniques to analyze the situation using data shared by countries and to detect the climate factors that affect the spread of COVID-19, such as humidity, sunny hours, temperature, and wind speed, in order to understand its typical dynamic behavior and to forecast the future reach of COVID-19 around the world. We used data collected and produced by Kaggle and the Johns Hopkins Center for Systems Science. The dataset has 25 attributes and 9,566 objects. Our experiment consists of two phases. In phase one, we preprocessed the dataset for the DL model and reduced the features to four (humidity, sunny hours, temperature, and wind speed) using the Pearson Correlation Coefficient technique (correlation-based feature selection). In phase two, we used six traditional, well-known machine learning techniques for numerical datasets, and a DenseNet deep learning model, to predict and detect the climatic factors that aided the disease outbreak. We validated the models using a confusion matrix (CM) and measured performance with four different metrics: accuracy, F-measure, recall, and precision.
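
A sketch of phase one (Pearson-correlation feature selection) followed by a single classifier with confusion-matrix-based metrics, on a random placeholder data frame rather than the Kaggle / Johns Hopkins data; column names mirror the features named above and all thresholds are illustrative.

```python
# Select features by absolute Pearson correlation with the target, then train
# one classifier and report the confusion matrix and standard metrics.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "humidity": rng.uniform(20, 90, 1000),
    "sunny_hours": rng.uniform(0, 12, 1000),
    "temperature": rng.uniform(-5, 40, 1000),
    "wind_speed": rng.uniform(0, 25, 1000),
    "noise_feature": rng.normal(size=1000),
})
df["outbreak"] = (df["humidity"] * 0.02 + df["temperature"] * 0.03
                  + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

# Phase one: keep features whose |Pearson correlation| with the target is high enough.
corr = df.corr(method="pearson")["outbreak"].drop("outbreak")
selected = corr[corr.abs() > 0.1].index.tolist()
print("Selected features:", selected)

# Phase two (stand-in for the six ML models / DenseNet): one classifier plus metrics.
X_tr, X_te, y_tr, y_te = train_test_split(df[selected], df["outbreak"],
                                          test_size=0.3, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)
print(confusion_matrix(y_te, pred))
print("accuracy", accuracy_score(y_te, pred), "f1", f1_score(y_te, pred),
      "recall", recall_score(y_te, pred), "precision", precision_score(y_te, pred))
```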

Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security, v.22 no.10, pp.374-388, 2022
  • Cloud computing has been one of the most critical technologies of the last few decades. It was invented for several purposes, for example to meet user requirements and to satisfy users' needs in simple ways. Since its invention, cloud computing has followed traditional approaches to elasticity, its key characteristic. Elasticity is the feature of cloud computing that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been pursued for several years using different mathematical models. Even though these mathematical models represent a step forward in meeting users' needs, elasticity is still not fully optimised. To optimise elasticity in the cloud, it is beneficial to use machine learning algorithms to predict upcoming workloads and pass them to the scheduling algorithm, which enables excellent provisioning of cloud services, improves the Quality of Service (QoS), and saves power consumption. Therefore, this paper investigates the use of machine learning techniques to predict the workload of physical hosts (PHs) in the cloud and their energy consumption. The environment hosting the experiments is the School of Computing cloud testbed (SoC). The experiments use real applications with different behaviours, with workloads changing over time. The results demonstrate that the machine learning techniques used in the scheduling algorithm can predict the workload of physical hosts (CPU utilisation), which contributes to reducing power consumption by scheduling upcoming virtual machines to the physical hosts with the lowest CPU utilisation. Additionally, a number of tools are used and explored in this paper, such as the WEKA tool to train the real data and explore machine learning algorithms, and the Zabbix tool to monitor power consumption before and after scheduling the virtual machines to physical hosts. Moreover, the paper follows an agile methodology, which helps us achieve our solution and manage the work effectively.
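
A sketch of the predict-then-schedule idea: an SVM regressor is trained on each physical host's recent CPU-utilisation history, and the next VM goes to the host with the lowest predicted utilisation (the utilisation traces are synthetic placeholders; the SoC testbed, WEKA, and Zabbix are not used here).

```python
# Predict each host's next CPU utilisation with an SVR over a sliding window,
# then place the next VM on the host with the lowest predicted utilisation.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
WINDOW = 5  # predict the next reading from the previous 5

def train_predictor(trace):
    """Fit an SVR on sliding windows of one host's utilisation trace."""
    X = np.array([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)])
    y = np.array(trace[WINDOW:])
    return SVR(kernel="rbf", C=10.0).fit(X, y)

# Three hosts with different synthetic load patterns (percent CPU utilisation).
t = np.arange(200)
hosts = {
    "host-a": 50 + 20 * np.sin(t / 10) + rng.normal(0, 3, t.size),
    "host-b": 30 + 0.1 * t + rng.normal(0, 3, t.size),
    "host-c": 70 + rng.normal(0, 5, t.size),
}

predicted = {}
for name, trace in hosts.items():
    model = train_predictor(trace)
    next_util = model.predict(trace[-WINDOW:].reshape(1, -1))[0]
    predicted[name] = next_util
    print(f"{name}: predicted next CPU utilisation = {next_util:.1f}%")

target = min(predicted, key=predicted.get)
print("Schedule the next VM on:", target)
```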