• Title/Summary/Keyword: vector computer


Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.374-388 / 2022
  • Cloud computing has been one of the most critical technologies of the last few decades. It was invented for several purposes, for example to meet user requirements and to satisfy users' needs in simple ways. Since its invention, cloud computing has followed traditional approaches to elasticity, which is its key characteristic: elasticity is the feature of cloud computing that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been pursued for several years with different mathematical models. Even though mathematical modelling has been a step forward in meeting users' needs, the optimisation of elasticity is still lacking. To optimise elasticity in the cloud, it is better to exploit machine learning algorithms to predict upcoming workloads and pass them to the scheduling algorithm, which would achieve excellent provisioning of cloud services, improve Quality of Service (QoS), and save power. Therefore, this paper investigates the use of machine learning techniques to predict the workload of Physical Hosts (PH) on the cloud and their energy consumption. The cloud environment hosting the experiments is the School of Computing cloud testbed (SoC). The experiments use real applications with different behaviours, changing workloads over time. The results demonstrate that the machine learning techniques used in the scheduling algorithm can predict the workload (CPU utilisation) of physical hosts, which contributes to reducing power consumption by scheduling upcoming virtual machines onto the physical hosts with the lowest CPU utilisation. Additionally, a number of tools are used and explored in this paper, such as WEKA, to train machine learning algorithms on the real data, and Zabbix, to monitor power consumption before and after scheduling the virtual machines onto physical hosts. The methodology of the paper follows an agile approach, which helps in achieving the solution and managing the work effectively.
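
To make the workload-prediction idea concrete, here is a minimal Python sketch (not the paper's WEKA/SoC pipeline) that trains a support vector regressor on a sliding window of past CPU-utilisation samples per physical host and schedules the next virtual machine onto the host with the lowest predicted utilisation. The host names, window length, and random traces are illustrative assumptions.

```python
# Minimal sketch, assuming a sliding-window SVR per host; not the paper's
# WEKA/SoC pipeline. Host names and utilisation traces are placeholders.
import numpy as np
from sklearn.svm import SVR

WINDOW = 6  # number of past CPU-utilisation samples used as features (assumed)

def make_windows(series, window=WINDOW):
    # turn a 1-D utilisation trace into (features, next-value) pairs
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array([series[i + window] for i in range(len(series) - window)])
    return X, y

def predict_next_utilisation(history):
    X, y = make_windows(history)
    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    return float(model.predict(history[-WINDOW:].reshape(1, -1))[0])

# schedule the next virtual machine on the host with the lowest predicted load
hosts = {"ph-01": np.random.rand(200), "ph-02": np.random.rand(200)}  # assumed traces
target = min(hosts, key=lambda name: predict_next_utilisation(hosts[name]))
print("schedule next VM on:", target)
```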

Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu;Yao Fang;Leihong Zhang;Dawei Zhang;Lulu Zheng
    • Current Optics and Photonics / v.7 no.3 / pp.237-247 / 2023
  • In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and so on distorts the light waves propagating in the air, resulting in image-quality degradation such as geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to this degradation causes huge losses, so effective restoration of degraded images is very important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature fusion network (CMFNetPro) was proposed. Based on the CMFNet network, an efficient channel-attention mechanism was used to replace the original channel-attention mechanism, improving image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were used to construct two turbulent datasets with different degrees of distortion, based on the Google Landmarks Dataset v2. The results showed that, compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better performance in both quality and training cost of turbulent-image restoration. In mixed training, CMFNetPro was 1.2391 dB (weak turbulence) and 0.8602 dB (strong turbulence) higher in peak signal-to-noise ratio, and 0.0015 (weak turbulence) and 0.0136 (strong turbulence) higher in structural similarity, than CMFNet, and its training was 14.4 hours faster. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
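
The replacement attention block is described only at a high level in the abstract; the sketch below shows what an efficient channel-attention layer in the ECA-Net style typically looks like in PyTorch, purely as an assumed illustration rather than the authors' exact CMFNetPro module.

```python
# Assumed illustration of an ECA-style efficient channel-attention layer,
# not the authors' exact CMFNetPro module.
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Channel attention via a cheap 1-D convolution over pooled channel
    descriptors, instead of a fully connected squeeze-and-excite bottleneck."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                     # x: (B, C, H, W)
        y = self.pool(x)                                      # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))        # (B, 1, C)
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))   # (B, C, 1, 1)
        return x * y                                          # rescale each channel

feat = torch.randn(1, 64, 32, 32)
out = ECALayer()(feat)   # same shape, channel-wise re-weighted
```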

Efficient Transformer Dissolved Gas Analysis and Classification Method (효율적인 변압기 유중가스 분석 및 분류 방법)

  • Cho, Yoon-Jeong;Kim, Jae-Young;Kim, Jong-Myon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.563-570 / 2018
  • This paper proposes an efficient dissolved gas analysis (DGA) and classification method for oil-filled transformers using machine learning algorithms, to solve problems inherent in IEC 60599. In IEC 60599, definite diagnostic criteria do not exist for some cases and the diagnostic regions overlap, so it is difficult to make a decision without experts, since the standard cannot support the analysis and classification of gas data from a power transformer under those criteria. To address these issues, we propose a DGA and classification method based on a machine learning algorithm. We evaluate the performance of the proposed method using support vector machines on a dissolved-gas dataset extracted from a power transformer operating in industry. To validate the performance of the proposed method, we compare it with the IEC 60599 standard. Experimental results show that the proposed method outperforms IEC 60599 in classification accuracy.
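
As an illustration of the classification step, the following hedged scikit-learn sketch trains an SVM on dissolved-gas features in place of the fixed IEC 60599 ratio rules. The file names, feature layout, and hyperparameters are assumptions, not the paper's dataset.

```python
# Hedged sketch of the classification step: an SVM over dissolved-gas features
# instead of fixed IEC 60599 ratio thresholds. File names, feature layout, and
# hyperparameters are assumptions, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.loadtxt("dga_features.csv", delimiter=",")  # e.g. H2, CH4, C2H6, C2H4, C2H2 (assumed)
y = np.loadtxt("dga_labels.csv", delimiter=",")    # fault class per sample (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))
```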

Acoustic Emission based early fault detection and diagnosis method for pipeline (음향방출 기반 배관 조기 결함 검출 및 진단 방법)

  • Kim, Jaeyoung;Jeong, Inkyu;Kim, Jongmyon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.3 / pp.571-578 / 2018
  • Deteriorated pipelines often cause unexpected leaks and cracks, and negligence and late maintenance lead to enormous damage to gas and water resources. This paper proposes an early fault detection and diagnosis algorithm for pipelines using acoustic emission (AE) signals. The early fault detection method compares the frequency amplitudes of the measured spectrum with those of the spectrum in the normal condition; larger spectral amplitudes indicate an abnormal condition. The early fault diagnosis algorithm uses a support vector machine (SVM), trained on normal and abnormal conditions, to diagnose the AE signal measured from the target pipeline. In the experiment, a pipeline testbed was constructed to resemble a real industrial pipeline, and normal, 5 mm cracked, and 10 mm holed pipelines were installed and tested. The proposed fault detection and diagnosis technique is validated as an efficient approach for detecting early faulty conditions of a pipeline.
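
A rough Python sketch of the two stages described above follows: spectral comparison against a normal-condition baseline for detection, and an SVM trained on spectra for diagnosis. The windowing, margin, and labels are assumed for illustration.

```python
# Rough sketch of both stages, under assumed parameters: spectral comparison
# against a normal-condition baseline for detection, and an SVM over spectra
# for diagnosis (normal / cracked / holed).
import numpy as np
from numpy.fft import rfft
from sklearn.svm import SVC

def spectrum(signal):
    # magnitude spectrum of a windowed AE signal
    return np.abs(rfft(signal * np.hanning(len(signal))))

def is_abnormal(ae_signal, baseline_spectrum, margin=1.5):
    # flag when any spectral amplitude exceeds the normal baseline by a margin
    return bool(np.any(spectrum(ae_signal) > margin * baseline_spectrum))

def train_diagnoser(train_signals, labels):
    # labels could be "normal", "crack_5mm", "hole_10mm" (assumed names)
    X = np.stack([spectrum(s) for s in train_signals])
    return SVC(kernel="rbf").fit(X, labels)
```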

Geometric and Semantic Improvement for Unbiased Scene Graph Generation

  • Ruhui Zhang;Pengcheng Xu;Kang Kang;You Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2643-2657 / 2023
  • Scene graphs are structured representations that can clearly convey objects and the relationships between them, but they are often heavily biased because relational labeling in the dataset is highly skewed and long-tailed. Indeed, the visual world itself and its descriptions are biased. Therefore, Unbiased Scene Graph Generation (USGG) prefers to train models that eliminate long-tail effects as much as possible, rather than altering the dataset directly. To this end, we propose Geometric and Semantic Improvement (GSI) for USGG to mitigate this issue. First, to fully exploit the feature information in the images, geometric-dimension and semantic-dimension enhancement modules are designed. The geometric module is built on the observation that the positions of neighboring object pairs affect each other, which improves the recall rate of relationships across the dataset. The semantic module further processes the embedded word vectors, enhancing the acquisition of semantic information. Then, to improve the recall rate on tail data, the Class Balanced Seesaw Loss (CBSLoss) is designed; it penalizes body or tail relations that are judged incorrectly, improving prediction recall. The experimental findings demonstrate that the GSI method performs better than mainstream models in terms of the mean Recall@K (mR@K) metric on three tasks, and it addresses the long-tailed imbalance in the Visual Genome 150 (VG150) dataset better than most existing methods.
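
The paper's CBSLoss is not specified in the abstract; the sketch below only illustrates the general seesaw-style idea of down-weighting the negative gradients that frequent (head) classes impose on rarer (tail) classes, using class counts to adjust the logits. It is an assumed simplification, not the authors' loss.

```python
# Assumed simplification of a seesaw-style reweighting, not the paper's CBSLoss:
# negative logits of classes rarer than the ground-truth class are down-weighted
# according to class counts, reducing the gradient pressure head classes put on
# tail classes.
import torch
import torch.nn.functional as F

def seesaw_style_ce(logits, targets, class_counts, p=0.8):
    counts = class_counts.float()
    ratio = counts[None, :] / counts[:, None]              # ratio[i, j] = N_j / N_i
    mitigation = torch.where(ratio < 1.0, ratio.pow(p), torch.ones_like(ratio))
    weights = mitigation[targets]                          # per-sample factors, row i = GT class i
    # equivalent to softmax cross-entropy with the mitigation inside the normaliser
    adjusted = logits + torch.log(weights.clamp_min(1e-12))
    return F.cross_entropy(adjusted, targets)
```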

Optimize KNN Algorithm for Cerebrospinal Fluid Cell Diseases

  • Soobia Saeed;Afnizanfaizal Abdullah;NZ Jhanjhi
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.43-52 / 2024
  • Medical imaging plays an important part in the analysis of tumors and cerebrospinal fluid (CSF) leaks. Magnetic resonance imaging (MRI) is an imaging and segmentation technology that shows an angular cross-sectional perspective of the body, making it convenient for medical specialists to examine patients. The images generated by MRI are detailed, which enables medical specialists to identify affected areas and diagnose disease; MRI is usually a basic part of diagnosis and treatment. In this research, we propose new techniques using a 4D-MRI image segmentation process to detect brain tumors in the skull, and we identify the issues related to the quality of images of cerebral disease or CSF leakage (fluid discovered inside the brain). The aim of this research is to construct a framework that can identify cancer-damaged areas and isolate them from non-tumor tissue. We use 4D light-field image segmentation, followed by MATLAB modeling techniques, and measure the size of brain-damaged cells deep inside the CSF. Data are collected with a support vector machine (SVM) tool and processed using the K-Nearest Neighbor (KNN) algorithm included in MATLAB. We propose a 4D light-field tool (LFT) modulation method that can be used for light-field editing applications. Depending on the user's input, an objective evaluation of each ray is performed using KNN to maintain the 4D frequency (redundancy). These light-field approaches can help increase the efficiency of segmentation and of light-field composite pipeline editing, as they minimize boundary artefacts.
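
As a minimal illustration of the classification step only, the following sketch fits a scikit-learn KNN classifier to per-region feature vectors to separate tumour or CSF-leak regions from normal tissue. The feature files and label convention are placeholders; the paper's 4D light-field segmentation and MATLAB modeling are out of scope here.

```python
# Minimal illustration of the classification step only: a KNN classifier over
# per-region feature vectors. The .npy files and the label convention are
# placeholders; feature extraction from the segmented 4D-MRI volumes is assumed
# to have been done elsewhere (MATLAB in the paper).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

features = np.load("region_features.npy")  # one feature vector per segmented region (assumed)
labels = np.load("region_labels.npy")      # 1 = tumour / CSF-leak region, 0 = normal (assumed)

knn = KNeighborsClassifier(n_neighbors=5).fit(features, labels)
print("predicted class of the first region:", knn.predict(features[:1]))
```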

Protecting Accounting Information Systems using Machine Learning Based Intrusion Detection

  • Biswajit Panja
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.111-118 / 2024
  • In general, a network-based intrusion detection system is designed to detect malicious behavior directed at a network or its resources. The key goal of this paper is to examine network data and identify whether it is normal traffic or anomalous traffic, specifically for accounting information systems. Today there are a variety of principles for detecting various forms of network-based intrusion. In this paper, we use supervised machine learning techniques: classification models are trained and validated on the data. Using these algorithms, we train the system on a training dataset and then use the trained system to detect intrusions in a testing dataset. The proposed method detects whether network data are normal or anomalous, which helps prevent unauthorized activity on the network and the systems under it. Decision Tree and K-Nearest Neighbor classifiers are applied in the proposed model to classify network traffic behavior as normal or abnormal, and Logistic Regression and Support Vector Classification algorithms are also used to support the proposed concept. Furthermore, a feature selection method is used to extract valuable information from the dataset and enhance the efficiency of the approach: a Random Forest algorithm helps the system identify the crucial features and focus on them rather than on all features. The experimental findings reveal that the suggested method for network intrusion detection has a negligible false alarm rate, with accuracy expected to be between 95% and 100%. As a result of the high precision rate, this concept can be used to detect network data intrusion and prevent vulnerabilities on the network.
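
A hedged sketch of the described workflow: Random Forest importances select features, and several classifiers (Decision Tree, KNN, Logistic Regression, SVC) then label traffic as normal or anomalous. The CSV file and column names are assumptions, not the paper's dataset.

```python
# Hedged sketch of the workflow: Random Forest importances for feature
# selection, then several classifiers label traffic as normal or anomalous.
# The dataset file and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

data = pd.read_csv("network_traffic.csv")            # assumed dataset
X, y = data.drop(columns=["label"]), data["label"]   # label: normal / anomaly (assumed)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))
selector.fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

for clf in (DecisionTreeClassifier(), KNeighborsClassifier(),
            LogisticRegression(max_iter=1000), SVC()):
    clf.fit(X_tr_s, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te_s, y_te))
```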

Adaptive Search Range Decision for Accelerating GPU-based Integer-pel Motion Estimation in HEVC Encoders (HEVC 부호화기에서 GPU 기반 정수화소 움직임 추정을 고속화하기 위한 적응적인 탐색영역 결정 방법)

  • Kim, Sangmin;Lee, Dongkyu;Sim, Dong-Gyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.19 no.5 / pp.699-712 / 2014
  • In this paper, we propose a new Adaptive Search Range (ASR) decision algorithm for accelerating GPU-based Integer-pel Motion Estimation (IME) in High Efficiency Video Coding (HEVC). To decide the ASR, we classify a frame into two models using Motion Vector Differences (MVDs) and then adaptively decide the search range of each model. To apply the proposed algorithm to the GPU-based ME process, the starting points of the ME are decided using only temporal Motion Vectors (MVs). The CPU decides the ASR as well as the starting points and transfers them to the GPU, which then performs the integer-pel ME. The proposed algorithm reduces the total encoding time by 37.9% with a BD-rate increase of 1.1% and makes the ME 951.2 times faster than the CPU-based anchor. In addition, compared with a simple GPU-based ME without the ASR decision, it reduces the ME running time by 57.5% with a negligible coding loss of 0.6%.
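
The decision rule itself is not given in the abstract; the snippet below is an assumed simplification of an MVD-based adaptive search range: blocks with small motion-vector differences get a small range, the rest get a large one, and the CPU would pass these ranges (with temporal-MV starting points) to the GPU integer-pel ME kernel.

```python
# Assumed simplification of the MVD-based decision, not the paper's exact rule:
# small motion-vector differences -> small search range, otherwise a large one.
# Thresholds and range sizes are illustrative.
import numpy as np

def decide_search_ranges(mvds, small_range=8, large_range=64, threshold=4):
    """mvds: (N, 2) per-block motion-vector differences from coded frames."""
    magnitude = np.abs(mvds).sum(axis=1)                 # |MVDx| + |MVDy|
    return np.where(magnitude <= threshold, small_range, large_range)

# The CPU would compute these ranges (plus temporal-MV starting points) and
# hand them to the GPU kernel that performs the integer-pel motion estimation.
ranges = decide_search_ranges(np.array([[0, 1], [12, -7], [2, 2]]))
print(ranges)                                            # [ 8 64  8]
```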

Design of Low Cost Controller for 5[kVA] 3-Phase Active Power Filter (5[kVA]급 3상 능동전력필터를 위한 저가형 제어기 설계)

  • 이승요;채영민;최해룡;신우석;최규하
    • The Transactions of the Korean Institute of Power Electronics / v.4 no.1 / pp.26-34 / 1999
  • With the increase in nonlinear power electronic equipment, active power filters have been researched and developed for many years to compensate for harmonic disturbances and reactive power. However, commercialization of active power filters has proceeded slowly because their cost, compared with passive filters for harmonic and reactive power compensation, is high. In particular, the DSP (digital signal processor) chips frequently used to control three-phase active power filters are a factor that increases the cost. On the other hand, using only an analog controller makes the controller circuits much more complicated and reduces the controller's flexibility. In this paper, a low-cost controller for a 5 [kVA] three-phase active power filter system is designed. To reduce the expense of the active filter system, the presented controller consists of a digital control part using an Intel 80C196KC microprocessor and an analog control part using a hysteresis controller for current control. The characteristics of the designed controller are analyzed by computer simulation, and its compensation characteristics are verified by experiment.
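
As a small illustration of the analog current-control part, the following discrete-time sketch of a hysteresis comparator toggles a phase switch whenever the current error leaves a fixed band. The band width and signal names are assumptions, not values from the paper.

```python
# Small illustration of the hysteresis current-control idea, in discrete form:
# the phase switch toggles when the current error leaves a fixed band. The band
# width and signal names are assumptions, not values from the paper.
def hysteresis_switch(i_ref, i_meas, prev_state, band=0.5):
    error = i_ref - i_meas
    if error > band:        # current too low -> turn the upper device on
        return 1
    if error < -band:       # current too high -> turn the lower device on
        return 0
    return prev_state       # inside the band -> keep the previous switching state
```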

Reverse-time migration using the Poynting vector (포인팅 벡터를 이용한 역시간 구조보정)

  • Yoon, Kwang-Jin;Marfurt, Kurt J.
    • Geophysics and Geophysical Exploration / v.9 no.1 / pp.102-107 / 2006
  • Recently, rapid developments in computer hardware have enabled reverse-time migration to be applied to various production imaging problems. As a wave-equation technique using the two-way wave equation, reverse-time migration can handle not only multi-path arrivals but also steep dips and overturned reflections. However, reverse-time migration produces unwanted artefacts, which arise from the two-way character of the hyperbolic wave equation: zero-lag cross-correlation with diving waves, head waves, and back-scattered waves results in spurious artefacts. These strong artefacts share the feature that the correlating forward and backward wavefields propagate in almost opposite directions at each correlation point, because the ray paths of the forward and backward wavefields are almost identical. In this paper, we present several tactics for avoiding artefacts in shot-domain reverse-time migration. Simple muting of a shot gather before migration, or wavefront migration, which performs the correlation only within a time window following the first-arrival traveltimes, is useful for suppressing artefacts. Calculating the wave-propagation direction from the Poynting vector gives rise to a new imaging condition, which can eliminate strong artefacts and can produce common-image gathers in the reflection-angle domain.
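
To make the new imaging condition concrete, here is a rough numpy sketch (under assumed array shapes and finite-difference conventions): Poynting vectors estimate the propagation directions of the forward and backward wavefields, and the zero-lag cross-correlation is kept only where the opening angle indicates a reflection rather than a nearly opposite (back-scattered or diving-wave) path.

```python
# Rough sketch under assumed shapes: u is a 2-D wavefield snapshot, u_prev and
# u_next are the neighbouring time steps. Poynting vectors give propagation
# directions; correlations with nearly opposite directions are masked out.
import numpy as np

def poynting(u_prev, u_next, u, dt, dx):
    du_dt = (u_next - u_prev) / (2.0 * dt)
    grads = np.gradient(u, dx)                 # [du/dz, du/dx]
    return [-du_dt * g for g in grads]         # S = -(du/dt) * grad(u)

def angle_gated_image(src_S, rcv_S, src_u, rcv_u, max_angle_deg=160.0):
    dot = sum(a * b for a, b in zip(src_S, rcv_S))
    norm = np.sqrt(sum(a * a for a in src_S)) * np.sqrt(sum(b * b for b in rcv_S)) + 1e-12
    opening = np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))
    keep = opening < max_angle_deg             # reject near-opposite propagation
    return keep * src_u * rcv_u                # masked zero-lag cross-correlation
```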