• Title/Abstract/Keyword: software algorithms

Search Results: 1,093

IRSML: An intelligent routing algorithm based on machine learning in software defined wireless networking

  • Duong, Thuy-Van T.; Binh, Le Huu
    • ETRI Journal / v.44 no.5 / pp.733-745 / 2022
  • In software-defined wireless networking (SDWN), optimal routing is one of the most effective ways to improve network performance. Routing has been addressed by many different methods, most commonly by formulating an integer linear programming (ILP) problem or by building optimal routing metrics. These methods usually focus on a single routing objective, such as minimizing the packet blocking probability, minimizing end-to-end delay (EED), or maximizing network throughput; it is difficult to consider multiple objectives concurrently in one routing algorithm. In this paper, we investigate the application of machine learning to routing control in SDWN and propose an intelligent routing algorithm that improves network performance by optimizing multiple routing objectives. Our idea is to combine supervised learning (SL) and reinforcement learning (RL) to discover new routes. SL is used to predict the performance metrics of the links, including EED, quality of transmission (QoT), and packet blocking probability (PBP); routing itself is performed by RL. We use the Q-value in the fundamental RL update equation to store the PBP, which serves as the route-selection criterion, and the learning-rate coefficient is changed adaptively to enforce the routing constraints (QoT and EED) during learning. Performance evaluations in OMNeT++ show that the proposed algorithm significantly improves the network in terms of QoT, EED, packet delivery ratio, and throughput compared with other well-known routing algorithms.
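
  The following minimal Python sketch is only an illustration of the idea described above, not the authors' IRSML code: Q-learning route selection in which the Q-value stores expected packet blocking probability and the learning-rate coefficient enforces a delay constraint. The topology, the "SL-predicted" metric values, and all constants are assumptions.

```python
# Illustrative sketch only: Q-learning routing where Q stores expected PBP and the
# learning rate encodes an EED constraint (all values below are made up).
import random
from collections import defaultdict

topo = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}                   # toy topology
pbp  = {(u, v): random.uniform(0.01, 0.2) for u in topo for v in topo[u]}   # "SL-predicted" blocking
eed  = {(u, v): random.uniform(1.0, 10.0) for u in topo for v in topo[u]}   # "SL-predicted" delay
EED_MAX = 8.0                                       # assumed per-link delay budget (illustrative)

Q = defaultdict(float)                              # Q[(node, next_hop)]: expected blocking toward dst
for link, delay in eed.items():
    if delay > EED_MAX:
        Q[link] = 1.0                               # constrained-out links start (and stay) at max blocking
alpha_ok, gamma = 0.5, 0.9

def learn(src, dst, episodes=2000):
    for _ in range(episodes):
        node = src
        while node != dst:
            nxt = random.choice(topo[node])                               # random exploration
            alpha = alpha_ok if eed[(node, nxt)] <= EED_MAX else 0.0      # learning rate encodes constraint
            best_next = 0.0 if nxt == dst else min(Q[(nxt, k)] for k in topo[nxt])
            Q[(node, nxt)] += alpha * (pbp[(node, nxt)] + gamma * best_next - Q[(node, nxt)])
            node = nxt

def route(src, dst, max_hops=10):
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        node = min(topo[node], key=lambda k: Q[(node, k)])                # lowest expected blocking
        path.append(node)
    return path

learn(0, 3)
print("selected route:", route(0, 3))
```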

Development of Dental Medical Image Processing SW using Open Source Library (오픈 소스를 이용한 치과 의료영상처리 SW 개발)

  • Jongjin, Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.1 / pp.59-64 / 2023
  • With the recent development of IT technology, medical image processing is widely used in dentistry, and treatment outcomes are improved by using 3D data such as CT. This paper introduces open source libraries such as ITK and VTK for developing dental medical image processing software and shows how to use them to build such software around 3D CBCT data. ITK implements the basic algorithms of medical image processing, so an image processing pipeline can be assembled quickly, and a developer can easily implement a desired algorithm as a filter. The developed algorithm is linked with VTK to provide visualization. The resulting software can be used for dental diagnosis and treatment that overcomes the limitations of 2D images.
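
  As a rough illustration of the visualization side only, the Python sketch below renders a bone-like surface from a CBCT volume with VTK; the directory name and iso-value are placeholders, and ITK filters (smoothing, thresholding, etc.) would normally run before this step. It is not the software described in the paper.

```python
# Hedged illustration of a VTK visualization pipeline (folder name and iso-value are assumptions).
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("cbct_slices")          # hypothetical folder of CBCT DICOM slices

surface = vtk.vtkMarchingCubes()                # extract an iso-surface from the volume
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 1200)                       # arbitrary intensity threshold for hard tissue

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(surface.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```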

Vision Inspection and Correction for DDI Protective Film Attachment

  • Kang, Jin-Su; Kim, Sung-Soo; Lee, Yong-Hwan; Kim, Young-Hyung
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.153-166 / 2020
  • DDIs (Display Driver ICs) drive the numerous pixels that make up a display. For stable operation of a DDI, a protective film must be attached to shield electromagnetic waves. When the film is attached, defects often occur if it is tilted or its center point is misaligned. To minimize such defects, an algorithm is required that corrects the center point and the tilt angle using camera image information. The proposed technique detects the corner coordinates of the protective film by image processing in order to correct positional defects at the attachment site. The corner coordinates are detected algorithmically, and the center-point position and correction values are computed from them. A lookup table (LUT) is used to quickly determine whether the film is tilted. These algorithms were described in Verilog HDL. An existing software-based method requires memory to store an entire image before processing it; because the method proposed in this paper scans with an added line buffer in a single pass, it can operate even when only part of the image is stored. Compared with a software-language implementation, the execution time is shorter, the speed is much higher, and the error is relatively small.
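
  The paper implements the correction in Verilog HDL with line buffers and an angle LUT; purely to illustrate the underlying geometry, a Python/OpenCV sketch along the following lines could detect the film corners and derive translation and rotation corrections. The file name and the nominal attachment center are assumptions.

```python
# Illustrative software sketch of the geometric correction idea only
# (not the paper's hardware implementation).
import cv2

img = cv2.imread("film.png", cv2.IMREAD_GRAYSCALE)              # hypothetical inspection image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
film = max(contours, key=cv2.contourArea)                        # assume the film is the largest blob

(cx, cy), (w, h), angle = cv2.minAreaRect(film)                  # center, size, tilt angle
corners = cv2.boxPoints(((cx, cy), (w, h), angle))               # four corner coordinates

target = (img.shape[1] / 2, img.shape[0] / 2)                    # assumed nominal attachment center
dx, dy = target[0] - cx, target[1] - cy                          # translation correction
print(f"corners:\n{corners}\ncorrection: dx={dx:.1f}, dy={dy:.1f}, rotate={-angle:.2f} deg")
```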

Design and Implementation of a Face Authentication System (딥러닝 기반의 얼굴인증 시스템 설계 및 구현)

  • Lee, Seungik
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.63-68 / 2020
  • This paper proposes a face authentication system based on a deep learning framework. The proposed system consists of face region detection and feature extraction using a deep learning algorithm, and performs face authentication using a joint-Bayesian matrix learning algorithm. Performance is evaluated on several face databases, with two images per person. Authentication is performed by extracting 2048-dimensional features with a deep neural network, measuring pair similarity with the joint-Bayesian algorithm, and computing the equal error rate at which false acceptances equal false rejections. The results show that the proposed system, combining deep learning and the joint-Bayesian algorithm, achieves an equal error rate of 1.2% and performs well compared with previous approaches.
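
  For readers unfamiliar with the metric, the short Python sketch below shows how an equal error rate (EER) is computed from similarity scores. The deep feature extractor and joint-Bayesian scoring from the paper are replaced by random placeholder scores.

```python
# Minimal sketch of the evaluation step only: EER from genuine/impostor similarity scores.
import numpy as np

rng = np.random.default_rng(0)
genuine  = rng.normal(0.7, 0.1, 1000)   # similarity of matching pairs (placeholder)
impostor = rng.normal(0.3, 0.1, 1000)   # similarity of non-matching pairs (placeholder)

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(genuine  <  t).mean() for t in thresholds])  # false reject rate

i = np.argmin(np.abs(far - frr))        # threshold where the two error rates cross
print(f"EER ~ {(far[i] + frr[i]) / 2:.3%} at threshold {thresholds[i]:.2f}")
```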

Extraction Scheme of Function Information in Stripped Binaries using LSTM (스트립된 바이너리에서 LSTM을 이용한 함수정보 추출 기법)

  • Chang, Duhyeuk; Kim, Seon-Min; Heo, Junyoung
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.39-46 / 2021
  • To analyze and defend against malicious code, reverse engineering is used to identify function location information. In a stripped binary, however, such information is hard to recover because the function symbol information has been removed. Various binary analysis tools such as BAP, BitBlaze, and IDA Pro address this problem, but they are based on heuristics and therefore do not perform well in general. In this paper, we propose a technique that extracts function information with LSTM-based models, applying an N-byte scheme to the byte sequences obtained by disassembling the binary with a recursive-descent method. Experiments show that the proposed technique outperforms existing techniques in both time and accuracy.
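
  As a hedged illustration of what an LSTM over raw bytes can look like (not the authors' model, and omitting their N-byte windowing and recursive-descent preprocessing), a byte-level tagger that marks each position as a possible function start could be sketched in PyTorch as follows:

```python
# Illustrative byte-level LSTM tagger: labels each byte as "function start" or not.
import torch
import torch.nn as nn

class FuncStartTagger(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(256, 32)               # one embedding per byte value
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)              # start / not-start logits per position

    def forward(self, byte_seq):                         # byte_seq: (batch, length) int64
        x, _ = self.lstm(self.embed(byte_seq))
        return self.out(x)                               # (batch, length, 2)

model = FuncStartTagger()
dummy = torch.randint(0, 256, (1, 128))                  # 128 bytes of a .text section (synthetic)
print(model(dummy).shape)                                # torch.Size([1, 128, 2])
```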

Android Malware Detection using Machine Learning Techniques KNN-SVM, DBN and GRU

  • Sk Heena Kauser; V.Maria Anu
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.202-209 / 2023
  • Android malware is on the rise because of the growing popularity of the Android operating system. Machine learning models can classify unknown Android malware using characteristics gathered from dynamic and static analysis of Android applications. Conventional anti-virus software simply scans a program for the signatures of known virus instances; such products keep these signatures in large databases and examine each file against all existing virus and malware signatures. The proposed model aims to provide a machine learning based malware detection method for Android that addresses the inability of signature-based approaches to detect new malware apps and improves phone users' security and privacy. The system tracks numerous permission-based characteristics and events collected from Android apps and analyzes them with a classifier model to determine whether a program is goodware or malware. The machine learning techniques KNN-SVM, DBN, and GRU were used; KNN achieved 87.20% accuracy, SVM 91.40%, Naive Bayes 85.10%, and DBN-GRU 97.90%. Furthermore, this paper employs only standard machine learning techniques; in future work, we will attempt to improve these algorithms to develop a better detection method.
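
  A minimal sketch of the classical-ML portion (KNN and SVM over binary permission features) is shown below with scikit-learn; the DBN/GRU models and the real permission dataset from the paper are not reproduced, and the features and labels here are synthetic.

```python
# Hedged sketch: KNN and SVM on synthetic permission-style features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(1000, 50))        # 50 binary permission/event features (synthetic)
y = (X[:, :5].sum(axis=1) > 2).astype(int)     # synthetic goodware(0)/malware(1) label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```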

A Study on Connections of Resources in Data Centers (데이터센터 자원 연결 방안 연구)

  • Ki, Jang-Geun; Kwon, Kee-Young
    • Journal of Software Assessment and Valuation / v.15 no.2 / pp.67-72 / 2019
  • The recent explosion of data traffic, including cloud services, coupled with growing Internet penetration has led to a surge in the need for ultra-fast optical networks that can efficiently connect a data center's reconfigurable resources. In this paper, algorithms for controlling switching-cell operation in an optical switch connection structure are proposed, and the resulting performance is compared and analyzed through simulation. The analysis shows that the proposed algorithm improves the probability of successful multi-connection setup by about 3 to 7% compared with the existing algorithm.
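
  The abstract does not specify the switch structure or control algorithm, so the sketch below is only a generic illustration of the kind of quantity being measured: the probability that several simultaneous connections can be set up without conflict, here through a simple 8x8 Omega network of 2x2 switching cells with destination-tag routing. All parameters are assumptions.

```python
# Simplified, illustrative simulation (not the paper's algorithm): success probability
# of setting up k simultaneous connections through an 8x8 Omega network of 2x2 cells.
import random

N, STAGES = 8, 3                                     # 8x8 network, log2(8) stages

def path(src, dst):
    """Return the inter-stage link positions the connection occupies."""
    p, links = src, []
    for s in range(STAGES):
        p = ((p << 1) | (p >> (STAGES - 1))) & (N - 1)    # perfect shuffle before each stage
        p = (p & ~1) | ((dst >> (STAGES - 1 - s)) & 1)    # cell output chosen by destination bit
        links.append((s, p))
    return links

def try_setup(pairs):
    used = set()
    for src, dst in pairs:
        for link in path(src, dst):
            if link in used:                         # two connections need the same link: blocked
                return False
            used.add(link)
    return True

trials, k, ok = 10000, 4, 0
for _ in range(trials):
    srcs = random.sample(range(N), k)
    dsts = random.sample(range(N), k)
    ok += try_setup(list(zip(srcs, dsts)))
print(f"P(successful setup of {k} connections) ~ {ok / trials:.3f}")
```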

Pixel-Wise Polynomial Estimation Model for Low-Light Image Enhancement

  • Muhammad Tahir Rasheed; Daming Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2483-2504 / 2023
  • Most existing low-light enhancement algorithms either use a large number of training parameters or lack generalization to real-world scenarios. This paper presents a novel lightweight and robust deep network for low-light image enhancement based on pixel-wise polynomial approximation. Pixel-wise higher-order polynomials map the low-light image to the enhanced image, and a deep convolutional network estimates the coefficients of these polynomials. The proposed network uses multiple branches to estimate pixel values from different receptive fields: with the smallest receptive field, the first branch enhances local features, the second and third branches focus on medium-level features, and the last branch enhances global features. The low-light image is downsampled by a factor of $2^{b-1}$ (where b is the branch number) and fed to each branch, and the final enhanced image is obtained by combining the branch outputs. A comprehensive evaluation on six publicly available no-reference test datasets shows that the proposed network outperforms state-of-the-art methods on both quantitative and qualitative measures.
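
  The enhancement step itself is easy to picture in code: each output pixel is a polynomial in the corresponding input pixel, with coefficients that the network would predict. In the sketch below the coefficient maps are random placeholders of the right shape; only the pixel-wise polynomial evaluation reflects the abstract.

```python
# Sketch of the pixel-wise polynomial mapping; coefficient maps are placeholders,
# standing in for the multi-branch CNN's output.
import numpy as np

H, W, C, ORDER = 64, 64, 3, 3
rng = np.random.default_rng(0)
low = rng.uniform(0.0, 0.3, (H, W, C))                  # synthetic dark image in [0, 1]
coeffs = rng.uniform(0.0, 1.0, (ORDER + 1, H, W, C))    # a_0..a_3 per pixel and channel

# enhanced(x) = a_0 + a_1*x + a_2*x^2 + a_3*x^3, evaluated independently at every pixel
enhanced = sum(coeffs[k] * low**k for k in range(ORDER + 1))
enhanced = np.clip(enhanced, 0.0, 1.0)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```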

A study on the process of mapping data and conversion software using PC-clustering (PC-clustering을 이용한 매핑자료처리 및 변환소프트웨어에 관한 연구)

  • WhanBo, Taeg-Keun; Lee, Byung-Wook; Park, Hong-Gi
    • Journal of Korean Society for Geospatial Information Science / v.7 no.2 s.14 / pp.123-132 / 1999
  • With the rapid increase in the amount of data and computation, parallelization of computing algorithms has become more necessary than ever. Until the mid-1990s, however, parallelization was mostly carried out on supercomputers and was out of reach for general users because of the high price and the complexity of use. A new approach to parallel processing emerged in the late 1990s in the form of PC clustering; it is an excellent alternative for applications that need high computing power at relatively low cost, although installation and use are still difficult for general users. The mapping algorithms in GIS (cut, join, resizing, warping, raster-to-vector conversion and vice versa, etc.) are well suited for parallelization because of the characteristics of their data structures. If those algorithms are run on a PC cluster, the result is satisfactory in terms of cost and performance, since they are processed in real time at low cost. In this paper the tools and libraries for parallel processing and PC clustering are introduced, and it is shown how they are applied to GIS mapping algorithms. Parallel programs were developed for the mapping algorithms, and the experiments show that for most algorithms the performance increases almost linearly with the number of nodes.
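
  The tile-splitting idea behind this kind of data parallelism can be shown compactly; the sketch below is a single-machine analogue using Python's multiprocessing rather than the PC-cluster tools the paper used, and the per-tile "mapping" operation is just a placeholder.

```python
# Single-machine analogue of tile-parallel raster processing (placeholder operation).
import numpy as np
from multiprocessing import Pool

def process_tile(tile):
    # placeholder per-tile mapping operation (e.g., resampling or format conversion)
    return tile * 2

def parallel_map(raster, n_parts=4):
    tiles = np.array_split(raster, n_parts, axis=0)   # split rows across worker processes
    with Pool(n_parts) as pool:
        return np.vstack(pool.map(process_tile, tiles))

if __name__ == "__main__":
    raster = np.arange(1024 * 1024, dtype=np.float32).reshape(1024, 1024)
    out = parallel_map(raster)
    print(out.shape, np.allclose(out, raster * 2))
```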


Application of SWAT-CUP for Streamflow Auto-calibration at Soyang-gang Dam Watershed (소양강댐 유역의 유출 자동보정을 위한 SWAT-CUP의 적용 및 평가)

  • Ryu, Jichul; Kang, Hyunwoo; Choi, Jae Wan; Kong, Dong Soo; Gum, Donghyuk; Jang, Chun Hwa; Lim, Kyoung Jae
    • Journal of Korean Society on Water Environment / v.28 no.3 / pp.347-358 / 2012
  • The SWAT (Soil and Water Assessment Tool) model should be calibrated and validated with observed data to ensure the accuracy of its predictions. Recently, the SWAT-CUP (Calibration and Uncertainty Program for SWAT) software, which can calibrate SWAT with various algorithms, was developed to help SWAT users calibrate the model efficiently. In this study, three algorithms in SWAT-CUP (GLUE: Generalized Likelihood Uncertainty Estimation, PARASOL: Parameter Solution, and SUFI-2: Sequential Uncertainty Fitting ver. 2) were applied to the Soyang-gang dam watershed and evaluated. Simulated total streamflow and the 0~75th percentile streamflow were each compared with observed data. The NSE (Nash-Sutcliffe Efficiency) and $R^2$ (coefficient of determination) values were the same for the three algorithms, but the P-factor, which indicates the confidence of the calibration, ranged from 0.27 to 0.81: PARASOL gave the lowest P-factor (0.27) and SUFI-2 the greatest (0.81). Based on these calibration results, SUFI-2 was found to be suitable for the Soyang-gang dam watershed. Although the NSE and $R^2$ values were satisfactory for total streamflow, the SWAT-simulated values for the low-flow regime were not (negative NSE values). This is because of limitations in the semi-distributed structure of SWAT, which cannot simulate the effects of the spatial locations of HRUs (Hydrologic Response Units) within subwatersheds. To solve this problem, a module capable of simulating groundwater/baseflow should be developed and added to SWAT. With this enhancement in SWAT/SWAT-CUP, the SWAT-estimated streamflow values could be used to determine standard flow rates in TMDL (Total Maximum Daily Load) applications at the watershed scale.
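
  For reference, the two goodness-of-fit statistics named in this abstract are straightforward to compute; the Python sketch below uses short placeholder observed and simulated flow series rather than any data from the study.

```python
# Small sketch of NSE and R^2 on placeholder streamflow series.
import numpy as np

obs = np.array([12.0, 30.5, 55.2, 20.1, 8.7, 5.3])     # observed flow (placeholder, m3/s)
sim = np.array([10.5, 28.0, 60.1, 18.9, 9.5, 6.0])     # simulated flow (placeholder, m3/s)

nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe Efficiency
r2 = np.corrcoef(obs, sim)[0, 1] ** 2                                  # coefficient of determination
print(f"NSE = {nse:.3f}, R^2 = {r2:.3f}")
```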