• Title/Summary/Keyword: gradient algorithm

Search Results: 1,168

A Study on Implementation of the High Speed Feature Extraction System Based on Block Type Classification (블록 유형 분류 알고리즘 기반 고속 특징추출 시스템 구현에 관한 연구)

  • Lee, Juseong;An, Ho-Myoung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.186-191
    • /
    • 2019
  • In this paper, we propose an implementation approach for a high-speed feature extraction algorithm. The proposed method is based on a block type classification algorithm that reduces computation time when the target macroblock is classified as a smooth block type containing no image features. Using 200 standard test images with a 64×64 macroblock size, it was quantitatively identified that such blocks occur in 29.5% of the total image area. This means that, within standard test images containing a variety of image content, the computational complexity can be reduced for 29.5% of the blocks. When the proposed approach is applied to Canny edge detection, the latency of every edge-detection stage, such as the 2D derivative filter, gradient magnitude/direction computation, non-maximum suppression, adaptive threshold calculation, and hysteresis thresholding, can be eliminated entirely for those blocks. It is also expected that the operation time of other feature extraction algorithms can be reduced by applying the block type classification algorithm in the same way.
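The block classification step described above can be pictured with a short sketch. This is a minimal illustration, assuming a simple intensity-variance test decides whether a 64×64 macroblock is "smooth" and OpenCV's Canny stands in for the edge-detection pipeline; the threshold and function names are illustrative, not the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV; its Canny stands in for the full edge-detection pipeline

def classify_blocks(gray, block=64, smooth_thresh=25.0):
    """Mark a macroblock as smooth (True) when its intensity variance is low,
    i.e. it carries no image features worth running edge detection on."""
    h, w = gray.shape
    smooth = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = gray[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            smooth[by, bx] = tile.var() < smooth_thresh
    return smooth

def selective_canny(gray, block=64):
    """Run Canny only on non-smooth macroblocks; smooth blocks are skipped,
    which is where the reported 29.5% reduction in computation would come from."""
    edges = np.zeros_like(gray)
    smooth = classify_blocks(gray, block)
    for by in range(smooth.shape[0]):
        for bx in range(smooth.shape[1]):
            if smooth[by, bx]:
                continue  # no derivative filter, NMS, or thresholding needed here
            sl = (slice(by * block, (by + 1) * block), slice(bx * block, (bx + 1) * block))
            edges[sl] = cv2.Canny(gray[sl], 50, 150)
    return edges
```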

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.389-396
    • /
    • 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the commonly used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was applied as the activation function, cross entropy error (CEE) was applied as the loss function, and MNIST was used as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations would be the most efficient for deep learning training. This study provides implications for choosing the algorithm and a reference value for the number of neurons when new training data are given in the future.
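A minimal sketch of the experimental setup described above, written with PyTorch as an assumption (the paper does not state its framework): one input layer, three ReLU hidden layers, one output layer, cross entropy loss, and the five optimizers under comparison; the hidden size of 200 and the learning rate are illustrative values within the reported 100-300 range.

```python
import torch
import torch.nn as nn

def build_net(n_neurons=200):
    """One input layer (784 MNIST pixels), three hidden layers with ReLU,
    and one output layer of 10 classes, as in the study's architecture."""
    return nn.Sequential(
        nn.Linear(784, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, n_neurons), nn.ReLU(),
        nn.Linear(n_neurons, 10),
    )

def make_optimizer(name, params, lr=1e-3):
    """The five optimizers compared in the study; hyperparameters are illustrative."""
    opts = {
        "sgd":      lambda: torch.optim.SGD(params, lr=lr),
        "momentum": lambda: torch.optim.SGD(params, lr=lr, momentum=0.9),
        "adagrad":  lambda: torch.optim.Adagrad(params, lr=lr),
        "rmsprop":  lambda: torch.optim.RMSprop(params, lr=lr),
        "adam":     lambda: torch.optim.Adam(params, lr=lr),
    }
    return opts[name]()

net = build_net(200)                          # 100-300 neurons was the reported sweet spot
optimizer = make_optimizer("adam", net.parameters())
loss_fn = nn.CrossEntropyLoss()               # cross entropy error (CEE)
```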

Calculating the Actual Surface Area for Gangneung Forest Fire Area Using Slope-Aspect Algorithm (Slope-Aspect 알고리즘을 활용한 강릉시 산불 피해지역 실표면적 산출 방법)

  • Jeong, JongChul
    • Journal of Cadastre & Land InformatiX
    • /
    • v.52 no.1
    • /
    • pp.95-104
    • /
    • 2022
  • This study aims to find the exact area of the forest fire that occurred in Okgye-myeon, Gangneung, on April 4, 2019. Since the forests of Korea lie on slopes, a surface area that takes the gradient into account should be calculated. The DEM provided by the National Geographic Information Service and the 5th digital forest type map provided by the Korea Forest Service were used. In the DEM, the center point of each pixel was created and all the points were connected. The length of each connecting line is determined by the spatial resolution of the pixel and the cosine of the slope, and the surface area is obtained together with the height values; this is called the Slope-Aspect algorithm. The surface area and the planimetric (floor) area of the forest were presented according to tree species and forest type, and their quantitative differences demonstrate the validity of this study.
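The Slope-Aspect surface-area idea can be summarized in a few lines: divide each DEM cell's flat (planimetric) area by the cosine of its local slope. The following is a minimal NumPy sketch under that assumption, not the study's exact GIS workflow; the 5 m cell size is an illustrative value.

```python
import numpy as np

def actual_surface_area(dem, cell_size=5.0):
    """Approximate the real (sloped) surface area of each DEM cell.

    dem is a 2-D array of elevations; cell_size is the spatial resolution in
    metres.  Steeper cells get a larger surface area than their flat footprint."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)    # height differences between neighbouring cell centres
    slope = np.arctan(np.hypot(dz_dx, dz_dy))     # slope angle per cell
    flat_area = cell_size ** 2                    # planimetric area of one cell
    return flat_area / np.cos(slope)              # per-cell surface area

# Summing the per-cell values inside the burned-area polygon gives the total
# surface area that is compared against the planimetric (floor) area.
```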

Real-time prediction on the slurry concentration of cutter suction dredgers using an ensemble learning algorithm

  • Han, Shuai;Li, Mingchao;Li, Heng;Tian, Huijing;Qin, Liang;Li, Jinfeng
    • International conference on construction engineering and project management
    • /
    • 2020.12a
    • /
    • pp.463-481
    • /
    • 2020
  • Cutter suction dredgers (CSDs) are widely used in various dredging constructions such as channel excavation, wharf construction, and reef construction. During CSD construction, the main operation is to control the swing speed of the cutter to keep the slurry concentration in a proper range. However, the slurry concentration cannot be monitored in real time, i.e., there is a "time-lag effect" in the log of slurry concentration, making it difficult for operators to make optimal control decisions. To address this issue, a solution scheme that uses real-time monitored indicators to predict the current slurry concentration is proposed in this research. The characteristics of the CSD monitoring data are first studied, and a set of preprocessing methods is presented. Then we put forward the concept of an "index class" to select the important indices. Finally, an ensemble learning algorithm is set up to fit the relationship between the slurry concentration and the indices of the index classes. In the experiment, log data over seven days of a practical dredging construction were collected. For comparison, the Deep Neural Network (DNN), Long Short-Term Memory (LSTM), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Bayesian Ridge algorithms were tried. The results show that our method has the best performance, with an R2 of 0.886 and a mean square error (MSE) of 5.538. This research provides an effective way to predict the slurry concentration of CSDs in real time and can help improve the stationarity and production efficiency of dredging construction.
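As an illustration of the comparison stage described above, the sketch below fits several of the listed regressors (SVM, RF, GBDT, Bayesian Ridge) with scikit-learn and reports R2 and MSE; the synthetic data, feature count, and hyperparameters are placeholders, and the paper's own preprocessing, index-class selection, and ensemble construction are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

# X: real-time monitored indicators (one column per selected index), y: slurry concentration.
# Synthetic placeholder data stands in for the seven days of CSD log data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

models = {
    "SVM": SVR(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GBDT": GradientBoostingRegressor(random_state=0),
    "BayesianRidge": BayesianRidge(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  MSE={mean_squared_error(y_te, pred):.3f}")
```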


Prediction of Soil Moisture with Open Source Weather Data and Machine Learning Algorithms (공공 기상데이터와 기계학습 모델을 이용한 토양수분 예측)

  • Jang, Young-bin;Jang, Ik-hoon;Choe, Young-chan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.22 no.1
    • /
    • pp.1-12
    • /
    • 2020
  • As one of the essential resources in the agricultural process, soil moisture has been carefully managed by predicting future changes and deficits. In recent years, statistics- and machine learning-based approaches to predicting soil moisture have been preferred in academia for their generalizability and ease of use in the field. However, little is known about whether machine learning based soil moisture prediction is applicable to the situation in South Korea. In this sense, this paper aims to examine 1) whether publicly available weather data generated in South Korea have sufficient quality to predict soil moisture, 2) which machine learning algorithm would perform best in the situation of South Korea, and 3) whether a single machine learning model could be generally applicable across various regions. We used various machine learning methods such as Support Vector Machines (SVM), Random Forest (RF), Extremely Randomized Trees (ET), Gradient Boosting Machines (GBM), and Deep Feedforward Network (DFN) to predict future soil moisture in the Andong, Boseong, Cheolwon, and Suncheon regions with open source weather data. As a result, the GBM model showed the lowest prediction error on every data set we used (R squared: 0.96, RMSE: 1.8). Furthermore, GBM showed the lowest variance of prediction error between regions, which indicates it has the highest generalizability.
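A minimal sketch of the best-performing configuration reported above, a gradient boosting machine regressing soil moisture on open weather data; the file name, feature columns, and hyperparameters are hypothetical stand-ins, since the study's exact variable list is not given here.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical file pairing daily open weather data with observed soil moisture for one region.
df = pd.read_csv("andong_weather_soil.csv")
features = ["temperature", "humidity", "precipitation", "wind_speed", "solar_radiation"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["soil_moisture"],
                                          test_size=0.2, shuffle=False)

gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, random_state=0)
gbm.fit(X_tr, y_tr)
pred = gbm.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```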

Fast GPU Implementation for the Solution of Tridiagonal Matrix Systems (삼중대각행렬 시스템 풀이의 빠른 GPU 구현)

  • Kim, Yong-Hee;Lee, Sung-Kee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.11_12
    • /
    • pp.692-704
    • /
    • 2005
  • With the improvement of computer hardware, GPUs (Graphics Processing Units) now have tremendous memory bandwidth and computational power, which has led to their use in general-purpose computation. In particular, GPU implementation of compute-intensive physics-based simulations is actively studied. In the solution of differential equations, which are the basis of physics simulations, tridiagonal matrix systems arise repeatedly through finite-difference approximation. From the point of view of physics-based simulations, the fast solution of tridiagonal matrix systems is an important research field. We propose a fast GPU implementation for the solution of tridiagonal matrix systems. In this paper, we implement the cyclic reduction (also known as odd-even reduction) algorithm, which is a popular choice for vector processors. We obtained a considerable performance improvement for solving tridiagonal matrix systems over the Thomas method and the conjugate gradient method. The Thomas method is well known as a method for solving tridiagonal matrix systems on the CPU, and the conjugate gradient method has shown good results on the GPU. We evaluated the proposed method by applying it to heat conduction, advection-diffusion, and shallow water simulations. These simulations ran at over 35 frames per second on a 1024x1024 grid.
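To show what the odd-even reduction family looks like, here is a NumPy sketch of the closely related parallel cyclic reduction variant, in which every equation is reduced at each step so that no back-substitution pass is needed. This is an illustrative CPU version, not the authors' GPU kernel; on a GPU the inner loop over i is what would run in parallel.

```python
import numpy as np

def parallel_cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal
    (a[0] unused), b the diagonal, c the super-diagonal (c[-1] unused).

    At each step every equation i eliminates its neighbours at distance
    `stride`; after about log2(n) steps the system decouples into
    b[i] * x[i] = d[i]."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    a[0] = 0.0
    c[-1] = 0.0
    stride = 1
    while stride < n:
        na, nb, nc, nd = a.copy(), b.copy(), c.copy(), d.copy()
        for i in range(n):
            lo, hi = i - stride, i + stride
            alpha = -a[i] / b[lo] if lo >= 0 else 0.0
            gamma = -c[i] / b[hi] if hi < n else 0.0
            na[i] = alpha * a[lo] if lo >= 0 else 0.0
            nc[i] = gamma * c[hi] if hi < n else 0.0
            nb[i] = b[i] + (alpha * c[lo] if lo >= 0 else 0.0) + (gamma * a[hi] if hi < n else 0.0)
            nd[i] = d[i] + (alpha * d[lo] if lo >= 0 else 0.0) + (gamma * d[hi] if hi < n else 0.0)
        a, b, c, d = na, nb, nc, nd
        stride *= 2
    return d / b   # each equation is now decoupled
```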

Effect of input variable characteristics on the performance of an ensemble machine learning model for algal bloom prediction (앙상블 머신러닝 모형을 이용한 하천 녹조발생 예측모형의 입력변수 특성에 따른 성능 영향)

  • Kang, Byeong-Koo;Park, Jungsu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.35 no.6
    • /
    • pp.417-424
    • /
    • 2021
  • Algal bloom is an ongoing issue in the management of freshwater systems for drinking water supply, and the chlorophyll-a concentration is commonly used to represent the status of algal bloom. Thus, the prediction of chlorophyll-a concentration is essential for the proper management of water quality. However, the chlorophyll-a concentration is affected by various water quality and environmental factors, so predicting its concentration is not an easy task. In recent years, many advanced machine learning algorithms have increasingly been used to develop surrogate models to predict the chlorophyll-a concentration in freshwater systems such as rivers or reservoirs. This study used a light gradient boosting machine (LightGBM), a gradient boosting decision tree algorithm, to develop an ensemble machine learning model to predict chlorophyll-a concentration. The field water quality data observed at Daecheong Lake, obtained from the real-time water information system in Korea, were used for the development of the model. The data include temperature, pH, electric conductivity, dissolved oxygen, total organic carbon, total nitrogen, total phosphorus, and chlorophyll-a. First, a LightGBM model was developed to predict the chlorophyll-a concentration by using the other seven items as independent input variables. Second, the time-lagged values of all the input variables were added as input variables to understand the effect of the time lag of input variables on model performance. The time lag (i) ranges from 1 to 50 days. The model performance was evaluated using three indices: the root mean squared error-observation standard deviation ratio (RSR), the Nash-Sutcliffe coefficient of efficiency (NSE), and the mean absolute error (MAE). The model showed the best performance when a dataset with a one-day time lag (i=1) was added, where RSR, NSE, and MAE were 0.359, 0.871, and 1.510, respectively. Improvement of model performance was observed when datasets with time lags of up to about 15 days (i=15) were added.
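The time-lag experiment described above can be sketched briefly: add shifted copies of the predictor columns, then fit a LightGBM regressor. The file name and column names are illustrative assumptions; only the one-day lag that gave the best scores is shown.

```python
import pandas as pd
import lightgbm as lgb
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical daily water-quality records; columns stand in for the seven predictors
# (temperature, pH, conductivity, DO, TOC, TN, TP) and the chlorophyll-a target.
df = pd.read_csv("daecheong_water_quality.csv")
predictors = ["temperature", "ph", "conductivity", "do", "toc", "tn", "tp"]

def add_lags(frame, cols, lag):
    """Append copies of the predictor columns shifted back by `lag` days."""
    out = frame.copy()
    for col in cols:
        out[f"{col}_lag{lag}"] = out[col].shift(lag)
    return out.dropna()

data = add_lags(df, predictors, lag=1)                 # i = 1 gave the best RSR/NSE/MAE in the study
X = data.drop(columns=["chlorophyll_a"])
y = data["chlorophyll_a"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```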

Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are being recognized as key to addressing health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application. The drawback of this approach, however, is its lack of accuracy. Methods: This study discussed the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on construction of a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web query of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, namely support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, and a linear regression model was used as a reference. Results: Based on the performance of the ML algorithms used, regressing the output of the PMD onto the PM2.5 concentration data available from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R2) of 0.78 and a standard error of 5.0 ㎍/㎥, corresponding to an 8% increase in R2 and a 12% decrease in root mean square error in comparison with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity. The proposed calibration method requires the PMD to be located in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can be used as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD based on construction of a PM2.5 sensor network using the MQTT protocol and web query of reference measurement data available at a GAMS significantly improves the accuracy and reliability of the PMD, thereby making practical use of the low-cost PMD possible.
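The regression step of the calibration can be pictured as follows: PMD readings (plus any auxiliary measurements) are regressed onto the reference PM2.5 queried from the GAMS, and the model is scored by cross-validation. This is a hedged sketch with hypothetical column names; a plain 5-fold split is used here, whereas the study applied stratified K-fold.

```python
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical table pairing PMD output with reference PM2.5 from the GAMS web query.
df = pd.read_csv("pmd_vs_gams.csv")
X = df[["pmd_pm25_raw", "temperature", "humidity"]]
y = df["gams_pm25"]

for fold, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = xgb.XGBRegressor(n_estimators=300, learning_rate=0.05)  # extreme gradient boosting
    model.fit(X.iloc[tr], y.iloc[tr])
    pred = model.predict(X.iloc[te])
    rmse = mean_squared_error(y.iloc[te], pred) ** 0.5
    print(f"fold {fold}: R2={r2_score(y.iloc[te], pred):.2f}  RMSE={rmse:.1f}")
```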

Bar Code Location Algorithm Using Pixel Gradient and Labeling (화소의 기울기와 레이블링을 이용한 효율적인 바코드 검출 알고리즘)

  • Kim, Seung-Jin;Jung, Yoon-Su;Kim, Bong-Seok;Won, Jong-Un;Won, Chul-Ho;Cho, Jin-Ho;Lee, Kuhn-Il
    • The KIPS Transactions:PartD
    • /
    • v.10D no.7
    • /
    • pp.1171-1176
    • /
    • 2003
  • In this paper, we propose an effective bar code detection algorithm using feature analysis and labeling. After computing the direction of each pixel using four line operators, we obtain a histogram of pixel directions for each block. We calculate the difference between the maximum and minimum values of the histogram and regard the block with the largest difference value as a block of the bar code region. We obtain the line passing through the bar code region from the selected block, and then detect blocks of interest to obtain a more accurate line. The largest difference value is used to decide the threshold for obtaining the binary image. After obtaining the binary image, we perform labeling on it and thereby find the blocks of interest in the bar code region. We calculate the gradient and the center of the bar code from the blocks of interest, obtain the line passing through the bar code, and detect the bar code. By reading the gray levels along the line passing through the bar code, we extract the information encoded in the bar code.
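The block-scoring idea above (parallel bars make one gradient direction dominate within a block) can be sketched as follows; Sobel filters are used here as a stand-in for the paper's four line operators, and the labeling and line-fitting stages are omitted.

```python
import numpy as np
from scipy import ndimage

def barcode_block_scores(gray, block=32, n_bins=8):
    """Score each block by how strongly one gradient direction dominates.

    Bar code regions contain many parallel edges, so the histogram of gradient
    directions inside such a block has a large gap between its most and least
    populated bins; the block with the largest (max - min) difference is taken
    as lying inside the bar code region."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    direction = np.arctan2(gy, gx)                       # per-pixel gradient direction
    h, w = gray.shape
    scores = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            tile = direction[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            hist, _ = np.histogram(tile, bins=n_bins, range=(-np.pi, np.pi))
            scores[by, bx] = hist.max() - hist.min()     # the max-minus-min criterion
    return scores
```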

Verification of Dose Distribution for Stereotactic Radiosurgery with a Linear Accelerator (선형가속기를 이용한 방사선 수술의 선량분포의 실험적 확인)

  • Park Kyung Ran;Kim Kye Jun;Chu Sung Sil;Lee Jong Young;Joh Chul Woo;Lee Chang Geol;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal
    • /
    • v.11 no.2
    • /
    • pp.421-430
    • /
    • 1993
  • The calculation of the dose distribution in multiple-arc stereotactic radiotherapy is a three-dimensional problem; therefore, the three-dimensional dose calculation algorithm is important, and its accuracy and reliability should be confirmed experimentally. The aim of this study is to verify the dose distribution of stereotactic radiosurgery experimentally and to investigate the effect of the beam quality, the number of arcs of radiation, and the tertiary collimation on the resulting dose distribution. Film dosimetry with phantom measurements was done to obtain the three-dimensional orthogonal isodose distribution. All experiments were carried out with a 6 MV X-ray, except for the study of the effect of beam energy on dose distribution, which was done for X-ray energies of 6 and 15 MV. The irradiation technique used 4 to 11 arcs at intervals of 15 to 45 degrees between arcs, with various field sizes and an additional circular collimator. The dose distributions of the square field with the linear accelerator collimator were compared with those obtained using the circular field with the tertiary collimator. The parameters used for comparing the results were the shape of the isodose curve, the dose fall-offs from the 90% to the 50% and from the 90% to the 20% isodose lines for the steepest and shallowest profiles, and A = (90% isodose area) / (50% isodose area - 90% isodose area) (modified from Chierego). This ratio may be considered proportional to the sparing of normal tissue around the target volume. The comparison of beam energies at 6 and 15 MV indicated that the shapes of the isodose curves were the same; the value of the ratio A and the steepest and shallowest dose fall-offs for the 6 MV X-ray were marginally better than those for the 15 MV X-ray. The data also showed that an increase in the field dimensions from 10 to 28 mm in diameter did not significantly change the isodose distribution, and that there was no significant difference in dose gradient or isodose curve shape regardless of the number of arcs for field sizes of 10, 21, and 32 mm in diameter. The isodose curves were more circular for the circular field and more square for the square field, and the dose gradient for the circular field was slightly better than that for the square field.
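The ratio A defined above compares the area enclosed by the 90% isodose line with the shell between the 50% and 90% lines; a larger value implies a steeper fall-off around the target. A minimal sketch of the computation, assuming a 2-D dose plane sampled on a regular grid and normalized to its maximum:

```python
import numpy as np

def isodose_ratio_A(dose_plane):
    """A = (area within 90% isodose) / (area within 50% isodose - area within 90% isodose),
    computed by counting grid points on a dose plane normalized so its maximum is 1.0."""
    dose = np.asarray(dose_plane, dtype=float)
    dose = dose / dose.max()
    area_90 = np.count_nonzero(dose >= 0.90)   # points inside the 90% isodose line
    area_50 = np.count_nonzero(dose >= 0.50)   # points inside the 50% isodose line
    return area_90 / (area_50 - area_90)
```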
