• Title/Summary/Keyword: Splitting machine

Fast Algorithm for Intra Prediction of HEVC Using Adaptive Decision Trees

  • Zheng, Xing;Zhao, Yao;Bai, Huihui;Lin, Chunyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.7 / pp.3286-3300 / 2016
  • The High Efficiency Video Coding (HEVC) standard, as the latest coding standard, offers substantially improved compression over its predecessor, Advanced Video Coding (H.264/AVC). However, this improvement comes with enormous computational complexity, which makes HEVC considerably difficult to implement in real-time applications. In this paper, a fast, machine-learning-based partitioning method is proposed that searches for the best splitting structures for intra prediction. In view of video texture characteristics, we choose the entropy of Gray-Scale Difference Statistics (GDS) and the minimum of the Sum of Absolute Transformed Differences (SATD) as two key features, which balance computational complexity against classification performance. From these features, adaptive decision trees are built offline for Coding Units (CUs) of different sizes, so that the partitioning of CUs is reduced to a binary classification problem. Experimental results show that the proposed algorithm saves over 34% of encoding time on average, with a negligible Bjontegaard Delta (BD)-rate increase.
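
As an illustration of the approach this abstract describes, here is a minimal sketch of per-CU-size decision trees over the two named features; the synthetic data, labels, and tree depth are illustrative assumptions, not the authors' trained models.

```python
# Hedged sketch: not the paper's implementation. Features follow the
# abstract (GDS entropy, minimum SATD); data and depth are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Assumed training set: one row per CU with the two features and a
# binary label: 1 = split the CU further, 0 = keep it whole.
X_train = rng.random((1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)  # toy labels

# One tree per CU size (64x64, 32x32, 16x16), trained offline as described.
trees = {size: DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
         for size in (64, 32, 16)}

def decide_split(cu_size, gds_entropy, min_satd):
    """Binary split decision for a CU, replacing exhaustive RD search."""
    return bool(trees[cu_size].predict([[gds_entropy, min_satd]])[0])

print(decide_split(32, gds_entropy=0.7, min_satd=0.6))
```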

Temporal Search Algorithm for Multiple-Pedestrian Tracking

  • Yu, Hye-Yeon;Kim, Young-Nam;Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2310-2325 / 2016
  • In this paper, we provide a trajectory-generation algorithm that can identify pedestrians in real time. Typically, the contours for extracting pedestrians from the foreground of images are not clear due to factors including brightness and shade; furthermore, pedestrians move in different directions and interact with each other. These issues make the identification of pedestrians and the generation of trajectories difficult. We propose a new method for trajectory generation for multiple pedestrians. The first stage of the method distinguishes between pedestrian-blob situations that need to be merged and those that require splitting, and then uses trained decision trees to separate the pedestrians. The second stage generates the trajectory of each pedestrian by point correspondence; here we introduce a new point-correspondence algorithm based on a modified A* search. Fuzzy membership functions provide a heuristic evaluation of the correspondence between blobs, as sketched below. The proposed method was implemented and tested with the PETS 2009 dataset, showing effective multiple-pedestrian tracking in a pedestrian-interaction environment.
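
A minimal sketch of a fuzzy correspondence cost in the spirit of this heuristic; the features, membership shapes, and weights are illustrative assumptions, not the authors' definitions.

```python
# Hedged sketch: fuzzy membership scoring of blob correspondence, usable
# as an edge cost in an A*-style search. All parameters are assumptions.
import math

def ramp_down(x, x_max):
    """Membership that is 1 at x = 0 and falls linearly to 0 at x_max."""
    return max(0.0, 1.0 - x / x_max)

def correspondence_cost(blob_prev, blob_cur):
    """Lower cost = better match between a blob in frame t-1 and frame t."""
    dist = math.dist(blob_prev["center"], blob_cur["center"])
    area_ratio = blob_cur["area"] / blob_prev["area"]
    near = ramp_down(dist, 50.0)                     # "near", in pixels
    similar = ramp_down(abs(1.0 - area_ratio), 0.7)  # "similar size"
    membership = 0.6 * near + 0.4 * similar          # weighted aggregation
    return 1.0 - membership

a = {"center": (100.0, 200.0), "area": 900.0}
b = {"center": (108.0, 203.0), "area": 870.0}
print(correspondence_cost(a, b))  # small cost: nearby, similar-sized blobs
```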

Automation of Longline -Automation of the Alaska Pollack Longline- (주낙어구의 자동화 -명태주낙어업의 자동화-)

  • KO Kwan-Soh;YOON Gab-Dong;LEE Chun-Woo
    • Korean Journal of Fisheries and Aquatic Sciences / v.20 no.2 / pp.106-113 / 1987
  • Alaska pollack longline operations, which consist of baiting, shooting, hauling, and arranging hooks, have depended on manual labour up to the present. Automation of this traditional process is necessary to eliminate manual operations and reduce crew. We have developed a prototype longline system suitable for Alaska pollack longline gear, composed of an automatic baiting machine, an automatic line hauler, a hook cleaner, and storage rails. The automatic baiting machine, driven by hydraulic power, baits precisely under sequential control; the automatic line hauler hauls up the mainline by hydraulic power while splitting every hook and carrying it onto the storage rail automatically. A series of functional tests on the shooting and hauling apparatus was carried out in the laboratory and at sea. The results obtained are as follows: 1. As for the baiting machine, the exciting time of the solenoid that operates a directional valve (the bait feeding and cutting time) shortens as pressure increases; likewise, after the bait is cut, the over-rotated angle of the blade increases with pressure. 2. The baiting efficiency is about 90% when using sand lance (Hypoptychus dybowskii), and the most suitable hydraulic circuit pressure for feeding and cutting the bait is between 13 kgf/cm² and 20 kgf/cm². 3. The hook splitting rate of the automatic line hauler is about 95.5% regardless of hauling speed and snood material. 4. Hooks fail to separate when the snood becomes entangled or the hook is stuck in the mainline.

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1229-1244 / 2017
  • In recent years, as demand for data-driven analytical methodologies has increased across various fields, optimization methods have been developed to meet it. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) deals effectively with linear constraints and can be used as a parallel optimization algorithm. ADMM solves a complex original problem by splitting it into partial problems that are easier to optimize and combining their solutions, which makes it useful for non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. We also introduce methodologies that utilize regularization. Simulation results are presented to demonstrate the effectiveness of ADMM for the lasso.
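
The two focal points above (splitting and the proximal operator) come together in the lasso example the abstract mentions. A minimal ADMM sketch for the lasso, assuming the standard formulation; rho, lam, and the synthetic data are illustrative, not the paper's settings.

```python
# Hedged sketch: ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1, splitting
# the smooth least-squares term (x) from the l1 penalty (z).
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once: the x-update solves (A'A + rho*I) x = A'b + rho*(z - u).
    AtA_rhoI = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        # z-update: proximal operator of the l1 norm (soft thresholding).
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z  # dual (scaled Lagrange multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[:3] = (1.5, -2.0, 1.0)
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(np.round(admm_lasso(A, b, lam=5.0), 2))  # recovers the sparse signal
```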

Experimental Studies on the Properties of Epoxy Resin Mortars (에폭시 수지 모르터의 특성에 관한 실험적 연구)

  • 연규석;강신업
    • Magazine of the Korean Society of Agricultural Engineers / v.26 no.1 / pp.52-72 / 1984
  • This study was performed to obtain basic data applicable to the use of epoxy resin mortars, based on their properties at various mixing ratios compared with those of cement mortar. The resin used in this experiment was an Epi-Bis type epoxy resin of the kind extensively used in concrete structures. For the epoxy resin mortar, the mixing ratios of resin to fine aggregate were 1:2, 1:4, 1:6, 1:8, 1:10, 1:12, and 1:14, while the ratio of cement to fine aggregate in the cement mortar was 1:2.5. The results obtained are summarized as follows: 1. At a mixing ratio of 1:6, the highest density was 2.01 g/cm³, lower than the 2.13 g/cm³ of cement mortar. 2. According to the water absorption and water permeability tests, watertightness was very high at the mixing ratios of 1:2, 1:4, and 1:6, but at mixes leaner than 1:6 it decreased considerably. From this result, the optimum mixing ratio of epoxy resin mortar for watertight structures should be richer than 1:6. 3. Hardening shrinkage was larger as the mix became leaner, but the values were remarkably small compared with cement mortar. The influence of dryness and moisture was slight at mixes richer than 1:6 but obvious at the lean mixing ratios of 1:8, 1:10, 1:12, and 1:14. It was confirmed that the optimum mixing ratio for concrete structures subject to repeated dryness and moisture should be richer than 1:6. 4. The compressive, bending, and splitting tensile strengths were very high; even the values at a mixing ratio of 1:14 exceeded those of cement mortar. Epoxy resin mortar showed especially high bending and splitting tensile strength, and the initial strength within 24 hours was also high, making it clear that epoxy resin is a rapid-hardening material. Multiple regression equations of strength were computed as a function of mixing ratio and curing time. 5. The elastic moduli derived from the compressive stress-strain curves were slightly smaller than those of cement mortar, and the toughness of epoxy resin mortar was larger. 6. The impact resistance was stronger than that of cement mortar at all mixing ratios; in particular, the bending impact strength of the square-pillar specimens was higher than the impact resistance of the flat or cylindrical specimens. 7. The Brinell hardness was relatively larger than that of cement mortar, but it gradually decreased with leaner mixes, and at a mixing ratio of 1:14 it was much the same as cement mortar. 8. The abrasion rate of epoxy resin mortar at all mixing ratios, after 500 revolutions of a Los Angeles abrasion testing machine, was very low; even at a mixing ratio of 1:14 it was no more than 31.41%, below the critical abrasion rate of 40% for coarse aggregate in cement concrete. Consequently, the abrasion resistance of epoxy resin mortar was superior to that of cement mortar, and the relation between abrasion rate and Brinell hardness was highly significant as an exponential curve. 9. The highest bond strength of epoxy resin mortar was 12.9 kg/cm² at the mixing ratio of 1:2. The failure of bonded flat steel specimens occurred in the epoxy resin mortar at mixing ratios of 1:2 and 1:4, and that of bonded cement concrete specimens was found in the combined concrete at mixing ratios of 1:2, 1:4, and 1:6. It was confirmed that the optimum mixing ratios for bonding to steel plate and to cement concrete should be richer than 1:4 and 1:6, respectively. 10. Variations in color tone from heating began at about 60°C, and the ultimate change occurred at 120°C. The compressive, bending, and splitting tensile strengths increased with temperature up to 80°C but decreased rapidly above 80°C. Accordingly, the resistance temperature of epoxy resin mortar is about 80°C, generally lower than that of other concrete materials; this poses no problem, however, where high temperature resistance is not required. Multiple regression equations of strength were computed as a function of mixing ratio and heating temperature. 11. Cement mortar was easily attacked by inorganic and organic acids, while epoxy resin mortar at a mixing ratio of 1:4 was highly resistant. On the other hand, at mixing ratios leaner than 1:8, epoxy resin mortar had very poor resistance, especially to organic acids. Therefore, for structures requiring chemical resistance, the mix of epoxy resin mortar should be richer than 1:4.
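
Items 4 and 10 mention multiple regression equations of strength as functions of mixing ratio and curing time or temperature. A minimal sketch of how such a surface could be fitted; the data and the quadratic model form are assumptions, not the paper's actual equations.

```python
# Hedged sketch: least-squares fit of strength = f(mixing ratio, curing
# time). Synthetic toy values only; not the study's measurements.
import numpy as np

# Aggregate-to-resin ratio r (2..14), curing time t (hours), strength s.
r = np.array([2, 2, 6, 6, 10, 10, 14, 14], dtype=float)
t = np.array([24, 168, 24, 168, 24, 168, 24, 168], dtype=float)
s = np.array([850, 980, 800, 930, 700, 840, 600, 760], dtype=float)

# Design matrix for s ~ b0 + b1*r + b2*t + b3*r^2 (one plausible form).
X = np.column_stack([np.ones_like(r), r, t, r**2])
coef, *_ = np.linalg.lstsq(X, s, rcond=None)
print("coefficients:", np.round(coef, 3))
print("predicted strength at 1:6, 72 h:",
      round(float(np.array([1, 6, 72, 36]) @ coef), 1))
```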

A new formulation for strength characteristics of steel slag aggregate concrete using an artificial intelligence-based approach

  • Awoyera, Paul O.;Mansouri, Iman;Abraham, Ajith;Viloria, Amelec
    • Computers and Concrete / v.27 no.4 / pp.333-341 / 2021
  • Steel slag, an industrial reject from the steel rolling process, has been identified as a suitable, environmentally friendly material for concrete production. Given that the coarse aggregate portion represents about 70% of concrete constituents, economical approaches have been sought in alternative materials such as steel slag. Unfortunately, a standard framework for its application is still lacking. Therefore, this study proposed functional model equations for the strength properties (compressive and splitting tensile) of steel slag aggregate concrete (SSAC), using gene expression programming (GEP). In the experimental phase, the study used steel slag as a partial replacement of crushed rock, in steps of 20%, 40%, 60%, 80%, and 100%. The predictor variables included in the analysis were cement, sand, granite, steel slag, water/cement ratio, and curing regime (age). For model development, 60-75% of the dataset was used as the training set, while the remaining data was used for testing. Empirical results illustrate that steel slag could replace up to 100% of conventional aggregate while yielding results comparable to the latter. The GEP-based functional relations were tested statistically: the mean absolute percentage error (MAPE) and root mean square error (RMSE) for compressive strength are 6.9 and 1.4, and 12.52 and 0.91, for the train and test datasets, respectively. Given the consistency between the training and testing datasets, the model has shown a strong capacity to predict the strength properties of SSAC, and the proposed model equations are reliably suitable for estimating them. The GEP-based formula is relatively simple and useful for pre-design applications.
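
As an illustration of the reported evaluation, a minimal sketch computing MAPE and RMSE over a train/test split; the synthetic values are stand-ins, and the 70/30 split is one point within the stated 60-75% training range.

```python
# Hedged sketch: the error metrics the abstract reports, on toy data.
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rng = np.random.default_rng(1)
y = rng.uniform(20, 60, size=200)        # e.g. compressive strength, MPa
y_hat = y + rng.normal(0, 2, size=200)   # stand-in for GEP predictions

split = int(0.7 * len(y))                # assumed 70/30 train-test split
print("train MAPE/RMSE:",
      round(mape(y[:split], y_hat[:split]), 2),
      round(rmse(y[:split], y_hat[:split]), 2))
print("test  MAPE/RMSE:",
      round(mape(y[split:], y_hat[split:]), 2),
      round(rmse(y[split:], y_hat[split:]), 2))
```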

A study on the utilization of abrasive waterjet for mechanical excavation of hard rock in vertical shaft construction (고강도 암반에서 수직구 기계굴착을 위한 연마재 워터젯 활용에 관한 연구)

  • Seon-Ah Jo;Ju-Hwan Jung;Hee-Hwan Ryu;Jun-Sik Park;Tae-Min Oh
    • Journal of Korean Tunnelling and Underground Space Association / v.25 no.5 / pp.357-371 / 2023
  • In cable tunnel construction using a TBM, the vertical shaft is an essential structure for the entrance and exit of TBM equipment and power lines. Since a shaft penetrates the ground vertically, it often encounters rock mass. Blasting and rock splitting methods, which are mainly used for rock excavation, cause public complaints due to noise, vibration, and road occupation. Therefore, mechanical excavation using a vertical shaft excavation machine is considered an alternative to the conventional methods. However, at the current level of technology, vertical excavation machines have limited performance when applied to high-strength rock with a compressive strength of more than 120 MPa. In this study, the potential of waterjet technology as an excavation assistance method was investigated to improve mechanical excavation performance in hard rock formations. Rock cutting experiments were conducted to verify the cutting performance of the abrasive waterjet. Based on the experimental results, it was found that excavation performance can be maintained under changing ground conditions by adjusting waterjet parameters such as standoff distance, traverse speed, and water pressure. In addition, based on the relationship between excavation performance, uniaxial compressive strength, and RQD, it was suggested that excavation performance could be improved by artificially creating joints using the abrasive waterjet. These results are expected to serve as fundamental data for the introduction of vertical shaft excavation machines in the future.

Data analysis by Integrating statistics and visualization: Visual verification for the prediction model (통계와 시각화를 결합한 데이터 분석: 예측모형 대한 시각화 검증)

  • Mun, Seong Min;Lee, Kyung Won
    • Design Convergence Study / v.15 no.6 / pp.195-214 / 2016
  • Predictive analysis is based on probabilistic learning algorithms known as pattern recognition or machine learning. Users who want to extract more information from their data therefore need substantial statistical knowledge, and it remains difficult to discern patterns and characteristics in the data. This study combined statistical data analyses with visual data analyses to compensate for the weaknesses of predictive analysis, which yielded implications not found in previous studies. First, we could identify data patterns by adjusting data selection according to the splitting criteria of the decision tree method. Second, we could see which types of data were included in the final prediction model. In the statistical analysis, we found relations among the multiple variables and derived a prediction model for high box-office performance. In the visualization analysis, we proposed a visual analysis method with various interactive functions. Finally, we verified the final prediction model and suggested an analysis method that extracts a variety of information from the data.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among the predictive models: Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample; LightGBM shows the highest accuracy, 71.1%, and the logit model the lowest, 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval. Random Forest, XGBoost, LightGBM, and DNN give more desirable results: they show higher accuracy in both the 0-10% and 90-100% intervals but lower accuracy around 50% predicted probability. As for the distribution of samples across intervals, both LightGBM and XGBoost place relatively many samples in the two extreme intervals. Although the Random Forest model has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be more desirable because they classify a large number of cases into the two extreme intervals, even allowing for their relatively low classification accuracy. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nonetheless has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models could be constructed that contain multiple classification models and use majority voting to maximize overall performance.
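
The decile-interval comparison described above can be made concrete. A minimal sketch with synthetic predictions and labels (not KSURE data or the paper's models):

```python
# Hedged sketch: per-decile classification accuracy, mirroring the paper's
# evaluation of splitting predicted default probabilities into ten intervals.
import numpy as np

def accuracy_by_decile(y_true, p_pred, threshold=0.5):
    """Accuracy and sample count within each 10%-wide probability interval."""
    y_hat = (p_pred >= threshold).astype(int)
    rows = []
    for k in range(10):
        lo, hi = k / 10, (k + 1) / 10
        mask = (p_pred >= lo) & (p_pred < hi if k < 9 else p_pred <= hi)
        n = int(mask.sum())
        acc = float((y_hat[mask] == y_true[mask]).mean()) if n else float("nan")
        rows.append((f"{lo:.0%}-{hi:.0%}", n, acc))
    return rows

rng = np.random.default_rng(0)
p = rng.beta(2, 2, size=2000)             # stand-in predicted probabilities
y = (rng.random(2000) < p).astype(int)    # labels consistent with p
for interval, n, acc in accuracy_by_decile(y, p):
    print(f"{interval:>9}  n={n:4d}  acc={acc:.3f}")
```

On such data, accuracy is high in the extreme intervals and near 50% in the middle, which is exactly the pattern the abstract discusses when weighing models against each other.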

Satellite-Based Cabbage and Radish Yield Prediction Using Deep Learning in Kangwon-do (딥러닝을 활용한 위성영상 기반의 강원도 지역의 배추와 무 수확량 예측)

  • Hyebin Park;Yejin Lee;Seonyoung Park
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.1031-1042 / 2023
  • In this study, a deep learning model was developed to predict the yield of cabbage and radish, two of the five major supply-and-demand management vegetables, using Landsat 8 satellite images. To predict yields in Gangwon-do from 2015 to 2020, satellite images from June to September, the growing period of cabbage and radish, were used. The normalized difference vegetation index, enhanced vegetation index, leaf area index, and land surface temperature were employed as input data for the yield model. Crop yields can be effectively predicted from satellite images because satellites collect continuous spatiotemporal data on the global environment. Based on a model developed in a previous study, a model adapted to these input data was proposed, and a convolutional neural network, a deep learning model, was used to predict crop yield from the time-series satellite images. Landsat 8 provides images every 16 days, but images are difficult to acquire in summer due to weather such as clouds; as a result, yield prediction was conducted by splitting the season into two parts, June to July and August to September. Yield prediction was also performed using a machine learning approach and reference models, and the modeling performance was compared. The model's performance and early predictability were assessed using year-by-year cross-validation and early prediction. The findings of this study could serve as a basis for predicting the yield of field crops in Korea.
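
A minimal sketch of a CNN over the four time-series inputs named above (NDVI, EVI, LAI, LST); the architecture, tensor shapes, and training setup are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch: a small 1-D CNN over per-region time series of four
# satellite-derived indices, regressing yield. Shapes are assumptions.
import torch
import torch.nn as nn

class YieldCNN(nn.Module):
    """1-D CNN over a (channels = 4 indices, timesteps) sequence."""
    def __init__(self, n_indices=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_indices, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)  # regression output: predicted yield

    def forward(self, x):             # x: (batch, 4, timesteps)
        return self.head(self.features(x).squeeze(-1))

model = YieldCNN()
x = torch.randn(16, 4, 8)            # 16 samples, 4 indices, 8 time steps
y = torch.randn(16, 1)               # stand-in yield targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)
loss.backward(); opt.step()
print(float(loss))
```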