
Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive growth of electronic commerce offers customers many advantageous purchase opportunities. In this situation, customers who lack sufficient knowledge about a purchase may turn to product recommendations. Product recommender systems automatically reflect users' preferences and present a recommendation list, and the recommender system in an online shopping store has become one of the most popular tools for one-to-one marketing. However, a recommender system that does not properly reflect users' preferences causes disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences more precisely. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The dataset initially contained 5,759 transactions, of which 3,167 remained after deleting records with null values. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts which customers are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict, for each product group, the customers with a high likelihood of purchase, using SAS E-Miner software. We partition the data into modeling and validation sets for logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation set is identical across all experiments. 
We then combine the results of the individual predictors using multi-model ensemble techniques, namely bagging and bumping. Bagging, short for "bootstrap aggregating," combines the outputs of several machine learning models to raise the performance and stability of prediction or classification; it is a special form of the averaging method. Bumping, short for "bootstrap umbrella of model parameters," retains only the model with the lowest error. The results show that bumping outperforms bagging and the individual predictors except for the "Poster" product group, for which the artificial neural network performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extracted thirty-one association rules according to their lift, support, and confidence values, setting the minimum transaction frequency to support an association at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. Rules with a lift below 1 were excluded, leaving fifteen association rules after removing duplicates. Of these fifteen rules, eleven associate products within the "Office Supplies" product group, one associates the "Office Supplies" and "Fashion" product groups, and three associate the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed recommender system provides a recommendation list to the appropriate customers. We tested the usability of the proposed system with a prototype and real-world transaction and profile data; the prototype was built with ASP, JavaScript, and Microsoft Access. 
In addition, we surveyed user satisfaction with the product list recommended by the proposed system and with randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services. User satisfaction was measured on a five-point Likert scale, and a paired-sample t-test was performed on the survey results. The results show that the proposed model outperforms the random selection model at the 1% significance level, meaning that users were significantly more satisfied with the recommended product list. These results suggest that the proposed system may be useful in real-world online shopping stores.
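The bumping step described above (fit on bootstrap resamples, keep only the model with the lowest error on the original data) can be sketched as follows. This is an illustrative toy in Python with a deliberately simple base learner, not the authors' SAS E-Miner implementation; the data and the nearest-centroid classifier are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the purchase-likelihood task: 2 features, binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def fit_centroids(X, y):
    """A deliberately simple base learner: nearest-centroid classifier."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

def bumping(X, y, n_boot=20):
    """Fit on bootstrap resamples, then keep the single model with the
    lowest error on the ORIGINAL data -- the defining step of bumping."""
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))
        model = fit_centroids(X[idx], y[idx])
        err = np.mean(predict(model, X) != y)
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err

model, err = bumping(X, y)
print(f"bumping error on training data: {err:.3f}")
```

Bagging would instead average (or majority-vote) the predictions of all the bootstrap models rather than keeping a single winner.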

Risk Ranking Analysis for the City-Gas Pipelines in the Underground Laying Facilities (지하매설물 중 도시가스 지하배관에 대한 위험성 서열화 분석)

  • Ko, Jae-Sun;Kim, Hyo
    • Fire Science and Engineering
    • /
    • v.18 no.1
    • /
    • pp.54-66
    • /
    • 2004
  • In this article, we suggest a hazard-assessment method for underground pipelines and seek cost-effective pipeline-maintenance schemes. Three approaches are applied to rank the hazards of underground pipelines. The first is RBI (Risk Based Inspection), which assesses the effect on the neighboring population along with the dimension, thickness, and working time of the pipe, enabling a quantitative estimate of risk exposure. The second is a scoring system based on the environmental factors of the buried pipelines. Last, we quantify the frequency of releases using Thomas' theory. As a result of assessing the hazard with the SPC scheme, the corrosion-related hazard scores of the gas pipelines range from 30 to 70, which means the assessment criteria capture the relative hazards of actual pipelines well. Hence, even a pipeline region with a relatively low score can have a high leakage frequency because of its greater length. The acceptable limit of the pipeline release frequency is 2.50E-2 to 1.00E-1/yr, beyond which appropriate actions must be taken to bring the consequence back into the acceptable region. Prediction of the total frequency using regression analysis shows that the limiting operating time of a pipeline is in the range of 11 to 13 years, which is well consistent with that of actual pipelines. In conclusion, the hazard-ranking scheme suggested in this research can be applied very effectively to maintaining underground pipelines.
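The regression step above (extrapolating the total release frequency to find the limiting operating time) can be sketched as follows. The yearly frequencies here are hypothetical toy values, not the paper's data; only the acceptable-limit lower bound (2.50E-2/yr) is taken from the abstract.

```python
import numpy as np

# Hypothetical yearly predicted release frequencies (1/yr) for one
# pipeline region; the paper's actual values are not reproduced here.
years = np.arange(1, 11)              # operating years 1..10
freq = 2.0e-3 * years + 1.0e-3        # toy linear growth with age

# Fit a linear trend, then solve for the year at which the predicted
# frequency reaches the acceptable-limit lower bound of 2.50E-2/yr.
slope, intercept = np.polyfit(years, freq, 1)
limit = 2.50e-2
t_limit = (limit - intercept) / slope
print(f"estimated limiting operating time: {t_limit:.1f} years")
```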

Study(V) on Development of Charts and Equations Predicting Allowable Compressive Bearing Capacity for Prebored PHC Piles Socketed into Weathered Rock through Sandy Soil Layers - Analysis of Results and Data by Parametric Numerical Analysis - (사질토를 지나 풍화암에 소켓된 매입 PHC말뚝에서 지반의 허용압축지지력 산정도표 및 산정공식 개발에 관한 연속 연구(V) - 매개변수 수치해석 자료 분석 -)

  • Park, Mincheol;Kwon, Oh-Kyun;Kim, Chae Min;Yun, Do Kyun;Choi, Yongkyu
    • Journal of the Korean Geotechnical Society
    • /
    • v.35 no.10
    • /
    • pp.47-66
    • /
    • 2019
  • A parametric numerical analysis according to the diameter, length, and soil N values was conducted for PHC piles socketed into weathered rock through sandy soil layers. In the numerical analysis, the Mohr-Coulomb model was applied to the PHC pile and soils, and the contact phases among the pile, soil, and cement paste were modeled as interfaces with a virtual thickness. Parametric numerical analyses for 10 pile diameters were executed to obtain the load-settlement relationship and the axial load distribution according to N values. Load-settlement curves were obtained for each load component: total load, total skin friction, skin friction of the sandy soil layer, skin friction of the weathered rock layer, and end bearing resistance of the weathered rock. Analysis of various load levels from the load-settlement curves showed that the settlements corresponding to the inflection point of each curve were about 5~7% of the pile diameter and were conservatively estimated as 5% of the pile diameter. The load at the inflection point was defined as the mobilized bearing capacity (Qm) and used in the analyses of pile bearing capacity. The SRF was above 70% on average, irrespective of the diameter, embedment length of the pile, and N value of the sandy soil layer. Also, the skin frictional resistance of the sandy soil layers was above 80% of the total skin frictional resistance on average. These results can be used to calculate the bearing capacity of prebored PHC piles and to develop bearing-capacity prediction methods and charts for prebored PHC piles socketed into weathered rock through sandy soil layers.

Estimation and Mapping of Soil Organic Matter using Visible-Near Infrared Spectroscopy (분광학을 이용한 토양 유기물 추정 및 분포도 작성)

  • Choe, Eun-Young;Hong, Suk-Young;Kim, Yi-Hyun;Zhang, Yong-Seon
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.6
    • /
    • pp.968-974
    • /
    • 2010
  • We assessed the feasibility of the discrete wavelet transform (DWT) as a spectral preprocessing step to enhance the estimation of soil organic matter from visible-near infrared spectra, and mapped its distribution via a block kriging model. Continuum removal and 1st-derivative transforms, as well as Haar and Daubechies DWTs, were used to enhance spectral variation with respect to soil organic matter content, and the transformed spectra were fed into a PLSR (Partial Least Squares Regression) model. Estimation results using raw reflectance and transformed spectra showed similar quality, with R² > 0.6 and RPD > 1.5, values that indicate approximate prediction of soil organic matter content. The poorer performance of the DWT spectra might be caused by the coarse approximation of the DWT, which is not sufficient to express the spectral variation associated with soil organic matter content. Distribution maps of soil organic matter were drawn via kriging, a spatial information model. The organic matter contents of the soil samples followed a Gaussian distribution centered at around 20 g kg⁻¹, and the mapped values were distributed in similar patterns. The estimated organic matter contents had a distribution similar to the measured values, although some parts of the estimated-value map were slightly higher. If the estimation quality is further improved, spectroscopy-based estimation models and mapping may be applied to global soil mapping, soil classification, and remote sensing data analysis as a rapid and cost-effective method.
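The Haar DWT used above reduces a spectrum to pairwise averages (approximation) and pairwise differences (detail); the "coarse approximation" the abstract mentions arises when the detail coefficients are discarded. A minimal one-level sketch in plain NumPy, on a hypothetical reflectance vector rather than the study's spectra:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise differences (detail), with orthonormal scaling."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

# Toy stand-in for a visible-NIR reflectance spectrum (even length).
spectrum = np.array([0.42, 0.44, 0.47, 0.52, 0.58, 0.61, 0.60, 0.57])
approx, detail = haar_dwt(spectrum)

# The transform itself is lossless: information is lost only when the
# detail coefficients are dropped and just the approximation is kept.
recon = np.empty_like(spectrum)
recon[0::2] = (approx + detail) / np.sqrt(2)
recon[1::2] = (approx - detail) / np.sqrt(2)
print(np.allclose(recon, spectrum))   # True
```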

Calculation Method of Oil Slick Area on Sea Surface Using High-resolution Satellite Imagery: M/V Symphony Oil Spill Accident (고해상도 광학위성을 이용한 해상 유출유 면적 산출: 심포니호 기름유출 사고 사례)

  • Kim, Tae-Ho;Shin, Hye-Kyeong;Jang, So Yeong;Ryu, Joung-Mi;Kim, Pyeongjoong;Yang, Chan-Su
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1773-1784
    • /
    • 2021
  • To minimize the damage from oil spill accidents in the ocean, it is essential to determine the spilled area as soon as possible, so satellite-based remote sensing is a powerful means of detecting oil spills. With the recent rapid increase in the number of available satellites, it has become possible to generate a status report of a marine oil spill soon after the accident. In this study, the oil spill area was calculated using various satellite images of the M/V Symphony oil spill accident that occurred off the coast of Qingdao Port, China, on April 27, 2021. In particular, the accuracy of the oil spill area determination was improved using high-resolution commercial satellite images with a spatial resolution of 2 m. Sentinel-1, Sentinel-2, LANDSAT-8, GEO-KOMPSAT-2B (GOCI-II), and SkySat images were collected from April 27 to May 13, of which five images were usable considering the weather conditions. The spilled oil spread northeastward toward the coastal region of China; this trend was confirmed in the SkySat image and was also similar to the predicted movement of oil particles from the accident location. On this basis, the look-alike patch observed in the northern area of the Sentinel-1A image (2021.05.01) was discriminated as a false alarm. Over the survey period, the spilled-oil area tended to increase linearly after the accident. This study showed that high-resolution optical satellites can be used to calculate the distribution area of spilled oil more accurately and can contribute to establishing efficient response strategies for oil spill accidents.
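Once oil pixels have been classified in an image, the area calculation itself reduces to counting detected pixels and multiplying by the pixel footprint. A minimal sketch with a hypothetical detection mask at the 2 m resolution mentioned above (the classification step itself is not shown):

```python
import numpy as np

# Hypothetical detection mask from a 2 m resolution optical scene:
# 1 = pixel classified as oil, 0 = clean water.
mask = np.zeros((1000, 1000), dtype=np.uint8)
mask[200:400, 300:800] = 1                 # a 200 x 500 pixel slick

PIXEL_SIZE_M = 2.0                         # SkySat-class spatial resolution
pixel_area_km2 = (PIXEL_SIZE_M ** 2) / 1e6 # m^2 per pixel -> km^2

area_km2 = mask.sum() * pixel_area_km2
print(f"estimated slick area: {area_km2:.3f} km^2")
```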

Prediction of Potential Species Richness of Plants Adaptable to Climate Change in the Korean Peninsula (한반도 기후변화 적응 대상 식물 종풍부도 변화 예측 연구)

  • Shin, Man-Seok;Seo, Changwan;Lee, Myungwoo;Kim, Jin-Yong;Jeon, Ja-Young;Adhikari, Pradeep;Hong, Seung-Bum
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.6
    • /
    • pp.562-581
    • /
    • 2018
  • This study was designed to predict changes in the species richness of plants under climate change in South Korea. The target species were selected from among the Plants Adaptable to Climate Change in the Korean Peninsula: 89 species in total, comprising 23 native, 30 northern, and 36 southern plants. We used species distribution models to predict the potential habitat of individual species under climate change, applying ten single-model algorithms and a pre-evaluation weighted ensemble method, and then derived species richness from the individual-species results. Two representative concentration pathways (RCP 4.5 and RCP 8.5) were used to simulate plant species richness in 2050 and 2070. Current species richness was predicted to be high in the national parks along the Baekdudaegan mountain range in Gangwon Province and on the islands of the South Sea. Future species richness was predicted to be lower in the national parks and the Baekdudaegan mountain range in Gangwon Province and higher in the southern coastal regions. The average current species richness was higher in the national park areas than across South Korea as a whole, but the predicted future richness showed no difference between the national park areas and the whole country. The difference between current and future species richness can be attributed to the disappearance of many native and northern plants from South Korea, and additionally to the expansion of the potential habitat of southern plants under climate change. However, if species cannot disperse to suitable habitat, species richness will be reduced drastically; indeed, the results differed depending on whether species were assumed to disperse. This study will be useful for conservation planning, the establishment of protected areas, the restoration of biological species, and climate change adaptation strategies.
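The step of deriving richness from individual-species results typically means thresholding each species' habitat-suitability map into presence/absence and summing the binary maps per grid cell. A minimal sketch with hypothetical suitability values (the paper's ensemble outputs and threshold rule are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical habitat-suitability maps for 5 species on a 4 x 4 grid,
# with values in [0, 1] as an SDM ensemble would typically output.
suitability = rng.random((5, 4, 4))

# Threshold each species map into presence/absence, then sum the binary
# maps over species: each cell's sum is its predicted species richness.
THRESHOLD = 0.5
presence = suitability >= THRESHOLD
richness = presence.sum(axis=0)

print(richness)   # integer map, each cell between 0 and 5
```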

Coupled Hydro-Mechanical Modelling of Fault Reactivation Induced by Water Injection: DECOVALEX-2019 TASK B (Benchmark Model Test) (유체 주입에 의한 단층 재활성 해석기법 개발: 국제공동연구 DECOVALEX-2019 Task B(Benchmark Model Test))

  • Park, Jung-Wook;Kim, Taehyun;Park, Eui-Seob;Lee, Changsoo
    • Tunnel and Underground Space
    • /
    • v.28 no.6
    • /
    • pp.670-691
    • /
    • 2018
  • This study presents the results of the BMT (Benchmark Model Test) simulations of DECOVALEX-2019 Task B. Task B, named 'Fault slip modelling', aims to develop a numerical method to predict fault reactivation and the coupled hydro-mechanical behavior of faults. The BMT scenario simulations of Task B were conducted to improve each participating group's numerical model by demonstrating the feasibility of reproducing fault behavior induced by water injection. The BMT simulations consist of seven conditions that differ in injection pressure, fault properties, and hydro-mechanical coupling relations. The TOUGH-FLAC simulator was used to reproduce the coupled hydro-mechanical process of fault slip. In the present study, a coupling module was developed to update the changes in the hydrological properties and geometric features of the numerical mesh. We modified the numerical model developed in Task B Step 1 to account for the changes in compressibility, permeability, and geometric features with the hydraulic aperture of the fault due to mechanical deformation. The effects of the storativity and transmissivity of the fault on its hydro-mechanical behavior, such as the pressure distribution, injection rate, displacement, and stress, were examined, and the results of the previous Step 1 simulation were updated using the modified numerical model. The simulation results indicate that the developed model can provide a reasonable prediction of the hydro-mechanical behavior related to fault reactivation. The numerical model will be enhanced through continued interaction and collaboration with the other research teams of DECOVALEX-2019 Task B and validated against field experiment data in a further study.
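Coupling fault permeability and transmissivity to hydraulic aperture, as described above, is commonly done with the parallel-plate "cubic law." The sketch below shows that generic relation only; the exact coupling functions, properties, and units used in the paper's TOUGH-FLAC module are not reproduced here.

```python
import numpy as np

def cubic_law_update(aperture_m, viscosity=1.0e-3, rho_g=9.81e3):
    """Parallel-plate (cubic-law) relations often used to couple a fault's
    hydraulic aperture b to its flow properties:
      intrinsic permeability           k = b^2 / 12
      transmissivity per unit width    T = rho*g*b^3 / (12*mu)
    Generic textbook form, not the paper's exact coupling module."""
    b = np.asarray(aperture_m, dtype=float)
    k = b ** 2 / 12.0
    T = rho_g * b ** 3 / (12.0 * viscosity)
    return k, T

# Aperture growing from 0.1 mm to 0.5 mm as the fault dilates with slip.
apertures = np.array([1e-4, 2e-4, 5e-4])
k, T = cubic_law_update(apertures)
for b, ki, Ti in zip(apertures, k, T):
    print(f"b = {b:.0e} m  ->  k = {ki:.2e} m^2,  T = {Ti:.2e} m^2/s")
```

Because k scales with b² and T with b³, even a modest mechanically induced opening of the fault changes its flow properties strongly, which is why the mesh properties must be updated during the coupled simulation.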

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification, which assigns one label from two classes; multi-class classification, which assigns one label from several classes; and multi-label classification, which assigns multiple labels from several classes. Multi-label classification in particular requires a training method different from binary and multi-class classification because each instance can carry multiple labels. Moreover, since the number of labels to be predicted grows as the numbers of labels and classes increase, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress the labels by random transformation, they struggle to capture non-linear relationships between labels and thus cannot create a latent label space that sufficiently preserves the information of the original labels. 
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding, most representatively label embedding using an autoencoder, a deep learning model effective for data compression and restoration. However, traditional autoencoder-based label embedding suffers substantial information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space, as reflected in the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they prevent gradient loss during backpropagation and enable efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. We applied the proposed methodology to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space, then conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. 
This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.
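The architecture described above (an autoencoder whose encoder and decoder each carry a skip connection) can be sketched as an untrained forward pass. Because the input and output dimensions of each half differ, the skip path here uses a learned linear projection; all dimensions and weights are illustrative, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative dimensions only: a 1000-dim keyword label space
# compressed to a 32-dim latent label space through a 128-dim layer.
D, H, Z = 1000, 128, 32

# Untrained random weights: this sketches the architecture, not training.
W1, W2 = rng.normal(0, 0.05, (D, H)), rng.normal(0, 0.05, (H, Z))
W3, W4 = rng.normal(0, 0.05, (Z, H)), rng.normal(0, 0.05, (H, D))
P_enc = rng.normal(0, 0.05, (D, Z))    # projection for the encoder skip path
P_dec = rng.normal(0, 0.05, (Z, D))    # projection for the decoder skip path

def encode(y):
    h = relu(y @ W1)
    return h @ W2 + y @ P_enc          # skip connection: add projected input

def decode(z):
    h = relu(z @ W3)
    return h @ W4 + z @ P_dec          # skip connection on the decoder side

y = (rng.random((4, D)) < 0.01).astype(float)   # sparse multi-hot labels
z = encode(y)
y_hat = decode(z)
print(z.shape, y_hat.shape)            # (4, 32) (4, 1000)
```

During backpropagation, the additive skip terms give the gradient a direct path around each nonlinear layer, which is the property the abstract relies on to reduce information loss in the compressed label space.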

Kriging of Daily PM10 Concentration from the Air Korea Stations Nationwide and the Accuracy Assessment (베리오그램 최적화 기반의 정규크리깅을 이용한 전국 에어코리아 PM10 자료의 일평균 격자지도화 및 내삽정확도 검증)

  • Jeong, Yemin;Cho, Subin;Youn, Youjeong;Kim, Seoyeon;Kim, Geunah;Kang, Jonggu;Lee, Dalgeun;Chung, Euk;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.379-394
    • /
    • 2021
  • Air pollution data in South Korea have been provided on a real-time basis by the Air Korea stations since 2005. Previous studies have shown the feasibility of gridding air pollution data, but they were confined to a few cities. This paper examines the creation of nationwide gridded maps of PM10 concentration from 333 Air Korea stations using variogram optimization and ordinary kriging. The accuracy of the spatial interpolation was evaluated under various sampling schemes to avoid too dense or too sparse a distribution of validation points. Using 114,745 matchups, a four-round blind test was conducted by extracting random validation points for each of the 365 days of 2019. The overall accuracy was stably high, with an MAE of 5.697 ㎍/m³ and a CC of 0.947. Approximately 1,500 high-PM10 cases also yielded an MAE of about 12 ㎍/m³ and a CC over 0.87, which means the proposed method is effective and applicable to various situations. The gridded daily PM10 maps at a resolution of 0.05° also showed a reasonable spatial distribution, and they can serve as an input variable for gridded prediction of the next day's PM10 concentration.
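Ordinary kriging, as used above, estimates a target location as a weighted sum of station values, with weights obtained by solving a variogram-based linear system under an unbiasedness constraint. A minimal single-point sketch follows; the variogram parameters and station data are hypothetical, not the paper's optimized values.

```python
import numpy as np

def exp_variogram(h, nugget=0.5, sill=25.0, vrange=1.5):
    """Exponential variogram model; parameters are illustrative, not the
    optimized values from the paper."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_kriging(xy, z, xy0):
    """Ordinary kriging of one target point xy0 from stations (xy, z)."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system: variogram matrix bordered by the unbiasedness row,
    # so the station weights are forced to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

# Toy stations (lon, lat in degrees) with PM10 values in ug/m3.
xy = np.array([[126.9, 37.5], [127.0, 37.6], [127.1, 37.4], [126.8, 37.3]])
z = np.array([45.0, 52.0, 38.0, 41.0])
est = ordinary_kriging(xy, z, np.array([126.95, 37.45]))
print(f"kriged PM10: {est:.1f}")
```

A full 0.05° nationwide grid would simply repeat this solve (or a vectorized equivalent) for every grid cell, after fitting the variogram parameters to the day's station data.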

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.177-190
    • /
    • 2021
  • With the development of fourth industrial revolution technology, efforts are being made to improve areas that humans cannot handle by utilizing artificial intelligence techniques such as machine learning. Make-to-order manufacturing companies also want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they have difficulty doing so because the total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas so as to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because production varies from order to order with customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. The measured total production times of existing orders also differ from one another, so they cannot be used as a standard time. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple management indicators (e.g., a total production time of 60 days for raw materials, 90 days for steel plates, etc.). Work instructions issued too early on the basis of intuition or such indicators cause congestion, which degrades productivity, while instructions issued too late increase production costs or cause missed delivery dates due to emergency processing. Missing a deadline results in compensation for the delay and adversely affects the sales and collection sides of the business. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system, using order, production, and process performance data for the materials involved. 
We compared and analyzed the OLS, GLM Gamma, Extra Trees, and Random Forest algorithms as candidates for estimating total production time and present the results.
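Of the four compared algorithms, OLS is the simplest to sketch: regress total production time on order features by least squares. The features, coefficients, and data below are hypothetical stand-ins, not the company's actual order data or the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical order features: [quantity, number of processes, material
# grade] and total production time in days; real feature sets differ.
X = rng.uniform(1, 10, size=(50, 3))
true_w = np.array([4.0, 6.0, 1.5])
y = X @ true_w + 10.0 + rng.normal(0, 2.0, 50)   # days, with noise

# OLS (one of the four compared algorithms): least-squares fit with
# an intercept column, solved via numpy's lstsq.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
mae = np.mean(np.abs(pred - y))
print(f"in-sample MAE = {mae:.2f} days")
```

The tree-based candidates (Extra Trees, Random Forest) would replace the linear fit with an ensemble of decision trees, which can capture non-linear feature effects at the cost of interpretability.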