• Title/Summary/Keyword: Neural data


Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • The accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used in this study. The Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was employed to extract air and ocean forecast data. This study examined two schemes with different input variables to the MTL models: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved performance over scheme 1 by 13% and 16%, respectively, in terms of root mean squared error (RMSE). Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute errors (MAE) and RMSEs were found for lower TC intensity levels. In the test results for typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage; scheme 2 reduced this error to an overestimation of about 5 kts. The MTL models reduced the computational cost to roughly one-third of that of the single-task model, which suggests the feasibility of rapid production of TC intensity forecasts.
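The evaluation metrics named in the abstract (RMSE, rRMSE, MAE) can be sketched in a few lines; the observed and estimated intensity values below are illustrative, not the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def rrmse(y_true, y_pred):
    """Relative RMSE: RMSE as a percentage of the mean observed value."""
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical best-track intensities (kts) vs. model estimates.
obs = [35, 50, 65, 90, 120]
est = [38, 47, 70, 85, 115]
print(rmse(obs, est), rrmse(obs, est), mae(obs, est))
```

An rRMSE below 30%, as reported above, means the typical error stays under roughly a third of the mean observed intensity for that level.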

A Study on Enhancing Personalization Recommendation Service Performance with CNN-based Review Helpfulness Score Prediction (CNN 기반 리뷰 유용성 점수 예측을 통한 개인화 추천 서비스 성능 향상에 관한 연구)

  • Li, Qinglong;Lee, Byunghyun;Li, Xinzhe;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.29-56 / 2021
  • Recently, various types of products have been launched amid the rapid growth of the e-commerce market. As a result, many users face an information overload problem, which makes the purchasing decision-making process time-consuming. Therefore, personalized recommendation services that can provide customized products and services to users are becoming increasingly important. For example, global companies such as Netflix, Amazon, and Google have introduced personalized recommendation services to support users' purchasing decisions. Such services reduce users' information search costs, which can positively affect companies' sales. Existing research on personalized recommendation services has mainly applied Collaborative Filtering (CF) techniques that predict user preferences from quantitative information. However, recommendation performance can suffer when only quantitative information is used. To address this limitation, many studies have used reviews to enhance recommendation performance. Reviews, however, can contain elements that hinder purchasing decisions, such as advertising, false comments, and meaningless or irrelevant content, and using such reviews in a recommendation service can degrade its performance. Therefore, we proposed a novel recommendation methodology based on CNN-based review helpfulness score prediction to address these problems. The results show that the proposed methodology achieves better prediction performance than recommendation methods that consider all existing preference ratings. In addition, the results suggest that reflecting review helpfulness information in a personalized recommendation service can enhance the performance of traditional CF.
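The filtering idea behind the methodology can be sketched as follows: ratings attached to reviews predicted to be unhelpful are dropped before the preference data reaches CF. The helpfulness scores and threshold below are illustrative assumptions, not the paper's CNN outputs:

```python
# Minimal sketch: keep only ratings whose review is predicted to be helpful.
ratings = [
    {"user": "u1", "item": "i1", "rating": 5, "helpfulness": 0.9},
    {"user": "u2", "item": "i1", "rating": 1, "helpfulness": 0.1},  # likely spam/ad review
    {"user": "u3", "item": "i1", "rating": 4, "helpfulness": 0.8},
]

THRESHOLD = 0.5  # hypothetical cutoff on the predicted helpfulness score
useful = [r for r in ratings if r["helpfulness"] >= THRESHOLD]

def item_mean(item, data):
    """Simple item-level preference estimate (stand-in for a full CF model)."""
    vals = [r["rating"] for r in data if r["item"] == item]
    return sum(vals) / len(vals)

print(item_mean("i1", ratings))  # estimate over all ratings
print(item_mean("i1", useful))   # estimate over helpfulness-filtered ratings
```

The point of the sketch is that the low-helpfulness review no longer drags down the preference estimate once filtered out.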

Status and Implications of Hydrogeochemical Characterization of Deep Groundwater for Deep Geological Disposal of High-Level Radioactive Wastes in Developed Countries (고준위 방사성 폐기물 지질처분을 위한 해외 선진국의 심부 지하수 환경 연구동향 분석 및 시사점 도출)

  • Jaehoon Choi;Soonyoung Yu;SunJu Park;Junghoon Park;Seong-Taek Yun
    • Economic and Environmental Geology / v.55 no.6 / pp.737-760 / 2022
  • For the geological disposal of high-level radioactive wastes (HLW), an understanding of the deep subsurface environment is essential, gained through geological, hydrogeological, geochemical, and geotechnical investigations. Although South Korea plans the geological disposal of HLW, only a few studies have been conducted to characterize the geochemistry of the deep subsurface environment. To guide hydrogeochemical research for selecting suitable repository sites, this study reviewed the status and trends of hydrogeochemical characterization of deep groundwater for the deep geological disposal of HLW in developed countries. From an examination of the site selection processes in 8 countries (USA, Canada, Finland, Sweden, France, Japan, Germany, and Switzerland), the following geochemical parameters were found to be needed for characterizing the deep subsurface environment: major and minor elements and isotopes (e.g., 34S and 18O of SO42-, 13C and 14C of DIC, 2H and 18O of water) of both groundwater and pore water (in aquitards), fracture-filling minerals, organic materials, colloids, and oxidation-reduction indicators (e.g., Eh, Fe2+/Fe3+, H2S/SO42-, NH4+/NO3-). Suitable repositories were selected based on the integrated interpretation of these geochemical data from the deep subsurface. In South Korea, hydrochemical types and evolutionary patterns of deep groundwater have been identified using artificial neural networks (e.g., Self-Organizing Maps), and the impact of shallow groundwater mixing has been evaluated based on multivariate statistics (e.g., M3 modeling). The relationship between fracture-filling minerals and groundwater chemistry has also been investigated through reaction-path modeling. However, these previous studies in South Korea were conducted without some important geochemical data, including isotopes, oxidation-reduction indicators, and DOC, mainly due to the lack of available data. Therefore, a detailed geochemical investigation is required across the country to collect these hydrochemical data and select a geological disposal site based on scientific evidence.
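The Self-Organizing Map mentioned above for identifying hydrochemical types can be sketched in plain numpy. The grid size, learning schedule, and synthetic inputs (stand-ins for normalized groundwater chemistry) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, dim = 3, 3, 4          # 3x3 map, 4 chemical parameters
weights = rng.random((grid_h * grid_w, dim))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)])

def bmu(x):
    """Index of the best-matching unit (closest weight vector) for sample x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def train(data, epochs=20, lr0=0.5, sigma0=1.5):
    """Classic SOM update: pull the BMU and its grid neighbors toward each sample."""
    global weights
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-9  # shrinking neighborhood
        for x in data:
            d = np.linalg.norm(coords - coords[bmu(x)], axis=1)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))  # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)

data = rng.random((50, dim))           # synthetic stand-in for water samples
train(data)
labels = [bmu(x) for x in data]        # hydrochemical cluster id per sample
```

Samples mapped to the same (or adjacent) grid units would then be interpreted as one hydrochemical type.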

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios and a total of 97,200 frames were selected. Rain streaks were extracted using the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to each image's average pixel value were selected. The area with maximum pixel variability was determined by shifting a window every 10 pixels and was set as the representative area (180×180) of the original image. After resizing to 120×120 as input data for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS range of 10%. The final results of this study have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
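The background-difference and maximum-variability-window steps can be sketched as below. The frame sizes, window size, and stride are scaled down from the paper's 180×180/10-pixel setup, and the synthetic "rainy" patch is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def rain_streaks(frame, background):
    """Difference image: pixels brighter than the background are candidate streaks."""
    return np.clip(frame.astype(float) - background.astype(float), 0, None)

def max_variability_window(img, win, stride):
    """Scan the image with the given stride and return the top-left corner
    of the window with the largest pixel variance."""
    best, best_var = (0, 0), -1.0
    for i in range(0, img.shape[0] - win + 1, stride):
        for j in range(0, img.shape[1] - win + 1, stride):
            v = img[i:i + win, j:j + win].var()
            if v > best_var:
                best, best_var = (i, j), v
    return best

# Toy frame: flat background plus a high-variance "rainy" patch.
background = np.full((64, 64), 50.0)
frame = background + rng.normal(0, 1, (64, 64))
frame[20:36, 28:44] += rng.normal(30, 15, (16, 16))   # synthetic rain streaks

diff = rain_streaks(frame, background)
i, j = max_variability_window(diff, win=16, stride=4)
patch = diff[i:i + 16, j:j + 16]   # representative area for the CNN input
```

In the paper's pipeline this representative crop would then be resized to 120×120 before augmentation and training.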

Strategy for Store Management Using SOM Based on RFM (RFM 기반 SOM을 이용한 매장관리 전략 도출)

  • Jeong, Yoon Jeong;Choi, Il Young;Kim, Jae Kyeong;Choi, Ju Choel
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.93-112 / 2015
  • With changes in consumers' consumption patterns, traditional retail shops have evolved into hypermarkets and convenience stores offering mostly groceries and daily products. It is therefore important to maintain proper inventory levels and product configurations in order to use the limited space in a retail store effectively and increase sales. Accordingly, this study proposes product configuration and inventory level strategies based on the RFM (Recency, Frequency, Monetary) model and SOM (Self-Organizing Map) for effective retail shop management. The RFM model is an analytic model of customer behavior based on past buying activities, and it can distinguish important customers in large data sets using three variables. R (recency) refers to the time of the last purchase; the most recent customers have larger R values. F (frequency) refers to the number of transactions in a particular period, and M (monetary) refers to the amount of money spent in a particular period. RFM is thus known to be a very effective model for customer segmentation. In this study, SOM cluster analysis was performed on normalized RFM values. SOM is regarded as one of the most distinguished artificial neural network models for unsupervised learning. It is a popular tool for clustering and visualizing high-dimensional data in such a way that similar items are grouped spatially close to one another, and it has been successfully applied in various technical fields for finding patterns. Our procedure finds sales patterns by analyzing product sales records with recency, frequency, and monetary values, and a decision tree is built on the SOM results to suggest business strategies. To validate the proposed procedure, we adopted M-mart data collected between 2014.01.01 and 2014.12.31. Each product was assigned R, F, and M values, and the products were grouped into nine clusters using SOM. We also performed three tests using weekday, weekend, and whole data in order to analyze changes in sales patterns. To propose a strategy for each cluster, we examined the clustering criteria; the clusters produced by the SOM can be explained by the characteristics revealed by the decision trees. As a result, we can suggest an inventory management strategy for each of the nine clusters. Products in the cluster with the highest R, F, and M values need high inventory levels and should be placed where they increase customer traffic. In contrast, products in the cluster with the lowest R, F, and M values need low inventory levels and can be placed where visibility is low. Products in the cluster with the highest R values are usually new releases and should be placed at the front of the store. Managers should gradually decrease inventory levels for products in the cluster with the highest F values but below-average R and M values, since such products have sold poorly recently and total sales are likely to fall short of what the purchase frequency suggests. The procedure presented in this study is expected to contribute to raising the profitability of retail stores. The paper is organized as follows. The second chapter briefly reviews the related literature. The third chapter presents the proposed procedure, and the fourth chapter applies it to actual product sales data. Finally, the fifth chapter presents the conclusions and further research.

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. A facial expression, like an artistic painting, contains a wealth of information that could be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA but showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
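The SVR-with-grid-search setup described above can be sketched with scikit-learn. The feature matrix and target here are synthetic stand-ins for the facial-feature data and arousal levels, and the grid values are illustrative, not the paper's tuned parameters:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)

# Synthetic stand-in: 60 cases, 5 facial-feature variables, one target level.
X = rng.random((60, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + rng.normal(0, 0.05, 60)

# ε-insensitive SVR with a grid search over C, gamma, and epsilon,
# mirroring the parameter-tuning step described in the abstract.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.1, 1], "epsilon": [0.01, 0.1]},
    cv=3,
)
grid.fit(X, y)

pred = grid.predict(X)
mae = float(np.mean(np.abs(pred - y)))  # the comparison measure used in the study
```

`grid.best_params_` then holds the selected (C, gamma, ε) combination, analogous to the grid search over C, d, σ², and ε in the paper.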

Liver cancer Prediction System using Biochip (바이오칩을 이용한 간암진단 예측 시스템)

  • Lee, Hyoung-Keun;Kim, Choong-Won;Lee, Joon;Kim, Sung-Chun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.967-970 / 2008
  • In Korea, liver cancer has one of the highest incidence rates among common cancers, second to gastric cancer. Because it usually progresses slowly without distinctive early signs or symptoms, most cases are diagnosed only after the disease has advanced, when no effective treatment is available and the prognosis is poor; when detected early, however, treatment outcomes are much better, so early detection is critical. The system presented here is intended for the early detection of liver cancer: blood samples from patients confirmed to have liver cancer and from a control group are reacted with a biochip, and the resulting biochip profiles are classified by machine learning. In this paper, blood samples from liver cancer patients (50 samples in total) and from a control group (100 samples) were reacted with a biochip composed of 1,149 different oligos, and the acquired data were analyzed with an artificial neural network, which showed a classification accuracy of 92-96%.


Association between Subjective Distress Symptoms and Argon Welding among Shipyard Workers in Gyeongnam Province (경남소재 일개조선소 근로자의 건강이상소견과 아르곤 용접과의 관련성)

  • Choi, Woo-Ho;Jin, Seong-Mi;Kweon, Deok-Heon;Kim, Jang-Rak;Kang, Yune-Sik;Jeong, Baek-Geum;Park, Ki-Soo;Hwang, Young-Sil;Hong, Dae-Yong
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.24 no.4 / pp.547-555 / 2014
  • Objective: This study was conducted to investigate the association between subjective distress symptoms and argon welding among workers at a shipyard in Gyeongnam Province. Method: 31 argon and 29 non-argon welding workers were selected as study subjects in order to measure personal concentrations of dust, welding fumes, and other hazardous materials such as ZnO, Pb, Cr, FeO, MnO, Cu, Ni, TiO2, MgO, NO, NO2, O3, O2, CO2, CO, and Ar. An interviewer-administered questionnaire survey was also performed on the same subjects. The items queried were as follows: age, height, weight, working duration, welding time, welding rod amounts used, drinking, smoking, and rates of subjective distress symptoms, including headache and other symptoms such as fever, vomiting and nausea, metal fume fever, dizziness, tingling sensations, difficulty in breathing, memory loss, sleep disorders, emotional disturbance, hearing loss, hand tremors, visual impairment, neural abnormality, allergic reaction, runny nose and stuffiness, rhinitis, and suffocation. Statistical analysis was performed using SPSS software, version 18. Data are expressed as the mean ±SD. A χ²-test and a normality test using the Shapiro-Wilk test were performed for the above variables. Logistic regression analysis was also conducted to identify the factors that affect the total score for subjective distress symptoms. Result: An association was found between welding type (argon or non-argon) and the total score for subjective distress symptoms. Among the subjective distress symptoms, the rates of complaints of vomiting and nausea, difficulty breathing, and allergic reactions were all significantly higher in the argon welding group. Only the concentrations of dust and welding fumes were normally distributed after natural log transformation. According to the logistic regression analysis, working duration and welding type (argon or non-argon) were significantly associated with the total score for subjective distress symptoms (p=0.041 and p=0.049, respectively). Conclusion: Our results suggest that argon welding could cause subjective distress symptoms in shipyard workers.

Automatic Extraction of Eye and Mouth Fields from Face Images using MultiLayer Perceptrons and Eigenfeatures (고유특징과 다층 신경망을 이용한 얼굴 영상에서의 눈과 입 영역 자동 추출)

  • Ryu, Yeon-Sik;O, Se-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.2 / pp.31-43 / 2000
  • This paper presents a novel algorithm for the extraction of the eye and mouth fields (facial features) from 2D gray-level face images. First of all, it has been found that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields. The eigenfeatures, extracted from positive and negative training samples of the facial features, are used to train a MultiLayer Perceptron (MLP) whose output indicates the degree to which a particular image window contains the eye or the mouth. Second, to ensure robustness, an ensemble network consisting of multiple MLPs is used instead of a single MLP; the output of the ensemble network is the average of the field locations found by the constituent MLPs. Finally, in order to reduce the computation time, we extract a coarse search region for the eyes and mouth using prior information on face images. The advantages of the proposed approach include that only a small number of frontal faces is sufficient to train the networks and that the networks generalize well to non-frontal poses and even to other people's faces. It was also experimentally verified that the proposed algorithm is robust against slight variations of facial size and pose, owing to the generalization characteristics of neural networks.
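The eigenfeature projection and the ensemble-averaging step can be sketched as below. The binary edge windows and the candidate locations are random or hand-picked stand-ins, not outputs of trained MLPs:

```python
import numpy as np

rng = np.random.default_rng(3)

# Eigenfeature sketch: eigenvectors of the covariance of flattened binary edge
# windows give a compact basis; projections onto the top eigenvectors serve as
# features for the MLP. Windows here are random stand-ins for eye/mouth data.
edges = (rng.random((20, 36)) > 0.7).astype(float)   # 20 binary 6x6 windows, flattened
cov = np.cov(edges, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
eigenfeatures = edges @ eigvecs[:, -4:]              # project onto top-4 eigenvectors

# Ensemble step: each constituent MLP proposes an (x, y) location for a field,
# and the ensemble output is their average (candidate locations are illustrative).
candidates = np.array([
    [102.0, 64.0],   # MLP 1
    [ 98.0, 66.0],   # MLP 2
    [100.0, 62.0],   # MLP 3
])
ensemble_location = candidates.mean(axis=0)          # → [100., 64.]
```

Averaging the constituent locations smooths out individual MLP errors, which is what gives the ensemble its robustness.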


Development of DL-MCS Hybrid Expert System for Automatic Estimation of Apartment Remodeling (공동주택 리모델링 자동견적을 위한 DL-MCS Hybrid Expert System 개발)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management / v.21 no.6 / pp.113-124 / 2020
  • There is growing social momentum to improve the performance of aging apartment buildings through remodeling. To this end, remodeling construction cost analyses, structural analyses, and policy and institutional reviews have been conducted to suggest ways to promote remodeling. However, although construction cost analysis methods for apartment remodeling have been proposed for research purposes, their practical applicability is limited: they can handle cases that have already been completed or are in progress, but future cases must also feed into the cost analysis, so the analysis methods lack sustainability. To address this, we propose an automated estimating method. For the sustainability of construction cost estimates, Deep Learning was introduced into the estimating procedure; specifically, a method was presented for automatically finding the relationships between design elements, work types, and cost increase factors that can occur in apartment remodeling. In addition, Monte Carlo Simulation was included in the estimation procedure to quantify uncertainty, which is an inherent limitation of Deep Learning-based estimation. To improve accuracy as cases accumulate, a method of comparing estimate results with previously accumulated data was also suggested. To validate the sustainability of the proposed automated estimates, the learning procedure was performed with 13 cases and the accumulation procedure with 2 additional cases. As a result, a new construction cost estimating procedure reflecting the characteristics of the two additional projects was automatically produced. This study demonstrated the estimating method using 15 cases; as more cases are accumulated and reflected, the effect of this study is expected to increase.
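The Monte Carlo step layered on top of the deep-learning estimate can be sketched as below. The base cost and the uncertainty distributions of the multipliers are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical point estimate of remodeling cost from the DL model.
dl_estimate = 1_000_000

# Monte Carlo Simulation: widen the point estimate into a cost distribution
# by sampling uncertain multipliers (assumed distributions, for illustration).
N = 10_000
material = rng.normal(1.00, 0.05, N)   # material-price uncertainty
labor    = rng.normal(1.00, 0.08, N)   # labor-cost uncertainty
simulated = dl_estimate * material * labor

p5, p50, p95 = np.percentile(simulated, [5, 50, 95])  # cost range, not one number
```

Reporting the (p5, p50, p95) range instead of a single figure is what compensates for the deterministic nature of the DL estimate.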