• Title/Summary/Keyword: Model-Based Verification (모델 기반 검증)


A Study on the Design of Sustainable App Services for Medication Management and Disposal of Waste Drugs (약 복용 관리와 폐의약품 처리를 위한 지속 가능한 앱 서비스 디자인 연구)

  • Lee, Ri-Na;Hwang, Jeong-Un;Shin, Ji-Yoon;Hwang, Jin-Do
• Journal of Service Research and Studies / v.14 no.2 / pp.48-68 / 2024
  • In the aftermath of the global coronavirus pandemic, the social importance of health care has grown. Influenced by this change, domestic pharmaceutical companies have introduced regular drug delivery services, i.e., subscription services for drugs and health functional foods, and this market continues to grow. However, these subscription services create a new environmental problem: unused drugs increase the volume of pharmaceutical waste. This study therefore proposes a service that promotes health management through regular medication adherence, reducing pharmaceutical waste, and that also aims to improve awareness and practice of proper medication disposal. As groundwork for the service design, a preliminary survey of 51 adults examined their medication habits and their awareness of waste-drug collection. Guidelines for the service design were drawn up based on the Honeycomb model, and a prototype was produced by specifying the service using the survey results and service design methodology. To verify the prototype's effectiveness, a first user task survey identified its problems; after improvements, a second usability test with 49 adults confirmed the service's general applicability. Usability verification was conducted with SPSS for Mac version 29.0, and Spearman correlation analysis was run on the questionnaire results to examine the relationships between the frequency analysis and the evaluation items. This study presents concrete solutions to the waste-drug problem arising from the spread of drug subscription services.
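The usability verification above reduces to a rank correlation between questionnaire items. Below is a minimal sketch of that Spearman step in Python with SciPy rather than SPSS; the column names and ratings are hypothetical stand-ins for the study's questionnaire data.

```python
# Minimal sketch of the Spearman correlation step described in the abstract.
# Column names and ratings are hypothetical; the study used SPSS 29.0 for Mac.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical usability ratings from the second test, 5-point scale
df = pd.DataFrame({
    "usefulness": [4, 5, 3, 4, 5, 4, 3, 5, 4, 4],
    "usability":  [4, 4, 3, 5, 5, 4, 3, 4, 4, 5],
})

rho, p = spearmanr(df["usefulness"], df["usability"])
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```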

Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
• Korean Journal of Remote Sensing / v.31 no.5 / pp.449-459 / 2015
  • The conventional National Forest Inventory (NFI)-based method of estimating forest carbon stocks is suitable for national-scale estimation but not for regional-scale estimation, owing to the scarcity of NFI plots. In this study, for regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two up-scaling methods. Chungnam province was chosen as the study area, for which 5th NFI (2006~2009) data were collected. The first method (method 1) uses the forest type map as ancillary data and a regression model for carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. Additionally, to account for uncertainty, the final AGB carbon stock maps were generated from 200 iterations of a Monte Carlo simulation. Compared with the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) and under-estimated by method 2 (19,750,315 tonC). In a paired t-test with 186 independent data points, the average carbon stock estimate of the NFI-based method differed statistically from method 2 (p<0.01) but not from method 1 (p>0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and mis-registration between NFI plots and the satellite image can produce large uncertainty in carbon stock estimation. Although method 1 proved suitable for estimating carbon stocks of Korea's heterogeneous forest stands, a satellite-based method is still needed to provide periodic estimates for large, un-investigated forest areas. Accordingly, future work will extend the spatial and temporal scope of the study and pursue robust carbon stock estimation with various satellite images and estimation methods.
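Method 2's up-scaling and the uncertainty treatment can be sketched compactly: a k-NN regressor trained on plot data predicts per-pixel carbon, and a Monte Carlo loop repeats the estimate under perturbed inputs, as in the abstract's 200 iterations. The bands, noise model, and values below are hypothetical, not the study's data.

```python
# Sketch of k-NN up-scaling (method 2) with a small Monte Carlo loop to
# propagate uncertainty, mirroring the abstract's 200-iteration simulation.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)

# Hypothetical NFI plots: Landsat TM band reflectances (X) and plot carbon (y)
X_plots = rng.uniform(0.0, 0.4, size=(200, 4))                 # e.g., 4 TM bands
y_carbon = 50 + 200 * X_plots[:, 1] + rng.normal(0, 10, 200)   # tonC/ha, hypothetical

X_pixels = rng.uniform(0.0, 0.4, size=(5000, 4))               # unsampled forest pixels

totals = []
for _ in range(200):                                           # Monte Carlo iterations
    # Perturb plot carbon to mimic measurement/registration uncertainty
    y_mc = y_carbon + rng.normal(0, 10, y_carbon.size)
    knn = KNeighborsRegressor(n_neighbors=5).fit(X_plots, y_mc)
    totals.append(knn.predict(X_pixels).sum())

print(f"total carbon: {np.mean(totals):.0f} +/- {np.std(totals):.0f} (hypothetical units)")
```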

Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
• Journal of the Computational Structural Engineering Institute of Korea / v.37 no.1 / pp.67-76 / 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a strong form-based MLS differencing method to measure mechanical variables, and we analyze the effects of target location and image resolution. The method measures the displacement of targets attached to the specimen through digital image processing and assigns it to the node displacements of the MLS differencing method, which uses only nodes to compute mechanical variables such as the stress and strain of the object under study. We propose an effective way to measure the displacement of a target's center of gravity using digital image processing. Computing mechanical variables with the MLS differencing method from the image-based target displacements makes it easy to evaluate those variables at arbitrary positions, without the constraints of meshes or grids, because an accurate displacement history of the specimen is acquired from low-stiffness tracking targets. The developed testing method was validated in a three-point bending test of a rubber beam by comparing sensor measurements with the DIP-MLS results. Numerical results simulated by the MLS differencing method alone were also compared, confirming that the developed method accurately reproduces the actual test and agrees well with the numerical analysis before significant deformation. Furthermore, we analyzed the effect of boundary points by applying 46 tracking points, including the corner points, to the DIP-MLS testing method, compared this with using only the target's interior points, and determined the optimal image resolution for the method. These results demonstrate that the developed method efficiently addresses the limitations of direct experiments and existing mesh-based simulations, and they suggest that digitalization of the experiment-simulation process is achievable to a considerable extent.
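The target-displacement step rests on computing a blob's center of gravity from image moments. The sketch below shows one plausible reading of that step using OpenCV; the Otsu thresholding choice and the synthetic frame are assumptions, not the authors' implementation.

```python
# Sketch of the target-centroid measurement: threshold a grayscale frame,
# then take the center of gravity of the target blob via image moments.
import numpy as np
import cv2

def target_centroid(gray: np.ndarray) -> tuple[float, float]:
    """Return the (x, y) center of gravity of the bright target blob."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("no target found")
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Synthetic frame with a bright rectangular target (hypothetical)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 30:50] = 255
print(target_centroid(frame))  # approximately (39.5, 49.5)
```

Tracking the centroid frame by frame yields the displacement history that is then assigned to the MLS nodes.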

A Study of Fluoride and Arsenic Adsorption from Aqueous Solution Using Alum Sludge Based Adsorbent (알럼 슬러지 기반 흡착제를 이용한 수용액상 불소 및 비소 흡착에 관한 연구)

  • Lee, Joon Hak;Ji, Won Hyun;Lee, Jin Soo;Park, Seong Sook;Choi, Kung Won;Kang, Chan Ung;Kim, Sun Joon
• Economic and Environmental Geology / v.53 no.6 / pp.667-675 / 2020
  • An alum sludge-based adsorbent (ASBA) was synthesized by hydrothermal treatment of alum sludge obtained from a settling basin at a water treatment plant. ASBA was applied to remove fluoride and arsenic from artificially contaminated aqueous solutions and from mine drainage. The mineralogical crystal structure, composition, and specific surface area of ASBA were characterized. ASBA has irregular surface pores and a specific surface area of 87.25 m² g⁻¹, which favors quick and facile adsorption. Its main mineral components were quartz (SiO2), montmorillonite ((Al,Mg)2Si4O10(OH)2·4H2O), and albite (NaAlSi3O8). The effects of pH, reaction time, initial concentration, and temperature on fluoride and arsenic removal were examined. The adsorbed amounts of fluoride and arsenic gradually decreased with increasing pH. Based on kinetic and isotherm experiments, the maximum adsorption capacities for fluoride and arsenic were 7.6 and 5.6 mg g⁻¹, respectively, and the adsorption data were well described by the Langmuir and Freundlich models. For fluoride and arsenic, the rate of increase in adsorbed concentration declined after 8 and 12 hr of reaction, respectively. Thermodynamic data showed that the amounts of fluoride and arsenic adsorbed onto ASBA increased as the temperature rose from 25℃ to 35℃, indicating an endothermic and non-spontaneous adsorption reaction. Regeneration experiments showed that ASBA can be regenerated with 1 N NaOH. In an experiment with actual mine drainage, ASBA achieved relatively high removal rates of 77% and 69%. These results show that ASBA is effective for removing fluoride and arsenic from mine drainage with a small flow rate and an acidic-to-neutral pH.
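The isotherm analysis above fits the measured equilibrium data to the Langmuir and Freundlich models. A minimal curve-fitting sketch follows; the data points are hypothetical, while the reported capacities (7.6 and 5.6 mg g⁻¹) come from the study.

```python
# Sketch of the isotherm fitting described above, with hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    # q_e = K_F * C_e^(1/n)
    return KF * Ce ** (1 / n)

Ce = np.array([1, 2, 5, 10, 20, 50.0])          # equilibrium conc., mg/L (hypothetical)
qe = np.array([2.1, 3.2, 4.8, 5.9, 6.8, 7.4])   # adsorbed amount, mg/g (hypothetical)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[8, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[2, 2])
print(f"Langmuir: qmax={qmax:.2f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}")
```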

Gap-Filling of Sentinel-2 NDVI Using Sentinel-1 Radar Vegetation Indices and AutoML (Sentinel-1 레이더 식생지수와 AutoML을 이용한 Sentinel-2 NDVI 결측화소 복원)

  • Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Yungyo Im;Youngmin Seo;Myoungsoo Won;Junghwa Chun;Kyungmin Kim;Keunchang Jang;Joongbin Lim;Yangwon Lee
• Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1341-1352 / 2023
  • The normalized difference vegetation index (NDVI) derived from satellite images is a crucial tool for monitoring forests and agriculture over broad areas because periodic acquisition of the data is ensured. However, optical sensor-based vegetation indices (VI) are unavailable in areas covered by clouds. This paper presents a synthetic aperture radar (SAR)-based approach that retrieves the optical NDVI using machine learning; SAR systems can observe the land surface day and night in all weather conditions. Radar vegetation indices (RVI) from the Sentinel-1 vertical-vertical (VV) and vertical-horizontal (VH) polarizations, surface elevation, and air temperature are used as input features for an automated machine learning (AutoML) model that gap-fills the Sentinel-2 NDVI. The mean absolute error (MAE) was 7.214E-05 and the correlation coefficient (CC) was 0.878, demonstrating the feasibility of the proposed method. This approach can be applied to constructing gap-free nationwide NDVI from Sentinel-1 and Sentinel-2 images for environmental monitoring and resource management.
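One common dual-polarization radar vegetation index is RVI = 4·VH / (VV + VH), computed in linear power units; the paper may use a different variant, so the sketch below is illustrative only, with hypothetical backscatter values.

```python
# Sketch of building the RVI input feature from Sentinel-1 VV/VH backscatter.
# RVI = 4*VH / (VV + VH) is one common dual-pol formulation (an assumption here).
import numpy as np

def db_to_linear(db: np.ndarray) -> np.ndarray:
    return 10.0 ** (db / 10.0)

def rvi(vv_db: np.ndarray, vh_db: np.ndarray) -> np.ndarray:
    vv, vh = db_to_linear(vv_db), db_to_linear(vh_db)
    return 4.0 * vh / (vv + vh)

# Hypothetical Sentinel-1 backscatter in dB
vv = np.array([-8.0, -10.0, -12.0])
vh = np.array([-14.0, -16.0, -18.0])
print(rvi(vv, vh))  # higher values indicate denser vegetation
```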

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
• Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been used across industries for practical applications. In business intelligence, text mining is employed to discover new market and technology opportunities and to support rational decision-making. Market information such as market size, growth rate, and market share is essential for setting a company's business strategy, and there is continuous demand for market information at the level of specific products. However, such information is generally provided at the industry level or in broad categories based on classification standards, making specific, suitable information difficult to obtain. We therefore propose a new methodology that estimates the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, product information data are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. After parameter optimization, a vector dimension of 300 and a window size of 15 were used in the subsequent experiments. Index words of the Korean Standard Industry Classification (KSIC) were employed as the product-name dataset to cluster product groups more efficiently, and product names similar to the KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data; the proposed model automatically estimated the market sizes of 11,654 specific product lines. For performance verification, the results were compared with the actual market sizes of some items, yielding a Pearson correlation coefficient of 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques are applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of the market category can be adjusted easily and efficiently to the purpose of the information by changing the cosine similarity threshold. Furthermore, the method has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors: it can be used in technology evaluation and commercialization support programs run by governmental institutions, as well as in business strategy consulting and market analysis reports by private firms. A limitation of our study is that the model still needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec, and the product-group clustering could be replaced by other unsupervised machine learning algorithms. Our group is working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed here.
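The core pipeline — embed product names, select neighbors of a KSIC index word by cosine similarity, and sum their sales — can be sketched with gensim as below. The corpus, similarity threshold, and sales figures are hypothetical; the study's optimized parameters (vector dimension 300, window size 15) are used as given.

```python
# Sketch of the Word2Vec-based market size estimation pipeline.
# Corpus, threshold, and sales are hypothetical stand-ins for the microdata.
from gensim.models import Word2Vec

# Hypothetical tokenized product-name corpus
corpus = [["stainless", "bolt"], ["hex", "bolt"], ["steel", "nut"],
          ["led", "lamp"], ["led", "bulb"]] * 200

model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, sg=1)

# Products similar to a (hypothetical) KSIC index word, above a cosine threshold
group = [w for w, sim in model.wv.most_similar("bolt", topn=10) if sim > 0.5]

# Sum sales of the extracted product group to estimate its market size
sales = {"bolt": 120, "nut": 80, "lamp": 95, "bulb": 60, "hex": 0}  # hypothetical
market_size = sales["bolt"] + sum(sales.get(w, 0) for w in group)
print(group, market_size)
```

Raising or lowering the cosine threshold widens or narrows the product group, which is how the method adjusts the granularity of the market category.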

Landslide Vulnerability Mapping Considering GCI (Geospatial Correlative Integration) and Rainfall Probability in Inje (GCI(Geospatial Correlative Integration) 및 확률강우량을 고려한 인제지역 산사태 취약성도 작성)

  • Lee, Moung-Jin;Lee, Sa-Ro;Jeon, Seong-Woo;Kim, Geun-Han
• Journal of Environmental Policy / v.12 no.3 / pp.21-47 / 2013
  • The aim of this study is to analyze landslide vulnerability in Inje, Korea, using Geospatial Correlative Integration (GCI) and probability rainfalls within a geographic information system (GIS). To this end, indicators influencing landslides were identified from a literature review: exposure to climate (rainfall probability), sensitivity (slope, aspect, curvature, geology, topography, soil drainage, soil material, soil thickness, and soil texture), and adaptive capacity (timber diameter, timber type, timber density, and timber age). All data were collected, processed, and compiled in a GIS spatial database. Karisan-ri, which experienced 470 landslides during Typhoon Ewiniar in 2006, was selected for analysis and verification; 50% of the landslide data were randomly selected as training data and the other 50% used for verification. The probability of landslides for the target years (1, 3, 10, 50, and 100 years) was calculated assuming that landslides are triggered by a 3-day cumulative rainfall of 449 mm. The results show that slope has a comparatively strong influence on landslide damage, with inclinations of 25~30° showing the highest correlation with landslides. The study improves on previous landslide vulnerability methodologies by adopting GCI, and the resulting vulnerability map provides meaningful information for decision-makers in prioritizing areas for landslide mitigation policies.
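The correlation between a factor class (e.g., slope inclination) and landslide occurrence is commonly expressed as a frequency ratio, one ingredient of GCI-style integration. The sketch below uses hypothetical class counts, not the study's data.

```python
# Sketch of a frequency-ratio style factor-class correlation, as used in
# landslide susceptibility work. Counts are hypothetical; the study split
# its 470 landslides 50/50 into training and verification sets.
import numpy as np

def frequency_ratio(ls_in_class, ls_total, cells_in_class, cells_total):
    """FR > 1 means the class is positively associated with landslides."""
    return (ls_in_class / ls_total) / (cells_in_class / cells_total)

# Hypothetical slope classes (degrees) over the study grid
classes = ["<15", "15-25", "25-30", ">30"]
ls_counts = np.array([20, 60, 110, 45])           # training landslides per class
cell_counts = np.array([40000, 30000, 20000, 10000])

fr = frequency_ratio(ls_counts, ls_counts.sum(), cell_counts, cell_counts.sum())
for c, r in zip(classes, fr):
    print(f"slope {c}: FR = {r:.2f}")
```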


Change Detection of land-surface Environment in Gongju Areas Using Spatial Relationships between Land-surface Change and Geo-spatial Information (지표변화와 지리공간정보의 연관성 분석을 통한 공주지역 지표환경 변화 분석)

  • Jang Dong-Ho
• Journal of the Korean Geographical Society / v.40 no.3 s.108 / pp.296-309 / 2005
  • In this study, we investigated future land-surface change and the relationships between land-surface change and geo-spatial information, using a Bayesian prediction model based on a likelihood ratio function, to analyze land-surface change in the Gongju area. We classified the land surface from satellite images, extracted the areas of change by post-classification comparison, constructed the land-surface information related to the change in a GIS environment, and produced a land-surface change prediction map with the likelihood ratio function. The thematic maps that clearly influence land-surface change in rural and urban areas are elevation, the water system, population density, roads, population movement, the number of establishments, and land price; those that clearly influence change in forest areas are elevation, slope, population density, population movement, and land price. The analysis shows that the urban core, old and new, is concentrated near the Gum River, and the downtown area will spread around the local roads and interchange areas. In agricultural areas, small tributaries of the Gum River and areas along the local roads connecting adjacent areas showed a high probability of change. Most of the forest change areas are located in the southeast, where large chestnut-tree cultivation complexes are located and the potential for forest damage is therefore high. Validation with a prediction rate curve gave, within the top 10% of cells most likely to change, prediction capabilities of 80% for urban areas, 55% for agricultural areas, and 40% for forest areas. The integration model is thus unsatisfactory for predicting forest areas in the study area, and future work should apply new thematic maps or prediction models. We expect this approach to become one of the essential methods for land-surface change studies in the coming years.
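A likelihood-ratio-based Bayesian integration can be sketched as prior odds multiplied by each thematic layer's likelihood ratio, assuming conditional independence between layers. The prior probability and ratios below are hypothetical.

```python
# Sketch of likelihood-ratio integration across thematic layers, in the spirit
# of the Bayesian prediction model described above. Under conditional
# independence, layer likelihood ratios multiply onto the prior odds of change.
import numpy as np

prior_change = 0.05                          # hypothetical prior P(cell changes)
prior_odds = prior_change / (1 - prior_change)

# Hypothetical likelihood ratios for one cell's evidence on each layer
layer_lr = {"elevation": 1.8, "road_proximity": 2.5, "population_density": 1.2}

posterior_odds = prior_odds * np.prod(list(layer_lr.values()))
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(change | evidence) = {posterior:.3f}")
```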

Setting Criteria of Suitable Site for Southern-type Garlic Using Non-linear Regression Model (비선형회귀 분석을 통한 난지형 마늘의 적지기준 설정연구)

  • Choi, Won Jun;Kim, Yong Seok;Shim, Kyo Moon;Hur, Jina;Jo, Sera;Kang, Mingu
• Korean Journal of Agricultural and Forest Meteorology / v.23 no.4 / pp.366-373 / 2021
  • This study attempted to establish a field data-based criterion for suitable-site analysis by analyzing field observations, which are non-linear data, for southern-type garlic. Five regions were selected for analysis: Goheung, Namhae, Sinan, Changnyeong, and Haenam. Temperatures at the farmland in each region were derived from the observation stations by inverse distance weighting. Southern-type garlic production and temperature data were collected for the 10 years from 2010 to 2019. Local (kernel) regression of the data yielded optimum growth temperatures that varied with the bandwidth: 18.781℃ at 0.8, 18.930℃ at 0.9, 19.542℃ at 1.0, 20.165℃ at 1.1, and 21.042℃ at 1.2. The analyzed optimum temperature and the growth temperature limits (4℃/25℃) were applied in a temperature response model to extract the growth temperature for each case. Regression and correlation analyses between the derived growth temperatures and the production data gave coefficients of determination (R²) of 0.325 to 0.438 and correlation coefficients of 0.57 to 0.66 at the 0.001 significance level. Overall, the coefficient of determination rose as the bandwidth increased; however, in all analyses except at bandwidth 1.0, not all variables could be used because of bias. Since the purpose of this study is to accommodate all of the data through non-linear analysis, bandwidth 1.0, which accommodates the modeling as a whole while retaining a high coefficient of determination, was judged most suitable.
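The station-to-farmland temperature assignment uses inverse distance weighting (IDW). A minimal IDW sketch follows; the coordinates, temperatures, and power parameter are hypothetical.

```python
# Sketch of the inverse distance weighting (IDW) step used to assign station
# temperatures to a farmland location. All values are hypothetical.
import numpy as np

def idw(xy_stations, temps, xy_target, power=2.0):
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a station
        return float(temps[d == 0][0])
    w = 1.0 / d ** power                     # nearer stations weigh more
    return float(np.sum(w * temps) / np.sum(w))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # km, hypothetical
temps = np.array([18.5, 19.2, 18.9])                          # deg C, hypothetical
print(f"{idw(stations, temps, np.array([3.0, 4.0])):.2f} deg C")
```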

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
• Journal of Intelligence and Information Systems / v.28 no.2 / pp.237-262 / 2022
  • There has been growing interest in IPOs (Initial Public Offerings) because of the profitable returns IPO stocks can offer investors. However, IPOs can also be speculative investments involving substantial risk, because the shares tend to be volatile and the supply of IPO shares is often highly limited. It is therefore crucial that IPO investors be well informed about the issuing firm and the market before deciding whether to invest. Unlike institutional investors, individual investors are at a disadvantage, since they have few opportunities to obtain information on IPOs. The purpose of this study is accordingly to provide individual investors with information they may consider when making an IPO investment decision. We present a model that uses machine learning and text analysis to predict whether an IPO stock price will move up or down over the first 5 trading days. Our sample comprises 691 Korean IPOs from June 2009 to December 2020. The input variables are three tone variables created from IPO prospectuses and quantitative variables that are firm-specific, issue-specific, or market-specific. The three prospectus tone variables give the percentages of positive, neutral, and negative sentences in a prospectus; only sentences in the Risk Factors section were considered for the tone analysis. All sentences were classified as 'positive', 'neutral', or 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). The tone of each sentence was measured by machine learning rather than a lexicon-based approach, owing to the lack of sentiment dictionaries suitable for Korean financial text. The training set was therefore created by randomly selecting 10% of the sentences from each prospectus and classifying them by hand; a Support Vector Machine model trained on this set then predicted the tone of the sentences in the test set, and the percentages of positive, neutral, and negative sentences in each prospectus were computed. To predict the price movement of an IPO stock, four machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. Models that combine the quantitative variables with the prospectus tone variables show higher accuracy than models that use only the quantitative variables: prediction accuracy improved by 1.45 percentage points for the Random Forest model, 4.34 percentage points for the Artificial Neural Network model, and 5.07 percentage points for the Support Vector Machine model. Among these, the Artificial Neural Network model using both quantitative and prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that prospectus tone is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to compare the model using only quantitative variables with the model using both the quantitative and the prospectus tone variables, confirming that predictive performance improved significantly at the 1% significance level.
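The sentence-tone step — TF-IDF features feeding an SVM trained on the hand-labeled 10% sample — maps naturally onto a scikit-learn pipeline. The sketch below uses hypothetical English sentences in place of the Korean prospectus text.

```python
# Sketch of the sentence-tone classification: TF-IDF features feed an SVM
# trained on hand-labeled sentences, then tone shares are computed per
# prospectus. Sentences and labels are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_sents = ["revenue grew strongly last year",
               "the company faces severe litigation risk",
               "the offering proceeds will fund operations",
               "demand may decline sharply in a downturn"]
train_tone = ["positive", "negative", "neutral", "negative"]

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(train_sents, train_tone)

test_sents = ["profit margins improved", "regulatory approval remains uncertain"]
pred = list(clf.predict(test_sents))

# Prospectus tone variables = share of each tone among its sentences
print({t: pred.count(t) / len(pred) for t in ["positive", "neutral", "negative"]})
```

The resulting tone shares would then join the quantitative variables as inputs to the price-movement classifiers.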