• Title/Summary/Keyword: 성능 평가 기준 (Performance Evaluation Criteria)


Comparison and validation of rutin and quercetin contents according to the extraction method of tartary Buckwheat (Fagopyrum tataricum Gaertn.) (쓴메밀 종자의 추출방법에 따른 루틴 및 퀘세틴 함량 비교)

  • Kim, Su Jeong;Sohn, Hwang Bae;Kim, Geum Hee;Lee, Yu Young;Hong, Su Young;Kim, Ki Deog;Nam, Jeong Hwan;Chang, Dong Chil;Suh, Jong Taek;Koo, Bon Cheol;Kim, Yul Ho
    • Korean Journal of Food Science and Technology / v.49 no.3 / pp.258-264 / 2017
  • The stability and accuracy of ultra-performance liquid chromatography (UPLC) for evaluating the rutin and quercetin contents of tartary buckwheat (Fagopyrum tataricum Gaertn.) seeds extracted by seven different methods were determined. The seven extraction methods were reflux extraction (RE), ultra-sonification extraction (UE), stirrer extraction (SE), RE after UE (UE+RE), RE after SE (SE+RE), UE after SE (SE+UE), and RE with UE after SE (SE+UE+RE). Among the seven methods, RE yielded comparatively higher contents of rutin (2,277 mg/100 g) and quercetin (158 mg/100 g) than the other six extraction methods. The intra-day repeatability and inter-day precision of RE were 0.4-3.2% in terms of relative standard deviation (RSD), while accuracy was 88.8-102.4%. Therefore, RE combined with UPLC would be a rapid, accurate, and stable method for analyzing rutin and quercetin contents in tartary buckwheat.

Effect of Air Contents, Deicing Salts, and Exposure Conditions on the Freeze-Thaw Durability of the Concrete (콘크리트의 동결융해 내구성에 공기량, 제설제, 노출조건이 미치는 영향에 관한 연구)

  • Lee, Byung-Duk
    • International Journal of Highway Engineering / v.12 no.2 / pp.107-113 / 2010
  • In this study, the relative effects of a low-chloride deicer (LCD) and two other deicing agents on the scaling of concrete were examined in a series of laboratory tests conducted in accordance with ASTM C 672. The deicer solution concentrations tested were 1, 4, and 10%, with tap water used as the control, and the amount of scaling was evaluated gravimetrically. Regarding deicer type, at the 4% concentration, surface scaling of concrete after 56 freeze-thaw cycles was about 9 times greater for the LCD solution, about 18 times greater for the CaCl₂ solution, and about 33 times greater for the NaCl solution than for tap water. Regarding concentration, the intermediate concentration (4% by weight) produced more surface scaling than either the higher (10% by weight) or lower (1% by weight) concentration, which agrees with previous findings that the most damaging concentration is on the order of 3~4%. This suggests that the mechanism of surface scaling is primarily physical rather than chemical. The effects of chloride deicer type, freeze-thaw cycling, and air content on the performance of concrete were also investigated experimentally. The results show that concrete specimens subjected to freeze-thaw cycling scaled more severely when exposed to deicing salt than when not exposed: weight losses of specimens exposed to deicing salt were twice those of unexposed specimens. The relative dynamic modulus of elasticity of concrete specimens decreased more quickly under exposure to deicing salt than without it, and decreased more quickly under exposure to sodium chloride than to the LCD salt. The chloride content measured over specimen depth was also larger under exposure to the LCD salt. When concrete specimens were exposed to chloride deicing salts and freeze-thaw cycling, performance degradation was retarded considerably more in air-entrained concrete (AE concrete) than in non-air-entrained concrete (Non-AE concrete).

Effects of Polyols on Antimicrobial and Preservative Efficacy in Cosmetics (화학방부제 배합량 감소를 위한 폴리올류의 항균, 방부영향력 연구)

  • Shin, Kye-Ho;Kwack, Il-Young;Lee, Sung-Won;Suh, Kyung-Hee;Moon, Sung-Joon;Chang, Ih-Seop
    • Journal of the Society of Cosmetic Scientists of Korea / v.33 no.2 / pp.111-115 / 2007
  • The use of germicidal agents such as parabens, imidazolidinyl urea, phenoxyethanol, and chlorphenesin is unavoidable for preserving cosmetics. Although effective in reducing microbiological contamination, chemical preservatives can be irritating, allergenic, and even toxic to human skin, so it is desirable to decrease or eliminate their use in cosmetic products. Glycerin, butylene glycol (BG), propylene glycol (PG), and dipropylene glycol (DPG) are widely used in cosmetics as skin-conditioning agents or solvents. At high concentrations they have antimicrobial activity, but they also deteriorate product qualities such as sensory feel or safety. The purpose of this study was to evaluate the effects of polyols on antimicrobial and preservative efficacy and to confirm whether adjusting polyol levels can reduce preservative content without deteriorating the quality of cosmetics. The effects of common polyols on the antimicrobial activities of general preservatives were measured. BG and PG significantly (p < 0.05) increased the activities of preservatives, whereas glycerin had little influence. Regression analysis of the results with S. aureus indicated that adding 1% of PG increased the activities of preservatives by up to 2.1~8.4%, and BG improved them by up to 1.8~8.4%. Challenge tests on oil-in-water lotions and creams showed that BG and PG improved the efficacy of the preservative systems by up to 40% at concentrations of 5.5~9.9%, whereas glycerin had little effect. The measured rates of improvement were analogous to the inferences from the regression analysis. It can be concluded that, with precise control of polyols, it is possible to reduce total chemical preservatives by up to 40% and consequently improve the safety and sensory quality of cosmetics. Furthermore, using this approach, low-preservative, paraben-free, and even preservative-free systems can be expected in the near future.

Behavior of Steel Fiber-Reinforced Concrete Exterior Connections under Cyclic Loads (반복하중을 받는 강섬유 보강 철근콘크리트 외부 접합부의 거동 특성)

  • Kwon, Woo-Hyun;Kim, Woo-Suk;Kang, Thomas H.K.;Hong, Sung-Gul;Kwak, Yoon-Keun
    • Journal of the Korea Concrete Institute / v.23 no.6 / pp.711-722 / 2011
  • Beam-column gravity or intermediate moment frames subjected to unexpectedly large displacements are vulnerable when no seismic details are provided, which is typical. Conversely, the economic efficiency of such frames decreases if unnecessary special detailing is applied, because beam and column sizes become quite large and joint transverse reinforcement causes steel congestion in beam-column connections. In Korea, moderate seismic design is used for the beam-column connections of buildings with structural walls, and these connections may be damaged when an unexpectedly large earthquake occurs. Nonetheless, the performance of such beam-column connections may be substantially improved by the addition of steel fibers. This study was conducted to investigate the effect of steel fibers in reinforced concrete exterior beam-column connections and the possibility of replacing some of the joint transverse reinforcement. Ten half-scale beam-column connections with non-seismic details were tested under cyclic loads, with two cycles at each drift level for up to 19 cycles. The main test parameters were the volume ratio of steel fibers (0%, 1%, 1.5%) and the amount of joint transverse reinforcement. The test results show that maximum capacity, energy dissipation capacity, shear strength, and bond condition are improved when steel fibers are applied to substitute for transverse reinforcement in beam-column connections. Furthermore, several shear strength equations for exterior connections were examined, including a proposed equation for steel fiber-reinforced concrete exterior connections with non-seismic details.

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors began in Korea during the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not recidivate as a recidivist is lower than the cost of misclassifying a person who will recidivate as a non-recidivist: the former only adds monitoring costs, while the latter incurs substantial social and economic costs. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, defined as the weighted average of the false negative error (FNE) and the false positive error (FPE). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other prediction models but also reduced the misclassification cost most effectively.
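The threshold-optimization step described in this abstract can be illustrated with a short sketch: fit an XGBoost classifier, then scan candidate thresholds for the one that minimizes a weighted misclassification cost. The data, the cost weights (fn_cost, fp_cost), and the hyperparameters below are hypothetical placeholders, not the paper's actual settings.

```python
# Sketch of cost-sensitive threshold selection for a binary recidivism classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

def total_misclassification_cost(y_true, y_prob, threshold, fn_cost, fp_cost):
    """Weighted sum of false negatives and false positives at a given threshold."""
    y_pred = (y_prob >= threshold).astype(int)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed recidivists (costly)
    fp = np.sum((y_true == 0) & (y_pred == 1))  # extra monitoring (cheaper)
    return fn_cost * fn + fp_cost * fp

# Hypothetical data: X (features), y (1 = recidivated, 0 = did not).
X, y = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

# Search for the threshold that minimizes the asymmetric total cost
# (FN weighted heavier than FP, mirroring the paper's cost structure).
thresholds = np.linspace(0.05, 0.95, 91)
costs = [total_misclassification_cost(y_te, prob, t, fn_cost=5.0, fp_cost=1.0)
         for t in thresholds]
best_t = thresholds[int(np.argmin(costs))]
print(f"cost-minimizing threshold: {best_t:.2f}")
```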

A Method of Reproducing the CCT of Natural Light using the Minimum Spectral Power Distribution for each Light Source of LED Lighting (LED 조명의 광원별 최소 분광분포를 사용하여 자연광 색온도를 재현하는 방법)

  • Yang-Soo Kim;Seung-Taek Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services / v.24 no.2 / pp.19-26 / 2023
  • Humans have adapted and evolved under natural light. However, as people in modern times stay indoors longer, problems of disturbed biorhythms have emerged. To address this, research is being conducted on lighting that reproduces the correlated color temperature (CCT) of natural light, which varies from sunrise to sunset. To reproduce the CCT of natural light, a luminaire is built from multiple LED light sources with different CCTs; a control index DB is then constructed by measuring and collecting the optical characteristics of input-current combinations for each light source over hundreds to thousands of steps, and the lighting is controlled through a characteristic-matching method using this DB. The problem with this control method is that the finer the input-current steps, the greater the time and economic cost. In this paper, an LED lighting control method is proposed that reproduces the CCT of natural light by applying interpolation and combination calculations to a minimal set of spectral power distribution (SPD) measurements for each light source. First, minimal SPD information was measured and collected for each of the five channels of an LED luminaire composed of light-source channels with different CCTs and a 256-step input-current control function per channel. Interpolation was then used to generate 256-step SPDs for each channel, and the SPDs of all control combinations of the luminaire were generated by combining the per-channel SPDs. Illuminance and CCT were calculated from the generated SPDs, a control index DB was constructed, and the CCT of natural light was reproduced through a matching technique. In the performance evaluation, the proposed method reproduced the CCT of natural light within an average error rate of 0.18% while satisfying the recommended indoor illuminance standard.
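A minimal sketch of the interpolation-and-combination idea is shown below: per-channel SPDs for all 256 current steps are interpolated from a few measured reference SPDs and then summed across channels. The wavelength grid, the reference steps, the random SPD data, and the helper names (interpolate_channel_spds, combined_spd) are assumptions for illustration; the CCT/illuminance computation from the resulting SPD is only indicated in a comment.

```python
# Sketch: generate per-channel SPDs for every current step from a few measured
# reference SPDs, then sum channels to obtain the SPD of any control combination.
import numpy as np

wavelengths = np.arange(380, 781, 5)   # nm, assumed measurement grid
n_steps = 256

def interpolate_channel_spds(measured_steps, measured_spds):
    """Linearly interpolate an SPD for every current step of one channel.
    measured_steps: e.g. [0, 128, 255]; measured_spds: matching rows of SPD values."""
    measured_spds = np.asarray(measured_spds)            # (n_measured, n_wavelengths)
    spds = np.empty((n_steps, measured_spds.shape[1]))
    for j in range(measured_spds.shape[1]):              # interpolate per wavelength
        spds[:, j] = np.interp(np.arange(n_steps), measured_steps, measured_spds[:, j])
    return spds

def combined_spd(channel_spds, steps):
    """Sum the per-channel SPDs selected by a tuple of current steps."""
    return sum(spd_table[s] for spd_table, s in zip(channel_spds, steps))

# Hypothetical luminaire: 5 channels, each with SPDs measured at a few reference steps.
channel_spds = [interpolate_channel_spds([0, 128, 255],
                                         np.random.rand(3, len(wavelengths)))
                for _ in range(5)]
spd = combined_spd(channel_spds, (10, 200, 50, 0, 255))
# Illuminance and CCT would then be computed from `spd` via the CIE 1931 colour
# matching functions (e.g. with the colour-science package); that step is omitted here.
```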

The Validity Test of Statistical Matching Simulation Using the Data of Korea Venture Firms and Korea Innovation Survey (벤처기업정밀실태조사와 한국기업혁신조사 데이터를 활용한 통계적 매칭의 타당성 검증)

  • An, Kyungmin;Lee, Young-Chan
    • Knowledge Management Research / v.24 no.1 / pp.245-271 / 2023
  • The shift toward a data economy requires new kinds of analysis beyond ordinary research in the management field. Data matching refers to a technique or processing method that combines data sets collected from different samples of the same population. In this study, statistical matching was performed using random hot-deck and Mahalanobis distance functions on the 2020 Survey of Korea Venture Firms and the 2020 Korea Innovation Survey data. Among the variables used for the statistical matching simulation, industry and number of workers were required to match exactly, while region, business power, listed market, and sales were set as common variables. The simulation was verified by mean tests and kernel density comparisons. The analysis confirmed that the statistical matching was appropriate: although there were differences in the mean tests, the kernel densities showed similar patterns. This study attempts to expand the spectrum of research methods by experimenting with a data matching methodology that has not been widely attempted in the management field, and suggests implications in terms of data utilization and diversity.
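The distance hot-deck matching described above can be sketched roughly as follows, assuming the two surveys share the common matching variables. The data frames, column names (e.g. rnd_intensity), and the simplification of skipping the exact-match strata on industry and number of workers are all illustrative assumptions.

```python
# Sketch of distance hot-deck statistical matching with a Mahalanobis metric.
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist

def mahalanobis_match(recipient, donor, common_vars, donate_vars):
    """For each recipient record, copy `donate_vars` from the nearest donor
    record as measured by Mahalanobis distance on the common variables."""
    X_r = recipient[common_vars].to_numpy(dtype=float)
    X_d = donor[common_vars].to_numpy(dtype=float)
    VI = np.linalg.pinv(np.cov(np.vstack([X_r, X_d]).T))   # pooled inverse covariance
    D = cdist(X_r, X_d, metric="mahalanobis", VI=VI)
    nearest = D.argmin(axis=1)
    matched = recipient.copy()
    for col in donate_vars:                                 # donate variables from nearest donor
        matched[col] = donor.iloc[nearest][col].to_numpy()
    return matched

# Hypothetical frames standing in for the venture-firm and innovation surveys.
venture = pd.DataFrame({"region": [1, 2, 3], "sales": [10.2, 5.1, 7.7]})
innovation = pd.DataFrame({"region": [1, 2, 2], "sales": [9.8, 5.0, 8.1],
                           "rnd_intensity": [0.12, 0.05, 0.08]})
fused = mahalanobis_match(venture, innovation,
                          common_vars=["region", "sales"],
                          donate_vars=["rnd_intensity"])
```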

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly used for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to obtain reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometer data is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Although W was recalled perfectly, it contributed heavily to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone. The participating children showed improved concentration and more active reactions to services using our gesture interface. To prove the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service; those who used the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for expanding real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
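The reference-pattern pruning step described in the abstract might be sketched as below for an instance-based (nearest-neighbour) classifier; the class name, thresholds, and the simplified keep/drop rule are assumptions rather than the paper's exact algorithm.

```python
# Sketch of incremental instance-based learning with periodic pruning of
# reference patterns based on their positive and negative contributions.
import numpy as np

class PrunedIBLClassifier:
    def __init__(self, low_pos=1, high_neg=5):
        self.patterns = []     # list of (feature_vector, label) reference patterns
        self.pos_count = []    # times a pattern supported a correct classification
        self.neg_count = []    # times a pattern caused a misclassification
        self.low_pos, self.high_neg = low_pos, high_neg

    def add(self, x, label):
        self.patterns.append((np.asarray(x, dtype=float), label))
        self.pos_count.append(0)
        self.neg_count.append(0)

    def classify(self, x):
        """Return the index and label of the nearest reference pattern."""
        dists = [np.linalg.norm(np.asarray(x, dtype=float) - p) for p, _ in self.patterns]
        i = int(np.argmin(dists))
        return i, self.patterns[i][1]

    def feedback(self, x, true_label):
        """Online update: credit or blame the nearest pattern, then memorise the sample."""
        i, predicted = self.classify(x)
        if predicted == true_label:
            self.pos_count[i] += 1
        else:
            self.neg_count[i] += 1
        self.add(x, true_label)

    def prune(self):
        """Periodically drop patterns with very low positive or high negative
        contribution (a simplified stand-in for the paper's rule)."""
        keep = []
        for i in range(len(self.patterns)):
            used = self.pos_count[i] + self.neg_count[i]
            if used == 0:                              # never consulted yet: keep
                keep.append(i)
            elif self.neg_count[i] >= self.high_neg:   # frequently misleads: drop
                continue
            elif self.pos_count[i] < self.low_pos:     # contributes almost nothing: drop
                continue
            else:
                keep.append(i)
        self.patterns = [self.patterns[i] for i in keep]
        self.pos_count = [self.pos_count[i] for i in keep]
        self.neg_count = [self.neg_count[i] for i in keep]
```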

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The main reason for using a deep learning model in image classification is that it can extract features from each region of an image and consider the relationships between regions within the overall image. However, a CNN model may not be well suited to emotional image data, which often lacks distinctive regional features. To address the difficulty of classifying emotional images, researchers regularly propose CNN-based architectures tailored to such images. Studies on the relationship between color and human emotion have also shown that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training a classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result values after the model classifies an image's emotion; both methods modify the result values based on statistics computed from the colors of the picture. In the first method, the most prevalent two-color combinations are found for all training data, the most prevalent two-color combination is found for each test image, and the result values are corrected according to the distribution of those color combinations. The second method weights the result values obtained after classification using expressions based on log and exponential functions. For the image data we used Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors primarily distributed in each image are identified, and their RGB coordinates are compared with the RGB coordinates of the 16 reference colors; that is, each extracted color is converted to the closest reference color. If combinations of three or more colors are used, too many combinations occur and the distribution becomes scattered, so each combination has little influence on the result value. To avoid this, two-color combinations were used and weighted into the model. Before training, the most prevalent color combinations were found for all training images, and the distribution of color combinations per class was stored in a Python dictionary for use during testing. During testing, the most prevalent two-color combination for each test image is found, its distribution in the training data is checked, and the result is corrected accordingly. Several equations were devised to weight the model's result values based on the extracted colors as described above.
The data set was randomly split 80:20, and the model was verified using the 20% held out as a test set. The remaining 80% of the data was split into five folds for 5-fold cross-validation, so the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when the CNN architecture was used alone.
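The color-based correction described above can be sketched roughly as follows: K-means extracts the dominant colors of an image, they are snapped to a fixed palette, and the CNN's class probabilities are re-weighted by how often that color pair appeared per class in training. The palette subset, helper names, and the simple multiplicative weighting stand in for the paper's log/exponential expressions and are assumptions.

```python
# Sketch of the two-stage idea: dominant-color extraction plus probability re-weighting.
import numpy as np
from sklearn.cluster import KMeans

PALETTE = {   # RGB anchors for a subset of the 16 named colours used in the study
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "black": (0, 0, 0),
    "white": (255, 255, 255), "gray": (128, 128, 128),
}

def dominant_color_pair(image_rgb, n_clusters=7):
    """Cluster pixels with K-means, map cluster centres to the nearest palette
    colour, and return the two most frequent distinct palette colours."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    names = []
    for center in km.cluster_centers_[np.argsort(-counts)]:   # largest clusters first
        dists = {name: np.linalg.norm(center - np.array(rgb)) for name, rgb in PALETTE.items()}
        names.append(min(dists, key=dists.get))
    seen = []
    for n in names:
        if n not in seen:
            seen.append(n)
        if len(seen) == 2:
            break
    return tuple(seen)

def reweight(probs, color_pair, pair_freq_per_class):
    """Boost classes whose training images often showed this colour pair
    (illustrative multiplicative weighting, not the paper's exact formula)."""
    freq = np.array([pair_freq_per_class[c].get(color_pair, 0.0) for c in range(len(probs))])
    adjusted = probs * (1.0 + freq)
    return adjusted / adjusted.sum()
```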

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH family. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimate the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted by 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. When forecasting KOSPI 200 Index return volatility, the MSE metric showed better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are still meaningful because the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return; MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is not fully realistic because we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine learning-based GARCH models can give better information for stock market investors.
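As a rough illustration of the approach, the sketch below regresses a next-day variance proxy on a lagged squared return and a lagged variance term with SVR, mimicking the GARCH(1,1) recursion, and applies the IVTS entry rule. The rolling-variance proxy, the SVR hyperparameters, and the random return series standing in for KOSPI 200 data are all assumptions.

```python
# Sketch of an SVR-based GARCH(1,1)-style volatility forecast and the IVTS entry rule.
import numpy as np
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="linear"):
    """Regress next-day realised variance on lagged squared return and lagged
    variance, mimicking sigma^2_t = w + a*r^2_{t-1} + b*sigma^2_{t-1}."""
    r2 = returns ** 2
    var_proxy = np.convolve(r2, np.ones(5) / 5, mode="valid")   # crude rolling variance proxy
    X = np.column_stack([r2[4:-1], var_proxy[:-1]])             # r^2_{t-1}, sigma^2_{t-1}
    y = var_proxy[1:]                                           # sigma^2_t
    model = SVR(kernel=kernel, C=10.0, epsilon=1e-6).fit(X, y)
    return model.predict(X[-1:])[0]                             # one-step-ahead variance

def ivts_signal(vol_forecast_tomorrow, vol_today):
    """IVTS entry rule: buy volatility if forecast to rise, sell if forecast
    to fall, otherwise hold the existing position."""
    if vol_forecast_tomorrow > vol_today:
        return "buy"
    if vol_forecast_tomorrow < vol_today:
        return "sell"
    return "hold"

# Hypothetical daily returns standing in for KOSPI 200 closing-value returns.
rets = np.random.normal(0, 0.01, 300)
forecast_var = svr_garch_forecast(rets)
print(ivts_signal(np.sqrt(forecast_var), rets[-30:].std()))
```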