• Title/Summary/Keyword: Model Performance Evaluation (모델 성능 평가)

Search Results: 3,577

Gold Recovery from Cyanide Solution through Biosorption, Desorption and Incineration with Waste Biomass of Corynebacterium glutamicum as Biosorbent (생체흡착, 탈착 및 회화를 이용한 시안 용액으로부터 금의 회수)

  • Bae, Min-A; Kwak, In-Seob; Won, Sung-Wook; Yun, Yeoung-Sang
    • Clean Technology / v.16 no.2 / pp.117-123 / 2010
  • In this study, we propose two methods for recovering different types of gold from gold-cyanide solutions: a biosorption-desorption process for mono-valent gold recovery and a biosorption-incineration process for zero-valent gold recovery. Waste bacterial biomass of Corynebacterium glutamicum generated by the amino acid fermentation industry was used as the biosorbent. pH edge experiments indicated that the optimal pH range was 2-3. From an isothermal experiment fitted with the Langmuir equation, the maximum uptake capacity of Au(I) at pH 2.5 was determined to be 35.15 mg/g. Kinetic tests showed that the process is very fast, with biosorption equilibrium reached within 60 min. To recover Au(I), the gold ions were successfully eluted from the Au-loaded biosorbent by raising the pH to 7, with a desorption efficiency of 91%, indicating that the combined biosorption-desorption process is effective for Au(I) recovery. To recover zero-valent gold, the Au-loaded biosorbents were incinerated; the zero-valent gold content of the incineration ash was as high as 85%. On the basis of these results, we conclude that the two combined processes can be used to recover gold from cyanide solutions and can be chosen according to the type of gold to be recovered.
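For reference, the Langmuir fit mentioned in this abstract has the standard isotherm form (a textbook expression, not an equation quoted from the paper), where q_e is the equilibrium uptake (mg/g), C_e the equilibrium Au(I) concentration, q_max the maximum uptake capacity (reported above as 35.15 mg/g), and b the affinity constant:

    \[ q_e = \frac{q_{\max}\, b\, C_e}{1 + b\, C_e} \]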

A Study on the Prediction of Mortality Rate after Lung Cancer Diagnosis for Men and Women in 80s, 90s, and 100s Based on Deep Learning (딥러닝 기반 80대·90대·100대 남녀 대상 폐암 진단 후 사망률 예측에 관한 연구)

  • Kyung-Keun Byun; Doeg-Gyu Lee; Se-Young Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.2 / pp.87-96 / 2023
  • Research on predicting disease treatment outcomes using deep learning technology has recently become active in the medical community. However, previous studies selected small patient datasets and specific deep learning algorithms, showing meaningful results only under particular conditions. In this study, to generalize those results, the patient population was expanded and subdivided to predict mortality after lung cancer diagnosis for men and women in their 80s, 90s, and 100s. Using large-scale medical information from the Health Insurance Review and Assessment Service and AutoML, which provides various algorithms, five models (Decision Tree, Random Forest, Gradient Boosting, XGBoost, and Logistic Regression) were built to predict mortality rates for 84 months after lung cancer diagnosis. The results showed that men in their 80s and 90s had a higher predicted mortality rate than women, while women in their 100s had a higher predicted mortality rate than men. The factor with the greatest influence on the mortality rate was found to be the treatment period.
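A minimal sketch of the kind of multi-model comparison described in this abstract, using the five model families named above on a synthetic dataset; the data, hyperparameters, and evaluation metric are illustrative assumptions, not the authors' actual AutoML pipeline:

    # Compare the five model families named in the abstract on synthetic data.
    # Illustrative sketch only; not the paper's pipeline or claims dataset.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier

    # Hypothetical stand-in for the medical claims data
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    models = {
        "DecisionTree": DecisionTreeClassifier(max_depth=5, random_state=0),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
        "GradientBoosting": GradientBoostingClassifier(random_state=0),
        "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
        "LogisticRegression": LogisticRegression(max_iter=1000),
    }

    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name:18s} AUC = {auc:.3f}")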

Reduction of Inference time in Neuromorphic Based Platform for IoT Computing Environments (IoT 컴퓨팅 환경을 위한 뉴로모픽 기반 플랫폼의 추론시간 단축)

  • Kim, Jaeseop; Lee, Seungyeon; Hong, Jiman
    • Smart Media Journal / v.11 no.2 / pp.77-83 / 2022
  • The neuromorphic architecture uses a spiking neural network (SNN) model that derives more accurate results as more spike values are accumulated through inference experiments. Once the inference result converges to a specific value, further inference produces only small changes in the result while power consumption continues to increase. In an AI-based IoT environment in particular, this power consumption can be a serious problem. Therefore, in this paper, we propose a technique that reduces the power consumption of AI-based IoT devices by shortening the inference time through adjustment of the inference image exposure time in a neuromorphic architecture environment. The proposed technique calculates the next image exposure time by reflecting the change in inference accuracy. The degree to which the accuracy change is reflected can be adjusted with a coefficient value, and an optimal coefficient value is found through comparison experiments over various coefficient values. With the proposed technique, the image exposure time corresponding to the target accuracy is greater than with the linear technique, but the overall power consumption is lower. Performance measurements confirm that inference experiments using the proposed method can reduce the final exposure time by about 90% compared to experiments using the linear method.
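A hedged sketch of an exposure-time schedule driven by the change in inference accuracy, as described above; the specific update rule, coefficient, and units are assumptions for illustration, since the abstract does not give the paper's formula:

    # Assumed update rule: grow the exposure time by a step that shrinks
    # as the accuracy gain between experiments shrinks.
    def next_exposure_time(t_current, acc_prev, acc_curr, coeff, min_step=1.0):
        """Compute the next exposure time from the change in accuracy."""
        delta = max(acc_curr - acc_prev, 0.0)   # change in inference accuracy
        step = max(min_step, coeff * delta * t_current)
        return t_current + step

    # Example: as accuracy gains shrink, later exposure steps get smaller
    t = 100.0
    for prev, curr in [(0.60, 0.75), (0.75, 0.82), (0.82, 0.84)]:
        t = next_exposure_time(t, prev, curr, coeff=5.0)
        print(f"next exposure time: {t:.1f} (arbitrary units)")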

ViscoElastic Continuum Damage (VECD) Finite Element (FE) Analysis on Asphalt Pavements (아스팔트 콘크리트 포장의 선형 점탄성 유한요소해석)

  • Seo, Youngguk; Bak, Chul-Min; Kim, Y. Richard; Im, Jeong-Hyuk
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.6D / pp.809-817 / 2008
  • This paper deals with the development of the ViscoElastic Continuum Damage Finite Element Program (VECD-FEP++) and its verification against results from both field and laboratory accelerated pavement tests. Damage characteristics of the asphalt concrete mixtures were defined by Schapery's work potential theory, and uniaxial constant-crosshead-rate tests were carried out to implement the damage model. VECD-FEP++ predictions were compared with strain responses (longitudinal and transverse strains) under moving wheel loads running at different constant speeds. To this end, an asphalt pavement section (A5) of the Korea Expressway Corporation Test Road (KECTR), instrumented with strain gauges, was loaded with a dump truck. Also, a series of accelerated pavement fatigue tests was conducted on pavement sections surfaced with four asphalt concrete mixtures (dense-graded, SBS, Terpolymer, CR-TB). Planar strain responses were in good agreement with field measurements at the base layers, whereas strains at the surface and intermediate layers differed from the simulation results owing to the complexity of tire-road contact pressures. Finally, the fatigue characteristics of the four asphalt mixtures were reasonably described by VECD-FEP++.
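For context, Schapery-type viscoelastic continuum damage formulations commonly express damage growth with a rate law of the following form (a standard expression from the VECD literature, not an equation quoted from this paper), where S is the internal damage variable, W^R the pseudo-strain energy density, and alpha a material constant:

    \[ \frac{dS}{dt} = \left( -\frac{\partial W^{R}}{\partial S} \right)^{\alpha} \]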

Review on Rock-Mechanical Models and Numerical Analyses for the Evaluation on Mechanical Stability of Rockmass as a Natural Barrier (천연방벽 장기 안정성 평가를 위한 암반역학적 모델 고찰 및 수치해석 검토)

  • Myung Kyu Song; Tae Young Ko; Sean S. W. Lee; Kunchai Lee; Byungchan Kim; Jaehoon Jung; Yongjin Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.445-471 / 2023
  • Long-term safety over millennia is the top priority in the construction of disposal sites. However, ensuring the mechanical stability of a deep geological repository for spent fuel (radioactive waste) during construction and operation is also crucial for its safe operation. Imposing restrictions on tunnel support and lining materials such as shotcrete, concrete, and grouting, which might compromise the sealing performance of the backfill and buffer materials that are essential to the long-term safety of disposal sites, presents a highly challenging task for rock engineers and tunnelling experts. In this study, as part of an extensive investigation to aid the proper selection of disposal sites, analyses were carried out for the anticipated construction of a deep geological repository at a depth of 500 meters in rock whose condition is not yet known. Through a review of 2D and 3D numerical analyses, the study explored the range of rock properties that ensure stability. Preliminary findings identified the range of rock properties that secure the stability of the central and disposal tunnels, while the stability of the vertical tunnel network was confirmed through 3D analysis, outlining the fundamental rock conditions necessary for the construction of disposal sites.

Analysis of Research Trends Related to Drug Repositioning Based on Machine Learning (머신러닝 기반의 신약 재창출 관련 연구 동향 분석)

  • So Yeon Yoo; Gyoo Gun Lim
    • Information Systems Review / v.24 no.1 / pp.21-37 / 2022
  • Drug repositioning, one of the methods of developing new drugs, is a useful way to discover new indications by allowing drugs already approved for human use to be used for other purposes. Recently, with the development of machine learning technology, cases of analyzing vast amounts of biological information and using it to develop new drugs are increasing. Applying machine learning technology to drug repositioning will help quickly find effective treatments. Currently, the world is struggling with a new disease caused by a coronavirus (COVID-19), a severe acute respiratory syndrome. Drug repositioning, which repurposes drugs that have already been clinically approved, could be an alternative route to therapeutics for treating COVID-19 patients. This study examines research trends in the field of drug repositioning using machine learning techniques. A total of 4,821 papers were collected from PubMed with the keyword 'Drug Repositioning' using web scraping. After data preprocessing, frequency analysis, LDA-based topic modeling, random forest classification analysis, and prediction performance evaluation were performed on 4,419 papers. Associated words were analyzed based on the Word2vec model; after PCA dimensionality reduction, K-Means clustering was applied to generate labels, and the structure of the literature was visualized using the t-SNE algorithm. Hierarchical clustering was applied to the LDA results and visualized as a heat map. This study identified the research topics related to drug repositioning and presented a method for deriving and visualizing meaningful topics from a large body of literature using machine learning algorithms. The results are expected to serve as basic data for establishing research or development strategies in the field of drug repositioning.
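A minimal sketch of an LDA topic-modeling step like the one described above, run on a tiny hypothetical corpus rather than the paper's PubMed dataset; the texts, vectorizer settings, and number of topics are illustrative assumptions:

    # Illustrative LDA topic modeling on a toy corpus (not the paper's data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [
        "drug repositioning candidates predicted with random forest models",
        "network based repurposing of approved drugs for covid 19",
        "word embeddings cluster biomedical literature on drug targets",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(abstracts)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(X)

    # Print the top terms of each learned topic
    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-5:][::-1]]
        print(f"Topic {k}: {', '.join(top)}")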

A Study on 3-Dimensional Near-Field Source Localization Using Interference Pattern Matching in Shallow Water Environments (천해에서 간섭패턴 정합을 이용한 근거리 음원의 3차원 위치추정 기법연구)

  • Kim, Se-Young; Chun, Seung-Yong; Son, Yoon-Jun; Kim, Ki-Man
    • The Journal of the Acoustical Society of Korea / v.28 no.4 / pp.318-327 / 2009
  • In this paper, we propose a 3-D geometric localization method for a near-field broadband source in shallow water environments. According to waveguide invariant theory, the slope of the interference pattern seen in a sensor spectrogram is directly proportional to the range of the source. The relative ratio of the ranges between the source and two sensors is estimated by matching the two interference patterns in the spectrograms. This ratio is then applied to the Apollonius circle, which is the locus of source positions whose range ratio from the two sensors is constant. Two Apollonius circles obtained from three sensors intersect at a point that gives the horizontal range and azimuth angle of the source, and this intersection point is independent of source depth. The source depth can therefore be estimated using a 3-D hyperboloid equation whose range difference from two sensors is constant. To evaluate the performance of the proposed localization algorithm, simulations were performed using an acoustic propagation program and the localization error was analyzed. The simulation results show range and depth errors within 50 m and 15 m, respectively.
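The Apollonius-circle relation used above can be stated compactly (a standard geometric result, not a formula quoted from the paper): for two sensors at positions x_1 and x_2 and an estimated range ratio k = r_1/r_2 with k != 1, the locus of candidate source positions p satisfies

    \[ \frac{\lVert \mathbf{p}-\mathbf{x}_1 \rVert}{\lVert \mathbf{p}-\mathbf{x}_2 \rVert} = k
       \;\Longrightarrow\;
       \left\lVert \mathbf{p} - \frac{\mathbf{x}_1 - k^{2}\mathbf{x}_2}{1-k^{2}} \right\rVert
       = \frac{k\,\lVert \mathbf{x}_1-\mathbf{x}_2 \rVert}{\lvert 1-k^{2} \rvert} \]

which is a circle in the horizontal plane. Intersecting two such circles from three sensors yields the horizontal position, and the depth then follows from the constant-range-difference hyperboloid described in the abstract.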

Low-cost Prosthetic Hand Model using Machine Learning and 3D Printing (머신러닝과 3D 프린팅을 이용한 저비용 인공의수 모형)

  • Donguk Shin; Hojun Yeom; Sangsoo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.19-23 / 2024
  • Patients with amputations of both hands need prosthetic hands that serve both cosmetic and functional purposes, and research on prosthetic hands driven by electromyography of the remaining muscles is active, but high cost remains a problem. In this study, a prosthetic hand was built and its performance evaluated using low-cost parts and software: a surface electromyography sensor, the machine learning software Edge Impulse, an Arduino Nano 33 BLE, and 3D printing. Signals acquired with the surface electromyography sensor were digitally processed in Edge Impulse and used to train a machine learning model that determines the type of finger movement; the classified flexion signals for each finger were then transmitted to the corresponding fingers of the prosthetic hand model. When the digital signal processing conditions were set to a 60 Hz notch filter, a 10-300 Hz bandpass filter, and a sampling frequency of 1,000 Hz, machine learning accuracy was highest at 82.1%. Confusion between finger flexion movements was greatest for the ring finger, which had a 44.7% chance of being confused with the movement of the index finger. More research is needed to successfully develop a low-cost prosthetic hand.
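A minimal sketch of the preprocessing settings quoted above (60 Hz notch, 10-300 Hz band-pass, 1 kHz sampling) using SciPy; the filter orders, Q factor, and synthetic test signal are assumptions, not details taken from the paper:

    # EMG preprocessing with the abstract's stated filter settings.
    # Filter design details (order, Q) are illustrative assumptions.
    import numpy as np
    from scipy.signal import iirnotch, butter, filtfilt

    fs = 1000.0                                   # sampling frequency (Hz)
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)          # 60 Hz notch
    b_bp, a_bp = butter(N=4, Wn=[10.0, 300.0], btype="bandpass", fs=fs)  # 10-300 Hz

    def preprocess_emg(raw):
        """Apply notch filtering, then band-pass filtering, to a raw EMG segment."""
        x = filtfilt(b_notch, a_notch, raw)
        return filtfilt(b_bp, a_bp, x)

    # Example on a synthetic 1-second segment containing 60 Hz interference
    t = np.arange(1000) / fs
    raw = np.random.randn(1000) + 0.5 * np.sin(2 * np.pi * 60 * t)
    filtered = preprocess_emg(raw)
    print(filtered.shape)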

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae; Park, Eunbi; Han, Kiwoong; Lee, Junghyun; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The main reason for using a deep learning model in image classification is that the relationships between regions can be considered by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features, and researchers regularly propose CNN-based architectures tailored to emotion images to address this difficulty. Studies on the relationship between color and human emotion have also shown that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves emotion classification accuracy compared with training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both methods modify the result value using statistics based on the colors of the picture. Before training, the two-color combination most distributed in each training image was found, and the distribution of color combinations per class was stored in a Python dictionary for use at test time. During testing, the two-color combination most distributed in each test image was found, its distribution over the training data was checked, and the result value was corrected accordingly; the correction weights the model's output through expressions based on the log and exponential functions. Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories, were used as the image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, sixteen reference colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors primarily distributed in each image were identified, and their RGB coordinates were compared with those of the sixteen reference colors so that each was converted to the closest reference color. If three or more colors were combined, too many combinations occurred and the distribution became scattered, so each combination had little influence on the result value; to avoid this, two-color combinations were used and weighted into the model. Several equations were devised to weight the model's result value based on the extracted colors, as described above.
The data set was randomly split 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with different validation folds. Finally, performance was checked on the previously held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when the CNN architecture was used alone.
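A minimal sketch of the dominant-color step described above: cluster an image's pixels with K-Means and snap each cluster center to the nearest of the sixteen reference colors; the RGB values of the palette and the stand-in image are illustrative assumptions, not the paper's exact coordinates or data:

    # Dominant-color extraction sketch: K-Means over pixels, then map each
    # cluster center to the nearest of 16 reference colors (assumed RGB values).
    import numpy as np
    from sklearn.cluster import KMeans

    PALETTE = {
        "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
        "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
        "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
        "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
        "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
        "black": (0, 0, 0),
    }

    def dominant_color_pair(image, n_clusters=7):
        """Return the two palette colors covering the most pixels in an RGB image."""
        pixels = image.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        counts = np.bincount(km.labels_, minlength=n_clusters)
        # snap each cluster center to the nearest reference color
        names = []
        for center in km.cluster_centers_:
            dists = {name: np.linalg.norm(center - np.array(rgb)) for name, rgb in PALETTE.items()}
            names.append(min(dists, key=dists.get))
        top = []
        for i in np.argsort(-counts):          # clusters sorted by pixel count
            if names[i] not in top:
                top.append(names[i])
            if len(top) == 2:
                break
        return tuple(top)

    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
    print(dominant_color_pair(img))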

Analysis of Stability Indexes for Lightning by Using Upper Air Observation Data over South Korea (남한에서 낙뢰발생시 근접 고층기상관측 자료를 이용한 안정도 지수 분석)

  • Eom, Hyo-Sik; Suh, Myoung-Seok
    • Atmosphere / v.20 no.4 / pp.467-482 / 2010
  • In this study, characteristics of various stability indexes (SI) and environmental parameters (EP) for lightning are analysed using data from five upper air observatories (Osan, Gwangju, Jeju, Pohang, and Baengnyeongdo) over South Korea for the years 2002-2006. The analysed SI and EP are the lifted index, K-index, Showalter stability index, total precipitable water, mixing ratio, wind shear, and temperature of the lifting condensation level. Lightning data occurring within -2 hr to +1 hr of the rawinsonde launch time and within 100 km of the observing location were selected. In general, the summer-averaged temperature and mixing ratio of the lower troposphere for lightning cases are higher than for no-lightning cases by about 1 K and 1~2 g kg⁻¹, respectively. Box-Whisker plots show that the ranges of SI and EP values for lightning and no-lightning cases are well separated, but the overlap between the two is not small. The optimized threshold values for the detection of lightning are determined objectively based on the highest Heidke skill score (HSS), a verification measure well suited to rare events such as lightning, by simulating a range of SI and EP threshold values. Although the HSS is not high (0.15~0.30) and the number and values of the selected SI and EP depend on geographic location, the new threshold values can be used as a supplementary tool for the detection or forecast of lightning over South Korea.
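For reference, the Heidke skill score used for the threshold optimization above has the standard 2x2 contingency-table form (textbook definition, not a formula quoted from the paper), with a = hits, b = false alarms, c = misses, and d = correct negatives:

    \[ \mathrm{HSS} = \frac{2\,(ad - bc)}{(a+c)(c+d) + (a+b)(b+d)} \]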