• Title/Summary/Keyword: random potential

A Study on the dose distribution produced by $^{32}$P source form in treatment for inhibiting restenosis of coronary artery (관상동맥 재협착 방지를 위한 치료에서 $^{32}$P 핵종의 선원 형태에 따른 선량분포에 관한 연구)

  • 김경화;김영미;박경배
    • Progress in Medical Physics
    • /
    • v.10 no.1
    • /
    • pp.1-7
    • /
    • 1999
  • In this study, the dose distributions of a $^{32}$P uniform cylindrical volume source and a surface source, a pure $\beta$ emitter, were calculated in order to obtain information relevant to the use of a balloon catheter and a radioactive stent. The dose distributions of $^{32}$P were calculated by means of the EGS4 code system. The sources are assumed to be distributed uniformly in the volume and on the surface of a cylinder with a radius of 1.5 mm and a length of 20 mm. The energy of the emitted $\beta$ particles is chosen at random from the $\beta$ energy spectrum evaluated by solving the Dirac equation for the Coulomb potential. Liquid water is used to simulate particle transport in the human body. The dose rates in a target at a 0.5 mm radial distance from the surface of the cylindrical volume source and the surface source are 12.133 cGy/s per GBq (0.449 cGy/s per mCi, uncertainty: 1.51%) and 24.732 cGy/s per GBq (0.915 cGy/s per mCi, uncertainty: 1.01%), respectively. The dose rates for both sources decrease with distance in both the radial and axial directions. On the basis of the above results, the determined initial activities were 29.69 mCi for the balloon catheter and 1.2278 $\mu$Ci for the radioactive stent using the $^{32}$P isotope. The total absorbed dose for an optimal therapeutic regimen is considered to be 20 Gy, and the treatment time in the case of the balloon catheter is less than 3 min. Absorbed doses in targets placed in the radial direction were also calculated for the two sources when the initial activity was expressed as a volume activity density of 1 mCi/ml for the cylindrical volume source and an area activity density of 0.1 mCi/cm$^2$ for the surface source. The absorbed dose distribution around a $^{32}$P cylindrical source of a different size can easily be calculated using our results when the volume activity density and area activity density of the source are known.
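
As a quick plausibility check on the treatment time quoted above, the dose-rate constant (0.449 cGy/s per mCi) and the reported initial activity (29.69 mCi) are enough to reproduce the under-3-minute figure. The short Python sketch below is only an editorial illustration of that arithmetic (decay of $^{32}$P over a few minutes is neglected), not part of the original EGS4 calculation.

```python
# Rough cross-check of the balloon-catheter treatment time quoted in the abstract,
# using only the dose-rate constant and initial activity reported there.

DOSE_RATE_PER_MCI = 0.449       # cGy/s per mCi at 0.5 mm from the cylindrical volume source
INITIAL_ACTIVITY_MCI = 29.69    # mCi, reported initial activity for the balloon catheter
TARGET_DOSE_CGY = 20 * 100      # 20 Gy therapeutic dose expressed in cGy

dose_rate = DOSE_RATE_PER_MCI * INITIAL_ACTIVITY_MCI   # cGy/s at the target
treatment_time_s = TARGET_DOSE_CGY / dose_rate         # seconds (P-32 decay over minutes is negligible)

print(f"Dose rate at target: {dose_rate:.2f} cGy/s")
print(f"Treatment time     : {treatment_time_s:.0f} s (~{treatment_time_s / 60:.1f} min)")
# ~150 s, i.e. under 3 minutes, consistent with the abstract.
```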

The Evaluation of UV-induced Mutation of the Microalgae, Chlorella vulgaris in Mass Production Systems (자외선에 의해 유도된 Chlorella vulgaris 돌연변이 균주의 대량 생산 시스템에서의 평가)

  • Choi, Tae-O;Kim, Kyong-Ho;Kim, Gun-Do;Choi, Tae-Jin;Jeon, Young Jae
    • Journal of Life Science
    • /
    • v.27 no.10
    • /
    • pp.1137-1144
    • /
    • 2017
  • The microalga Chlorella vulgaris has been considered an important alternative resource for biodiesel production. However, its industrial-scale production has been constrained by low biomass and lipid productivity. To overcome this problem, we isolated and characterized a potentially economical oleaginous strain of C. vulgaris via random mutagenesis using UV irradiation. Two types of mass production systems were compared for their biomass yield and lipid content. Among the several putatively oleaginous strains isolated, the mutant strain designated UBM1-10 in the laboratory showed an approximately 1.5-fold higher cell yield and lipid content than the wild type. Based on these results, UBM1-10 was selected and cultivated under outdoor conditions using two different types of reactors, a tubular-type photobioreactor (TBPR) and an open pond-type reactor (OPR). The mutant strain cultivated in the TBPR reached a cell concentration more than 5 times higher ($2.6g\;l^{-1}$) than that of the strain cultured in the OPR ($0.5g\;l^{-1}$). After the mass cultivation, the cells of UBM1-10 and the parental strain were further investigated for crude lipid content and composition. The results indicate an approximately 3-fold higher crude lipid content in UBM1-10 (0.3%, w/w) compared with the parent strain (0.1%, w/w). Therefore, this study demonstrated that the economic potential of C. vulgaris as a biodiesel production resource can be increased by the choice of reactor type as well as by the strategic mutant isolation technique.

A Study on the Reliability Analysis and Risk Assessment of Liquefied Natural Gas Supply Utilities (천연가스 공급설비에 대한 기기신뢰도 분석 및 위험성 평가)

  • Ko, Jae-Sun;Kim, Hyo
    • Fire Science and Engineering
    • /
    • v.17 no.1
    • /
    • pp.8-20
    • /
    • 2003
  • Natural gas has been supplied through underground pipelines and valve stations as a new city gas in Seoul. In contrast to its handiness, natural gas carries very substantial hazards from fires and explosions caused by careless handling or malfunctions of the transport system. The main objectives of this study are to identify major hazards and to perform risk assessments after assessing the reliabilities of the component units of typical pipeline networks. Therefore, two methods, fault tree analysis and event tree analysis, are used here. Valve stations were selected at random and their locations were taken into account. The probabilities of small leakage, large rupture, and no supply of liquefied natural gas were estimated as top events. By this calculation, the values for small leakage are 3.29 in the DC valve station and 1.41 in the DS valve station, those for large rupture are $1.90\times10^{-2}$ in the DC valve station and $2.32\times10^{-2}$ in the DS valve station, and those for no supply of LNG to the civil gas company are $2.33\times10^{-2}$ and $2.89\times10^{-2}$ in the respective valve stations. Through the minimal cut sets we can identify the parts that are most important in the overall system. In the DC valve station, one line should be added between basic events 26 and 27 because the potential hazard of these parts has the highest value; if it is added, the failure rate of no supply of LNG is reduced to one fourth. In the DS valve station, the failure rate of basic event 4 accounts for 92% of no supply of LNG; therefore, if the contribution of this part is reduced (one line added), the total failure rate can be decreased to one tenth. This analytical study on risk assessment is very useful for preparing emergency actions or procedures in case of gas accidents around underground pipeline networks and for establishing a resolute gas safety management system for loss prevention in the Seoul metropolitan area.
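
The study's conclusions hinge on how minimal cut sets combine into a top-event probability. The sketch below illustrates that mechanism with the rare-event approximation; the basic-event probabilities and cut sets are hypothetical placeholders, not values taken from the paper.

```python
# Minimal sketch of how a top-event probability is obtained from minimal cut sets
# in a fault tree, using the rare-event approximation. The cut sets and basic-event
# probabilities below are hypothetical placeholders, not values from the study.

basic_event_prob = {1: 1e-3, 2: 5e-4, 3: 2e-3, 4: 1e-2, 26: 4e-3, 27: 4e-3}

# Each minimal cut set is a combination of basic events whose joint failure causes the top event.
minimal_cut_sets = [
    {4},          # single-event cut set: dominates the top-event probability
    {26, 27},     # redundant pair: adding a parallel line lowers this term
    {1, 2, 3},
]

def cut_set_prob(cut_set):
    p = 1.0
    for event in cut_set:
        p *= basic_event_prob[event]
    return p

# Rare-event approximation: P(top) is roughly the sum of the cut-set probabilities.
top_event_prob = sum(cut_set_prob(cs) for cs in minimal_cut_sets)

# A cut set's share of the top-event probability identifies dominant contributors
# (the role played by basic event 4 in the DS valve station of the study).
for cs in minimal_cut_sets:
    share = cut_set_prob(cs) / top_event_prob
    print(f"cut set {sorted(cs)}: contribution {share:.1%}")
print(f"Top-event probability ~ {top_event_prob:.2e}")
```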

Estimation of Annual Trends and Environmental Effects on the Racing Records of Jeju Horses (제주마 주파기록에 대한 연도별 추세 및 환경효과 분석)

  • Lee, Jongan;Lee, Soo Hyun;Lee, Jae-Gu;Kim, Nam-Young;Choi, Jae-Young;Shin, Sang-Min;Choi, Jung-Woo;Cho, In-Cheol;Yang, Byoung-Chul
    • Journal of Life Science
    • /
    • v.31 no.9
    • /
    • pp.840-848
    • /
    • 2021
  • This study was conducted to estimate annual trends and environmental effects on the racing records of Jeju horses. The Korean Racing Authority (KRA) collected 48,645 observations for 2,167 Jeju horses from 2002 to 2019. Racing records were preprocessed to eliminate errors that occurred during data collection, and racing times were adjusted to allow comparison across race distances. A stepwise Akaike information criterion (AIC) variable selection method was applied to select appropriate environmental variables affecting the racing records. The annual improvement in race time was -0.242 seconds. The model with the lowest AIC value was obtained when variables were selected in the following order: year, budam classification, jockey ranking, trainer ranking, track condition, weather, age, and gender. The most suitable model was constructed when the jockey ranking and age variables were treated as random effects. Our findings can serve as basic data when building models for evaluating the genetic abilities of Jeju horses.
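
For readers unfamiliar with stepwise AIC selection, the sketch below shows a minimal forward-selection loop of the kind described in the abstract. The column names and synthetic records are assumptions for illustration only; the study's actual variables, data, and mixed-model (random-effect) structure are not reproduced here.

```python
# A minimal forward stepwise selection by AIC, in the spirit of the model-building step
# described above. Column names and the synthetic data are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
records = pd.DataFrame({
    "year": rng.integers(2002, 2020, n),
    "jockey_rank": rng.integers(1, 6, n),
    "trainer_rank": rng.integers(1, 6, n),
    "age": rng.integers(2, 10, n),
})
# Synthetic adjusted race time (s) with a yearly improvement trend, as a stand-in response.
records["race_time"] = (80 - 0.24 * (records["year"] - 2002)
                        - 0.3 * records["age"] + rng.normal(0, 2, n))

candidates = ["year", "jockey_rank", "trainer_rank", "age"]
selected = []
current_aic = smf.ols("race_time ~ 1", data=records).fit().aic   # intercept-only baseline

# Greedily add the variable that lowers AIC the most; stop when no addition helps.
while candidates:
    aics = {var: smf.ols("race_time ~ " + " + ".join(selected + [var]), data=records).fit().aic
            for var in candidates}
    best = min(aics, key=aics.get)
    if aics[best] >= current_aic:
        break
    selected.append(best)
    candidates.remove(best)
    current_aic = aics[best]

print("Selected order:", selected, "| final AIC:", round(current_aic, 1))
```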

Development of a Classification Method for Forest Vegetation on the Stand Level, Using KOMPSAT-3A Imagery and Land Coverage Map (KOMPSAT-3A 위성영상과 토지피복도를 활용한 산림식생의 임상 분류법 개발)

  • Song, Ji-Yong;Jeong, Jong-Chul;Lee, Peter Sang-Hoon
    • Korean Journal of Environment and Ecology
    • /
    • v.32 no.6
    • /
    • pp.686-697
    • /
    • 2018
  • Due to advances in remote sensing technology, it has become easier to obtain high-resolution imagery frequently enough to detect subtle changes over extensive areas, particularly in forest, which is not readily sub-classified. Time-series analysis of high-resolution images requires collecting an extensive amount of ground-truth data. In this study, the potential of the land coverage map as ground-truth data was tested for classifying high-resolution imagery. The study site was Wonju-si in Gangwon-do, South Korea, which has a mix of urban and natural areas. KOMPSAT-3A imagery taken in March 2015 and the land coverage map published in 2017 were used as source data. Two pixel-based classification algorithms, Support Vector Machine (SVM) and Random Forest (RF), were selected for the analysis. Classification of forest only was compared with that of the whole study area except wetland. Confusion matrices from the classification showed that overall accuracies for both targets were higher with the RF algorithm than with SVM. While the overall accuracy in the forest-only analysis was 18.3% higher for RF than for SVM, in the whole-region analysis the difference was smaller, at 5.5%. For the SVM algorithm, adding the Majority analysis process yielded a marginal improvement of about 1% over the normal SVM analysis. The RF algorithm was more effective at identifying broad-leaved forest within the forest, but for the other classes the SVM algorithm was more effective. As only the two pixel-based classification algorithms were tested here, it is expected that future classification will improve the overall accuracy and reliability by introducing a time-series analysis and an object-based algorithm. This approach is considered to contribute to improving large-scale land planning by providing an effective land classification method at higher spatial and temporal scales.
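
The comparison above boils down to training the two pixel-based classifiers on labelled pixels and reading overall accuracy off the confusion matrix. The sketch below shows that workflow with scikit-learn on synthetic stand-in data; the band values, class labels, and parameter choices are assumptions, not the study's KOMPSAT-3A setup.

```python
# Minimal sketch of a pixel-based RF vs. SVM comparison with confusion matrices.
# The synthetic "pixels" stand in for KOMPSAT-3A band values and land-cover labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(3000, 4))                  # 4 spectral bands per pixel (synthetic)
y = rng.integers(0, 3, size=3000)               # 3 land-cover classes (synthetic labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, "overall accuracy:", round(accuracy_score(y_test, pred), 3))
    print(confusion_matrix(y_test, pred))       # per-class breakdown used for the comparison
```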

A Study on Formulation Optimization for Improving Skin Absorption of Glabridin-Containing Nanoemulsion Using Response Surface Methodology (반응표면분석법을 활용한 Glabridin 함유 나노에멀젼의 피부흡수 향상을 위한 제형 최적화 연구)

  • Se-Yeon Kim;Won Hyung Kim;Kyung-Sup Yoon
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.49 no.3
    • /
    • pp.231-245
    • /
    • 2023
  • In the cosmetics industry, it is important to develop new materials for functional cosmetics such as whitening, wrinkle care, anti-oxidation, and anti-aging, as well as technology to increase absorption when applied to the skin. Therefore, in this study, we aimed to optimize a nanoemulsion formulation using response surface methodology (RSM), an experimental design method. A nanoemulsion was prepared by a high-pressure emulsification method using Glabridin as the active ingredient, and finally the skin absorption rate of the optimized nanoemulsion was evaluated. Nanoemulsions were prepared by varying the surfactant content, cholesterol content, oil content, polyol content, high-pressure homogenization pressure, and number of homogenization cycles as RSM factors. Among them, the surfactant content, oil content, homogenization pressure, and number of homogenization cycles, the factors with the greatest influence on particle size, were used as independent variables, and the particle size and skin absorption rate of the nanoemulsion were used as response variables. A total of 29 experiments were conducted in random order, including 5 repetitions of the center point, and the particle size and skin absorption of the prepared nanoemulsions were measured. Based on the results, the formulation with the minimum particle size and maximum skin absorption was optimized; a surfactant content of 5.0 wt%, an oil content of 2.0 wt%, a homogenization pressure of 1,000 bar, and 4 homogenization passes were derived as the optimal conditions. The nanoemulsion prepared under the optimal conditions had a particle size of 111.6 ± 0.2 nm, a PDI of 0.247 ± 0.014, and a zeta potential of -56.7 ± 1.2 mV. Its skin absorption was compared with a conventional emulsion as control: after 24 h, the cumulative absorption of the nanoemulsion was 79.53 ± 0.23%, about 13 percentage points higher than the 66.54 ± 1.45% of the control emulsion.
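
The optimization step in the abstract is a standard response-surface workflow: fit a second-order model to the designed runs and search the factor region for the optimum. The sketch below illustrates this with synthetic run data; the factor coding, run values, and single-response objective (particle size only) are simplifying assumptions, not the study's actual design.

```python
# Minimal response-surface sketch: fit a quadratic model to designed runs and
# search the coded factor region for a particle-size minimum. The 29 runs here
# are synthetic placeholders, not the study's measurements.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Coded factors (-1..+1): surfactant, oil, homogenization pressure, number of passes
X = rng.uniform(-1, 1, size=(29, 4))
y = 120 + 10 * X[:, 0] ** 2 + 5 * X[:, 1] - 8 * X[:, 2] + rng.normal(0, 2, 29)  # particle size (nm), synthetic

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

def predicted_size(x):
    return model.predict(poly.transform(x.reshape(1, -1)))[0]

# Minimize the predicted particle size inside the coded design region [-1, 1]^4.
result = minimize(predicted_size, x0=np.zeros(4), bounds=[(-1, 1)] * 4)
print("Optimal coded factor settings:", np.round(result.x, 2))
print("Predicted minimum particle size:", round(float(result.fun), 1), "nm")
```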

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.523-541
    • /
    • 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four different machine learning models (decision tree, random forest, XGBoost, and SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through the Tree-SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset to approximately 2.5 times its original size. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying the key molecular descriptors highly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
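
The pipeline described above (imputation of missing descriptors, a tree-based QSAR regressor, Tree-SHAP interpretation) can be sketched as follows. The descriptor matrix is synthetic, and scikit-learn's IterativeImputer with a random-forest estimator stands in for MissForest; none of the study's data or exact settings are reproduced.

```python
# Minimal sketch: impute missing molecular descriptors, fit a tree-based QSAR
# regressor, and inspect it with Tree-SHAP. Synthetic data; IterativeImputer with
# a random-forest estimator is used here as a stand-in for MissForest.

import numpy as np
import shap
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))                       # molecular descriptors (synthetic)
y = X[:, 0] * 2 - X[:, 3] + rng.normal(0, 0.1, 200)  # endpoint, e.g. Log P (synthetic)
X[rng.random(X.shape) < 0.2] = np.nan                # ~20% missing descriptor values

# MissForest-style imputation: iterative imputation with a random-forest estimator.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X)

# Tree-based QSAR model on the imputed descriptors.
qsar = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imputed, y)

# Tree-SHAP: which descriptors drive the predicted endpoint.
explainer = shap.TreeExplainer(qsar)
shap_values = explainer.shap_values(X_imputed)
print("Mean |SHAP| per descriptor:", np.round(np.abs(shap_values).mean(axis=0), 3))
```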

Primiparas' Perceptions of Their Delivery Experience and Their Maternal-Infant Interaction: Compared According to Delivery Method (초산모의 분만유형별 분만경험에 대한 지각과 모아상호작용 과정에 관한 연구)

  • 조미영
    • Journal of Korean Academy of Nursing
    • /
    • v.20 no.2
    • /
    • pp.153-173
    • /
    • 1990
  • One of the important tasks for new parents, especially mothers, is to establish warm, mutually affirming interpersonal relationships with the new baby in the family, with the purpose of promoting the healthy development of the child and the wellbeing of the whole family. Nurses assess the quality of the behavioral characteristics of the maternal-infant interaction. This study examined the relationships between primiparas' perceptions of their delivery experience and their maternal-infant interaction. It compared the delivery experience of mothers having a normal vaginal delivery with that of mothers having a cesarean section. The purpose was to explore the relationships between the mother's perceptions of her delivery experience and her maternal-infant interaction. The aim was to contribute to the development of theoretical understanding on which to base care toward promoting the quality of maternal-infant interaction. Data were collected directly by the investigator and a trained associate from Dec. 1, 1987 to March 8, 1988. Subjects were a random sample of 62 mothers, 32 who had a normal vaginal delivery and 30 who had a non-elective cesarean section (but without other perinatal complications) at three general hospitals in Seoul. Instruments used were the Stainton Parent-Infant Interaction Scale (1981) and the Marut and Mercer Perception of Birth Scale (1979). The first observations were made in the delivery room (for vaginally delivered mothers only), followed by day 1, day 2, day 3, and 2 weeks, 4 weeks, 6 weeks and 8 weeks after birth, for a total of 7-8 contacts (cesarean section mothers were observed on days 4 and 5, but these data were not used for analysis). Observations in the hospital were made during the hour prior to scheduled feedings, with the infant placed beside the mother. Later contacts were made at home. Data analysis was done by computer using an SPSS program and included the X² test, paired t-test, t-test, and Pearson correlation coefficient; the results were as follows. 1. Mothers who had a normal vaginal delivery tended to perceive the delivery experience more positively than cesarean section mothers (p=0.002). This finding supported hypothesis I, that perception of delivery would vary according to the method of delivery. Mothers' perceptions of birth were classified into three dimensions: labor, delivery, and the baby. Vaginally delivered mothers perceived the delivery itself significantly more positively (p=0.000), but there were no differences for labor or the baby according to the delivery method (p=0.096, p=0.389). 2. Mothers who had a normal vaginal delivery had higher average maternal-infant interaction scores (p=0.029) than mothers who had a cesarean section. There were similarly higher scores for the 1st day (p=0.042), 2nd day (p=0.009), and 3rd day (p=0.006) after delivery, but not for later times. These findings supported hypothesis Ⅱ, that there would be differences in maternal-infant interaction between mothers having vaginal and cesarean section deliveries. However, these differences decreased over time; by eight weeks the scores for vaginal delivery mothers averaged 8.1 and for cesarean section mothers, 7.9. 3. The more positive the perception of the delivery experience, the higher the maternal-infant interaction score for all subjects (F=.3206, p=.006). These findings supported hypothesis Ⅲ, that there would be correlations between perceptions of delivery and maternal-infant interaction. Maternal-infant interaction was highest when the perception of the baby and of the delivery was positive (r=.4363, p=.000; r=.2881, p=.012). No correlations between perceptions of labor and maternal-infant interaction were found (p=0.062). 4. The daily maternal-infant interaction scores from the initial contact after birth to 8 weeks postpartum had a lowest average of 5.20 and a highest of 7.98 (in a range of 0-10). This group of mothers needed nursing intervention to promote their maternal-infant interaction. The daily maternal-infant interaction scores rose over the period of eight weeks; however, the increases were significant only from the first to the second day (p=0.000) and from the fourth to the sixth week after birth (p=0.000). 5. When the eight items of maternal-infant interaction were evaluated separately, "Expresses feelings about her role as mother" had the highest average score, 1.64 (in a range of 0-3), and "Speaks to baby" the lowest, 0.9. All items, with the possible exception of "Expresses feelings about her role as mother", suggested the subjects' need for nursing intervention to promote maternal-infant interaction. 6. There were positive correlations between certain general characteristics, namely a higher economic status (p=0.002) and breast feeding (p=0.202), and maternal-infant interaction. There were positive correlations between a mother's confidence in her role as a mother and the perception of the birth experience (p=0.004). For mothers who had a cesarean section, a positive perception of the birth experience was related to the duration of the marriage (p=0.010), a wanted pregnancy (p=0.030), and confidence in her role as a mother (p=0.000). Perceptions of birth for mothers who had a normal vaginal delivery were more positive than those for mothers who had a cesarean section, and the level of maternal-infant interaction for mothers delivered vaginally was higher than for cesarean section mothers. The relationship between perception of birth and maternal-infant interaction was confirmed. Cesarean section has an impact on the mother's perceived experience of birth which, in turn, is positively related to maternal-infant interaction. Nursing intervention to enhance maternal-infant interaction should begin in prenatal classes with an exploration of the potential impact of cesarean section on perceptions of the birth experience, and continue throughout the perinatal and post-natal periods to promote the mother's ability to cope with this crisis experience and to mobilize social support. Nursing should help transform a relatively negatively perceived experience into an accepted, positively perceived, and self-affirming experience which enhances the maternal-infant relationship.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions. Such assumptions include linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, and thus overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been applied successfully in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification as much as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, the difficulty in multi-class prediction problems lies in the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, and thus reduce classification accuracy. SVM ensemble learning is one of the machine learning methods for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, Boosting attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor; in this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors over the multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross validation was performed three times with different random seeds in order to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross validation, the entire data set was first partitioned into ten equal-sized sets, and then each set was in turn used as the test set while the classifier was trained on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, results were obtained for the classifiers in each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy among individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly. The results indicate that the performance of MGM-Boost is significantly different from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
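
The key idea behind MGM-Boost, geometric mean-based accuracy, is easy to illustrate in isolation: unlike the arithmetic mean, the geometric mean of per-class recalls collapses when any single class is ignored. The sketch below demonstrates the metric on toy labels; it is not the MGM-Boost algorithm itself.

```python
# Minimal sketch of the geometric-mean accuracy idea that MGM-Boost builds on.
# The labels below are illustrative, not the study's bond-rating data.

import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 2])   # imbalanced three-class example
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0])   # class 2 is always missed

per_class_recall = recall_score(y_true, y_pred, average=None)
arithmetic_mean = per_class_recall.mean()
geometric_mean = float(np.prod(per_class_recall) ** (1 / len(per_class_recall)))

print("Per-class recall:", np.round(per_class_recall, 2))   # [1.0, 0.67, 0.0]
print("Arithmetic mean :", round(arithmetic_mean, 3))        # looks acceptable (~0.56)
print("Geometric mean  :", round(geometric_mean, 3))         # 0.0 exposes the ignored class
```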

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
  • This study uses the Node2vec graph embedding method and Light GBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec addresses the limitation in representing structural equivalence of a network, which is known to be a relative weakness of existing link prediction methods based on the number of common neighbors; the method is therefore known to show excellent performance in capturing both the community structure and the structural equivalence of a network. The vectors obtained by embedding the network in this way have a constant length regardless of the arbitrarily designated starting node, which makes it easy to feed the node sequences as input to downstream models such as Logistic Regression, Support Vector Machine, and Random Forest. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information of the Korean food and beverage industry, with the aim of contributing to extensive-margin diversification for Korea in the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance. This was superior to the binary classifier based on Logistic Regression set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the Light GBM-based optimal prediction model derived in this study outperformed the link prediction model of a previous study, which was set as the benchmark in this study: the benchmark model recorded a recall of only 0.75, whereas the proposed model achieved a recall of 0.79. The difference in prediction performance between the benchmark model and the proposed model is due to the model training strategy. In this study, trades were grouped by trade value, and prediction models were trained separately for these groups. Specifically, (1) a model was trained by randomly masking some of all trades without any condition on trade value, (2) a model was trained by randomly masking some of the trades with above-average trade value, and (3) a model was trained by randomly masking some of the trades in the top 25% by trade value. The experiments confirmed that the model trained by randomly masking some of the trades with above-average trade value performed best and most stably. Additional investigation found that most of the potential export candidate countries for Korea derived from this model appeared appropriate. Taken together, this study suggests the practical utility of the link prediction method combining Node2vec and Light GBM, and useful implications could be derived for weight-update strategies that allow better link prediction while training the model. 
The study also has policy utility because it applies graph embedding-based link prediction to trade transactions, a domain in which such research has rarely been performed. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and the method is considered sufficiently useful as a tool for policy decision-making.
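
A minimal sketch of the Node2vec-plus-LightGBM link prediction pipeline described above is given below, using the node2vec and lightgbm Python packages on a toy graph. The graph, masking scheme, and hyperparameters are assumptions for illustration; the study's trade network and value-based masking strategy are not reproduced.

```python
# Toy Node2vec + LightGBM link prediction pipeline. The graph, the masked edges,
# and all hyperparameters are illustrative assumptions, not the study's trade network.

import random
import numpy as np
import networkx as nx
from node2vec import Node2Vec                     # pip install node2vec
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

random.seed(0)
G = nx.fast_gnp_random_graph(100, 0.08, seed=0)   # stand-in for the exporter-importer network

# Mask some existing edges as positive examples; sample an equal number of non-edges as negatives.
positives = random.sample(list(G.edges()), 100)
negatives = random.sample(list(nx.non_edges(G)), 100)
G_train = G.copy()
G_train.remove_edges_from(positives)              # the embedding must not see the held-out links

# Node2vec embedding of the masked graph (biased random walks + skip-gram).
wv = Node2Vec(G_train, dimensions=32, walk_length=20, num_walks=50, workers=1).fit(
    window=5, min_count=1).wv

def edge_feature(u, v):
    return wv[str(u)] * wv[str(v)]                # Hadamard product of the two node vectors

X = np.array([edge_feature(u, v) for u, v in positives + negatives])
y = np.array([1] * len(positives) + [0] * len(negatives))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LGBMClassifier(n_estimators=200).fit(X_tr, y_tr)
print("Link-prediction F1 on held-out edge pairs:", round(f1_score(y_te, clf.predict(X_te)), 2))
```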