• Title/Summary/Keyword: Machine System


Changes of Weed Community in Lowland Rice Field in Korea (한국(韓國)의 논 잡초분포(雜草分布) 현황(現況))

  • Park, K.H.;Oh, Y.J.;Ku, Y.C.;Kim, H.D.;Sa, J.K.;Park, J.S.;Kim, H.H.;Kwon, S.J.;Shin, H.R.;Kim, S.J.;Lee, B.J.;Ko, M.S.
    • Korean Journal of Weed Science / v.15 no.4 / pp.254-261 / 1995
  • A nationwide weed survey was conducted in lowland rice fields across Korea in 1992 to determine changes in the weed community and to identify the dominant and problem weed species. By morphological type, broadleaf weeds accounted for 42.6% of the weed population, grasses 9.0%, sedges 33.4%, and others 15.0%. By life cycle, annual weeds accounted for 33.4% and perennial weeds for 66.6%. Weed occurrence differed with rice planting method. In hand-transplanted paddy fields, the predominant species were Sagittaria trifolia L., Monochoria vaginalis Presl., and Aneilema japonica Kunth. In fields machine-transplanted with infant and young rice seedlings, Eleocharis kuroguwai Ohwi and S. trifolia L. were more predominant. M. vaginalis, Echinochloa crus-galli L., and Leersia japonica Makino occurred heavily in water-seeded rice, while E. crus-galli and Cyperus serotinus Rottb. were the predominant species in dry-seeded rice. Rice monoculture favored high occurrence of E. kuroguwai, S. trifolia, M. vaginalis, E. crus-galli, and Sagittaria pygmaea Miq., whereas S. trifolia, S. pygmaea, M. vaginalis, E. crus-galli, and E. kuroguwai were more abundant in rice-based double cropping systems. Weed occurrence also differed markedly with transplanting time: E. kuroguwai, S. trifolia, S. pygmaea, M. vaginalis, and C. serotinus were most abundant at the 25 May transplanting; S. trifolia, E. crus-galli, C. serotinus, and M. vaginalis at 10 June; and S. pygmaea, E. kuroguwai, M. vaginalis, S. trifolia, and E. crus-galli at 25 June. With respect to tillage time, autumn tillage favored S. trifolia, E. kuroguwai, M. vaginalis, and S. pygmaea, while spring tillage favored E. kuroguwai, S. trifolia, E. crus-galli, M. vaginalis, and S. pygmaea. In plain paddy areas, E. kuroguwai, S. trifolia, M. vaginalis, E. crus-galli, and S. pygmaea occurred most; in mid-mountainous areas, S. trifolia, E. kuroguwai, M. vaginalis, E. crus-galli, and Ludwigia prostrata Roxb.; and in mountainous areas, S. trifolia, M. vaginalis, Potamogeton distinctus Ben., E. kuroguwai, and E. crus-galli. Based on the summed dominance ratio (SDR), the ten most predominant weed species in Korean rice fields in 1992 were E. kuroguwai > S. trifolia > E. crus-galli > M. vaginalis > S. pygmaea > C. serotinus > L. prostrata > P. distinctus > A. japonica > Scirpus juncoides Roxb.
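The summed dominance ratio (SDR) used for the ranking above is commonly computed as the mean of a species' relative measures, e.g. relative frequency and relative density; the survey's exact variant is not stated here, so the sketch below assumes the two-component form, with purely illustrative numbers rather than the 1992 data.

```python
def sdr(rel_frequency, rel_density):
    """Two-component summed dominance ratio: mean of relative
    frequency and relative density, both in percent (assumed form)."""
    return (rel_frequency + rel_density) / 2

# Toy survey figures (illustrative only, not from the study).
species = {
    "E. kuroguwai": sdr(80.0, 60.0),
    "S. trifolia": sdr(70.0, 50.0),
    "E. crus-galli": sdr(40.0, 30.0),
}
ranking = sorted(species, key=species.get, reverse=True)
print(" > ".join(ranking))  # E. kuroguwai > S. trifolia > E. crus-galli
```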


The Empirical Exploration of the Conception on Nursing (간호개념에 대한 기초조사)

  • 백혜자
    • Journal of Korean Academy of Nursing / v.11 no.1 / pp.65-87 / 1981
  • This study aimed to explore the concept of nursing held by clinical nurses. Data were collected from 225 nurses conveniently sampled from nurses working in Kangwon Province. Findings include: 1) Nurses' qualifications. The respondents viewed specialized knowledge as a more important qualification of the nurse than a warm personality: 92.9% of the respondents indicated specialized knowledge as the most important qualification, while only 43.1% indicated warm personality. 2) On the nursing profession. The respondents viewed the nursing profession as health-service oriented rather than as an independent profession. This suggests that the nursing profession is not consistent with the present health care delivery system and does not support nurses working independently. 3) On clients of nursing care. The respondents included patients, families, and community residents in the category of nursing care clients. Specifically, 92.0% of the respondents viewed the patient as the client, while only 67.1% included nursing students and 74.7% the nurse herself. This indicates a lack of recognition by nurses of who their clients are. 4) On the priority of nursing care. Most respondents viewed the client's physical and psychological aspects as important components of nursing care, but not the spiritual ones: 96.0% of the respondents indicated the physical aspects and 93% the psychological ones, while 64.1% indicated the spiritual ones. This reflects the lack of a comprehensive conception of the nursing dimension. 5) On nursing care. 91.6% of the respondents indicated that nursing care is activity that decreases pain or helps recovery from illness, while only 66.2% indicated carrying out physicians' medical orders. 6) On the purpose of nursing care. 89.8% of the respondents indicated preventing illness, followed by 76.6% indicating decreasing clients' pain. On the other hand, maintaining health had the lowest selection rate, at 13.8%. This reflects a lack of recognition by nurses of maintaining health as the most important aim. 7) On knowledge needed in nursing care. Most respondents viewed knowledge directly applicable at the point of care as necessary: 81.3% of the respondents indicated simple curing methods, and 75.1%, 73.3%, and 71.6% respectively indicated child nursing, maternal nursing, and control of communicable disease. On the other hand, knowledge that has been neglected in specialized nursing education courses, that is, ways of thinking among community members, coping with stress and personal relations in the home, and administration and management, had low selection rates of 48.9%, 41.8%, and 41.3%. 8) On nursing ideals. The item with the highest selection was knowing oneself rightly (mean score 4.205/5). At the low end, 3.016/5 was recorded for devotion being the essential element of nursing, 2.860/5 for the religious problems that human beings cannot settle, such as fatal ones, and 2.810/5 for the nursing profession being worth pursuing in one's life. This suggests weakness in the essential ideals underlying the professional sense of value. 9) On nursing services. Mean scores for nursing services showed that inserting a mechanical airway was 2.132/5, the technique and knowledge of cardiopulmonary resuscitation 2.892/5, and preventing air pollution 3.021/5. Notably, 41.1% of the respondents did not reply to these items. 10) On nurses' qualifications. The respondents selected five items as the most important qualifications: 17.4% of the respondents indicated specialized knowledge, 15.3% the nurse's health, 10.6% satisfaction with the nursing profession, 9.8% experience, and 9.2% comprehension and cooperation, while warm personality tended to be slighted as a nursing qualification. 11) On the priority of nursing care. The respondents selected three items as the most important components. Most respondents viewed the client's physical, spiritual, and economic aspects as important components of nursing care, at 36.8%, 27.6%, and 13.8% respectively, while educational ones received 1.8%. 12) On the purpose of nursing care. The respondents selected four items as the most important purposes: 29.3% of the respondents indicated curing clients' illness, 21.3% preventing illness, 17.4% decreasing pain, and 15.3% survival. 13) On the analysis of important nursing care. In the range of 5 to 25 points, nurses' qualifications were concentrated at 95.1%; in the range of 3 to 25 points, the priorities of nursing care were concentrated at 96.4%; and in the range of 4 to 16 points, the purpose of nursing care was concentrated at 84.0%. 14) On the analysis of general characteristics and facets of the nursing concept. The correlation between higher educational level and nursing care was significant (p < 0.0262). The correlation between lower educational level and the purpose of nursing care was significant (p < 0.002). The correlation between nurses' working years and the degree of importance of the purpose of nursing care was significant (p < 0.0155); the most affirmative answers came from those with two to four years of experience. 15) On nurses' qualifications and their degree of importance. The correlation between nurses' qualifications and their degree of importance was significant (r = 0.2172, p < 0.001). B. General characteristics of the subjects. The mean age of the subjects was 39, with 38.6% within the age range 20-29; 52.6% were male; 57.9% had schizophrenia; 35.1% had graduated from high school or were high school dropouts; 56.1% had no religion; 52.6% were unmarried; 47.4% were on their first admission; and 91.2% were involuntarily admitted patients. C. Measurement of anxiety variables. 1. The measurement tool for affective anxiety in this study demonstrated high reliability (.854). 2. The measurement tool for somatic anxiety in this study demonstrated high reliability (.920). D. Relationship between the anxiety variables and the general characteristics. 1. Relationship between affective anxiety and general characteristics. 1) The level of female patients was higher than that of male patients (t = 5.41, p < 0.05). 2) Frequency of admission was related to affective anxiety; the anxiety level was highest on first admission (F = 5.50, p < 0.005). 2. Relationship between somatic anxiety and general characteristics. 1) The age range 30-39 was found to have the highest level of somatic anxiety (F = 3.95, p < 0.005). 2) Frequency of admission was related to somatic anxiety; the anxiety level was highest on first admission (F = 9.12, p < 0.005). E. Analysis of significant anxiety symptoms for nursing intervention. 1. Seven items, namely dizziness, mental integration, sweating, restlessness, anxiousness, urinary frequency, and initial insomnia, accounted for 96% of the variation within the first 24 hours after admission. 2. Seven items, namely fear, paresthesias, restlessness, sweating, initial insomnia, tremors, and body aches and pains, accounted for 84% of the variation on the 10th day after admission.


Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been undergoing rapid digital transformation through artificial intelligence technologies such as machine learning, natural language processing, and deep learning. More and more foreign insurers have succeeded with AI-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms, the result of constant innovation under the corporate keywords 'finance and technology' and 'finance and ecosystem'. Accordingly, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M analysis model is a frame in which the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and the unique mechanism relationships among them can be interpreted in an integrated manner, in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial-expression recognition. In addition, online data in China and the vast offline data and insights accumulated by the company were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019 its sales reached $155 billion, ranking seventh among all companies in the Global 2000 list selected by Forbes Magazine. Analyzed from the ser-M perspective, the background of Ping An's success is that founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and demographic change in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on this strong leadership in response to environmental change, the company successfully led its InsurTech and platform business through innovation of internal resources, such as investment in AI technology, recruitment of excellent professionals, and strengthening of big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry driven by digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises. Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across industries and provide drastic support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in AI technology development so that systematic acquisition of internal and external data, training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained AI technology. Finally, since generalization from a single overseas insurance company case may be limited, future research should analyze cases across multiple industries or companies, or conduct empirical studies, on the various management strategies related to AI technology.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is difficult to examine all text data, it is important to access such data rapidly and grasp the key points of the text. Owing to this need for efficient understanding, many studies on text summarization for handling enormous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, called "automatic summarization", have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries poorly cover low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, it is possible to summarize with a balance between the topics of the documents so that all subjects can be ascertained, but an imbalance in the distribution of those subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the portion of each subject equally, so that even sentences on minor subjects are sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary-evaluation concepts: "completeness" and "succinctness". Completeness is the property that a summary should fully include the contents of the original documents, and succinctness means that a summary has minimum internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms", are then selected. However, these terms are too few to explain each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion to find terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity: the higher the cosine similarity between two terms, the stronger their relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates subjects to every sentence in the original documents. To grasp the contents of all sentences, frequency analysis is first conducted on the terms composing the subject dictionaries. The TF-IDF weight of each subject is then calculated, making it possible to determine how much each sentence explains each subject. However, TF-IDF weights can increase without bound, so the TF-IDF weights of every subject in each sentence are normalized to values between 0 and 1. Each sentence is then allocated to the subject with the maximum TF-IDF weight, and sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix. By repeatedly selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents while minimizing internal duplication. For evaluation of the proposed method, 50,000 TripAdvisor reviews were used for constructing subject dictionaries and 23,087 reviews for generating summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed method better retains the balance of all the subjects the documents originally have.
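The first two phases described above can be sketched roughly as follows: expand seed terms by cosine similarity over word vectors, then assign each sentence to the subject whose dictionary terms dominate it. This is a minimal illustration, not the authors' implementation; the toy vectors stand in for Word2Vec output, the threshold is an assumption, and plain term frequency is used as a stand-in for the normalized TF-IDF weighting.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand_seeds(seed_terms, vectors, threshold=0.9):
    """Grow a subject dictionary: keep every vocabulary term whose
    cosine similarity to some seed term reaches the threshold."""
    dictionary = set(seed_terms)
    for term, vec in vectors.items():
        if term not in dictionary and any(
            cosine(vec, vectors[s]) >= threshold for s in seed_terms
        ):
            dictionary.add(term)
    return dictionary

def allocate_subject(sentence_tokens, subject_dicts):
    """Score the sentence against each subject dictionary (term-frequency
    proxy for TF-IDF), normalize scores to [0, 1], return the argmax."""
    counts = Counter(sentence_tokens)
    weights = {s: sum(counts[t] for t in terms) for s, terms in subject_dicts.items()}
    total = sum(weights.values()) or 1
    norm = {s: w / total for s, w in weights.items()}
    return max(norm, key=norm.get), norm

# Toy 2-d "word vectors" (illustrative stand-ins for Word2Vec output).
vectors = {"food": [1.0, 0.0], "meal": [0.9, 0.1], "room": [0.0, 1.0], "bed": [0.1, 0.9]}
dining_terms = expand_seeds(["food"], vectors, threshold=0.9)
subject, weights = allocate_subject(
    ["the", "meal", "and", "food", "were", "great", "room"],
    {"dining": dining_terms, "lodging": {"room", "bed"}},
)
print(subject)  # dining
```

A real pipeline would replace the toy vectors with trained Word2Vec embeddings and the frequency proxy with normalized TF-IDF, but the dictionary-expansion and argmax-allocation logic is the same shape.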

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have come into wide use, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of an individual user's simple body movements to recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a method for detecting accompanying status with a deep learning model that uses only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a redefinition of part of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence-data generation. We apply nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization is performed on each x, y, and z axis value of the sensor data, and sequence data are generated by the sliding-window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is cross entropy, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm, with a mini-batch size of 128. Dropout is applied to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor-data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data following a different distribution. A model capable of robust recognition performance against changes in data not considered at the model learning stage is then expected.
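The preprocessing stage described above, per-axis normalization followed by sliding-window sequence generation, can be sketched as below. The window length and stride are illustrative assumptions, not values taken from the paper, and the toy stream stands in for real accelerometer data.

```python
def normalize_axes(samples):
    """Z-score each axis column (x, y, z) of one sensor stream independently."""
    cols = list(zip(*samples))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in samples]

def sliding_windows(samples, window=4, stride=2):
    """Cut a list of (x, y, z) samples into overlapping fixed-length sequences,
    each of which would become one CNN input."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

# Toy 3-axis accelerometer stream: 8 time steps.
raw = [[float(3 * t + a) for a in range(3)] for t in range(8)]
windows = sliding_windows(normalize_axes(raw), window=4, stride=2)
print(len(windows), len(windows[0]), len(windows[0][0]))  # 3 4 3
```

Each window here is a (time steps x channels) block; in the paper's setup the channel dimension would carry all nine axes of the three physical sensors after timestamp interpolation.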

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal or superior to those of people in many fields, including image and speech recognition. Many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and as a result the technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI. In addition, the spread of the technology owes much to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, developed through the online collaboration of many parties. This study searched and collected a list of major AI-related projects created between 2000 and July 2018 on Github, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year was under 100 until 2013, then increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects rose especially rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost fourfold the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained the top topic in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras, which show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics appeared at the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten by appearance frequency from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using topic appearance frequency and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future converging technology trends.
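Degree centrality, the network measure used in the analysis above, counts how many other topics a topic co-occurs with, usually normalized by the maximum possible degree. A minimal sketch over a toy topic co-occurrence graph (the edge list is illustrative, not data from the study):

```python
from collections import defaultdict

# Toy undirected topic co-occurrence edges (illustrative only).
edges = [
    ("machine-learning", "deep-learning"),
    ("machine-learning", "tensorflow"),
    ("machine-learning", "nlp"),
    ("deep-learning", "tensorflow"),
]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

n = len(adjacency)
# Normalized degree centrality: degree / (n - 1).
centrality = {node: len(nbrs) / (n - 1) for node, nbrs in adjacency.items()}
ranked = sorted(centrality, key=centrality.get, reverse=True)
print(ranked[0], centrality[ranked[0]])  # machine-learning 1.0
```

Ranking topics by this score, as the study does per phase, surfaces the hubs of the co-occurrence network rather than merely the most frequent labels.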

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and is therefore known to perform well on both community detection and structural equivalence. The vector obtained by embedding the network in this way is generated from fixed-length walks starting at each designated node, which makes it easy to feed node sequences as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification for Korea within the industry's global value chain. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance, superior to the logistic-regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model derived in this study outperformed the link prediction model of previous studies, set as the benchmarking model: the previous model recorded a recall of only 0.75, while the proposed model achieved a recall of 0.79. The difference in performance between the benchmarking model and this study's model comes from the model training strategy. In this study, trades were grouped by trade value, and prediction models were trained differently for these groups. Specifically, the strategies were (1) randomly masking some links and training a model over all trades, without any condition on trade value; (2) randomly masking some of the trades with above-average trade value and training the model; and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the above-average-value trades performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and derives useful implications for weight-update strategies that yield better link prediction during model training. The study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, which have rarely been studied in this context. The results support rapid responses to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe they are sufficiently useful as a tool for policy decision-making.
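As a quick sanity check on the metrics quoted above, the F1 score is the harmonic mean of precision and recall; plugging in the reported figures reproduces the F1 values of both the proposed model and the baseline.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values from the abstract: proposed LightGBM model vs. logistic-regression baseline.
proposed = round(f1_score(0.95, 0.79), 2)
baseline = round(f1_score(0.95, 0.73), 2)
print(proposed, baseline)  # 0.86 0.83
```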