• Title/Summary/Keyword: The Fourth Industrial Technology


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever before. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is increasingly used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much research on knowledge-based artificial intelligence has relied on DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is generated by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of knowledge accuracy. However, since only about 50% of all pages in the Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. 
To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, and trained models for about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
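
As an illustration of the triple-extraction step described above, the sketch below trains a CRF tagger on BIO-tagged tokens using the third-party sklearn-crfsuite package; the toy sentence, the dbo:birthPlace label, and the feature set are hypothetical stand-ins, not the paper's actual training data or features.

```python
# Minimal sketch, not the authors' code: a CRF sequence tagger over BIO-tagged
# tokens, the formulation used for the sentence-level triple extraction step.
import sklearn_crfsuite  # third-party package: pip install sklearn-crfsuite

def token_features(sent, i):
    # Simple context features; the paper's actual feature set is not specified.
    return {
        "word": sent[i],
        "is_first": i == 0,
        "is_last": i == len(sent) - 1,
        "prev_word": "" if i == 0 else sent[i - 1],
        "next_word": "" if i == len(sent) - 1 else sent[i + 1],
    }

# Toy BIO-tagged sentence for a hypothetical dbo:birthPlace relation.
sentences = [["Kim", "was", "born", "in", "Seoul", "."]]
labels = [["O", "O", "O", "O", "B-birthPlace", "O"]]

X_train = [[token_features(s, i) for i in range(len(s))] for s in sentences]
y_train = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

# Tokens tagged B-*/I-* become objects of RDF triples whose predicate is the
# corresponding DBpedia ontology property.
print(crf.predict(X_train))
```

A Bi-LSTM-CRF variant, as compared in the paper, would replace the hand-crafted features with learned embeddings while keeping the same BIO labeling scheme.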

Weights for Evaluation items of Conformity index of Bird breeding sites on the West and South coasts of Korea (서·남해 연안성 조류번식지 적합성지수 평가항목 가중치 설정)

  • Kim, Chang-Hyeon;Kim, Won-Bin;Kim, Kyou-Sub;Lee, Chang-Hun
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.41 no.4 / pp.40-48 / 2023
  • This study is part of foundational research aimed at developing a suitability index for bird breeding sites along Korea's South and West coasts, including islands. Focus Group Interviews (FGI) and Analytic Hierarchy Process (AHP) analyses were conducted, with the following results. First, in evaluating the suitability of coastal bird breeding sites, 'Natural Value (0.763)' was weighted higher than 'Artificial Value (0.237)'. Except for 'Protected Areas', which ensures the continued integrity of breeding spaces, the artificial values ranked lower. Second, after reorganizing the 25 evaluation items classified in two rounds of FGI into higher-level concepts, a total of 14 items were finally selected: nine natural values and five artificial values. Third, the mid-level importance evaluation of coastal bird breeding site suitability ranked the categories in the order of 'Ecological Value (0.392)', 'Topographic Value (0.251)', 'Passive Interference (0.124)', 'Geological Value (0.120)', and 'Active Interference (0.113)'. Fourth, the priority of individual evaluation items was, in order, 'Vegetation Distribution (0.187)', 'Area of Mudflats (0.118)', 'Presence or Absence of Mudflats (0.092)', 'Appearance of Natural Enemies (0.087)', 'Protected Areas (0.08)', 'Island Area (0.069)', 'Over-Breeding Devastation (0.064)', 'Soil Composition Ratio (0.056)', 'Distance from Land (0.054)', 'Ocean Farm Area (0.045)', 'Cultivated Land Area (0.041)', 'Cultivation Behavior (0.038)', 'Angle of the Surface (0.036)', and 'Land Use (0.033)'. The weights of the evaluation items derived in this study can be used for priority evaluation focused on coastal bird breeding areas. However, their correlation with the habitat suitability of individual bird species still needs to be examined, and spatial analysis incorporating species-specific characteristics remains a task for future research.
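
As background on how AHP weights such as those above are computed, the sketch below derives weights from a pairwise comparison matrix via the principal eigenvector; the 2x2 matrix is a hypothetical example, not the study's survey data.

```python
# Minimal sketch, not the study's data: AHP weights from a pairwise comparison
# matrix via the principal eigenvector.
import numpy as np

# Hypothetical 2x2 comparison: Natural Value judged ~3.2 times as important as
# Artificial Value (the reciprocal fills the mirror entry).
A = np.array([
    [1.0, 3.2],
    [1 / 3.2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights = weights / weights.sum()  # normalize so the weights sum to 1

print(dict(zip(["Natural Value", "Artificial Value"], weights.round(3))))

# For larger matrices a consistency ratio CR = CI / RI is also checked, where
# CI = (lambda_max - n) / (n - 1) and RI is the random index for matrix size n.
```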

A Comparative Study of the Foreign Trade Strategies of Gaisong Merchants and Modern Companies in Korea. (현대기업과 개성상인의 해외진출전략의 비교분석)

  • Park, Sang-Gyu
    • Korean Business Review / v.17 / pp.153-183 / 2004
  • The Gaisong Merchants can be regarded as having played a pioneering role in activating Korea's trade with foreign countries. In the early period of the Yi Dynasty, the Gaisong Merchants focused on private trade, but in the middle period of the Yi Dynasty they entered the realm of governmental trade. Furthermore, their business activities widened to various forms of trade, including smuggling. Utilizing accumulated capital, the Gaisong Merchants expanded their trading activities to neighboring countries such as Japan and China. In recent times, it has become necessary for modern Korean companies to diversify risks by establishing corporations for production, marketing, and R&D abroad, or through joint ventures, M&As, and strategic alliances with foreign companies, in order to reduce the risks arising from volatile economic and political situations. In this study, we use comparative analysis to compare the Gaisong Merchants' foreign trade strategies with those of modern companies such as AMOREPACIFIC, HANILCEMENT, and SHINDORICO. The purpose of the paper is to test the hypothesis that modern Korean companies grew by following the example of the Gaisong Merchants' business activities. We summarize our main findings as follows. First, both the Gaisong Merchants and modern Korean companies share core functional capabilities in marketing, manufacturing technology, R&D, and human resources development. Second, both share core organizational capabilities. Third, both have common infrastructure such as planning, finance, accounting, and MIS, which constitutes the infrastructure of Korea's foreign trade sector. Fourth, both share a common organizational culture in terms of management policy and philosophy. These factors are regarded as driving forces of Korea's success in foreign trade. In conclusion, the business activities of the Gaisong Merchants, who embodied the distinctive Korean business spirit, have been partially inherited by current Korean business management. The value system and behavior patterns of modern Korean companies descend from the spirit of the Gaisong Merchants and play a major role in defining the identity of Korean business administration.


A Study on the Development of Ultra-precision Small Angle Spindle for Curved Processing of Special Shape Pocket in the Fourth Industrial Revolution of Machine Tools (공작기계의 4차 산업혁명에서 특수한 형상 포켓 곡면가공을 위한 초정밀 소형 앵글 스핀들 개발에 관한 연구)

  • Lee Ji Woong
    • Journal of Practical Engineering Education / v.15 no.1 / pp.119-126 / 2023
  • Today, in order to improve the fuel efficiency and dynamic behavior of automobiles, automobile parts are being made lighter and simpler. To simplify the design and manufacture of product shapes, various components are being integrated; for example, when three parts are consolidated into a single product, machining must be performed in very narrow areas. Existing parts are produced by precision die casting or casting for ease of processing, and the multi-piece approach requires many processes and reduces the precision and strength of the parts. Integral manufacturing is very advantageous for reducing the number of machining steps and securing part strength, but deep and narrow pocket features cannot be machined with the equipment's own spindle. To solve this problem, research on cutting processes is being actively conducted, and multi-axis composite machining technology not only addresses it but also offers many advantages, such as the ability to cut complex shapes, which previously required several separate processes, flexibly on a single machine tool. However, the reality is that the expensive equipment increases manufacturing costs, and there is a shortage of engineers who can operate such machines. On five-axis machining centers, producing products with deep and narrow sections increases cycle time because of tool interference and causes many processing problems. Therefore, dedicated machine tools and multi-axis composite machines should be used. Alternatively, an angle spindle may be used as a special tool enabling multi-axis composite machining of five or more axes on a three-axis machining center. Continued research on the angle spindle is needed in areas such as absorption of machining vibration, low heat generation and operational stability, excellent dimensional stability, and securing strength.

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.177-190 / 2021
  • With the development of Fourth Industrial Revolution technologies, efforts are being made to use artificial intelligence techniques such as machine learning to improve on tasks that humans cannot handle well. Make-to-order manufacturing companies want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they find this difficult because total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because production varies from order to order according to customer needs, the total production time of an individual order can be measured after the fact but is hard to predict in advance. The measured total production times of past orders also differ from one another, so they cannot be used as a standard time. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple management indicators (e.g., 60 days of total production time for raw materials, 90 days for steel plates). Work instructions issued too early on the basis of intuition or such indicators cause congestion, which degrades productivity, while instructions issued too late increase production costs or cause missed delivery dates due to emergency processing. Missing a deadline leads to compensation for the delay or adversely affects sales and collections. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system, using order, production, and process performance data as inputs for machine learning. We compare and analyze OLS, GLM Gamma, Extra Trees, and Random Forest algorithms to identify the best algorithm for estimating total production time and present the results.
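
The sketch below shows, on synthetic data, how the four algorithms named in the abstract could be compared for total-production-time estimation with scikit-learn; the features and data are hypothetical, not the study's order and process records.

```python
# Minimal sketch, not the authors' code or data: comparing the four regressors
# named in the abstract on synthetic "total production time" data.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import GammaRegressor, LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 1, size=(n, 3))  # hypothetical order/material/process features
y = 30 + 60 * X[:, 0] + 20 * X[:, 1] + rng.gamma(2.0, 5.0, size=n)  # days

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "OLS": LinearRegression(),
    "GLM Gamma": GammaRegressor(),  # Gamma GLM with log link
    "Extra Trees": ExtraTreesRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:13s} MAE = {mae:.2f} days")
```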

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect not only on the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, but also on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, a total collapse in a single moment. The key variables used in corporate default prediction change over time. This is confirmed by comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study, which shows that the major factors affecting corporate failure have changed. Grice's (2001) examination of Zmijewski's (1984) and Ohlson's (1980) models likewise found that the importance of predictive variables shifts. However, past studies have used static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for this time-dependent bias with a time series algorithm that reflects dynamic change. Against the background of the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a default prediction model that remains consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we obtain a model that shows a pattern similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. 
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis toward automated AI analysis and, eventually, intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and offers stronger predictive power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study applying deep learning time series algorithms to corporate default analysis, and it is hoped that it will serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
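
As an illustration of the kind of deep learning time series model discussed above, the sketch below fits a small LSTM classifier to multi-year sequences of financial ratios using Keras; the shapes, rates, and hyperparameters are hypothetical, not the study's dataset or architecture.

```python
# Minimal sketch, not the authors' model: an LSTM classifier over multi-year
# sequences of financial ratios. Shapes, rates, and hyperparameters are
# hypothetical; Lasso-based variable selection would be applied beforehand.
import numpy as np
import tensorflow as tf

n_firms, n_years, n_ratios = 1000, 7, 12  # e.g. a 2000-2006 training window
X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")
y = (np.random.rand(n_firms) < 0.05).astype("float32")  # ~5% default rate

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_ratios)),
    tf.keras.layers.LSTM(32),  # summarizes each firm's ratio history
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(default)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, AUC] on the toy data
```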

A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik;Mun, Jang-Sil
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.29-41
    • /
    • 2012
  • This paper undertakes a conceptual review of transaction costs to broaden understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process that favor efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that economizing on transaction costs is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to the organization of internal transactions, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when the identities of specific parties have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distributors)? Third, the authors develop a theoretical extension of the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, further extensions of this kind are likely required before the general model can be applied to different channel situations; it is naive to assume that the basic model applies across markedly different channel contexts without modification and extension. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels; the analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence on TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify distinctions between the different aspects of specific assets. 
Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.
