• Title/Summary/Keyword: using artificial intelligence (AI)

A fundamental study on the automation of tunnel blasting design using a machine learning model (머신러닝을 이용한 터널발파설계 자동화를 위한 기초연구)

  • Kim, Yangkyun;Lee, Je-Kyum;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.5 / pp.431-449 / 2022
  • As many tunnels have been constructed, extensive experience and techniques have accumulated for tunnel design as well as tunnel construction. Hence, for routine tunnel design work it is often sufficient to perform the design by modifying or supplementing previous similar design cases, unless the tunnel has a unique structure or unusual geological conditions. This is particularly reasonable for tunnel blast design, because the blast design produced at the design stage is preliminary: additional blast design is generally carried out through test blasts before tunnel excavation begins. Meanwhile, in the Industry 4.0 era, artificial intelligence (AI), whose adoption is surging across all industry sectors, is being broadly applied to tunnelling and blasting. For drill-and-blast tunnels, AI has mainly been applied to the estimation of blast vibration and to rock mass classification; there are few cases where it has been applied to blast pattern design. This study therefore attempts to automate tunnel blast design by means of machine learning, a branch of AI. Blast design data were collected from 25 tunnel design reports for training and 2 additional reports for testing. From these, 4 design parameters (rock mass class, road type, and the cross-sectional areas of the upper and bench sections) were used as input data, and 16 design elements (blast cut type, specific charge, number of drill holes, spacing and burden for each blast hole group, etc.) as output. Three machine learning models, XGBoost, ANN, and SVM, were tested on these data; XGBoost was chosen as the best model, and its results show a trend generally similar to an actual design when assumed design parameters are input. The results are not yet sufficient for performing a whole blast design, but further studies are planned to bring the approach to practical use after collecting more blast design data and refining the detailed machine learning process.
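
The setup described above (four design parameters in, sixteen design elements out) maps naturally onto multi-output regression. Below is a minimal sketch of that formulation, not the authors' code: the column names, the CSV file, and the hyperparameters are hypothetical, and scikit-learn's MultiOutputRegressor is used to fit one XGBoost model per design element, which is one plausible realization of the approach.

```python
# Hedged sketch: multi-output regression over blast design data.
# All file/column names are hypothetical; the paper's 25-report data set is not public.
import numpy as np
import pandas as pd
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

# 4 input design parameters (rock mass class and road type are categorical,
# encoded here as integers for simplicity)
X_cols = ["rock_mass_class", "road_type", "upper_area", "bench_area"]
# a few of the 16 output design elements
y_cols = ["specific_charge", "n_drill_holes", "spacing", "burden"]

df = pd.read_csv("blast_designs.csv")  # hypothetical file
X, y = df[X_cols].to_numpy(), df[y_cols].to_numpy()

# one XGBoost regressor per design element
model = MultiOutputRegressor(XGBRegressor(n_estimators=300, max_depth=4))
model.fit(X, y)

# predict design elements for an assumed set of design parameters
query = np.array([[3, 1, 85.0, 25.0]])  # class III, road type 1, areas in m^2
print(dict(zip(y_cols, model.predict(query)[0])))
```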

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from the various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive its confidence; 3) based on the predicate feature, extract the information from suitable sentences and derive an overall confidence for the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; the proposed system achieved higher performance indices than the baseline model. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query, yielding a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from diverse document types compared with the baseline model, whereas previous research performed poorly when extracting information from document types different from the training data. In addition, by predicting the suitability of documents and sentences for extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, providing a way to maintain precision even in a real web environment. Because the target is unstructured documents on the real web, it cannot be guaranteed that a given document contains the correct answer; previous machine reading comprehension studies show low precision in this setting because they frequently attempt to extract an answer even from documents that contain none. The policy of predicting document- and sentence-level extraction suitability is meaningful in that it helps maintain extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can go wrong when the morphological analysis is performed improperly; a more advanced morphological analyzer is needed to improve extraction results. Second, entity ambiguity: the information extraction system of this study cannot distinguish entities that share the same name, so if several people with the same name appear in the news, the system may not extract information about the person the query intends; future research needs measures for disambiguating same-name entities. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 2,800 documents (400 queries × 7 documents per query: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each document contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries; since this labeling must be done manually and is costly, future research should evaluate the system at a larger scale. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, to build an environment in which results can be evaluated more objectively.
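
The core model the abstract names, a bi-directional LSTM-CRF sequence tagger with a predicate feature, can be sketched as follows. This is a minimal illustration, not the authors' code: the CRF layer comes from the third-party pytorch-crf package, the predicate feature is mimicked as a per-token indicator concatenated to the token embeddings, and all dimensions and the toy batch are illustrative.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # +1 input feature: flag marking tokens that match the query predicate
        self.lstm = nn.LSTM(emb_dim + 1, hidden, bidirectional=True,
                            batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_tags)
        self.crf = CRF(n_tags, batch_first=True)

    def forward(self, tokens, pred_flag, tags=None, mask=None):
        x = torch.cat([self.emb(tokens), pred_flag.unsqueeze(-1)], dim=-1)
        emissions = self.proj(self.lstm(x)[0])
        if tags is not None:  # training: negative log-likelihood loss
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)  # inference: best paths

# toy usage: batch of 2 sentences, 5 tokens each, BIO-style tagging, 3 tags
model = BiLSTMCRFTagger(vocab_size=1000, n_tags=3)
tokens = torch.randint(0, 1000, (2, 5))
pred_flag = torch.zeros(2, 5)
pred_flag[:, 1] = 1.0  # pretend the predicate sits at position 1
loss = model(tokens, pred_flag, tags=torch.zeros(2, 5, dtype=torch.long))
loss.backward()
print(model(tokens, pred_flag))  # decoded tag sequences
```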

Analysis of the impact of mathematics education research using explainable AI (설명가능한 인공지능을 활용한 수학교육 연구의 영향력 분석)

  • Oh, Se Jun
    • The Mathematical Education / v.62 no.3 / pp.435-455 / 2023
  • This study focused on developing an explainable artificial intelligence (XAI) model to identify and analyze papers with significant impact in the field of mathematics education. To achieve this, meta-information from 29 domestic and international mathematics education journals was used to construct a comprehensive academic research network for mathematics education, built by integrating five sub-networks: 'paper and its citation network', 'paper and author network', 'paper and journal network', 'co-authorship network', and 'author and affiliation network'. A Random Forest machine learning model was employed to evaluate the impact of individual papers within this network, and SHAP, an XAI technique, was used to analyze the reasons behind the model's assessments. Key features identified for determining impactful papers included 'paper network PageRank', 'changes in citations per paper', 'total citations', 'changes in the author's h-index', and 'citations per paper of the journal'; papers, authors, and journals all play significant roles in the evaluation of an individual paper. When domestic and international mathematics education research were compared, variations in these patterns were observed: notably, 'co-authorship network PageRank' carried more weight in domestic research. The XAI model proposed in this study serves as a tool for determining the impact of papers using AI and provides researchers with strategic direction when writing papers; for instance, expanding the paper network, presenting at academic conferences, and activating the author network through co-authorship were identified as major elements that enhance a paper's impact. Based on these findings, researchers can understand how their work is perceived and evaluated in academia and identify the key factors influencing those evaluations. This study offers a novel approach to evaluating the impact of mathematics education papers using an explainable AI model, a process that has traditionally consumed significant time and resources. The approach presents a new paradigm applicable to evaluations in academic fields beyond mathematics education and is expected to substantially enhance the efficiency and effectiveness of research activities.
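
The pipeline the abstract describes (a Random Forest that scores paper impact, explained by SHAP) has a compact standard form, sketched below. The feature names follow the abstract's list of key features; the input table, its file name, and the impact target are hypothetical stand-ins for the authors' network-derived data.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# feature names taken from the abstract; the table itself is hypothetical
features = ["paper_pagerank", "delta_citations_per_paper", "total_citations",
            "delta_author_h_index", "journal_citations_per_paper",
            "coauthor_pagerank"]

df = pd.read_csv("mathed_papers.csv")       # hypothetical meta-data table
X, y = df[features], df["impact_score"]     # hypothetical impact label

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)

# global view: which features drive the model's impact assessments
shap.summary_plot(shap_values, X, feature_names=features)
```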

Study of the Operation of Actuated signal control Based on Vehicle Queue Length estimated by Deep Learning (딥러닝으로 추정한 차량대기길이 기반의 감응신호 연구)

  • Lee, Yong-Ju;Sim, Min-Gyeong;Kim, Yong-Man;Lee, Sang-Su;Lee, Cheol-Gi
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.4 / pp.54-62 / 2018
  • As part of the realization of an artificial intelligence signal (AI signal), this study proposes an actuated signal control algorithm based on vehicle queue length estimated in real time by deep learning. To implement the algorithm, we built an API (COM interface) to control the microscopic traffic simulator Vissim from TensorFlow, in which the deep learning model is implemented. When the link travel time and traffic volume collected each signal cycle are transferred from Vissim to TensorFlow, the deep learning model estimates the vehicle queue length; the signal timing is calculated from that queue length, and the simulation proceeds by adjusting the signals inside Vissim. The algorithm developed in this study reduced vehicle delay by about 5% compared with the current TOD (time-of-day) mode. It was applied to only one intersection in the network, so its effect is limited; future studies should extend it spatially, for example to corridor or network-level control.
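
A rough sketch of the described control loop appears below, with heavy caveats: the Vissim COM object model is only outlined from its general shape, the attribute and measurement names are assumptions that vary by Vissim version, and the network file and trained Keras model are hypothetical. It shows the intended data flow (per-cycle measurements out of Vissim, a queue estimate from the deep learning model, a green time pushed back into Vissim), not a runnable Vissim configuration.

```python
import numpy as np
import win32com.client
import tensorflow as tf

vissim = win32com.client.Dispatch("Vissim.Vissim")  # launch Vissim via COM
vissim.LoadNet(r"C:\nets\intersection.inpx")        # hypothetical network

queue_model = tf.keras.models.load_model("queue_estimator.h5")  # pre-trained

CYCLE = 150  # illustrative signal cycle length, in simulation seconds
for k in range(1, 41):
    vissim.Simulation.SetAttValue("SimBreakAt", k * CYCLE)
    vissim.Simulation.RunContinuous()               # run until cycle end

    # per-cycle inputs (hypothetical measurement object / attribute names)
    tt = vissim.Net.VehicleTravelTimeMeasurements.ItemByKey(1)
    travel_time = tt.AttValue("TravTm(Current,Last,All)")
    volume = tt.AttValue("Vehs(Current,Last,All)")

    # deep learning estimate of the vehicle queue length for this cycle
    queue_len = float(queue_model.predict(
        np.array([[travel_time, volume]]), verbose=0)[0, 0])

    # simple actuated rule: scale the next green time with the queue estimate
    green = int(np.clip(20 + 0.5 * queue_len, 20, 60))
    sg = vissim.Net.SignalControllers.ItemByKey(1).SGs.ItemByKey(1)
    sg.SetAttValue("GreenTime", green)  # hypothetical attribute name
```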

Technology Convergence & Trend Analysis of Biohealth Industry in 5 Countries : Using patent co-classification analysis and text mining (5개국 바이오헬스 산업의 기술융합과 트렌드 분석 : 특허 동시분류분석과 텍스트마이닝을 활용하여)

  • Park, Soo-Hyun;Yun, Young-Mi;Kim, Ho-Yong;Kim, Jae-Soo
    • Journal of the Korea Convergence Society / v.12 no.4 / pp.9-21 / 2021
  • This study aims to identify technology convergence and trends in patent data for the biohealth sector in the IP5 countries (KR, EP, JP, US, CN) and to present a direction of development for the industry. We used network analysis based on patent co-classification and TF-IDF-based text mining as the principal methodology for understanding the current state of technology convergence. As a result, technology convergence clusters in the biohealth industry were derived in three forms: (A) medical devices for treatment, (B) medical data processing, and (C) medical devices for biometrics. Trend analysis based on the convergence results indicates that Korea, which emerged as a market leader in (B) medical data processing, is likely to dominate the market with commercially valuable patents in the future. In particular, as the possibilities for medical data utilization by domestic biohealth companies expand alongside the policy shift of the "Data 3 Act" passed by the National Assembly in January 2019, the field is expected to require policies that activate technology convergence and R&D support strategies for the technology.
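
Both techniques named in the abstract, patent co-classification networks and TF-IDF text mining, have compact standard implementations. The sketch below builds a co-classification graph with networkx (codes co-assigned to the same patent are linked) and extracts characteristic terms with scikit-learn's TfidfVectorizer; the three toy patents stand in for the IP5 patent data, which is not reproduced here.

```python
from itertools import combinations
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

# each patent = (classification codes, abstract text) -- toy stand-in data
patents = [
    (["A61B", "G16H"], "wearable sensor streams medical data to the cloud"),
    (["G16H", "G06N"], "machine learning model processes medical records"),
    (["A61B", "A61N"], "implantable device for electrical nerve treatment"),
]

# co-classification network: codes co-assigned to one patent are linked,
# with edge weight counting how often the pair co-occurs
G = nx.Graph()
for codes, _ in patents:
    for a, b in combinations(sorted(set(codes)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)
print(sorted(G.edges(data=True)))

# TF-IDF over abstracts surfaces the terms that characterize each cluster
tfidf = TfidfVectorizer(stop_words="english")
scores = tfidf.fit_transform(text for _, text in patents)
terms = tfidf.get_feature_names_out()
top = scores.toarray().argmax(axis=1)
print([terms[i] for i in top])  # highest-weighted term per abstract
```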

A Study on Geospatial Information Role in Digital Twin (디지털트윈에서 공간정보 역할에 관한 연구)

  • Lee, In-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.268-278 / 2021
  • Technologies leading the Fourth Industrial Revolution, such as the Internet of Things (IoT), big data, artificial intelligence (AI), and cyber-physical systems (CPS), are developing and becoming widespread, and the demand to improve productivity, economy, safety, and more by applying them is spreading across various industrial fields. Digital twins are attracting attention as an important technology trend that meets these demands and are one of the top 10 tasks of the Korean New Deal. In this study, papers, magazines, reports, and other literature were searched using Google. To investigate the contribution and role of geospatial information in digital twin applications, we examined the definition of a digital twin, the technology trends of domestic and foreign companies, the components of digital twins required in manufacturing, plants, and smart cities, and the core techniques for driving a digital twin. In addition, the contributions of geospatial information were summarized by searching for sentences and words linking geospatial keywords (geospatial information, geospatial data, location, map, geodata) with 'digital twin'. The survey found that geospatial information not only serves as a medium connecting objects, things, people, processes, data, and products, but also provides reliable decision-making support, linkage and fusion, location information, and frameworks, and can thus contribute to maximizing the utilization value of digital twins.

The Perception Analysis of Autonomous Vehicles using Network Graph (네트워크 그래프를 활용한 자율주행차에 대한 인식 분석)

  • Hyo-gyeong Park;Yeon-hwi You;Sung-jung Yong;Seo-young Lee;Il-young Moon
    • Journal of Practical Engineering Education / v.15 no.1 / pp.97-105 / 2023
  • Recently, with the development of artificial intelligence technology, many technologies for user convenience are being developed, and interest in autonomous vehicles is growing day by day. Many automobile companies are currently aiming to commercialize autonomous vehicles. To lay the groundwork for new and reasonable government policies supporting commercialization, we analyzed changes in public opinion and perception through news article data. In this paper, 35,891 news articles mentioning terms similar to 'autonomous vehicles' over the past three years were collected and analyzed as a network graph. The analysis derived major keywords such as 'autonomous driving', 'AI', 'future', 'Hyundai Motor', 'autonomous driving vehicle', 'automobile', 'industrial', and 'electric vehicle'. It also showed that the autonomous vehicle industry is converging with various industries, including semiconductor and big tech companies as well as automobile companies, and developing into a faster and more diverse platform and service industry, drawing attention to this convergence of industries. To keep track of changes in public opinion, continuous analysis of SNS data and technology trends is needed.
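
The network graph analysis the abstract refers to is typically a keyword co-occurrence graph. The sketch below is a minimal, assumed version of that step: keywords appearing in the same article are linked, and nodes are ranked by degree centrality to surface "major keywords"; the three toy articles stand in for the 35,891 collected articles, and the paper's exact construction may differ.

```python
from itertools import combinations
import networkx as nx

articles = [  # toy keyword lists, one per news article
    ["autonomous driving", "AI", "Hyundai Motor"],
    ["autonomous driving", "electric vehicle", "future"],
    ["AI", "semiconductor", "autonomous driving"],
]

# co-occurrence graph: keywords in the same article are linked
G = nx.Graph()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# rank keywords by centrality, analogous to the paper's major-keyword list
for kw, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{kw}: {c:.2f}")
```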

The Study on well-aging using digital fitness technology (디지털피트니스 기술을 활용한 웰에이징에 관한 연구)

  • Seungae Kang
    • Convergence Security Journal / v.24 no.3 / pp.231-237 / 2024
  • The rapid aging of the global population poses significant challenges to public health systems, as it often correlates with various physical, cognitive, and social declines among the elderly. Traditional approaches to promoting healthy aging emphasize the importance of physical activity, mental engagement, and social connectivity. However, factors such as mobility issues and resource constraints can limit the accessibility and effectiveness of these approaches. Digital fitness technologies, including wearable devices, mobile applications, virtual reality platforms, and AI-based feedback systems, present innovative solutions with the potential to enhance the physical, cognitive, and social well-being of older adults. This study analyzes the latest trends in digital fitness technologies and proposes strategies for effective utilization in promoting well-aging. Specifically, it addresses the need for improved technology accessibility through affordable devices and user-friendly interfaces, the development of personalized fitness programs, strategies to enhance ongoing participation such as social interaction and gamification, and solutions for data protection and ethical issues. Effective implementation of these strategies is expected to significantly improve the health and well-being of older adults. Future research and policy development should incorporate these elements to maximize the impact of digital fitness technologies and enhance the overall quality of life for the elderly.

eXtensible Rule Markup Language (XRML): Design Principles and Application (확장형 규칙 표식 언어(eXtensible Rule Markup Language) : 설계 원리 및 응용)

  • Lee, Jae-Kyu;Sohn, Mye-Ae;Kang, Ju-Young
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.141-157 / 2002
  • eXtensible Markup Language (XML) is a markup language for data exchange on the Internet. In this paper, we propose eXtensible Rule Markup Language (XRML), an extension of XML. The implicit rules embedded in Web pages should be identifiable, interchangeable in a structured rule format, and ultimately accessible to various applications; XRML makes this possible. In this light, Web-based knowledge management systems (KMS) can be integrated with rule-based expert systems. To this end, we propose six design criteria: expressional completeness, relevance linkability, polymorphous consistency, applicative universality, knowledge integrability, and interoperability. Furthermore, we propose three components, RIML (Rule Identification Markup Language), RSML (Rule Structure Markup Language), and RTML (Rule Triggering Markup Language), together with their Document Type Definitions (DTDs). We have designed XRML version 0.5 as described and developed a prototype named Form/XRML, an automated form-processing application for the disbursement of research funds at the Korea Advanced Institute of Science and Technology (KAIST). Since XRML allows both humans and software agents to use the rules, the application potential is large. We expect that XRML can contribute to the progress of Semantic Web platforms, making knowledge management and e-commerce more intelligent. Since many research groups and vendors are investigating this issue, it should not take long to see commercial XRML products, and mature XRML applications may change the way information and knowledge systems are designed in the near future.
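
To make the idea concrete, here is a minimal sketch of what an RSML-style structured rule and a software agent consuming it could look like. The tag names, attributes, and the rule itself are hypothetical illustrations, not the actual XRML 0.5 DTD.

```python
import xml.etree.ElementTree as ET

# hypothetical RSML-style fragment: a rule written in structured XML so a
# software agent can parse and execute it (tag names are illustrative)
RSML_DOC = """
<rsml>
  <rule id="reimbursement-limit">
    <if var="expense_type" op="eq" value="travel"/>
    <if var="amount" op="le" value="500000"/>
    <then action="approve"/>
  </rule>
</rsml>
"""

def evaluate(doc: str, facts: dict) -> list:
    """Fire every rule whose <if> conditions all hold for the given facts."""
    ops = {"eq": lambda a, b: str(a) == b,
           "le": lambda a, b: float(a) <= float(b)}
    fired = []
    for rule in ET.fromstring(doc).iter("rule"):
        if all(ops[c.get("op")](facts[c.get("var")], c.get("value"))
               for c in rule.findall("if")):
            fired.append(rule.find("then").get("action"))
    return fired

print(evaluate(RSML_DOC, {"expense_type": "travel", "amount": 320000}))
# -> ['approve']
```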

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI); in particular, the development of IT has made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but these can be vulnerable in practice, because whether a discovered pattern is suitable for trading is a separate question: such studies find a point that matches a meaningful pattern and measure performance n days later, assuming a purchase at that point, and since this computes virtual revenues it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple in that they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no performance reports from the actual market, and the simplicity of a five-turning-point pattern has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading; patterns that succeeded with high probability in the past are likely to succeed in the future, so we trade when such a pattern occurs. Performance is measured assuming that both the buy and the sell are actually executed, which reflects a real trading situation. We tested three ways to calculate turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and computes the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is unfinished. Genetic algorithms (GA) were the most suitable solution, since the number of cases was far too large in this simulation to find high-success-rate patterns exhaustively. We also performed the simulation using walk-forward analysis (WFA), which tests the optimization section and the application section separately, so that we could respond appropriately to market changes. We optimized at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization; we selected 20 constituent stocks to gain the benefit of diversified investment while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best; this suggests that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
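
Of the three turning-point methods, the swing wave method is specified precisely enough in the abstract to sketch directly: a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side. The price series and the choice n=3 below are illustrative.

```python
import numpy as np

def swing_points(high, low, n=3):
    """Return (peak_indices, valley_indices) under the swing wave rule."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        # the n bars on each side of bar i, excluding bar i itself
        side_highs = np.concatenate([high[i - n:i], high[i + 1:i + n + 1]])
        side_lows = np.concatenate([low[i - n:i], low[i + 1:i + n + 1]])
        if high[i] > side_highs.max():
            peaks.append(i)
        if low[i] < side_lows.min():
            valleys.append(i)
    return peaks, valleys

# toy highs/lows with an obvious peak at index 5 and valley at index 11
high = np.array([10, 11, 12, 13, 14, 16, 14, 13, 12, 11, 10, 9.5, 10, 11, 12])
low = high - 1.0
print(swing_points(high, low, n=3))  # -> ([5], [11])
```

Five consecutive turning points from this function form one candidate M or W pattern, which can then be matched against the ten pattern groups described above.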