• Title/Summary/Keyword: Artificial Intelligence - Deep Learning (인공지능-딥러닝)

Search results: 699

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research is being actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved greater technological advances than ever before, thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is increasingly used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires considerable effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from the Wikipedia infobox, which presents a user-created summary of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of the accuracy of its knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classification of documents into ontology classes, classification of the appropriate sentences for triple extraction, and value selection and transformation into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triple form.
To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
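The final step of the pipeline described above converts selected values into RDF triples. As a minimal, hypothetical illustration (not the authors' implementation), the sketch below assembles example extraction results (entity, ontology class, attribute, value) into triples with rdflib; the namespaces, property names, and the URI-vs-literal rule are assumptions made only for the example.

```python
# Minimal sketch: turning hypothetical extraction output into RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

DBO = Namespace("http://dbpedia.org/ontology/")   # DBpedia ontology namespace
EX = Namespace("http://example.org/resource/")    # hypothetical resource namespace

# Hypothetical output of the three-step pipeline:
# (entity from the document, ontology class, attribute, extracted value)
extractions = [
    ("Seoul", "City", "country", "South_Korea"),
    ("Seoul", "City", "populationTotal", "9736027"),
]

g = Graph()
g.bind("dbo", DBO)
for entity, onto_class, attribute, value in extractions:
    subject = EX[entity]
    g.add((subject, RDF.type, DBO[onto_class]))
    # Crude illustrative rule: values that look like resource names become URIs,
    # everything else becomes plain literals.
    obj = EX[value] if "_" in value else Literal(value)
    g.add((subject, DBO[attribute], obj))

print(g.serialize(format="turtle"))
```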

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk; Ko, Dong Beom; Lee, Won Gok; Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.211-220 / 2022
  • Recently, research on smart factories, triggered by the 4th Industrial Revolution, has been actively conducted. Accordingly, the manufacturing industry is carrying out various studies to improve productivity and quality based on deep learning technology with robust performance. This paper studies a method of detecting tire surface defects in the visual inspection stage of the tire manufacturing process, and introduces a tire surface defect detection method using a depth image acquired through a 3D camera. The tire surface depth images dealt with in this study suffer from low contrast caused by the shallow depth of the tire surface, and from differences in the reference depth value due to the data acquisition environment. In addition, due to the nature of the manufacturing industry, an algorithm is required that can run in real time while maintaining detection performance. Therefore, in this paper, we studied a method to normalize the depth image through relatively simple operations so that the tire surface defect detection algorithm does not require a complex processing pipeline, and we conducted a comparative experiment between a general normalization method and the proposed normalization method using YOLO V3, which can satisfy both detection performance and speed. The experimental results confirm that the proposed normalization method improved performance by about 7% in terms of mAP@0.5, showing that the method is effective.
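The abstract does not spell out the normalization itself, so the sketch below is only one plausible reading of a "relatively simple" per-image depth normalization: rescaling each depth map relative to its own reference depth before passing it to a YOLO-style detector. The percentile bounds and output range are assumptions.

```python
# Minimal sketch of per-image depth normalization (assumed approach, not the paper's exact method).
import numpy as np

def normalize_depth(depth: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    """Rescale a depth map to [0, 255] relative to its own reference depth.

    Per-image percentiles remove the offset caused by differing acquisition
    distances and stretch the shallow tire-surface relief, mitigating the
    low-contrast problem described in the abstract.
    """
    lo, hi = np.percentile(depth, [low_pct, high_pct])
    clipped = np.clip(depth, lo, hi)
    scaled = (clipped - lo) / max(hi - lo, 1e-6)
    return (scaled * 255).astype(np.uint8)

# Example: a synthetic depth map standing in for a 3D-camera capture (values in mm).
depth_map = np.random.normal(loc=1200.0, scale=0.5, size=(416, 416)).astype(np.float32)
image_for_detector = normalize_depth(depth_map)   # ready for a YOLO-style detector
```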

Anomaly Detections Model of Aviation System by CNN (합성곱 신경망(CNN)을 활용한 항공 시스템의 이상 탐지 모델 연구)

  • Hyun-Jae Im; Tae-Rim Kim; Jong-Gyu Song; Bum-Su Kim
    • Journal of Aerospace System Engineering / v.17 no.4 / pp.67-74 / 2023
  • Recently, Urban Air Mobility (UAM) has been attracting attention as a transportation system of the future, and small drones also play a role in various industries. The failure of various types of aviation systems can lead to crashes, which can result in significant property damage or loss of life. In the defense industry, where aviation systems are widely used, the failure of an aviation system can lead to mission failure. Therefore, this study proposes an anomaly detection model using deep learning technology to detect anomalies in aviation systems, in order to improve the reliability of development and production and to prevent accidents during operation. As training and evaluation data sets, current data from aviation systems in an extremely low-temperature environment was utilized, and a deep learning network was implemented using a convolutional neural network, a deep learning technique commonly used for image recognition. In the extremely low-temperature environment, various types of failure occurred in the system's internal sensors and components, and singular points were observed in the current data. Training and evaluating the model on current data from both failure and normal cases confirmed that anomalies were detected with a recall of 98% or higher.
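The network details are not given in the abstract, so the following is only a generic sketch of a small CNN binary classifier over fixed-length windows of the current data (implemented here with 1D convolutions in PyTorch); the layer sizes, window length, and two-class setup are assumptions.

```python
# Minimal sketch: CNN anomaly classifier over current-signal windows (assumed architecture).
import torch
import torch.nn as nn

class CurrentCNN(nn.Module):
    def __init__(self, window: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window // 4), 2),   # two classes: normal / anomalous
        )

    def forward(self, x):                        # x: (batch, 1, window)
        return self.classifier(self.features(x))

model = CurrentCNN()
logits = model(torch.randn(8, 1, 256))           # 8 dummy current-signal windows
print(logits.shape)                              # torch.Size([8, 2])
```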

Approach to Improving the Performance of Network Intrusion Detection by Initializing and Updating the Weights of Deep Learning (딥러닝의 가중치 초기화와 갱신에 의한 네트워크 침입탐지의 성능 개선에 대한 접근)

  • Park, Seongchul; Kim, Juntae
    • Journal of the Korea Society for Simulation / v.29 no.4 / pp.73-84 / 2020
  • Since the Internet became popular, there has been hacking and attacks on networks and systems, and as these techniques evolve day by day, they place risks and burdens on companies and society. To alleviate that risk and burden, it is necessary to detect hacking and attacks early and respond appropriately, and before that, it is necessary to increase the reliability of network intrusion detection. This study applied weight initialization and weight optimization to the KDD'99 dataset to improve the accuracy of network intrusion detection. Regarding weight initialization, experiments showed that initialization methods that take the structure of the network into account, such as the Xavier and He methods, affect accuracy. In addition, experiments on the network intrusion detection dataset confirmed that, for weight optimization, the Adam algorithm, which combines the advantages of Momentum (reflecting previous updates) and RMSProp (adapting the learning rate to recent gradients), stands out in terms of accuracy.
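As a compact sketch of the two ingredients the abstract highlights, He/Xavier-style initialization and the Adam optimizer, the code below applies He initialization to a simple fully connected classifier and performs one Adam update in PyTorch. The network shape and hyperparameters are assumptions, not the paper's configuration (the 41 inputs only reflect the usual KDD'99 feature count).

```python
# Minimal sketch: He weight initialization + Adam optimizer (assumed network, not the paper's).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(41, 64), nn.ReLU(),    # 41 ~ number of KDD'99 features
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),                # intrusion vs. normal
)

def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")  # He initialization
        nn.init.zeros_(m.bias)

model.apply(init_weights)

# Adam combines momentum-style running gradient averages with
# RMSProp-style per-parameter learning-rate scaling.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(128, 41), torch.randint(0, 2, (128,))        # dummy batch
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```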

Autoencoder-Based Defense Technique against One-Pixel Adversarial Attacks in Image Classification (이미지 분류를 위한 오토인코더 기반 One-Pixel 적대적 공격 방어기법)

  • Jeong-hyun Sim; Hyun-min Song
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1087-1098 / 2023
  • The rapid advancement of artificial intelligence (AI) technology has led to its active utilization across various fields. However, the widespread adoption of AI-based systems has raised concerns about the increasing threat of attacks on these systems. In particular, deep neural networks, commonly used in deep learning, have been found to be vulnerable to adversarial attacks that intentionally manipulate input data to induce model errors. In this study, we propose a method to protect image classification models from visually imperceptible One-Pixel attacks, in which only a single pixel of an image is altered. The proposed defense technique uses an autoencoder model to remove potential threat elements from input images before forwarding them to the classification model. Experimental results on the CIFAR-10 dataset demonstrate that the autoencoder-based defense significantly improves the robustness of pretrained image classification models against One-Pixel attacks, with an average defense rate improvement of 81.2%, without requiring any modification of the existing models.
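The following Keras sketch illustrates the general defense pattern described, an autoencoder that reconstructs ("purifies") each input image before it reaches an unmodified classifier. The layer sizes and training setup are assumptions for CIFAR-10-sized inputs, not the paper's exact architecture.

```python
# Minimal sketch: convolutional autoencoder used to purify inputs before classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(shape=(32, 32, 3)):
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)
    return models.Model(inp, out)

autoencoder = build_autoencoder()
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=..., batch_size=...)  # trained to reconstruct clean images

def defended_predict(classifier, images):
    """Defense at inference time: reconstruct first, then classify with the unmodified model.
    `classifier` stands for a pretrained CIFAR-10 model (hypothetical here)."""
    purified = autoencoder.predict(images)       # single-pixel perturbations are smoothed away
    return classifier.predict(purified)
```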

Artificial Intelligence Based Medical Imaging: An Overview (AI 의료영상 분석의 개요 및 연구 현황에 대한 고찰)

  • Hong, Jun-Yong; Park, Sang Hyun; Jung, Young-Jin
    • Journal of radiological science and technology / v.43 no.3 / pp.195-208 / 2020
  • Artificial intelligence (AI) is a field of computer science defined as enabling computers to imitate human intellectual behavior. Although its goal is to imitate humans, it has been grafted onto software-based fields with the advantages of accuracy and processing speed that surpass humans. Indeed, AI-based technology has become a key technology in the medical field and will lead the development of medical image analysis. Therefore, this article introduces and discusses the concept of deep learning-based medical image analysis, covering the principles of convolutional neural network (CNN) algorithms and backpropagation. Research cases applying AI-based medical image analysis to classify various diseases (such as chest disease, coronary artery disease, and cerebrovascular disease) are reviewed, together with performance comparisons between AI-based medical image classifiers and human experts.
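For readers unfamiliar with the operations this overview refers to, the short NumPy sketch below shows the core CNN operation, a 2D convolution of an image patch with a filter whose weights would normally be learned by backpropagation. It is purely didactic and not taken from any of the reviewed studies.

```python
# Didactic sketch: the 2D convolution at the heart of a CNN (valid padding, stride 1).
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                 # toy grayscale image patch
edge_filter = np.array([[1., 0., -1.],
                        [2., 0., -2.],
                        [1., 0., -1.]])      # Sobel-like filter; in a CNN, learned by backpropagation
feature_map = conv2d(image, edge_filter)
print(feature_map.shape)                     # (6, 6)
```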

Changes in the environment of electronic finance and its challenges -Focusing on the prospects and implications of changes in electronic finance- (국내 전자금융의 환경 변화와 그 과제 -전자금융의 변화 전망과 시사점을 중심으로-)

  • Kim, Daehyun
    • Journal of Digital Convergence / v.19 no.5 / pp.229-239 / 2021
  • For this study, we extensively analyzed material presented by the government's finance-related departments and data from individual financial institutions and electronic financial institutions. As a result, real changes were identified in Korea's electronic financial environment: first, the expansion of non-face-to-face finance; second, teleworking in the financial sector; third, the abolition of accredited certification; fourth, increasingly sophisticated voice phishing; fifth, the opening up and diversification of the financial industry; and sixth, the 'walletless society'. Beyond these, however, global changes triggered by the Fourth Industrial Revolution are spreading to the financial security sector, making it difficult to respond to problems involving artificial intelligence, deep learning, user analysis, and deepfake technology. As the social share of electronic finance grows, further research is needed on electronic finance and its environment, as well as on related crime and criminal investigation.

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform very well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires analysts to be able to process both image and text data, it has established itself as one of the key fields in AI research owing to its broad applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image in a more sophisticated way. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. In contrast, domain experts tend to recognize the image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning using expertise data may cause another type of problem. Simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to purely learn each characteristic point of view. When learning with a vast amount of data, most of this interference is self-purified and has little impact on the learning results. In contrast, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 'image / expertise caption' pairs were created, and this data was used for the expertise transplantation experiments. The experiments confirmed that the captions generated by the proposed methodology reflect the perspective of the transplanted expertise, whereas the captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that many studies will be conducted to address the lack of expertise data and to improve the performance of image captioning.
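The expertise transplant step amounts to fine-tuning a pre-trained captioning model on a small domain-specific caption set. The PyTorch sketch below shows only the general pattern, freezing a pre-trained image encoder and fine-tuning a lightweight caption decoder; the encoder and decoder choices, vocabulary size, and data are assumptions, and the paper's character-independent variant is not reproduced here.

```python
# General transfer-learning pattern for caption fine-tuning (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torchvision

# Pre-trained image encoder; its weights stay frozen during expertise fine-tuning.
encoder = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()                      # expose 512-d image features
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

class CaptionDecoder(nn.Module):
    """Small LSTM decoder conditioned on image features (assumed architecture)."""
    def __init__(self, vocab_size=5000, embed=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.init_h = nn.Linear(512, hidden)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, tokens):
        h0 = self.init_h(feats).unsqueeze(0)    # (1, batch, hidden)
        c0 = torch.zeros_like(h0)
        seq, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(seq)

decoder = CaptionDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)   # only the decoder is updated
criterion = nn.CrossEntropyLoss()

# One dummy fine-tuning step on a tiny "expertise caption" batch.
images = torch.randn(4, 3, 224, 224)
tokens = torch.randint(0, 5000, (4, 12))        # caption token ids
with torch.no_grad():
    feats = encoder(images)                     # (4, 512)
logits = decoder(feats, tokens[:, :-1])
loss = criterion(logits.reshape(-1, 5000), tokens[:, 1:].reshape(-1))
optimizer.zero_grad(); loss.backward(); optimizer.step()
```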

A Study on Bigdata Utilization in Cultural and Artistic Contents Production and Distribution (문화예술 콘텐츠 제작 및 유통에서의 빅데이터 활용 연구)

  • Kim, Hyun-Young; Kim, Jae-Woong
    • The Journal of the Korea Contents Association / v.19 no.7 / pp.384-392 / 2019
  • Research on big data, which deals with the explosive amount of information in the era of the Fourth Industrial Revolution, is actively underway. Big data is an essential element driving the development of artificial intelligence, as it provides the wide range of data that becomes training data for machine learning and deep learning. The use of deep learning and big data in various fields has produced meaningful results. In this paper, we investigated the use of big data in the cultural arts industry, focusing on video content. It is noteworthy that big data is used not only in the distribution of cultural and artistic content but also in the production stage. In particular, we first looked at the achievements and changes that Netflix in the US brought to the OTT business, and analyzed the current state of the OTT business in Korea. We then analyzed the success story of 'House of Cards', which Netflix produced and distributed by applying a deep learning-based prediction algorithm to accumulated customer data. After that, focus group interviews (FGI) were conducted with experts in cultural and artistic content. On this basis, the future prospects of big data in the domestic culture and arts industry are discussed in terms of technical, creative, and ethical aspects.

Development of Image Classification Model for Urban Park User Activity Using Deep Learning of Social Media Photo Posts (소셜미디어 사진 게시물의 딥러닝을 활용한 도시공원 이용자 활동 이미지 분류모델 개발)

  • Lee, Ju-Kyung; Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.50 no.6 / pp.42-57 / 2022
  • This study aims to create a basic model for classifying, using deep learning, the activity photos that urban park users share on social media. For the social media data, photos related to urban parks were collected through a Naver search and used for the classification model. Based on the indicators of Naturalness, Potential Attraction, and Activity, which can be used to evaluate the characteristics of urban parks, 21 classification categories were created. Urban park photos shared on Naver were collected by category, and annotated datasets were created. A custom CNN model and a transfer-learning model based on a pre-trained CNN were designed, trained on the collected photo datasets, and subsequently analyzed. As a result of the study, the Xception transfer-learning model, which demonstrated the best performance, was selected as the urban park user activity image classification model and evaluated with several evaluation indicators. This study is meaningful in that it built an AI model that can serve as an index for evaluating the characteristics of urban parks using photos shared by users on social media. The deep learning classification model mitigates the limitations of manual classification and can efficiently classify large numbers of urban park photos, so it can be a useful method for the monitoring and management of city parks in the future.
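As an illustration of the transfer-learning setup named in the abstract, an ImageNet-pretrained Xception backbone with a new classification head for the 21 categories, the Keras sketch below freezes the backbone and trains only the head; the input size, head layers, and hyperparameters are assumptions rather than the paper's configuration.

```python
# Minimal sketch: Xception transfer learning for a 21-class photo classifier (assumed configuration).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                               # freeze the pre-trained backbone first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(21, activation="softmax"),          # 21 urban-park activity categories
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=...)   # tf.data datasets of labeled photos
# Optionally unfreeze the top Xception blocks afterwards and fine-tune with a lower learning rate.
```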