• Title/Summary/Keyword: Semantic Technology


International Patent Classification Using Latent Semantic Indexing (잠재 의미 색인 기법을 이용한 국제 특허 분류)

  • Jin, Hoon-Tae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.1294-1297
    • /
    • 2013
  • This paper studies a system that automatically classifies patent documents according to the International Patent Classification (IPC) standard through machine learning, and proposes a method for improving classification performance using Latent Semantic Indexing (LSI). Previous research on automatic IPC classification of patent documents relied on word-matching indexing techniques; however, considering the pace at which modern technical terminology emerges and its diversity, we judged that a concept-based approach to terms would be more effective than raw word frequency for analyzing the relatedness between patent documents, which motivated this study of LSI-based classification. In the experiments, the performance of Information Gain (IG) and the chi-square statistic (CHI), the representative feature-selection methods of word-matching indexing, was compared against LSI using SVM, kNN, and Naive Bayes classifiers. Using SVM, the best-performing classifier, we then evaluated how much nouns contribute to building the conceptual semantic structure of terms in LSI, and experimentally compared the range of singular values that yields the best LSI performance. The results show that LSI outperformed the word-matching techniques (IG, CHI). SVM and Naive Bayes performed at a similar level with word matching, but with LSI the performance of SVM was far superior: SVM improved by about 3% with LSI, while Naive Bayes instead degraded by 20%. Regarding the influence of nouns on the latent semantic structure, using only nouns improved results by about 10% over using all words as content words, and in the analysis by singular-value range, the best performance was obtained when retaining ranks at about the 30% level.
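The LSI step described above can be sketched as a truncated SVD over a term-document matrix; the toy matrix, vocabulary, and rank k below are illustrative stand-ins, not the paper's patent data:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = patent documents.
# (Illustrative counts only; a real system would use a large sparse matrix.)
A = np.array([
    [2, 0, 1, 0],   # "engine"
    [1, 0, 2, 0],   # "motor"
    [0, 3, 0, 1],   # "protein"
    [0, 1, 0, 2],   # "enzyme"
], dtype=float)

# LSI: a truncated SVD keeps only the top-k singular values, projecting
# documents into a low-rank "concept" space where related terms align.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Documents 0 and 2 (mechanical terms) end up close in concept space,
# while document 1 (biochemical terms) stays far from both.
```

Classification then operates on `docs_latent` rows instead of raw term counts, which is what lets concept-level similarity replace exact word matching.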

Analysis on the Trend of The Journal of Information Systems Using TLS Mining (TLS 마이닝을 이용한 '정보시스템연구' 동향 분석)

  • Yun, Ji Hye;Oh, Chang Gyu;Lee, Jong Hwa
    • The Journal of Information Systems
    • /
    • v.31 no.1
    • /
    • pp.289-304
    • /
    • 2022
  • Purpose The development of the network and mobile industries has induced companies to invest in information systems, leading to a new industrial revolution. The Journal of Information Systems, which in the 1990s developed the information systems field into a subject of theoretical and practical study, carries a 30-year history of information systems. This study aims to identify the academic values and research trends of JIS by analyzing those trends. Design/methodology/approach This study analyzes the trend of JIS by combining several methods, referred to as TLS mining analysis. TLS mining analysis consists of a series of analyses including a Term Frequency-Inverse Document Frequency (TF-IDF) weight model, Latent Dirichlet Allocation (LDA) topic modeling, and text mining with Semantic Network Analysis. First, keywords are extracted from the research data using the TF-IDF weight model; topic modeling is then performed using the LDA algorithm to identify issue keywords. Findings The current study used the summary service for published research papers provided by the Korea Citation Index to analyze JIS. 714 papers published from 2002 to 2021 were divided into two periods: 2002-2011 and 2012-2021. In the first period (2002-2011), the research trend in the information systems field focused on e-business strategies, as most companies adopted online business models. In the second period (2012-2021), data-based information technology and new industrial-revolution technologies such as artificial intelligence, SNS, and mobile were the main research issues in the information systems field. In addition, keywords for improving the JIS citation index were presented.
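The TF-IDF weighting stage of the TLS pipeline can be sketched in a few lines of plain Python; the tokenized toy "papers" below are hypothetical stand-ins for the JIS abstracts, not the study's data:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents.
    TF = term count / document length; IDF = log(N / document frequency)."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

papers = [
    ["information", "system", "ebusiness", "strategy"],
    ["information", "system", "ai", "mobile"],
    ["ai", "mobile", "sns", "data"],
]
w = tf_idf(papers)
```

Terms concentrated in few documents (e.g. "ebusiness") score higher than terms spread across the corpus, which is why TF-IDF is used to pick out period-specific keywords before LDA topic modeling.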

Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex) (한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.3
    • /
    • pp.493-501
    • /
    • 2022
  • This paper studies context-sensitive spelling error correction, using the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of embedded words. The Korean WordNet was adapted from WordNet[3], developed at Princeton University in the United States, and additionally constructed for Korean. To learn a semantic network in graph form, or to use its learned vector information, the network must be transformed into vector form through embedding learning. For this transformation, before training, we list a limited number of nodes of the graph in a line, like words in a sentence. One learning technique that uses this strategy is DeepWalk[4]. DeepWalk is used to learn the graph between words in the Korean WordNet. The graph embedding information is concatenated with the word vector information of the trained language model used for correction, and the final correction word is determined by the cosine distance between the vectors. To test whether the graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and tested from the perspective of Word Sense Disambiguation (WSD). In the experimental results, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline.
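The DeepWalk-style preprocessing described above (listing graph nodes in a line, like words in a sentence) and the cosine-distance decision rule can be sketched as follows; the toy graph, vectors, and function names are illustrative, not KorLex data or the paper's embeddings:

```python
import random

# Toy semantic network (adjacency list), standing in for KorLex relations.
graph = {
    "eat":  ["food", "meal"],
    "food": ["eat", "meal", "rice"],
    "meal": ["eat", "food"],
    "rice": ["food"],
}

def random_walks(graph, walk_len=5, walks_per_node=2, seed=0):
    """Turn the graph into 'sentences' of nodes, as DeepWalk does before
    feeding them to a skip-gram model (the training itself is omitted)."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))

def pick_candidate(context_vec, candidates):
    """Choose the correction candidate whose (concatenated) vector is
    closest to the context vector by cosine similarity."""
    return max(candidates, key=lambda nv: cosine(context_vec, nv[1]))[0]

walks = random_walks(graph)
```

Each walk is a valid path through the graph, so skip-gram training on the walks places graph-adjacent words near each other, mirroring how the paper augments the language model's word vectors.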

Attention-based word correlation analysis system for big data analysis (빅데이터 분석을 위한 어텐션 기반의 단어 연관관계 분석 시스템)

  • Chi-Gon, Hwang;Chang-Pyo, Yoon;Soo-Wook, Lee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.41-46
    • /
    • 2023
  • With recent advances in machine learning, big data analysis can draw on a variety of techniques. However, big data collected in the real world lacks an automated refinement technique for identical or similar terms based on semantic analysis of the relationships between words. Since most big data is described in ordinary sentences, it is difficult to grasp the meaning of the sentences and their terms. Solving these problems requires morphological analysis and an understanding of sentence meaning. Accordingly, NLP, the family of techniques for analyzing natural language, can capture the relationships between words and sentences. Among NLP techniques, the transformer was proposed as a way to overcome the disadvantages of RNNs by using self-attention within an encoder-decoder structure, as in seq2seq. In this paper, transformers are used to form associations between words in order to understand the words and phrases of sentences extracted from big data.
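The self-attention mechanism referred to above can be sketched as scaled dot-product attention; the matrix sizes and random weights below are arbitrary illustrations, not the paper's model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention as used in the transformer:
    each word attends to every word, producing association weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the word axis: each row sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))               # 4 words, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

The attention matrix `attn` is exactly the word-to-word association strength the paper mines from big-data sentences: row i tells how strongly word i relates to each other word.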

A Study on Zero Pay Image Recognition Using Big Data Analysis

  • Kim, Myung-He;Ryu, Ki-Hwan
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.3
    • /
    • pp.193-204
    • /
    • 2022
  • Zero Pay, launched in Seoul in 2018, is a policy actively promoted by the government as an economic stimulus package for small business owners and the self-employed suffering from the economic depression caused by COVID-19. However, controversy over the effectiveness of Zero Pay has continued even two years after the policy's implementation. Zero Pay is a joint QR-code mobile payment service introduced by the government, the city of Seoul, financial companies, and private simple-payment providers to reduce the burden of card merchant fees for small business owners and the self-employed hit by the economic downturn; it was also an attempt at economic revitalization aimed at bringing customers back to alleyway commercial districts[1]. Therefore, this study aims to draw implications for improvement measures so that the ongoing Zero Pay program can be further activated and the economy can stabilize. The analysis results of this study are as follows. First, Zero Pay's economic revitalization policy shows the effect of increasing the income of small business owners by inducing consumption in alleyway districts. Second, the issuance and distribution of Zero Pay helps revitalize the local economy and contributes to establishing a virtuous-cycle system. Third, stable operation is being realized through the introduction of blockchain technology into the Zero Pay platform. In terms of academic significance, this study was able to identify the direction of Zero Pay's policies and systems and changes in the use of Zero Pay through big data analysis. The Zero Pay policy is in its infancy, and there are limits to the factors available for examining consumers' image perception of Zero Pay, as prior studies are insufficient. Therefore, continuous follow-up research on Zero Pay should be conducted.

Research on Tourist Perception of Grand Canal Cultural Heritage Based on Network Text Analysis : The Pingjiang Historical and Cultural District of Suzhou City as an example (네트워크 텍스트 분석을 통한 대운하 문화유산에 대한 관광객 인식 연구 : 쑤저우시 핑장역사문화지구의 예)

  • Chengkang Zheng;Qiwei Jing;Nam Kyung Hyeon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.215-231
    • /
    • 2023
  • Taking the Pingjiang historical and cultural district in Suzhou as an example, this paper collects 1,436 tourist comments from Ctrip.com using Python, and applies network text analysis to word frequency, semantic networks, and sentiment in order to evaluate the characteristics and level of tourists' perception of the Grand Canal cultural heritage. The study found that natural and humanistic landscapes, historical and cultural deposits, and the style of the Jiangnan Canal are fully reflected in visitors' perception of the Pingjiang Historical and Cultural District, and that tourists hold strongly positive emotions toward the Pingjiang Road historical and cultural district; however, there is still room for the transformation and upgrading of the district. Finally, suggestions for improving tourists' perception of the Grand Canal cultural heritage are given in terms of conservation first, cultural integration, and innovative utilization.
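The word-frequency and semantic (co-occurrence) network steps of network text analysis can be sketched as follows; the toy comment tokens are invented stand-ins for the 1,436 Ctrip.com reviews:

```python
from collections import Counter
from itertools import combinations

# Toy tokenized comments (illustrative only).
comments = [
    ["canal", "scenery", "beautiful"],
    ["canal", "history", "culture"],
    ["scenery", "beautiful", "night"],
]

def frequency_and_network(docs):
    """Word frequency plus a co-occurrence network: nodes are words,
    edge weights count how often two words share a comment."""
    freq = Counter()
    edges = Counter()
    for doc in docs:
        words = sorted(set(doc))        # dedupe within a comment
        freq.update(words)
        edges.update(combinations(words, 2))
    return freq, edges

freq, edges = frequency_and_network(comments)
```

High-frequency nodes and heavy edges in `edges` are what get drawn as the semantic network; a sentiment lexicon pass over the same tokens would supply the emotion analysis.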

Deep learning-based post-disaster building inspection with channel-wise attention and semi-supervised learning

  • Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Abhishek Subedi;Mohammad R. Jahanshahi
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.365-381
    • /
    • 2023
  • The existing vision-based techniques for inspection and condition assessment of civil infrastructure are mostly manual and consequently time-consuming, expensive, subjective, and risky. As a viable alternative, researchers have resorted to deep learning-based autonomous damage detection algorithms for expedited post-disaster reconnaissance of structures. Although a number of automatic damage detection algorithms have been proposed, the scarcity of labeled training data remains a major concern. To address this issue, this study proposes a semi-supervised learning (SSL) framework based on consistency regularization and cross-supervision. Image data from post-earthquake reconnaissance, containing cracks, spalling, and exposed rebars, are used to evaluate the proposed solution. Experiments are carried out under different data partition protocols, and it is shown that the proposed SSL method can use unlabeled images to enhance segmentation performance when only a limited amount of ground-truth labels is provided. This study also proposes DeepLab-AASPP and modified versions of U-Net++ based on a channel-wise attention mechanism to better segment the components and damage areas in images of reinforced concrete buildings. The channel-wise attention mechanism can effectively improve the performance of the network by dynamically scaling the feature maps so that the networks can focus on the more informative feature maps in the concatenation layer. The proposed DeepLab-AASPP achieves the best performance on the component segmentation and damage state segmentation tasks, with mIoU scores of 0.9850 and 0.7032, respectively. For the crack, spalling, and rebar segmentation tasks, the modified U-Net++ obtains the best performance, with IoU scores (excluding the background pixels) of 0.5449, 0.9375, and 0.5018, respectively. The proposed architectures won second place in the IC-SHM2021 competition in all five tasks of Project 2.
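The IoU/mIoU metrics quoted in the abstract can be computed as below; the toy one-dimensional label maps are illustrative, and the `ignore_class` option mirrors the abstract's exclusion of background pixels:

```python
def iou_per_class(pred, truth, num_classes, ignore_class=None):
    """Intersection-over-Union per class for flattened label maps;
    mIoU is the mean over the retained classes."""
    ious = {}
    for c in range(num_classes):
        if c == ignore_class:
            continue
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        ious[c] = inter / union if union else float("nan")
    return ious

# 0 = background, 1 = crack, 2 = spalling (toy flattened "masks")
truth = [0, 0, 1, 1, 2, 2, 2, 0]
pred  = [0, 1, 1, 1, 2, 2, 0, 0]
ious = iou_per_class(pred, truth, num_classes=3, ignore_class=0)
miou = sum(ious.values()) / len(ious)
```

In practice `pred` and `truth` come from flattening the predicted and ground-truth segmentation masks of each image before averaging over the dataset.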

A Study on the Evaluation Differences of Korean and Chinese Users in Smart Home App Services through Text Mining based on the Two-Factor Theory: Focus on Trustness (이요인 이론 기반 텍스트 마이닝을 통한 한·중 스마트홈 앱 서비스 사용자 평가 차이에 대한 연구: 신뢰성 중심)

  • Yuning Zhao;Gyoo Gun Lim
    • Journal of Information Technology Services
    • /
    • v.22 no.3
    • /
    • pp.141-165
    • /
    • 2023
  • With the advent of the fourth industrial revolution, technologies such as the Internet of Things, artificial intelligence, and cloud computing are developing rapidly, and smart homes enabled by these technologies are quickly gaining popularity. To gain a competitive advantage in the global market, companies must understand the differences in consumer needs across countries and cultures and develop corresponding business strategies. Therefore, this study conducts a comparative analysis of consumer reviews of smart homes in South Korea and China. This study collected online reviews of SmartThings, ThinQ, Msmarthom, and MiHome, the four most commonly used smart home apps in Korea and China. The collected review data are divided into satisfied and dissatisfied reviews according to the ratings, and topics are extracted from each review dataset using LDA topic modeling. Next, the extracted topics are classified according to five evaluation factors proposed by previous studies: Perceived Usefulness, Reachability, Interoperability, Trustness, and Product Brand. Then, by comparing the importance of each evaluation factor in the satisfied and dissatisfied datasets, we identify the factors that affect consumer satisfaction and dissatisfaction and compare the differences between users in Korea and China. We found that Trustness and Reachability are very important factors. Finally, through language network analysis, the relationships between dissatisfaction factors are analyzed at a more microscopic level, and improvement plans are proposed to the companies based on the analysis results.

Road Image Recognition Technology based on Deep Learning Using TIDL NPU in SoC Environment (SoC 환경에서 TIDL NPU를 활용한 딥러닝 기반 도로 영상 인식 기술)

  • Yunseon Shin;Juhyun Seo;Minyoung Lee;Injung Kim
    • Smart Media Journal
    • /
    • v.11 no.11
    • /
    • pp.25-31
    • /
    • 2022
  • Deep learning-based image processing is essential for autonomous vehicles. To process road images in real time in a System-on-Chip (SoC) environment, deep learning models must be executed on an NPU (Neural Processing Unit) specialized for deep learning operations. In this study, we ported seven open-source image processing deep learning models, originally developed on GPU servers, to the Texas Instruments Deep Learning (TIDL) NPU environment. Through performance evaluation and visualization, we confirmed that the ported models operate normally in the SoC virtual environment. This paper introduces the problems that occurred during the migration process due to the limitations of the NPU environment and how we solved them, thereby providing a reference case for developers and researchers who want to port deep learning models to SoC environments.

A Study on the Definition of Data Literacy for Elementary and Secondary Artificial Intelligence Education (초·중등 인공지능 교육을 위한 데이터 리터러시 정의 연구)

  • Kim, SeulKi;Kim, Taeyoung
    • Proceedings of the Korean Association of Information Education Conference
    • /
    • 2021.08a
    • /
    • pp.59-67
    • /
    • 2021
  • The development of AI technology has brought about big changes in our lives. As AI's influence grows from daily life to society and the economy, the importance of education on AI and data is also growing. In particular, the OECD education research reports and various domestic informatics curriculum studies address data literacy and present it as an essential competency. Looking at domestic and international studies, one can see that the specific content and scope of the definition of data literacy differ from researcher to researcher. Thus, the definitions in major studies related to data literacy were analyzed from various angles and a definition was derived. Word frequency analysis and the Word2vec natural language processing method were used to analyze semantic similarities among the definitions in key studies, and, drawing on the content elements of curriculum research, the definition 'understanding and using data to process information' was derived. Based on the definition of data literacy derived in this study, we hope that the contents will be revised and supplemented, and that further research will provide a good foundation for educational research that develops students' future capabilities.
