• Title/Summary/Keyword: New Words Detection

Search Results: 40

Strategy of Technology Development for Landslide Hazards by Patent Analysis (특허 분석을 통한 산사태재해 관련 기술개발 전략)

  • Bae, Khee Su;Sawng, Yeong-Wha;Chae, Byung-Gon;Choi, Junghae;Son, Jeong Keun
    • The Journal of Engineering Geology / v.24 no.4 / pp.615-629 / 2014
  • This study analyzed existing patents related to real-time monitoring and detection technology for landslides on natural terrain. The purpose of the patent analysis is to understand landslide hazard technology trends and to guide the development of new advanced technology. The study searched patent data using keywords related to landslide monitoring and detection in Korea, the USA, Japan, China (Hong Kong), Europe, and Taiwan. The patents were divided into five main categories, each with five to seven subcategories, and analyzed by year, country, and applicant. The results were used to derive a portfolio of promising technologies for each country. The analysis will also contribute to the development of more effective research strategies and help categorize research findings from previous studies on landslide hazards.
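
A minimal sketch of the kind of aggregation such a patent analysis involves, counting filings by year, country, and applicant; the file name and column layout below are hypothetical, not the paper's actual dataset:

```python
import pandas as pd

# Hypothetical input: one row per patent with filing metadata
# (columns: year, country, applicant, category).
patents = pd.read_csv("patents.csv")

# Patent counts by country and year, the core of a filing-trend analysis.
trend = patents.groupby(["country", "year"]).size().unstack(fill_value=0)

# Top five applicants within each main category, to spot leading players.
top_applicants = (
    patents.groupby(["category", "applicant"]).size()
    .groupby(level="category", group_keys=False)
    .nlargest(5)
)
print(trend)
print(top_applicants)
```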

A New True Ortho-photo Generation Algorithm for High Resolution Satellite Imagery

  • Bang, Ki-In;Kim, Chang-Jae
    • Korean Journal of Remote Sensing / v.26 no.3 / pp.347-359 / 2010
  • Ortho-photos provide valuable spatial and spectral information for various Geographic Information System (GIS) and mapping applications. The absence of relief displacement and the uniform scale of ortho-photos enable users to measure distances, compute areas, derive geographic locations, and quantify changes. Differential rectification has traditionally been used for ortho-photo generation. However, differential rectification produces serious artifacts (in the form of ghost images) when dealing with large-scale imagery over urban areas. To avoid these artifacts, true ortho-photo generation techniques have been devised to remove ghost images through visibility analysis and occlusion detection. So far, the Z-buffer method has been one of the most popular methods for true ortho-photo generation. However, it is quite sensitive to the relationship between the cell size of the Digital Surface Model (DSM) and the Ground Sampling Distance (GSD) of the imaging sensor. Another critical issue in true ortho-photo generation from high resolution satellite imagery is the scan line search: because a line camera is involved, the perspective center corresponding to each ground point must be identified. This paper introduces an alternative methodology for true ortho-photo generation that circumvents the drawbacks of the Z-buffer technique and existing scan line search methods. Experiments using real data compare the proposed and existing methods through qualitative and quantitative evaluations as well as computational efficiency. The experimental analysis showed that the proposed method provided the best occlusion-detection success ratio with reasonable processing time compared to all other true ortho-photo generation methods tested in this paper.
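
The Z-buffer idea the abstract contrasts against can be sketched briefly: project every DSM point into the image and, for each image pixel, keep only the point nearest the perspective center; everything else mapping to that pixel is occluded. A minimal illustration assuming a user-supplied frame-style `project` function (the paper itself deals with line cameras and scan-line search, which this sketch omits):

```python
import numpy as np

def z_buffer_visibility(dsm_points, project, image_shape):
    """Classic Z-buffer occlusion detection: for each image pixel, keep the
    DSM point closest to the camera; other points mapping to the same pixel
    are occluded. `project` maps a 3D point to (row, col, distance)."""
    depth = np.full(image_shape, np.inf)
    owner = np.full(image_shape, -1, dtype=int)
    for idx, point in enumerate(dsm_points):
        r, c, d = project(point)
        if 0 <= r < image_shape[0] and 0 <= c < image_shape[1] and d < depth[r, c]:
            depth[r, c] = d
            owner[r, c] = idx
    visible = np.zeros(len(dsm_points), dtype=bool)
    visible[owner[owner >= 0]] = True
    return visible  # True for DSM points that survive occlusion detection
```

The sensitivity the abstract criticizes is visible here: if DSM cells are much coarser than the image GSD, many pixels receive no point at all, and true surface points can be falsely flagged as occluded.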

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is misrecognition errors, where the agent fails to recognize the user's speech entirely. The second type is misinterpretation errors, where the user's speech is recognized and services are provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each text separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation error causes. The results revealed that the most significant gains in detecting misinterpretation errors caused by unregistered neologisms were obtained through initial consonant extraction, and comparison with other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect because they are not logged as failures, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if applied to conversational agents or voice recognition services requiring neologism detection, patterns of errors arising from the voice recognition stage can be specified. The study proposed and verified that, even for interactions not categorized as errors, services can be provided according to the results the user intended.
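
The initial-consonant idea rests on the arithmetic structure of precomposed Hangul: each syllable in U+AC00..U+D7A3 decomposes into initial/medial/final jamo by integer division. A minimal sketch of extraction plus a simple set-based similarity (a stand-in for the embedding-based measures the paper actually uses):

```python
CHOSEONG = [chr(0x1100 + i) for i in range(19)]  # the 19 initial consonants

def initial_consonants(word):
    """Extract the initial consonant (choseong) of each Hangul syllable.
    Precomposed syllables fill U+AC00..U+D7A3 in blocks of 588
    (21 vowels x 28 finals) per initial consonant."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        out.append(CHOSEONG[code // 588] if 0 <= code <= 11171 else ch)
    return "".join(out)

def jaccard(a, b):
    """Set-based Jaccard similarity over extracted sequences; illustrative
    only, since the paper uses word/document embeddings instead."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# A neologism and a misrecognized variant share the same initial
# consonants, so initial-consonant matching still links the pair.
print(jaccard(initial_consonants("신조어"), initial_consonants("신조아")))  # 1.0
```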

A Study of Natural Language Plagiarism Detection

  • Ahn, Byung-Ryul;Kim, Heon;Kim, Moon-Hyun
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.325-329 / 2005
  • As digital informatization progresses, most documents exist in digitized form, and the amount of such information is steadily increasing. It is no exaggeration to say that this newly created information and knowledge affects the competitiveness and the future of our nation. In addition, substantial investment is being made in information- and knowledge-based industries at the national level, along with intensive efforts in research and human resource development. In the digital era, document creation tools and the internet make it easy to create and share information, and as a result, duplicated information increases day by day. At present, much of the information provided online is being plagiarized or illegally copied. In particular, plagiarism is difficult to identify within such a vast amount of information because original sentences can be restructured or reworded with similar words so that they look different from the originals. Thus, managing and protecting knowledge is becoming as important as creating it through investment and effort. This paper suggests a new method and theory for effectively detecting infringement and plagiarism of the intellectual property of others. DICOM (Dynamic Incremental Comparison Method), developed in this research to detect document plagiarism, focuses on realizing a system that detects plagiarized documents and passages efficiently, accurately, and immediately by creating varied detectors.
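
The abstract does not detail DICOM's detectors, so as a generic baseline only, here is the n-gram containment measure commonly used for plagiarism screening; short overlapping chunks survive the sentence restructuring the abstract describes better than whole-sentence comparison does:

```python
def ngrams(text, n=3):
    """Overlapping word n-grams of a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(suspect, source, n=3):
    """Fraction of the suspect's n-grams also found in the source; values
    near 1.0 suggest copying even after light rewording."""
    s = ngrams(suspect, n)
    return len(s & ngrams(source, n)) / len(s) if s else 0.0

print(containment("the quick brown fox jumps over the lazy dog",
                  "a quick brown fox jumps over a lazy dog"))
```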


Behavior and Script Similarity-Based Cryptojacking Detection Framework Using Machine Learning (머신러닝을 활용한 행위 및 스크립트 유사도 기반 크립토재킹 탐지 프레임워크)

  • Lim, EunJi;Lee, EunYoung;Lee, IlGu
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1105-1114 / 2021
  • Due to the recent surge in popularity of cryptocurrency, the threat of cryptojacking, malicious code for mining cryptocurrencies, is increasing. In particular, web-based cryptojacking is easy to carry out: by simply embedding mining scripts in a website, an attacker can mine cryptocurrencies using a victim's PC resources as soon as the victim accesses the site. Cryptojacking attacks cause poor performance and malfunctions, and can even cause hardware failure through overheating and aging induced by mining. Because victims find it difficult to recognize the damage, research is needed to efficiently detect and block cryptojacking. In this work, we take representative, distinctive symptoms of cryptojacking as indicators and propose a new architecture. For behavior-based dynamic analysis, we used a K-Nearest Neighbors (KNN) model trained on computer performance indicators. For script similarity-based static analysis, we used a K-means model trained on the frequency of malicious script words. The KNN model achieved 99.6% accuracy, and the K-means model achieved a silhouette coefficient of 0.61 for normal clusters.
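
A minimal sketch of the behavior-based side, a KNN classifier over performance indicators; the feature set and numbers below are hypothetical placeholders, not the paper's data:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-session features: CPU %, GPU %, memory %, network I/O.
X_train = [[95, 80, 40, 5], [20, 10, 35, 50], [90, 85, 45, 4], [15, 5, 30, 60]]
y_train = [1, 0, 1, 0]  # 1 = cryptojacking, 0 = benign browsing

# Scaling matters for KNN because the distances mix different units.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)
print(model.predict([[92, 78, 42, 6]]))  # flags the resource-hungry session
```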

Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태;문광식;김회린;이영직;정재호
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.27-34 / 2001
  • Utterance verification is used in variable vocabulary word recognition to reject words that are out of vocabulary or incorrectly recognized. It is an important technology for designing a user-friendly speech recognition system. We propose a new utterance verification algorithm for a training-free utterance verification system based on the minimum verification error. First, using a PBW (Phonetically Balanced Words) database (445 words), we create training-free anti-phoneme models that include many PLUs (Phoneme-Like Units), so that the anti-phoneme models achieve the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, the phoneme-based confidence measure, which uses the likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis), is normalized by the null hypothesis score, making it more robust for OOV rejection. The word-based confidence measure built from the phoneme-based measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary words and OOVs. Using the proposed anti-phoneme models and confidence measures, we achieve significant performance improvements: CA (Correct Acceptance of in-vocabulary words) is about 89%, CR (Correct Rejection of OOVs) is about 90%, and ERR (Error Reduction Rate) improves by about 15-21%.
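
One common way to realize the likelihood-ratio confidence measure the abstract describes is shown below; the exact normalization and threshold in the paper may differ, so treat this as a sketch:

```python
import numpy as np

def phoneme_confidence(loglik_phoneme, loglik_anti):
    """Log-likelihood ratio between the phoneme model (null hypothesis) and
    the anti-phoneme model (alternative), normalized by the null-hypothesis
    score as the abstract describes."""
    return (loglik_phoneme - loglik_anti) / abs(loglik_phoneme)

def word_confidence(phoneme_scores):
    """Word-level confidence as the mean of per-phoneme scores; the word is
    rejected as OOV when this falls below a tuned threshold."""
    return float(np.mean(phoneme_scores))

scores = [phoneme_confidence(-120.0, -150.0), phoneme_confidence(-90.0, -95.0)]
print(word_confidence(scores) > 0.05)  # True: accept (hypothetical threshold)
```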


Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems struggle to detect attacks that deviate from stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the entire system, inside and out, but the advantage of detecting intrusions that a network-based system cannot. This study therefore investigates a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection-Data Set (LID-DS) published in 2018. For the performance evaluation, the 1D vector data was converted into 3D image data so that the similarity of samples could be assessed and each sample identified as normal or abnormal. Deep learning models also have the drawback of needing to be retrained whenever a new attack method appears, which is inefficient because learning a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) that applies Few-Shot Learning, which performs well when learning from small amounts of data. The Siamese-CNN determines whether attacks are of the same type from the similarity scores of attack samples converted into images. Accuracy was calculated using the Few-Shot Learning technique, and a Vanilla Convolutional Neural Network (Vanilla-CNN) was compared against the Siamese-CNN to confirm its performance. Measuring Accuracy, Precision, Recall, and F1-Score confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model.
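
A minimal sketch of the Siamese matching idea: a shared CNN encoder embeds two images, and the embedding distance acts as the similarity score. The layer sizes and 28x28 image shape are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    """Shared-weight CNN encoder; the distance between two embeddings is the
    similarity score used for few-shot matching."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(64),
        )

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        return torch.norm(za - zb, dim=1)  # small distance = same attack type

# A query sample takes the class of its nearest support example.
model = SiameseCNN()
query = torch.randn(1, 1, 28, 28)    # hypothetical image built from a 1D record
support = torch.randn(5, 1, 28, 28)  # one labeled example per attack type
distances = model(query.expand(5, -1, -1, -1), support)
print(distances.argmin().item())     # index of the most similar attack type
```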

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Bae, Hyeon;Kim, Sung-Shin;Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.4 / pp.403-408 / 2003
  • A caricature is generally a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many computer-based methods have been developed to create caricature images from human faces. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input and processes it in four steps to create the final caricatured image. The first step detects a face in the input image. The second step extracts coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinates acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and produces the final caricature. The resulting system is simpler than existing systems in how it creates a caricatured image, and it does not require complex algorithms combining many image processing methods such as image recognition, transformation, and edge detection.
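
The fourth step's edge extraction builds on the Sobel operator; the fuzzy variant is not detailed in the abstract, so the sketch below is the plain Sobel gradient plus a simple threshold into a line drawing:

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(gray):
    """Gradient magnitude from horizontal and vertical Sobel filters."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return np.hypot(gx, gy)

def line_sketch(gray, threshold=0.25):
    """Binary line drawing: strong gradients become dark strokes on white,
    a plausible starting point for rendering the caricature."""
    mag = sobel_magnitude(gray)
    mag /= mag.max() if mag.max() > 0 else 1.0
    return np.where(mag > threshold, 0, 255).astype(np.uint8)
```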

A New Endpoint Detection Method Based on Chaotic System Features for Digital Isolated Word Recognition System (음성인식을 위한 혼돈시스템 특성기반의 종단탐색 기법)

  • Zang, Xian;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.5 / pp.8-14 / 2009
  • In speech recognition, pinpointing the endpoints of an utterance even in the presence of background noise is of great importance. Noise present during recording introduces disturbances that complicate matters, since the goal is to obtain the stationary parameters corresponding to each speech section. One major cause of error in automatic recognition of isolated words is inaccurate detection of the beginning and end boundaries of the test and reference templates, hence the need for an effective method of removing the unnecessary regions of a speech signal. Conventional methods for speech endpoint detection are based on two linear time-domain measurements: short-time energy and short-time zero-crossing rate. They perform well for clean speech, but their precision is not guaranteed in the presence of noise, since the high energy and zero-crossing rate of the noise are mistaken for parts of the uttered speech. This paper proposes a novel approach to finding a clear threshold between noise and speech based on Lyapunov Exponents (LEs). The proposed method adopts nonlinear features to analyze the chaotic characteristics of the speech signal instead of depending on unreliable energy measures. Its advantage over conventional methods lies in detecting endpoints through the nonlinearity of the speech signal, an important characteristic that conventional methods have neglected. The method extracts features from the time-domain waveform alone, illustrating its low complexity. Simulations showed the effectiveness of the proposed method in a noisy environment, with an average speaker-independent recognition rate of up to 92.85%.
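
The abstract does not give the exact estimator, but a Rosenstein-style largest-Lyapunov-exponent estimate over a frame conveys the idea: delay-embed the samples, pair each point with its nearest neighbor, and average the log rate at which the pairs diverge. A simplified sketch with assumed embedding parameters:

```python
import numpy as np

def largest_lyapunov(frame, dim=4, tau=2, horizon=10):
    """Simplified Rosenstein-style estimate of the largest Lyapunov exponent
    for one analysis frame. Speech sections show different divergence rates
    than stationary noise, giving a nonlinearity-based endpoint cue."""
    n = len(frame) - (dim - 1) * tau
    emb = np.array([frame[i:i + dim * tau:tau] for i in range(n)])
    rates = []
    for i in range(n - horizon):
        dists = np.linalg.norm(emb - emb[i], axis=1)
        dists[max(0, i - tau):i + tau + 1] = np.inf  # skip temporal neighbors
        j = int(np.argmin(dists))
        if j + horizon < n and np.isfinite(dists[j]) and dists[j] > 0:
            d_later = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
            if d_later > 0:
                rates.append(np.log(d_later / dists[j]) / horizon)
    return float(np.mean(rates)) if rates else 0.0

# Frames whose exponent departs clearly from a noise-only baseline are
# labeled speech, replacing energy/zero-crossing thresholds in noisy audio.
```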

Development of a Detection Model for the Companies Designated as Administrative Issue in KOSDAQ Market (KOSDAQ 시장의 관리종목 지정 탐지 모형 개발)

  • Shin, Dong-In;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.157-176 / 2018
  • The purpose of this research is to develop a detection model for companies designated as administrative issue in the KOSDAQ market using financial data. Administrative issue designation flags companies with a high potential for delisting, giving them time to resolve the grounds for delisting under certain restrictions of the Korean stock market. It acts as an alarm informing investors and market participants which companies are likely to be delisted and warning them to invest safely. Despite this importance, there are relatively few studies on administrative issue prediction models compared with the many studies on bankruptcy prediction models. This study therefore develops and verifies a detection model for companies designated as administrative issue using financial data of KOSDAQ companies. Logistic regression and decision tree are proposed as the data mining models for detecting administrative issues. According to the analysis results, the logistic regression model predicted designation using three variables, ROE (earnings before tax), cash flows/shareholder's equity, and asset turnover ratio, with an overall accuracy of 86% on the validation dataset. The decision tree (Classification and Regression Trees, CART) model applied classification rules using cash flows/total assets and ROA (net income), reaching an overall accuracy of 87%. The implications of the financial indicators selected by the two models are as follows. First, ROE (earnings before tax) in the logistic detection model reflects the profit and loss of the business segments that will continue, excluding the revenue and expenses of discontinued operations. A weakening of this variable therefore means that the competitiveness of the core business is weakening, and if a large part of profits comes from one-off items, deterioration of the business is likely to intensify further. As the ROE of a KOSDAQ company decreases significantly, the likelihood of delisting rises. Second, cash flows to shareholder's equity represent the firm's ability to generate cash with the financial condition of subsidiaries excluded. In other words, weakening management capacity of the parent company itself, apart from its subsidiaries' competence, can be a main driver of administrative issue designation. Third, a low asset turnover ratio means that current and non-current assets are used ineffectively, or that asset investment is excessive. If the asset turnover ratio of a KOSDAQ-listed company decreases, corporate activities should be examined in detail from various perspectives, such as weakening sales or changes in inventories. Cash flows/total assets, the variable selected by the decision tree model, is a key indicator of the company's cash condition and its ability to generate cash from operating activities. Cash flow indicates whether a firm can perform its main activities (maintaining its operating ability, repaying debts, paying dividends, and making new investments) without relying on external financial resources; if this variable is negative, the company may have serious problems in its business activities. If a company's cash flow from operating activities is smaller than its net profit, the net profit has not been converted into cash, indicating a serious problem in managing trade receivables and inventory. Therefore, as cash flows/total assets decrease, the probability of administrative issue designation and the probability of delisting increase. In summary, the logistic regression-based detection model was found to be driven by the company's financial performance, including ROE (earnings before tax), whereas the decision tree-based model predicts designation based on the company's cash flows.
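
A minimal sketch of the logistic regression side using the paper's three selected predictors; the training rows below are hypothetical placeholders for illustration, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical rows: ROE (earnings before tax), cash flows / shareholder's
# equity, asset turnover ratio. Label 1 = designated as administrative issue.
X = np.array([[0.12, 0.25, 1.1], [-0.30, -0.05, 0.4],
              [0.08, 0.18, 0.9], [-0.45, -0.12, 0.3]])
y = np.array([0, 1, 0, 1])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Probability of designation for a new filing; per the paper's findings,
# weaker ROE, cash generation, and asset turnover should raise this risk.
print(model.predict_proba([[-0.2, 0.01, 0.5]])[0, 1])
```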