• Title/Summary/Keyword: Task Extraction

A Deep Learning Application for Automated Feature Extraction in Transaction-based Machine Learning (트랜잭션 기반 머신러닝에서 특성 추출 자동화를 위한 딥러닝 응용)

  • Woo, Deock-Chae;Moon, Hyun Sil;Kwon, Suhnbeom;Cho, Yoonho
    • Journal of Information Technology Services / v.18 no.2 / pp.143-159 / 2019
  • Machine learning (ML) fits given data to a mathematical model in order to derive insights or make predictions. In the age of big data, where the amount of available data grows exponentially with the development of information technology and smart devices, ML achieves high prediction performance through unbiased pattern detection. Feature engineering, which generates the features that explain the problem to be solved, has a great influence on model performance, and its importance is continuously emphasized. Despite this importance, it is still considered a difficult task, as it requires a thorough understanding of the domain as well as of the source data, along with an iterative procedure. We therefore propose methods that apply deep learning to reduce the complexity and difficulty of feature extraction and to improve the performance of ML models. The most common reason for the superior performance of deep learning on complex unstructured data is that it can extract features from the source data itself. To bring this advantage to business problems, we propose deep learning based methods that automatically extract features from transaction data or directly predict and classify target variables. In particular, we applied techniques that perform well in text processing, exploiting the structural similarity between transaction data and text data, and verified the suitability of each method with respect to the characteristics of transaction data. Our study not only explores the possibility of automated feature extraction but also yields a benchmark model that shows a certain level of performance before any human feature extraction is performed. It is also expected to provide guidelines for choosing a suitable deep learning model based on the business problem and the data characteristics.
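
The abstract above describes the core idea (treating a transaction, i.e., a sequence of item codes, like a sentence of tokens so that an embedding layer can learn features end-to-end) without code. Below is a minimal sketch of that idea in Keras; the vocabulary size, sequence length, layer sizes, and binary target are illustrative assumptions, not the authors' configuration.

```python
# Sketch: treat each transaction (a sequence of item codes) like a "sentence"
# and let an embedding layer learn item features end-to-end, instead of
# hand-crafting aggregate features. Vocabulary size, sequence length, and the
# binary target below are illustrative assumptions, not the paper's setup.
import numpy as np
from tensorflow import keras

n_items = 5000        # number of distinct item codes (assumed)
max_len = 30          # items per transaction, padded/truncated (assumed)

# Toy data: 1,000 transactions and a binary target (e.g., churn / no churn).
X = np.random.randint(1, n_items, size=(1000, max_len))
y = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.layers.Embedding(input_dim=n_items, output_dim=32),  # item "word" vectors
    keras.layers.GlobalAveragePooling1D(),                     # transaction-level feature
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# The pooled embedding output can also be reused as an automatically
# extracted feature vector for a separate downstream model.
```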

FastText and BERT for Automatic Term Extraction (FastText 와 BERT 를 이용한 자동 용어 추출)

  • Choi, Kyu-Hyun;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.612-616 / 2021
  • Selecting appropriate terms from text is important for performing various natural language processing tasks well. To automatically extract appropriate terms from text, various models can be trained to extract n-grams that reflect the characteristics of terms. In this study, we present methods that combine existing neural network models to improve automatic term extraction performance and compare their respective results.
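
The abstract does not detail the model combination. As a rough illustration of embedding-based term scoring with FastText, the sketch below trains word vectors with gensim and ranks candidate n-grams by similarity to an assumed seed term; the corpus, seed term, and scoring rule are assumptions, and the BERT side of the paper is omitted.

```python
# Sketch: score candidate n-grams as term candidates with FastText vectors.
# This illustrates the general idea of embedding-based term scoring only;
# it is not the authors' model combination, and the corpus, seed term, and
# scoring rule are assumptions.
from gensim.models import FastText
from gensim.utils import simple_preprocess
import numpy as np

corpus = [
    "convolutional neural network for image classification",
    "recurrent neural network language model",
    "support vector machine for text classification",
]
tokens = [simple_preprocess(doc) for doc in corpus]

model = FastText(sentences=tokens, vector_size=50, window=3, min_count=1, epochs=50)

def ngrams(words, n):
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def vector(phrase):
    # FastText can embed unseen words via character n-grams.
    return np.mean([model.wv[w] for w in phrase.split()], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

seed = vector("neural network")   # assumed seed term for the target domain
candidates = {g for doc in tokens for n in (1, 2) for g in ngrams(doc, n)}

ranked = sorted(candidates, key=lambda g: cosine(vector(g), seed), reverse=True)
print(ranked[:5])   # top-ranked term candidates
```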

Latent Keyphrase Extraction Using Deep Belief Networks

  • Jo, Taemin;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.3 / pp.153-158 / 2015
  • Nowadays, automatic keyphrase extraction is considered to be an important task. Most previous studies focused only on selecting keyphrases that appear within the body of the input documents, overlooking latent keyphrases that do not appear in them. In addition, the small number of studies on latent keyphrase extraction had some structural limitations. Although latent keyphrases do not appear in documents, they can still play an important role in text mining, because they link meaningful concepts or contents of documents and can be utilized for short texts such as social network service posts, which rarely contain explicit keyphrases. In this paper, we propose a new approach that selects qualified latent keyphrases for input documents and overcomes these structural limitations by using deep belief networks in a supervised manner. The main idea is to capture the intrinsic representations of documents and use them to extract eligible latent keyphrases. Our experimental results showed that latent keyphrases were successfully extracted using the proposed method.
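
The paper's deep belief network is not reproduced here. As a simplified stand-in for the supervised pipeline it describes (learn an intrinsic document representation, then predict keyphrases that need not occur in the text), the sketch below pre-trains a single RBM layer on bag-of-words vectors and fits a multi-label classifier on top; the documents, labels, and dimensions are toy assumptions.

```python
# Sketch: learn an intrinsic document representation with an RBM and use it
# to predict keyphrases that need not appear in the document text.
# This is a simplified stand-in for the paper's deep belief network
# (one RBM layer instead of a stack); data and labels are toy assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "stock prices fell sharply after the earnings report",
    "the team won the championship in overtime",
    "central bank raised interest rates to fight inflation",
    "star striker scored twice in the final match",
]
# Latent keyphrases: the labels need not literally occur in the documents.
labels = [["finance"], ["sports"], ["finance"], ["sports"]]

Y = MultiLabelBinarizer().fit_transform(labels)
X = CountVectorizer(binary=True).fit_transform(docs)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=50, random_state=0)),
    ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])
model.fit(X, Y)
print(model.predict(X))   # multi-label keyphrase predictions per document
```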

Survey of Temporal Information Extraction

  • Lim, Chae-Gyun;Jeong, Young-Seob;Choi, Ho-Jin
    • Journal of Information Processing Systems / v.15 no.4 / pp.931-956 / 2019
  • Documents contain information that can be used in various applications, such as question answering (QA), information retrieval (IR), and recommendation systems. To use this information, it is necessary to develop methods of extracting it from documents written in natural language. There are several kinds of information (e.g., temporal, spatial, and semantic role information), and different kinds are extracted with different methods. In this paper, existing studies on methods of extracting temporal information are reviewed and several related issues are discussed: the task boundary of temporal information extraction, the history of the annotation languages and shared tasks, open research issues, applications that use temporal information, and evaluation metrics. Although the history of temporal information extraction tasks is not long, many studies have tried various methods. This paper indicates which approaches are known to work better for extracting particular parts of the temporal information and also suggests future research directions.
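
As a minimal illustration of the first sub-task the survey discusses, spotting temporal expressions in raw text, the sketch below uses a few hand-written patterns; real systems normalize such spans to TimeML/TIMEX3 values and link them to events, and the patterns shown are only examples, not any system from the survey.

```python
# Sketch: a minimal rule-based temporal expression spotter, illustrating the
# first step of temporal information extraction (finding TIMEX-like spans).
# Real systems normalize these spans and link them to events; the patterns
# below are only examples.
import re

PATTERNS = [
    r"\b\d{4}-\d{2}-\d{2}\b",                         # e.g., 2019-04-30
    r"\b(January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+\d{1,2},\s+\d{4}\b",
    r"\b(yesterday|today|tomorrow|next week|last year)\b",
]

def find_temporal_expressions(text):
    spans = []
    for pattern in PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            spans.append((m.start(), m.end(), m.group(0)))
    return sorted(spans)

print(find_temporal_expressions(
    "The paper was accepted on April 30, 2019 and will be presented next week."
))
```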

A Study on 3D Road Extraction From Three Linear Scanner

  • Yun, SHI;SHIBASAKI, Ryosuke
    • Proceedings of the KSRS Conference / 2003.11a / pp.301-303 / 2003
  • The extraction of 3D road networks from high-resolution aerial images is still one of the current challenges in digital photogrammetry and computer vision. Many research groups have worked on this task for years, but until now there have been no papers addressing it with TLS (Three Linear Scanner) imagery, which has been developed over the past several years and offers very high resolution (about 3 cm on the ground). In this paper, we present a methodology for road extraction from high-resolution digital imagery taken over urban areas using this modern photogrammetric scanner (TLS). The key features of the approach are: (1) because of the high resolution of TLS images, our extraction method is designed specifically for constructing a 3D road map for a next-generation digital navigation map; (2) for extracting roads, we use the global context of the intensity variations associated with different road features (i.e., zebra lines and center lines) prior to any local edge detection, so extraction becomes comparatively easy, because we can apply different specialized edge detectors according to the different features. The results achieved with our approach show that it is possible and economical to extract 3D road data from Three Linear Scanner imagery to construct a next-generation digital navigation road map.
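
The TLS processing chain itself is not published as code. As a rough illustration of the edge-based cues the abstract mentions (line-like markings found from intensity variations), the sketch below runs a generic edge detector and a probabilistic Hough transform; the synthetic test image and all thresholds are assumptions, not the authors' pipeline.

```python
# Sketch: find line-like road markings with a generic edge detector plus a
# probabilistic Hough transform, illustrating the intensity/edge cues the
# abstract mentions. Not the authors' TLS pipeline; the synthetic test image
# and all thresholds are assumptions.
import cv2
import numpy as np

# Synthetic stand-in for an aerial tile: dark asphalt with one bright center line.
image = np.full((256, 256), 40, dtype=np.uint8)
cv2.line(image, (20, 230), (230, 20), color=255, thickness=3)

blurred = cv2.GaussianBlur(image, (5, 5), 0)          # suppress texture noise
edges = cv2.Canny(blurred, 50, 150)                   # generic edge map

# Keep only long, roughly straight segments: candidates for lane/center lines.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print("road marking candidate:", (x1, y1), "->", (x2, y2))
```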

A study on the speech feature extraction based on the hearing model (청각 모델에 기초한 음성 특징 추출에 관한 연구)

  • 김바울;윤석현;홍광석;박병철
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.131-140 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the discrete inverse wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, when using DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively, and, when using VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has the potential to be used as a simple and efficient feature for recognition tasks.
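
The abstract spells out the processing chain explicitly, so a sketch of that chain with PyWavelets is shown below; the wavelet family, decomposition depth, and frame length are assumptions rather than the values used in the paper.

```python
# Sketch of the processing chain described in the abstract: block
# normalization, wavelet multi-resolution analysis and re-synthesis,
# differentiation, full-wave rectification, and integration.
# The wavelet, decomposition level, and frame length are assumptions.
import numpy as np
import pywt

def hearing_model_feature(frame, wavelet="db4", level=3):
    frame = frame / (np.max(np.abs(frame)) + 1e-12)      # normalize by max value
    coeffs = pywt.wavedec(frame, wavelet, level=level)   # multi-resolution analysis
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(keep, wavelet)               # re-synthesize one band
        band = np.diff(band)                             # differentiation
        band = np.abs(band)                              # full-wave rectification
        bands.append(np.sum(band))                       # integration (band energy)
    return np.array(bands)

# Toy frame: 20 ms of a 1 kHz tone at 16 kHz sampling.
t = np.arange(320) / 16000.0
print(hearing_model_feature(np.sin(2 * np.pi * 1000 * t)))
```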

Design & Implementation of a PC-Cluster for Image Feature Extraction of a Content-Based Image Retrieval System (내용기반 화상검색 시스템의 화상 특징 추출을 위한 PC-Cluster의 설계 및 구현)

  • 김영균;오길호
    • Proceedings of the Korean Information Science Society Conference / 2004.04a / pp.700-702 / 2004
  • In this paper, we study a PC cluster composed of idle PCs in a TCP/IP LAN environment to perform, at high speed, the image feature extraction required by a content-based image retrieval system. The image features used in the experiments are CCV (Color Coherence Vector), which measures color coherence, PIM (Picture Information Measure), which quantifies image entropy, and SEV (Spatial Edge Histogram Vector), which uses a Gaussian-Laplacian edge detection operation. A traditional task-farming PC cluster was constructed in which the extraction task is sent from the master node to the slave nodes together with the image data to be processed, the computation is performed, and the results are returned to the master node. We measured and compared the computation time while increasing the number of cluster nodes participating in the computation. The experimental results can be used to build an efficient content-based image retrieval system on a PC cluster of idle PCs.
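
As a rough single-machine analogue of the task-farming pattern described above, the sketch below farms image-feature tasks out to a pool of worker processes and collects the results; the simplified histogram and edge features stand in for CCV, PIM, and SEV, and a process pool replaces the TCP/IP master/slave nodes.

```python
# Sketch: task-farming pattern, with a master handing image-feature tasks to
# workers and collecting results. A process pool on one machine stands in for
# the TCP/IP PC cluster; the gray histogram and edge measure are simplified
# stand-ins for CCV, PIM, and SEV.
import numpy as np
from multiprocessing import Pool

def extract_features(image):
    hist, _ = np.histogram(image, bins=16, range=(0, 256))   # coarse intensity histogram
    gy, gx = np.gradient(image.astype(float))
    edge_strength = float(np.mean(np.hypot(gx, gy)))          # crude edge measure
    return hist.tolist(), edge_strength

if __name__ == "__main__":
    images = [np.random.randint(0, 256, (64, 64)) for _ in range(100)]  # toy "image DB"
    with Pool(processes=4) as pool:                   # 4 worker "nodes"
        results = pool.map(extract_features, images)  # farm tasks out, gather results
    print(len(results), "feature vectors collected")
```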

Content-Aware Convolutional Neural Network for Object Recognition Task

  • Poernomo, Alvin;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.5 no.3 / pp.1-7 / 2016
  • In existing Convolutional Neural Networks (CNNs) for object recognition, there are only a few known efforts to reduce noise in the input images. Both convolution and pooling layers perform feature extraction without considering this noise, treating all pixels as equally important. In the computer vision field, there has been work on weighting pixel importance: seam carving resizes an image by sacrificing the least important pixels, leaving only the most important ones. We propose a new way to combine the seam carving approach with an existing CNN model for the object recognition task. We attempt to remove the noise, i.e., the "unimportant" pixels, in the image before convolution and pooling, in order to obtain better feature representations. Our model shows promising results on the CIFAR-10 dataset.
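
Seam carving itself is a concrete algorithm, so a minimal sketch of removing one low-energy vertical seam is shown below; it illustrates the kind of content-aware reduction applied before convolution and pooling, not the paper's full preprocessing pipeline or its CNN.

```python
# Sketch: remove one low-energy vertical seam from an image, the kind of
# content-aware shrinking applied before convolution and pooling.
# Minimal single-seam illustration only; the toy image is an assumption.
import numpy as np

def remove_vertical_seam(gray):
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)                  # gradient-magnitude energy

    # Dynamic programming: cumulative minimum energy from the top row down.
    cost = energy.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()

    # Backtrack the cheapest seam and drop one pixel per row.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))

    return np.array([np.delete(row, seam[i]) for i, row in enumerate(gray)])

image = np.random.randint(0, 256, (32, 32))           # toy grayscale tile
print(remove_vertical_seam(image).shape)               # (32, 31)
```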

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • Proceedings of the KSPS conference / 1996.10a / pp.435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the discrete inverse wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, in the case of using DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively, and, in the case of using VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has the potential to be used as a simple and efficient feature for recognition tasks.
