• Title/Summary/Keyword: sentence-form information (문장형태 정보)


Investigating an Automatic Method for Summarizing and Presenting a Video Speech Using Acoustic Features (음향학적 자질을 활용한 비디오 스피치 요약의 자동 추출과 표현에 관한 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Information Management
    • /
    • v.29 no.4
    • /
    • pp.191-208
    • /
    • 2012
  • Two fundamental aspects of speech summary generation are the extraction of key speech content and the presentation style of the extracted speech synopses. We first investigated whether acoustic features (speaking rate, pitch pattern, and intensity) are equally important and, if not, which one can be effectively modeled to compute the significance of segments for lecture summarization. We found that intensity, measured as the difference between the maximum and minimum dB of a segment, is the most effective factor for speech summarization. We evaluated this intensity-based method against a keyword-based method, comparing which method produces better speech summaries and how similar the weight values that the two methods assign to segments are. We then investigated how to present speech summaries to viewers. In sum, we suggest how to extract key segments from a speech video efficiently using acoustic features and how to present the extracted segments to viewers.
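
As a rough illustration of the intensity-based scoring described above, here is a minimal Python sketch that ranks fixed-length speech segments by their max-dB/min-dB difference. The segment length, framing, and function names are assumptions for illustration, not details taken from the paper.

```python
# Sketch (assumptions, not the paper's implementation): score speech segments
# by intensity range, i.e., the difference between max dB and min dB.
import numpy as np

def segment_intensity_scores(samples, sr, segment_sec=10.0, eps=1e-10):
    """samples: 1-D array of audio samples, sr: sampling rate.
    Returns (start_sec, score) pairs where score = max dB - min dB."""
    hop = int(sr * segment_sec)
    scores = []
    for start in range(0, len(samples), hop):
        seg = np.asarray(samples[start:start + hop], dtype=float)
        if seg.size == 0:
            continue
        # Frame-level RMS energy converted to decibels.
        frames = np.array_split(seg, max(1, seg.size // 1024))
        rms = np.array([np.sqrt(np.mean(f ** 2)) + eps for f in frames])
        db = 20.0 * np.log10(rms)
        scores.append((start / sr, float(db.max() - db.min())))
    return scores

def top_segments(scores, k=5):
    """Pick the k segments with the largest intensity range as the summary."""
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]
```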

Product Evaluation Summarization Through Linguistic Analysis of Product Reviews (상품평의 언어적 분석을 통한 상품 평가 요약 시스템)

  • Lee, Woo-Chul;Lee, Hyun-Ah;Lee, Kong-Joo
    • The KIPS Transactions: Part B
    • /
    • v.17B no.1
    • /
    • pp.93-98
    • /
    • 2010
  • In this paper, we introduce a system that summarizes product evaluations through linguistic analysis, in order to make effective use of the explosively increasing number of product reviews. Our system analyzes the polarity of product reviews by product feature, that is, the aspects on which customers evaluate a product, such as 'design' and 'material' for a skirt. As a review summary, the system shows customers a graph representing the percentages of positive and negative reviews. We build an opinion word dictionary for each product feature through context-based automatic expansion from a small set of seed words, and judge the polarity of reviews per product feature with the extracted dictionary. In experiments with product reviews from online shopping malls, our system shows an average accuracy of 69.8% in extracting the opinion word dictionary and 81.8% in per-sentence polarity resolution.
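
As a toy illustration of the dictionary-based, per-feature polarity judgment described above, the following Python sketch counts positive and negative review sentences per product feature. The feature names, opinion words, and scoring rule are assumptions, not the paper's dictionary or algorithm.

```python
# Sketch (illustrative assumptions): per-feature polarity summary of reviews
# using a small opinion-word dictionary.
OPINION_DICT = {
    "design":   {"stylish": +1, "pretty": +1, "outdated": -1},
    "material": {"soft": +1, "durable": +1, "itchy": -1},
}

def summarize_reviews(reviews):
    """Return {feature: (positive %, negative %)} over all review sentences."""
    counts = {f: [0, 0] for f in OPINION_DICT}          # [positive, negative]
    for sentence in reviews:
        tokens = sentence.lower().split()
        for feature, words in OPINION_DICT.items():
            score = sum(words.get(t, 0) for t in tokens)
            if score > 0:
                counts[feature][0] += 1
            elif score < 0:
                counts[feature][1] += 1
    summary = {}
    for feature, (pos, neg) in counts.items():
        total = pos + neg
        summary[feature] = (100.0 * pos / total, 100.0 * neg / total) if total else (0.0, 0.0)
    return summary

print(summarize_reviews(["the skirt is stylish but the fabric feels itchy"]))
```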

Analysis of Scientific Item Networks from Science and Biology Textbooks (고등학교 과학 및 생물교과서 과학용어 네트워크 분석)

  • Park, Byeol-Na;Lee, Yoon-Kyeong;Ku, Ja-Eul;Hong, Young-Soo;Kim, Hak-Yong
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.427-435
    • /
    • 2010
  • We extracted core terms by constructing scientific item networks from textbooks, analyzing their structures, and investigating the connected information and its relationships. For this research, we chose high-school textbooks from three different publishers for each of three subjects, i.e., Science, Biology I, and Biology II, and constructed networks by linking the scientific items that appear together in each sentence, treating the items as nodes. The scientific item networks from all textbooks showed a scale-free character. When core networks were established by applying the k-core algorithm, a commonly used method for removing weakly connected nodes and links from a complex network, they showed a modular structure. The Science textbooks formed four main modules of physics, chemistry, biology, and earth science, while the Biology I and Biology II textbooks revealed core networks composed of more detailed, field-specific items. These findings demonstrate the structural characteristics of networks in textbooks and suggest core scientific items helpful for students' understanding of concepts in Science and Biology.
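
A minimal sketch of the network construction and k-core extraction mentioned above, using networkx; the toy sentences, item list, and value of k are assumptions rather than data from the textbooks studied.

```python
# Sketch (illustrative assumptions): build a term co-occurrence network from
# sentences and extract its k-core.
import itertools
import networkx as nx

def build_item_network(sentences, items):
    """Link scientific items that appear together in the same sentence."""
    graph = nx.Graph()
    for sentence in sentences:
        present = [it for it in items if it in sentence]
        graph.add_edges_from(itertools.combinations(present, 2))
    return graph

sentences = [
    "photosynthesis in the chloroplast converts light energy into glucose",
    "mitochondria release energy from glucose during respiration",
]
items = ["photosynthesis", "chloroplast", "glucose", "energy", "mitochondria", "respiration"]
network = build_item_network(sentences, items)
core = nx.k_core(network, k=2)   # keep only nodes in the 2-core of the network
print(sorted(core.nodes()))
```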

Implementation of Dead Code Elimination in CTOC (CTOC에서 죽은 코드 제거 구현)

  • Kim, Ki-Tae;Kim, Je-Min;Yoo, Won-Hee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.1-8
    • /
    • 2007
  • Although Java bytecode has numerous advantages, it also has shortcomings such as slow execution speed and difficulty of analysis. Therefore, for a Java class file to be executed effectively in an execution environment such as the network, it must be converted into optimized code; we implement CTOC for this purpose. To determine values and types statically, CTOC uses SSA Form, which separates variables according to assignment, and it uses a Tree Form for statements. However, because $\phi$-functions are inserted during conversion into SSA Form, the number of nodes increases. This paper presents dead code elimination for obtaining more optimized code in SSA Form. We add a new live field to each node and perform dead code elimination on the tree structures. Test results confirm that the number of nodes decreases after dead code elimination.
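
For illustration, a small mark-and-sweep dead code elimination over an SSA-like statement list, where each statement carries a live flag analogous to the live field added to tree nodes in CTOC. The statement representation and the notion of "critical" statements are assumptions, not CTOC's actual data structures.

```python
# Sketch (illustrative assumptions): mark-and-sweep dead code elimination on
# a tiny SSA-like statement list.
from dataclasses import dataclass

@dataclass
class Stmt:
    target: str               # SSA variable defined by this statement
    uses: list                # SSA variables read by this statement
    critical: bool = False    # e.g., return/store with side effects
    live: bool = False

def eliminate_dead_code(stmts):
    defs = {s.target: s for s in stmts}
    # Mark phase: critical statements are live, and liveness propagates
    # backwards through the variables they use.
    worklist = [s for s in stmts if s.critical]
    while worklist:
        s = worklist.pop()
        if s.live:
            continue
        s.live = True
        worklist.extend(defs[v] for v in s.uses if v in defs)
    # Sweep phase: drop every statement never marked live.
    return [s for s in stmts if s.live]

code = [
    Stmt("a1", []),                      # a1 = ...
    Stmt("b1", ["a1"]),                  # b1 = f(a1)
    Stmt("c1", []),                      # c1 = ...  (never used -> dead)
    Stmt("r1", ["b1"], critical=True),   # return b1
]
print([s.target for s in eliminate_dead_code(code)])   # ['a1', 'b1', 'r1']
```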


Design of Geocoder service for LBS in Wireless telecommunication environment (무선통신 환경에서의 LBS를 위한 지오코더 서비스 설계)

  • Han, Eun-Young;Choi, Hae-Ock
    • Korea Spatial Information System Society: Conference Proceedings
    • /
    • 2003.11a
    • /
    • pp.118-122
    • /
    • 2003
  • This paper designs a geocoder service that provides functionality common to location-based services (LBS), which are application services of positional information in a wireless telecommunication environment. LBS provides the position of a user, such as a mobile terminal, using positioning technologies such as the network or GPS, and is a service from which high added value is expected. A geocoder service can be defined as a service that returns geographic position information expressed as X, Y, Z for a request such as an address, and, conversely, a reverse geocoder service returns normalized information including an address for a request containing a geographic position. Many domestic web-based GIS services implement a geocoding function that searches geographic positions by features and the like, but because each uses a different interface to the geographic information, their extended usability is limited. In particular, to develop and promote diverse location-based services through efficient use of geographic position information as the wireless telecommunication environment advances, a normalized geocoder service that takes international trends into account is required. This paper defines the scope of the technical specification and the required functions for a geocoder service handling geographic position information, designs data normalization and interfaces for the service, and thereby proposes a service system scheme to increase the usability of diverse location-based services in the domestic wireless telecommunication environment. In addition, the address definition was built after a thorough analysis of the domestic address system. This work adopts the 'Geocoder Service Interface Technical Specification' that the authors are drafting through the LBS Standardization Forum.
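
As an illustration of the kind of geocoder / reverse geocoder interface the abstract describes, here is a minimal Python sketch. The type names, address fields, and lookup logic are assumptions, not the interface specification drafted through the LBS Standardization Forum.

```python
# Sketch (illustrative assumptions): a geocoder service mapping addresses to
# positions (X, Y, Z) and positions back to normalized addresses.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Position:
    x: float          # longitude
    y: float          # latitude
    z: float = 0.0    # elevation, optional

@dataclass(frozen=True)
class Address:
    province: str
    city: str
    street: str

class GeocoderService:
    def __init__(self, table):
        # table: list of (Address, Position) pairs standing in for real data
        self._table = table

    def geocode(self, address: Address) -> Optional[Position]:
        """Address -> geographic position (X, Y, Z)."""
        for known, pos in self._table:
            if known == address:
                return pos
        return None

    def reverse_geocode(self, pos: Position, tol=1e-3) -> Optional[Address]:
        """Geographic position -> normalized address."""
        for addr, known in self._table:
            if abs(known.x - pos.x) < tol and abs(known.y - pos.y) < tol:
                return addr
        return None
```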


A Study on the Integration of Information Extraction Technology for Detecting Scientific Core Entities based on Large Resources (대용량 자원 기반 과학기술 핵심개체 탐지를 위한 정보추출기술 통합에 관한 연구)

  • Choi, Yun-Soo;Cheong, Chang-Hoo;Choi, Sung-Pil;You, Beom-Jong;Kim, Jae-Hoon
    • Journal of Information Management
    • /
    • v.40 no.4
    • /
    • pp.1-22
    • /
    • 2009
  • Large-scale information extraction plays an important role in advanced information retrieval as well as in question answering and summarization. Information extraction can be defined as the process of converting unstructured documents into formalized, tabular information, and it consists of named-entity recognition, terminology extraction, coreference resolution, and relation extraction. Since these elementary technologies have so far been studied independently, integrating all the processes needed for information extraction is not trivial, owing to the diversity of their input/output formats and operating environments. As a result, it is difficult to process scientific documents so that both named entities and technical terms are extracted at once. In this study, we define scientific core entities as a set of 10 types of named entities and technical terminologies in the biomedical domain. To extract these entities from scientific documents automatically and at once, we develop a framework for scientific core entity extraction that embraces all the pivotal language processors: a named-entity recognizer, a coreference resolver, and a terminology extractor. Each module of the integrated system has been evaluated with various corpora as well as with KEEC 2009. The system will be utilized in various information service areas such as information retrieval, question answering (Q&A), document indexing, and dictionary construction.
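
A minimal sketch of the kind of integration framework described above, in which a named-entity recognizer, terminology extractor, and coreference resolver enrich a shared document representation. The interfaces and the toy recognition rules are assumptions, not the framework's actual modules.

```python
# Sketch (illustrative assumptions): several extraction modules sharing one
# document object inside a simple pipeline.
class Document:
    def __init__(self, text):
        self.text = text
        self.entities = []   # (span text, entity type)
        self.terms = []      # technical terms
        self.chains = []     # coreference chains

class NamedEntityRecognizer:
    def process(self, doc):
        for word in doc.text.split():
            if word.istitle():                 # toy rule: capitalized word
                doc.entities.append((word, "ENTITY"))

class TerminologyExtractor:
    def process(self, doc):
        for word in doc.text.split():
            if word.lower().endswith("ase"):   # toy rule: enzyme-like term
                doc.terms.append(word)

class CoreferenceResolver:
    def process(self, doc):
        seen = {}
        for text, _ in doc.entities:           # toy rule: identical strings corefer
            seen.setdefault(text, []).append(text)
        doc.chains = [chain for chain in seen.values() if len(chain) > 1]

class ExtractionPipeline:
    def __init__(self, *processors):
        self.processors = processors

    def run(self, text):
        doc = Document(text)
        for p in self.processors:
            p.process(doc)   # every module reads and enriches the same doc
        return doc

pipeline = ExtractionPipeline(NamedEntityRecognizer(), TerminologyExtractor(), CoreferenceResolver())
doc = pipeline.run("Kinase inhibitors bind the kinase domain of Abl Abl")
print(doc.entities, doc.terms, doc.chains)
```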

Exploring Feature Selection Methods for Effective Emotion Mining (효과적 이모션마이닝을 위한 속성선택 방법에 관한 연구)

  • Eo, Kyun Sun;Lee, Kun Chang
    • Journal of Digital Convergence
    • /
    • v.17 no.3
    • /
    • pp.107-117
    • /
    • 2019
  • In the era of SNS, many people rely on social media to express their emotions about various kinds of products and services. Therefore, for companies seeking to understand how their products and services are perceived in the market, emotion mining tasks using SNS datasets have become more important than ever. Emotion mining is a branch of sentiment analysis and is typically based on BOW (bag-of-words) and TF-IDF representations. However, few studies on emotion mining adopt feature selection (FS) methods to look for an optimal set of features that ensures better results. This study therefore proposes FS methods for conducting emotion mining tasks more effectively and with better outcomes. We use Twitter and SemEval2007 datasets for the emotion mining experiments and apply three FS methods: CFS (correlation-based feature selection), IG (information gain), and ReliefF. Emotion mining results were obtained by applying the selected features to nine classifiers. When applying DT (decision tree) to the Tweet dataset, accuracy increases with the CFS, IG, and ReliefF methods; when applying LR (logistic regression) to the SemEval2007 dataset, accuracy increases with the ReliefF method.
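
As a small illustration of feature selection before emotion classification, the following scikit-learn sketch approximates information gain with mutual information and feeds the selected features to a decision tree. The tiny dataset and parameter choices are assumptions; CFS and ReliefF are not shown because they are not part of scikit-learn.

```python
# Sketch (illustrative assumptions): TF-IDF features -> feature selection by
# mutual information -> decision tree classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "I love this phone, it makes me so happy",
    "This service is terrible and makes me angry",
    "Such a delightful surprise, really joyful",
    "I am furious about the broken screen",
]
labels = ["joy", "anger", "joy", "anger"]

model = make_pipeline(
    TfidfVectorizer(),                        # BOW / TF-IDF features
    SelectKBest(mutual_info_classif, k=10),   # keep the 10 best features
    DecisionTreeClassifier(random_state=0),
)
model.fit(texts, labels)
print(model.predict(["so happy with this delightful phone"]))
```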

Automatic Recognition and Normalization System of Korean Time Expression using the individual time units (시간의 단위별 처리를 이용한 자동화된 한국어 시간 표현 인식 및 정규화 시스템)

  • Seon, Choong-Nyoung;Kang, Sang-Woo;Seo, Jung-Yun
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.4
    • /
    • pp.447-458
    • /
    • 2010
  • Time expressions are a very important form of information in many types of data, so the recognition of time expressions is an important problem in the field of information extraction. However, most previously designed systems consider only a specific domain, because time expressions do not have a regular form and frequently involve ellipsis. We present a two-level recognition method, consisting of extraction and transformation phases, to achieve generality and portability. In the extraction phase, time expressions are extracted as atomic time units for extensibility. In the transformation phase, omitted information is restored using a basis time and prior knowledge, and every completed atomic time unit is transformed into a normalized form. The proposed system can be used as a general-purpose system because it has a language- and domain-independent architecture, and it performs robustly on noisy data such as SMS data, which include various errors. For SMS data, the accuracy of time-expression extraction and time-expression normalization with the proposed system is 93.8% and 93.2%, respectively. On the basis of these experimental results, we conclude that the proposed system performs well on noisy data.
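
A minimal sketch of the two-phase idea described above: atomic time units are extracted with patterns, and omitted units are restored from a basis time during normalization. The patterns and the restoration rule are assumptions, not the system's actual grammar.

```python
# Sketch (illustrative assumptions): extract Korean atomic time units and
# normalize them against a basis time.
import re
from datetime import datetime

UNIT_PATTERNS = {
    "year":   r"(?P<year>\d{4})년",
    "month":  r"(?P<month>\d{1,2})월",
    "day":    r"(?P<day>\d{1,2})일",
    "hour":   r"(?P<hour>\d{1,2})시",
    "minute": r"(?P<minute>\d{1,2})분",
}

def extract_units(text):
    """Extraction phase: collect whichever atomic units appear in the text."""
    units = {}
    for name, pattern in UNIT_PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            units[name] = int(m.group(name))
    return units

def normalize(units, basis: datetime):
    """Transformation phase: fill omitted units from the basis time."""
    return datetime(
        units.get("year", basis.year),
        units.get("month", basis.month),
        units.get("day", basis.day),
        units.get("hour", basis.hour),
        units.get("minute", basis.minute),
    )

basis = datetime(2010, 11, 5, 9, 0)
print(normalize(extract_units("7일 3시 30분에 만나요"), basis))  # 2010-11-07 03:30:00
```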


A Study of Relationship Derivation Technique using object extraction Technique (개체추출기법을 이용한 관계성 도출기법)

  • Kim, Jong-hee;Lee, Eun-seok;Kim, Jeong-su;Park, Jong-kook;Kim, Jong-bae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.05a
    • /
    • pp.309-311
    • /
    • 2014
  • Despite increasing demand for big data applications based on the analysis of scattered unstructured data, few relevant studies have been reported. Accordingly, the present study suggests a technique that enables sentence-based semantic analysis by extracting objects from collected web information and automatically analyzing the relationships between those objects with collective intelligence and language processing technology. Specifically, collected information is stored in a DBMS in a structured form, and morpheme and feature information is then analyzed. The obtained morphemes are classified into objects of interest, marginal objects, and objects of non-interest. Then, with an inter-object attribute recognition technique, the relationships between objects are analyzed in terms of their degree, scope, and nature. As a result, the relevance between pieces of information is analyzed around certain keywords, using an inter-object relationship extraction technique that can determine positivity and negativity. The present study also suggests a method for designing a system fit for real-time, large-capacity processing and applicable to high value-added services.
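
As a toy illustration of deriving sentence-level relationships between extracted objects and tagging them with positivity or negativity, consider the following Python sketch; the object list and polarity lexicon are assumptions, not the study's resources.

```python
# Sketch (illustrative assumptions): derive (object, object, polarity) triples
# from sentences using a small set of objects of interest and a polarity lexicon.
import itertools

OBJECTS_OF_INTEREST = {"galaxy", "battery", "camera"}
POSITIVE_WORDS = {"excellent", "improved", "praised"}
NEGATIVE_WORDS = {"overheats", "criticized", "defective"}

def derive_relations(sentences):
    """Return (object A, object B, polarity) triples per sentence."""
    relations = []
    for sentence in sentences:
        tokens = set(sentence.lower().split())
        objects = sorted(tokens & OBJECTS_OF_INTEREST)
        polarity = "neutral"
        if tokens & POSITIVE_WORDS:
            polarity = "positive"
        if tokens & NEGATIVE_WORDS:
            polarity = "negative"
        for a, b in itertools.combinations(objects, 2):
            relations.append((a, b, polarity))
    return relations

print(derive_relations(["The galaxy battery overheats", "The galaxy camera is excellent"]))
# [('battery', 'galaxy', 'negative'), ('camera', 'galaxy', 'positive')]
```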


A Classification Model for Attack Mail Detection based on the Authorship Analysis (작성자 분석 기반의 공격 메일 탐지를 위한 분류 모델)

  • Hong, Sung-Sam;Shin, Gun-Yoon;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.35-46
    • /
    • 2017
  • Recently, cyber attacks in which malicious code is attached to an e-mail and the user is induced to execute it have increased. Such attacks are especially dangerous when the malicious code is attached as a document-type file, because it is easy to get the user to open it. Authorship analysis is a research area in NLP (Natural Language Processing) and text mining that studies methods for identifying authors by analyzing sentences, texts, and documents in a specific language. Since an attack mail is created by the attacker, analyzing the contents of the mail and the attached document file and identifying the corresponding author makes it possible to discover features that distinguish it from normal mail and to improve detection accuracy. In this paper, we propose the IADA2 (Intelligent Attack mail Detection based on Authorship Analysis) model for attack mail detection. The feature vector used by the IADA2 detection model combines the features used in existing machine-learning-based spam detection models with the features used in authorship analysis of the documents, so that attack mail can be classified and detected. We improve the attack mail detection models beyond simple term features by applying n-grams to extract features that reflect the sequential characteristics of words. Experimental results show that the proposed method improves performance depending on the feature combination, the feature selection technique, and the chosen model.
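
For illustration, a small sketch that combines word n-gram term features with simple stylometric (authorship-style) features to classify mails, in the spirit of the feature combination described above. The stylometric cues, the tiny dataset, and the classifier choice are assumptions, not the IADA2 model itself.

```python
# Sketch (illustrative assumptions): word n-gram features + simple stylometric
# features, fed to a logistic regression classifier.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def stylometric_features(texts):
    """A few authorship-style cues: length, punctuation, uppercase ratio."""
    feats = []
    for t in texts:
        words = t.split()
        feats.append([
            len(words),
            t.count("!") + t.count("?"),
            sum(w.isupper() for w in words) / max(1, len(words)),
        ])
    return np.array(feats, dtype=float)

mails = [
    "URGENT open the attached invoice NOW!!",
    "Please find the meeting notes attached, thanks.",
    "Your account is LOCKED, run the attached fix immediately!",
    "Attached is the quarterly report we discussed yesterday.",
]
labels = [1, 0, 1, 0]   # 1 = attack mail, 0 = normal mail

ngrams = CountVectorizer(ngram_range=(1, 2))    # unigram + bigram term features
X = np.hstack([ngrams.fit_transform(mails).toarray(), stylometric_features(mails)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["Open the attached file NOW or your account is locked!"]
X_test = np.hstack([ngrams.transform(test).toarray(), stylometric_features(test)])
print(clf.predict(X_test))   # likely [1], given the overlap with attack-mail cues
```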