• Title/Summary/Keyword: Feature Based Summarization


Aesthetic Feature-based Activity Summarization for Senior Life Logging (시니어 라이프 로깅을 위한 심미적 특징 기반의 행동 요약 시스템)

  • Kim, Seondae; Ryu, Il-Woong; Ryu, Jaesung; Mujtaba, Ghulam; Park, Eunsoo; Kim, Seunghwan; Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.25-28 / 2019
  • This paper introduces activity-based video summarization using aesthetic features of video in order to build a database for senior life logging effectively. To monitor the condition of seniors who spend long hours in front of an indoor TV, video of HD quality or higher is collected periodically with a regular or 360-degree camera, and the paper describes how this footage can be used in a preprocessing stage for machine learning or deep learning based activity recognition systems. The study describes a method that, within a short time, cuts and summarizes the video into activity-specific segments (shots) using an HSV histogram computed from the color information of the video, a stabilization measure that reduces jitter in the video signal, and motion energy (a minimal sketch of this kind of shot segmentation follows this entry).

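As a rough illustration of the shot segmentation described in the abstract above, the following Python sketch splits a video at frames where the HSV color histogram changes sharply or the frame-difference motion energy spikes. The file name "senior_lifelog.mp4", the bin counts, and the thresholds are assumptions for the example, not values from the paper.

```python
# Hedged sketch of HSV-histogram + motion-energy shot segmentation; input file,
# bin counts, and thresholds are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("senior_lifelog.mp4")  # hypothetical input file
prev_hist, prev_gray = None, None
shot_boundaries = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if prev_hist is not None:
        # Color change between consecutive frames (HSV histogram distance).
        color_dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
        # Motion energy: mean absolute intensity difference between frames.
        motion = float(np.mean(cv2.absdiff(prev_gray, gray))) / 255.0
        if color_dist > 0.4 or motion > 0.2:  # assumed thresholds
            shot_boundaries.append(frame_idx)

    prev_hist, prev_gray = hist, gray
    frame_idx += 1

cap.release()
print("Detected shot boundaries at frames:", shot_boundaries)
```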

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil; Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is impractical to read all of this text, it is important to access it quickly and grasp its key points. To meet this need for efficient understanding, many studies on text summarization for handling large volumes of text data have been proposed. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, an approach called "automatic summarization". However, most text summarization methods proposed to date build the summary around the most frequent contents of the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject the documents contain. To avoid this bias, one can summarize with a balance between the topics of the documents so that all subjects can be identified, but an unbalanced distribution across subjects can still remain. To retain subject balance in the summary, it is necessary to consider the proportion of every subject in the original documents and to allocate the subjects evenly, so that sentences of minor subjects are also sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization we use two summary evaluation criteria, "completeness" and "succinctness": completeness means the summary should fully cover the contents of the original documents, and succinctness means the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from topics composed of semantically similar terms. A few terms that represent each subject well are then selected; in this method they are called "seed terms". Because these terms alone are too few to describe each subject adequately, additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after training, word vectors are obtained, and the similarity between any two terms is derived with cosine similarity, where a higher value indicates a stronger relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the content of all sentences, frequency analysis is first conducted with the terms that compose the subject dictionaries. TF-IDF weights for each subject are then calculated, indicating how much each sentence explains each subject. However, because TF-IDF weights can grow without bound, the weights of every subject in each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum TF-IDF weight, yielding a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and by iterative sentence selection it is possible to generate a summary that fully covers the contents of the original documents while minimizing duplication within the summary. For evaluation of the proposed method, 50,000 TripAdvisor reviews are used to construct the subject dictionaries and 23,087 reviews are used to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verifies that the proposed method better retains the balance of all subjects originally present in the documents.
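
The subject-allocation idea in the abstract above can be illustrated with a small Python sketch: seed terms are expanded with Word2Vec cosine similarity into subject dictionaries, and each sentence is then assigned to the subject with the largest normalized TF-IDF mass over that subject's dictionary terms. The toy corpus, seed terms, and hyperparameters are assumptions for the example; the paper itself works on TripAdvisor reviews and additionally uses topic modeling and Sen2Vec-based sentence selection.

```python
# Hedged sketch of subject-dictionary expansion + sentence allocation;
# corpus, seed terms, and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy tokenized corpus (in the paper: TripAdvisor reviews).
tokenized_docs = [
    ["room", "bed", "clean", "spacious"],
    ["staff", "friendly", "service", "helpful"],
    ["breakfast", "food", "tasty", "buffet"],
]

# 1) Expand a few seed terms per subject with Word2Vec cosine similarity.
w2v = Word2Vec(tokenized_docs, vector_size=50, window=3, min_count=1, epochs=50)
seed_terms = {"room": ["room"], "service": ["staff"], "food": ["breakfast"]}
subject_dict = {
    subj: set(seeds) | {w for s in seeds
                        for w, _ in w2v.wv.most_similar(s, topn=3)}
    for subj, seeds in seed_terms.items()
}

# 2) Allocate each sentence to the subject with the highest normalized TF-IDF
#    mass over that subject's dictionary terms.
sentences = ["the room was clean and spacious",
             "staff were friendly and the service helpful",
             "breakfast buffet food was tasty"]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(sentences).toarray()
term_idx = {t: i for i, t in enumerate(vec.get_feature_names_out())}

for sent, row in zip(sentences, tfidf):
    scores = {}
    for subj, terms in subject_dict.items():
        cols = [term_idx[t] for t in terms if t in term_idx]
        scores[subj] = row[cols].sum() if cols else 0.0
    total = sum(scores.values()) or 1.0
    normalized = {s: v / total for s, v in scores.items()}  # scale to 0..1
    print(sent, "->", max(normalized, key=normalized.get))
```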

An Efficient Face Region Detection for Content-based Video Summarization (내용기반 비디오 요약을 위한 효율적인 얼굴 객체 검출)

  • Kim Jong-Sung; Lee Sun-Ta; Baek Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.7C / pp.675-686 / 2005
  • In this paper, we propose an efficient face region detection technique for content-based video summarization. To segment the video, shot changes are detected from the video sequence and key frames are selected from the shots; within each shot, we select the frame with the least difference from its neighboring frames. The proposed face detection algorithm then detects face regions in the selected key frames, and the user is provided with summarized frames containing face regions, which carry important meaning in dramas or movies. Using the Bayes classification rule and the statistical characteristics of skin pixels, face regions are detected in the frames. After skin detection, we adopt a projection method to segment each image (frame) into face and non-face regions. The segmented regions are candidates for the face object and include many false detections, so we design a classifier using CART to minimize false detections. From SGLD matrices, we extract textural feature values such as Inertia, Inverse Difference, and Correlation. Our experiments show that the proposed face detection algorithm performs well on key frames with complex and varying backgrounds, and our system provides the user with key frames containing face regions as summarized video information.
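
The skin-pixel step of the method above can be sketched as a simple Bayes decision between skin and non-skin color histograms. The training pixels, histogram bin count, and priors below are illustrative assumptions rather than the paper's values.

```python
# Hedged sketch of Bayes-rule skin-pixel classification; training data,
# color space, bin count, and priors are illustrative assumptions.
import numpy as np

BINS = 32  # histogram bins per channel (assumption)

def color_hist(pixels):
    """3-D color histogram (probability mass) over quantized RGB pixels."""
    idx = (pixels // (256 // BINS)).astype(int)
    hist = np.zeros((BINS, BINS, BINS))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return hist / hist.sum()

# Toy training samples: rows are RGB pixels labeled skin / non-skin.
skin_pixels = np.array([[220, 170, 140], [210, 160, 130], [200, 150, 120]])
nonskin_pixels = np.array([[30, 80, 200], [60, 200, 60], [10, 10, 10]])

p_color_given_skin = color_hist(skin_pixels)
p_color_given_nonskin = color_hist(nonskin_pixels)
p_skin, p_nonskin = 0.3, 0.7  # assumed class priors

def is_skin(pixel):
    """Bayes decision: skin if posterior(skin) >= posterior(non-skin)."""
    i = tuple((np.asarray(pixel) // (256 // BINS)).astype(int))
    return p_color_given_skin[i] * p_skin >= p_color_given_nonskin[i] * p_nonskin

print(is_skin([215, 165, 135]))  # matches a skin bin -> True
print(is_skin([30, 82, 201]))    # matches a non-skin bin -> False
```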

Query-Based Text Summarization Using Cosine Similarity and NMF (NMF 와 코사인유사도를 이용한 질의 기반 문서요약)

  • Park Sun; Lee Ju-Hong; Ahn Chan-Min; Park Tae-Su; Song Jae-Won; Kim Deok-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.473-476 / 2006
  • With the development of the Internet, the amount of available information has been increasing explosively. Faced with this huge volume of information, information retrieval systems return so many results that users spend too much time finding what they want, a problem known as information overload. Query-based document summarization, which reduces the time users spend searching for the information they need, is therefore becoming increasingly important as a way to alleviate information overload. This paper proposes a new method for query-based document summarization using Non-negative Matrix Factorization (NMF) and cosine similarity. The proposed method requires no prior training between the query and the documents. Moreover, without the complex processing of converting documents into graphs, it improves summarization accuracy by reflecting the inherent structure of the documents through the semantic features and semantic variables obtained by NMF. Finally, sentences can be summarized easily with a simple procedure (a minimal sketch of this approach follows this entry).

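A minimal sketch of this style of query-based summarization, assuming a toy sentence set and query: sentences and the query are projected into an NMF semantic-feature space built from a term-sentence matrix, and the sentences most similar to the query by cosine similarity are selected. The number of semantic features and the choice of two summary sentences are assumptions for the example, not the paper's settings.

```python
# Hedged sketch of query-based summarization with NMF semantic features and
# cosine similarity; sentences, query, and component count are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The new phone has a large battery and fast charging.",
    "Battery life lasts two days under normal use.",
    "The camera takes sharp photos in low light.",
    "Screen brightness is excellent outdoors.",
]
query = "battery life and charging speed"

# Term-sentence matrix (rows = sentences).
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences)

# NMF: W holds semantic variables per sentence, components_ holds semantic features.
nmf = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(X)

# Project the query into the same semantic space and rank sentences by similarity.
q = nmf.transform(vec.transform([query]))
scores = cosine_similarity(q, W)[0]
top = scores.argsort()[::-1][:2]  # pick the two most query-relevant sentences
print([sentences[i] for i in sorted(top)])
```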

Method of Extracting the Topic Sentence Considering Sentence Importance based on ELMo Embedding (ELMo 임베딩 기반 문장 중요도를 고려한 중심 문장 추출 방법)

  • Kim, Eun Hee; Lim, Myung Jin; Shin, Ju Hyun
    • Smart Media Journal / v.10 no.1 / pp.39-46 / 2021
  • This study concerns a method of extracting a summary from a news article by considering the importance of each sentence in the article. We propose a method of calculating sentence importance from features that affect it: the probability that a sentence is the topic sentence, its similarity to the article title and to the other sentences, and its position. We hypothesize that the topic sentence has characteristics distinct from ordinary sentences, and train a deep learning based classification model to obtain a topic-sentence probability for each input sentence. In addition, using a pre-trained ELMo language model, the similarity between sentences is calculated from sentence vectors that reflect contextual information and is extracted as a sentence feature. The topic-sentence classification performance of the LSTM and BERT models reached 93% accuracy, 96.22% recall, and 89.5% precision. By combining the extracted sentence features to compute the importance of each sentence, we confirmed that topic-sentence extraction performance improved by about 10% compared with the existing TextRank algorithm.
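
The importance-scoring idea above can be sketched by combining the listed features into a weighted score. The example below substitutes TF-IDF vectors for the paper's ELMo sentence embeddings and uses assumed topic-sentence probabilities and feature weights, so it only illustrates the combination step, not the trained models.

```python
# Hedged sketch of combining sentence features (topic-sentence probability,
# title similarity, similarity to other sentences, position) into one score.
# TF-IDF vectors stand in for ELMo embeddings; probabilities and weights are assumed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

title = "City council approves new public transit budget"
sentences = [
    "The city council voted on Tuesday to approve the transit budget.",
    "Several residents attended the meeting.",
    "The budget increases bus service funding by ten percent.",
]
# Stand-in for the classifier's topic-sentence probabilities (assumed values).
topic_probs = np.array([0.9, 0.2, 0.7])

vec = TfidfVectorizer()
emb = vec.fit_transform([title] + sentences)
title_vec, sent_vecs = emb[0], emb[1:]

title_sim = cosine_similarity(sent_vecs, title_vec).ravel()
pairwise = cosine_similarity(sent_vecs)
np.fill_diagonal(pairwise, 0.0)
other_sim = pairwise.mean(axis=1)                  # centrality among sentences
position = 1.0 / (np.arange(len(sentences)) + 1)   # earlier sentences score higher

# Weighted combination (weights are illustrative, not from the paper).
importance = 0.4 * topic_probs + 0.3 * title_sim + 0.2 * other_sim + 0.1 * position
print("Topic sentence:", sentences[int(importance.argmax())])
```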