• Title/Summary/Keyword: Social Summarization


A Study on the Determinant Factors of Newspaper Headlines : Focused on News Influence Variables, Editor's Role Orientation and Professionalism (신문기사 제목의 결정요인에 관한 연구 : 뉴스 영향변인.편집자의 역할지향성과 전문직업관을 중심으로)

  • Kang, Hyun-Jig
    • Journal of Digital Convergence
    • /
    • v.10 no.8
    • /
    • pp.347-365
    • /
    • 2012
  • This study was carried out to shed light on the factors by which newspaper headlines are determined and to empirically explore how news influence variables, editors' role orientation, and professionalism affect headline decisions. A survey of 345 journalists working in the editorial departments of 17 major national daily and economic newspapers showed that seven determinant factors influence how editors decide headlines: creativity, standardized expression, fairness, consideration for readers, reflection of company policy, summarization, and intrigue. In addition, the determinant factors of headlines were found to correlate with news influence variables such as the government and sponsors, readers and colleagues, and company policy and management. Regarding role awareness, editors who considered social integration important placed weight on reflection of company policy, fairness, and creativity; editors who prioritized monitoring of power took fairness and creativity seriously; and editors who believed that delivering information was important regarded fairness, consideration for readers, and summarization as important. Furthermore, organization-oriented editors turned out to be compliant with the system, positive toward governmental policies, and inclined to seek social stability, while editors who put a premium on objective balance and neutrality showed a strong professional orientation toward social reform, checks on government, and scrutiny of governance by the privileged.

Frequency Matrix Based Summaries of Negative and Positive Reviews

  • Almuhannad Sulaiman Alorfi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.3
    • /
    • pp.101-109
    • /
    • 2023
  • This paper discusses the use of sentiment analysis and text summarization techniques to extract valuable information from the large volume of user-generated content such as reviews, comments, and feedback on online platforms and social media. The paper highlights the effectiveness of sentiment analysis in identifying positive and negative reviews and the importance of summarizing such text to facilitate comprehension and convey essential findings to readers. The proposed work focuses on summarizing all positive and negative reviews to enhance product quality, and the performance of the generated summaries is measured using ROUGE scores. The results show promising outcomes for the developed methods in summarizing user-generated content.
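The frequency-based scoring this abstract describes can be sketched roughly as follows. This is a minimal illustration only: the example reviews, the word-frequency scoring rule, and the sentence selection are assumptions, not taken from the paper.

```python
from collections import Counter

def frequency_summary(reviews, top_n=2):
    """Score each review by the summed corpus frequency of its words,
    then keep the top_n highest-scoring reviews as the summary."""
    freq = Counter(w for r in reviews for w in r.lower().split())
    ranked = sorted(reviews,
                    key=lambda r: sum(freq[w] for w in r.lower().split()),
                    reverse=True)
    return ranked[:top_n]

# Reviews pre-split by sentiment, as the paper's sentiment-analysis step would do.
positive = ["battery life is great",
            "great screen and great battery",
            "fast shipping"]
negative = ["battery died fast",
            "screen cracked on arrival"]

pos_summary = frequency_summary(positive)
neg_summary = frequency_summary(negative)
```

In practice the resulting summaries would then be scored against reference summaries with ROUGE, as the paper reports.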

The Optimal Process of Weapon Acquisition Management (I) -With Special Reference to the Cost/Effectiveness Model for the Selection of Weapon Acquisition System- (무기체계 획득관리의 최적화 (I) -무기체계 획득시스템의 선정을 위한 비용대효과분석모형을 중심으로-)

  • Lee Jin-Joo;Kwon Tae-Young;Joo Nam-Youn
    • Journal of the military operations research society of Korea
    • /
    • v.3 no.2
    • /
    • pp.49-77
    • /
    • 1977
  • Weapon systems are crucial instruments for the security of a nation and critical elements for victory in war. Since modern weapon systems tend to be capital-intensive with high precision and quality, they become more and more complex and diversified, their acquisition costs become huge, and their technological obsolescence accelerates. Therefore, the systematic management of the weapon acquisition process is one of the most important defense tasks at the national level. To analyze such problems and find solutions, this paper studies various aspects of the efficient management of weapon system acquisition. After a brief summary of the general characteristics of weapon systems, their effectiveness, and developmental trends, the paper discusses defense management policies and techniques for weapon systems. Specifically, four alternative acquisition methods, namely indigenous R&D, foreign purchase, co-production, and joint production, are discussed and analyzed through a systems approach. The systems analysis procedure to evaluate and select a weapon acquisition method is as follows: 1) analyze the merits and demerits of the alternative methods; 2) screen out unrealistic alternatives by considering significant factors such as political, economic, military, technological, and social constraints; 3) evaluate and select an optimal method among the remaining acquisition methods through cost-effectiveness analysis. As the basis of the cost-effectiveness analysis, a cost analysis model as well as an effectiveness analysis model for each acquisition method are developed.

  • PDF
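The screening-and-selection procedure in the abstract lends itself to a minimal sketch. The alternatives below are the four acquisition methods the paper names, but every cost, effectiveness, and feasibility figure is invented for illustration and bears no relation to the paper's models.

```python
# Hypothetical figures for the four acquisition methods named in the paper;
# all numbers are invented for illustration only.
alternatives = {
    "indigenous R&D":   {"cost": 120, "effectiveness": 90, "feasible": True},
    "foreign purchase": {"cost": 80,  "effectiveness": 70, "feasible": True},
    "co-production":    {"cost": 100, "effectiveness": 85, "feasible": True},
    # Assume joint production fails a political/technological constraint (step 2).
    "joint production": {"cost": 95,  "effectiveness": 80, "feasible": False},
}

# Step 2: screen out unrealistic alternatives.
feasible = {name: a for name, a in alternatives.items() if a["feasible"]}

# Step 3: select the alternative with the best effectiveness-to-cost ratio.
best = max(feasible,
           key=lambda n: feasible[n]["effectiveness"] / feasible[n]["cost"])
```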

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, the economic crisis, and social welfare, which are urgent problems to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is also hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting the news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of the Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society; in other words, looking only at social keywords, we have no idea of the detailed events occurring around us. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs; meanwhile, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. Furthermore, suppose there is a topic (e.g., Unemployment Problem) with a best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly.
Through this prototype system, we have detected various social issues appearing in our society and also demonstrated the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
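The paragraph-to-topic matching described above (scoring a paragraph by the topic's term probabilities) can be sketched as follows, reusing the abstract's Topic1 example. The floor probability assigned to out-of-topic terms is an assumption for illustration, not the paper's actual generative model.

```python
import math

# A topic as LDA would emit it (the abstract's Topic1 example).
topic1 = {"unemployment": 0.4, "layoff": 0.3, "business": 0.3}

def match_score(paragraph, topic, floor=1e-6):
    """Log-probability of a paragraph under a topic's term distribution;
    terms outside the topic get a small assumed floor probability."""
    return sum(math.log(topic.get(w, floor)) for w in paragraph.lower().split())

paragraphs = [
    "unemployment rose after the layoff wave",
    "the weather was sunny all week",
]

# The paragraph that best matches Topic1 ("Unemployment Problem").
best = max(paragraphs, key=lambda p: match_score(p, topic1))
```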

Topic Modeling of News Article about International Construction Market Using Latent Dirichlet Allocation (Latent Dirichlet Allocation 기법을 활용한 해외건설시장 뉴스기사의 토픽 모델링(Topic Modeling))

  • Moon, Seonghyeon;Chung, Sehwan;Chi, Seokho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.38 no.4
    • /
    • pp.595-599
    • /
    • 2018
  • A sufficient understanding of the overseas construction market is crucial for profitability in international construction projects. Many researchers have regarded news articles as a fine data source for assessing market conditions, since they contain market information on political, economic, and social issues. Because such text data is unstructured and huge, various text-mining techniques have been studied to reduce the manpower, time, and cost needed to summarize it. However, there are limitations to extracting the needed information from news articles because the data spans many topics. This research aims to overcome these problems and contribute to summarization of market status by performing topic modeling with Latent Dirichlet Allocation. Assuming that 10 topics existed in the corpus, the extracted topics included projects for user convenience (topic 2), private support to solve poverty problems in Africa (topic 4), and so on. By grouping the topics in the news articles, the results can improve the extraction of useful information and the summarization of market status.
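As a rough illustration of LDA topic modeling on a news-like corpus, here is a toy collapsed Gibbs sampler in pure Python. The corpus, hyperparameters, and sampler are illustrative assumptions only; a study like the one above would use a tuned LDA implementation on the full article set.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA; returns each topic's top words."""
    rng = random.Random(seed)
    vocab_size = len({w for doc in docs for w in doc})
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]  # token topics
    ndk = [[0] * n_topics for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                                 # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # Sample a new topic proportionally to the collapsed posterior.
                weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta)
                           / (nk[k] + vocab_size * beta) for k in range(n_topics)]
                r = rng.random() * sum(weights)
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = k
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return [sorted(nkw[k], key=nkw[k].get, reverse=True)[:3]
            for k in range(n_topics)]

# Tiny news-like corpus: two oil-market docs and two construction docs.
docs = [["oil", "price", "market"], ["oil", "market", "export"],
        ["road", "bridge", "contract"], ["bridge", "contract", "bid"]]
topics = lda_gibbs(docs)
```

With enough iterations the two word groups tend to separate into distinct topics, which a human would then label, mirroring the grouping step the abstract describes.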

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords do not have them, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, in that it is extremely tedious, time-consuming, and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document; it cannot generate implicit keywords that are not included in the document. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experimental result likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we have adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents increases, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
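The five-step assignment process above can be sketched as follows. The keyword sets, term weights, and similarity threshold are hypothetical placeholders; the paper defines its own weighting scheme.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical keyword sets with pre-computed term weights (step 1).
keyword_sets = {
    "logistics": {"shipping": 0.6, "port": 0.5, "cargo": 0.4},
    "retail":    {"store": 0.6, "customer": 0.5, "sales": 0.4},
}

def assign_keywords(text, keyword_sets, threshold=0.1):
    """Steps 2-5: parse the document, build its term-frequency vector, and
    return keyword sets whose cosine similarity exceeds the threshold."""
    doc_vector = Counter(text.lower().split())
    sims = {name: cosine(ks, doc_vector) for name, ks in keyword_sets.items()}
    return sorted((n for n, s in sims.items() if s > threshold),
                  key=lambda n: -sims[n])

doc = "cargo shipping through the port rose as port fees fell"
matched = assign_keywords(doc, keyword_sets)
```

Because the keyword sets are fixed in advance, this sketch can assign a label like "logistics" even when that exact word never appears in the document, which is the implicit-keyword advantage the abstract attributes to the assignment approach.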