Title/Summary/Keyword: Web News


A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting news articles and extracting their texts, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". Even so, it is non-trivial to understand what happened to the unemployment problem in our society: looking only at social keywords, we have no idea of the detailed events that occurred. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs while using LDA to extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, suppose there is a topic (e.g., Unemployment Problem) and a best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Using this prototype system, we have detected various social issues appearing in our society, and our experimental results show the effectiveness of the proposed methods. Our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
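
The paragraph-to-topic matching step can be sketched briefly. The following is a minimal illustration, assuming a simple unigram generative score (the log-probability of a paragraph's tokens under a topic's term distribution, with a small floor for unseen terms); the abstract does not give the paper's exact scoring formula, so the details here are illustrative.

```python
# A minimal sketch of the paragraph-to-topic matching idea described above.
# Topic-term probabilities would come from an LDA model; here they are toy
# values, and the smoothing floor is an assumption, not the paper's formula.
import math

topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Economic Crisis": {"recession": 0.5, "debt": 0.3, "bankruptcy": 0.2},
}

def log_prob(paragraph_tokens, topic_terms, floor=1e-6):
    """Score log P(paragraph | topic) under a unigram generative model."""
    return sum(math.log(topic_terms.get(tok, floor)) for tok in paragraph_tokens)

def best_topic(paragraph):
    tokens = paragraph.lower().split()
    # Assign the paragraph to the topic maximizing the generative score.
    return max(topics, key=lambda t: log_prob(tokens, topics[t]))

print(best_topic("up to 300 workers faced layoff and unemployment"))
# -> Unemployment Problem
```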

A Quantitative Analysis of Classification Classes and Classified Information Resources of Directory (디렉터리 서비스 분류항목 및 정보자원의 계량적 분석)

  • Kim, Sung-Won
    • Journal of Information Management / v.37 no.1 / pp.83-103 / 2006
  • This study analyzes the classification schemes and classified information resources of the directory services provided by major web portals to complement keyword-based retrieval. Specifically, it quantitatively analyzes the topic categories, the information resources by subject, and the information resources classified under the topic categories of three directories: Yahoo, Naver, and Empas. The analysis reveals some differences among the directory services. Overall, the directories show different ratios of referred categories to original categories depending on the subject area, and the categories regarded as format-based show the highest proportion of referred categories. In terms of the total amount of classified information resources, Yahoo has the largest number of resources, and the directories hold different amounts of resources depending on the subject area. A quantitative analysis of resources classified under a specific category was performed on the 'News & Media' class; it reveals that, as far as the number of categorized information resources is concerned, Naver and Empas contain overly specified categories compared to Yahoo. Comparing the depth of the categories assigned by the three directories to the same information resources shows that, on average, Yahoo assigns categories one level deeper than the other two directories.
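
The depth comparison at the end of the analysis can be made concrete with a tiny sketch. The category paths below are invented examples, not the study's data; depth is simply the number of levels in the assigned path.

```python
# Illustrative depth comparison: how deep does each directory file the same
# resource? Paths are hypothetical examples in the spirit of the study.
category_paths = {
    "Yahoo": "News & Media > Newspapers > Regional > Daily",
    "Naver": "News & Media > Newspapers > Regional",
    "Empas": "News & Media > Newspapers > Regional",
}

for directory, path in category_paths.items():
    depth = len(path.split(">"))  # number of hierarchy levels
    print(f"{directory}: depth {depth}")
# Here Yahoo files the resource one level deeper than the other two.
```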

A Study on Strategic Management of Native Advertisement (네이티브 광고의 전략적 관리방안에 관한 연구)

  • Son, Jeyoung; Kang, Inwon
    • Management & Information Systems Review / v.38 no.1 / pp.63-81 / 2019
  • Native advertising is actively utilized to overcome the disadvantages of existing web advertisement formats such as banner, pop-up, and interstitial ads. It is considered a useful advertising technique in that it can reduce users' resistance and attract attention. In recent years, however, there has been a surge of fake news and fake content that disguises advertisements as articles or video content. The purpose of this study is to understand how firms can coordinate and control native advertisements in a rational way. For this analysis, we surveyed 308 social media users selected by quota sampling. The verification found that the more negatively an advertisement is evaluated, the lower the level of persuasion achieved by the advertisement and the more negative the impact on the website where it is exposed. In addition, this study examined the influence of negative stimulus factors on the qualitative performance of the firm, finding that source non-expertise had the strongest effect on skepticism toward the ad. Also, platform overflow has a direct effect on the evaluation of the website as well as on the negative evaluation of the advertisement. Moreover, this study provides concrete implications for segmented markets by verifying the differences between the paths according to the level of website involvement.

A study on the User Experience at Unmanned Checkout Counter Using Big Data Analysis (빅데이터 분석을 통한 무인계산대 사용자 경험에 관한 연구)

  • Kim, Ae-sook; Jung, Sun-mi; Ryu, Gi-hwan; Kim, Hee-young
    • The Journal of the Convergence on Culture Technology / v.8 no.2 / pp.343-348 / 2022
  • This study aims to analyze the user experience of unmanned checkout counters as perceived by consumers, using SNS big data. For this study, blogs, news, Q&A postings (Knowledge-iN and tips), cafes, and web documents on Naver and Daum were analyzed, with 'unmanned checkout counter' as the keyword for the data search. The data analysis period was the two years from January 1, 2020 to December 31, 2021. For data collection and analysis, frequency and matrix data were extracted through Textom, and network analysis and visualization were conducted using the NetDraw function of the UCINET 6 program. As a result, perceptions of the checkout counter clustered into accessibility, usability, continuous use intention, and others, according to the definition of consumers' experience factors. From a supplier's point of view, if unmanned checkout counters spread indiscriminately in response to minimum wage increases and shortened working hours, a bigger employment problem will arise from a social point of view. In addition, institutionalization is needed to supply unmanned checkout counters that are easy and convenient for the elderly, the younger generation, children, and foreigners who are unfamiliar with unmanned checkout.
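
Since Textom and UCINET/NetDraw are commercial tools, the workflow can only be approximated here. A rough Python sketch of the same idea, assuming networkx and toy documents in place of the collected SNS posts:

```python
# Approximate the frequency/matrix-plus-network-analysis workflow: build a
# keyword co-occurrence matrix from documents and inspect network centrality.
# Document texts are invented placeholders, not the study's data.
from collections import Counter
from itertools import combinations
import networkx as nx

docs = [
    "unmanned checkout accessibility elderly",
    "unmanned checkout usability convenience",
    "accessibility usability continuous use",
]

cooc = Counter()
for doc in docs:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1  # count keyword pairs appearing in the same document

G = nx.Graph()
for (a, b), w in cooc.items():
    G.add_edge(a, b, weight=w)

# Degree centrality highlights the keywords around which perceptions cluster.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])
```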

Introducing SEABOT: Methodological Quests in Southeast Asian Studies

  • Keck, Stephen
    • SUVANNABHUMI / v.10 no.2 / pp.181-213 / 2018
  • How should Southeast Asia (SEA) be studied? The need to explore and identify methodologies for studying SEA is inherent in its multifaceted subject matter. At a minimum, the region's rich cultural diversity inhibits both the articulation of decisive defining characteristics and the training of scholars who can write with confidence beyond their specialisms. Consequently, the challenges of understanding the region remain, and a consensus regarding the most effective approaches to studying its history, identity and future seems quite unlikely. Furthermore, "Area Studies" more generally has proved to be a less attractive frame of reference for burgeoning scholarly trends. This paper proposes a new tool to help address these challenges. Even though the science of artificial intelligence (AI) is in its infancy, it has already yielded new approaches to many commercial, scientific and humanistic questions. At this point, AI has been used to produce news, improve smartphones, deliver more entertainment choices, analyze earthquakes and write fiction. The time has come to explore the possibility that AI can be put at the service of the study of SEA. The paper lays out what would be required to develop SEABOT, an instrument that might exist as a robot on the web and could be called upon to make the study of SEA both broader and more comprehensive. The discussion explores the financial resources, ownership and timeline needed to take SEABOT from an idea to a reality. SEABOT would draw upon artificial neural networks (ANNs) to mine the region's "Big Data" while synthesizing the information to form new and useful perspectives on SEA. Overcoming significant language issues, applying multidisciplinary methods and drawing upon new yields of information should produce new questions and ways to conceptualize SEA. SEABOT could lead to findings which might not otherwise be achieved. SEABOT's work might well produce outcomes which could open up solutions to immediate regional problems, provide ASEAN planners with new resources and make it possible to eventually define and capitalize on SEA's "soft power". That is, new findings should provide the basis for ASEAN diplomats and policy-makers to develop new modalities of cultural diplomacy and improved governance. Last, SEABOT might open up avenues for telling the SEA story in new and distinctive ways. SEABOT is treated here as a heuristic device for exploring the results such an instrument might yield. More importantly, the discussion also raises the possibility that an AI-driven perspective on SEA may prove to be more problematic than beneficial.


Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun; Hyun, Yoonjin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers. Managing this flood of information requires classifying relevant documents, and hence text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most existing attempts propose a new algorithm or modify an existing one, a line of research that has arguably reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on finding a way to modify the use of the data itself. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from such data. In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and target data differ, the features of the two datasets may differ as well. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms, since they are not designed to recognize different types of data representation at once and put them together in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data might degrade the performance of the document classifier. Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied to the final decision. In this paper, three different types of real-world data sources were used: news, Twitter and blogs.
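
As a rough illustration of the semi-supervised idea (not the paper's RSESLA itself), the following sketch augments a small labeled set with confidently pseudo-labeled documents from a heterogeneous source before retraining; all data, the confidence threshold, and the choice of Naïve Bayes are assumptions made for the example.

```python
# A simplified self-training sketch: pseudo-label documents from a
# heterogeneous source and keep only the confident ones when retraining.
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled_docs = ["stock market falls sharply", "home team wins the match"]
labels = ["economy", "sports"]
# Unlabeled documents from other domains (e.g., tweets, blogs) serve as the
# heterogeneous source whose noise-like variation may robustify the model.
unlabeled_docs = ["shares rally after earnings report", "striker scores twice again"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_docs + unlabeled_docs)
X_lab, X_unlab = X[:2], X[2:]

clf = MultinomialNB().fit(X_lab, labels)
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) >= 0.6  # assumed confidence threshold

# Retrain on the union of labeled and confidently pseudo-labeled documents.
X_aug = sp.vstack([X_lab, X_unlab[confident]])
y_aug = labels + list(clf.predict(X_unlab[confident]))
clf = MultinomialNB().fit(X_aug, y_aug)
print(clf.predict(vec.transform(["market crashes on weak earnings"])))
```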

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek; Lee, Choong Kwon; Cha, Kyung Jin
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.143-159 / 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging eras of innovation and to allocate budgets in preparation for rapidly changing technological trends. Towards the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions influence IT and industry leaders and shape organizations' basic assumptions about technology and the future of IT; yet the accuracy of these reports is difficult to verify. Social media data can be a useful tool for such verification. As social media services have gained in popularity, they are used in a variety of ways, from posting about personal daily life to keeping up to date with news and trends. In recent years, rates of social media activity in Korea have reached unprecedented levels: hundreds of millions of users now participate in online social networks and share their opinions and thoughts with colleagues and friends. In particular, Twitter is currently the major microblogging service; its 'tweets' let users report their current thoughts and actions, comment on news, and engage in discussions. For an analysis of IT trends, we chose tweet data because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading on technology. Previous studies found that tweet data provides useful information and detects societal trends effectively; these studies also showed that Twitter can track issues faster than other media such as newspapers. Therefore, this study investigates how frequently the predicted IT trends for the following year, announced by public organizations, are mentioned on social network services like Twitter. IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study analyzes Twitter data generated in Seoul (Korea) and compares it with the predictions of the two organizations. Twitter data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. To overcome these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture trends while processing big streaming datasets of tweets in real time. The system offers a framework for crawling, normalizing, analyzing, indexing and searching tweet data. As a result, we crawled the Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to review how frequently the IT trend topics announced by the two organizations were mentioned by people in Seoul. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for some topics such as 'new types of security threat', 'green IT', and 'next generation semiconductor'; since these topics are not generalized compound words, they may be mentioned on Twitter with other wording. To test whether the IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's online e-Procurement system, a nationwide web-based system dealing with the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of IT topics mentioned in project announcements by Nara Market in 2012 and 2013. The main contributions of our research are the following: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline to IT professionals and researchers in Korea who are looking for verified IT topic trends for the following year, and ii) researchers can use Twitter to gain useful ideas for detecting and predicting dynamic trends in technological and social issues.
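
The correlation step can be sketched as follows, with hypothetical per-topic mention counts standing in for the study's data.

```python
# A hedged sketch of the correlation analysis described above: compare how
# often each predicted IT topic is mentioned in tweets versus in Nara Market
# project announcements. All counts are invented placeholders.
from scipy.stats import pearsonr

topics = ["big data", "cloud", "mobile security", "smart work"]
tweet_mentions = [420, 310, 95, 60]        # hypothetical tweet frequencies
procurement_mentions = [38, 25, 9, 7]      # hypothetical Nara Market counts

r, p_value = pearsonr(tweet_mentions, procurement_mentions)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
```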

The development of resources for the application of 2020 Dietary Reference Intakes for Koreans (2020 한국인 영양소 섭취기준 활용 자료 개발)

  • Hwang, Ji-Yun; Kim, Yangha; Lee, Haeng Shin; Park, EunJu; Kim, Jeongseon; Shin, Sangah; Kim, Ki Nam; Bae, Yun Jung; Kim, Kirang; Woo, Taejung; Yoon, Mi Ock; Lee, Myoungsook
    • Journal of Nutrition and Health / v.55 no.1 / pp.21-35 / 2022
  • The recommended meal composition allows the general public to organize meals using the number of servings of foods from each of six food groups (grains; meat, fish, eggs and beans; vegetables; fruits; milk and dairy products; and oils and sugars) to meet the Dietary Reference Intakes for Koreans (KDRIs) without calculating complex nutritional values. Through an integrated analysis of data from the 6th to 7th Korean National Health and Nutrition Examination Surveys (2013-2018), representative foods for each food group were selected, and the amounts of representative foods per person were derived based on energy. Based on the estimated energy requirement (EER) by age and gender from the KDRIs, a total of 12 diets were suggested by differentiating meal compositions by age (1-2, 3-5, 6-11, 12-18, 19-64, 65-74 and ≥ 75 years) and gender. The 2020 Food Balance Wheel included the sixth food group of oils and sugars to raise public awareness and avoid confusion in the practical use of the model by industries or individuals in reducing the consistently increasing intakes of oils and sugars. To promote the everyday use of the Food Balance Wheel and recommended meal compositions among the general public, a poster of the Food Balance Wheel was created in five languages (Korean, English, Japanese, Vietnamese and Chinese) along with card news. A survey was conducted to provide a basis for categorizing nutritional problems by life cycle and developing customized web-based messages for the public; based on the survey results, two types of card news were produced, for the general public and for youth. Additionally, an educational program was developed through a series of processes: prioritizing educational topics, setting educational goals for each stage, creating a detailed educational system chart, and producing teaching-learning plans for the development of educational materials and media.

An Analysis for Deriving New Convergent Service of Mobile Learning: The Case of Social Network Analysis and Association Rule (모바일 러닝에서의 신규 융합서비스 도출을 위한 분석: 사회연결망 분석과 연관성 분석 사례)

  • Baek, Heon; Kim, Jin Hwa; Kim, Yong Jin
    • Information Systems Review / v.15 no.3 / pp.1-37 / 2013
  • This study explores the possibility of service convergence to promote mobile learning. It attempts to identify how mobile learning services are provided, which of those services are most popular, and which are highly demanded by users, and it investigates potential opportunities for convergence between mobile services and e-learning, extending to the possibility of actively converging the services the two have in common. Important variables were identified from related web pages of portal sites using social network analysis (SNA) and association rules. Because the number and type of variables differ across web pages, SNA was used to cope with the difficulty of identifying the degree of complex connection, and association analysis was used to identify association rules among variables. The study revealed that the most frequent services among those common to mobile services and e-learning were Games and SNS, followed by Payment, Advertising, Mail, Event, Animation, Cloud, e-Book, Augmented Reality and Jobs. It also found that Search, News, and GPS were very highly demanded in mobile services, while Simulation, Culture, and Public Education were highly demanded in e-learning. In addition, based on the common variables of mobile and e-learning services, the variable pairs showing high service convergence were Games and SNS, Games and Sports, SNS and Advertising, Games and Event, SNS and e-Book, and Games and Community in mobile services, while in e-learning services Games, Animation, Counseling and e-Book, as preceding services, together with Simulation, Speaking, Public Education and Attendance Management, turned out to be highly convergent. Finally, this study attempted to predict the possibility of active service convergence, focusing on Games, SNS, and e-Book, which were highly demanded common services in mobile and e-learning services. It is expected that this study can suggest a strategic direction for promoting mobile learning by converging mobile services and e-learning.
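
The association-rule side of the analysis can be illustrated with a small sketch using the mlxtend implementation of Apriori; the transactions below are invented examples of services co-occurring on a page, not the study's crawled data.

```python
# A minimal association-rule mining sketch in the spirit of the analysis
# above: find frequent service itemsets and derive rules among them.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["Games", "SNS", "Advertising"],
    ["Games", "SNS", "e-Book"],
    ["Games", "Event"],
    ["SNS", "e-Book", "Payment"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```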


Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) convey the importance of a topic through a treemap based on a scoring system and frequency; and (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is especially attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data, making interaction with data easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets from Korea during March 2013.
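
The topic-extraction core of such a system can be sketched with gensim's LDA. The toy tweets, the number of topics, and the parameters below are assumptions for illustration; the real system ran over Hadoop and MongoDB at a scale of roughly 150 million tweets.

```python
# A hedged sketch of the topic-modeling step behind a system like TITS:
# extract topics from a handful of toy tweets with LDA. A daily run would
# feed the topic keywords into the ranking, treemap, and time-series views.
from gensim import corpora
from gensim.models import LdaModel

tweets = [
    "subway fare hike announced today",
    "fare increase angers subway commuters",
    "new smartphone model released this week",
    "smartphone sales break release week record",
]
texts = [t.split() for t in tweets]  # stop-word removal/noun extraction omitted

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Two topics for the two toy issues above.
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```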