Title/Summary/Keyword: Fake News

Search Results: 102

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.201-220, 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection is a form of document classification; thus, document classification techniques have been widely used in this field, while document summarization techniques have remained inconspicuous. At the same time, automatic news summarization services have become popular, and a recent study found that news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. It is therefore worth examining document summarization in the domestic (Korean) news data environment. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) showed little difference in performance; for DT (Decision Tree), the full-text-based model performed somewhat better; and for LR (Logistic Regression), the summary-based model performed best, although the difference from the full-text-based model was not statistically significant. This suggests that summarization preserves at least the core information of fake news, and that an LR-based model may even benefit from it.
This study features an experimental application of extractive summarization in fake news detection research employing various machine-learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison among summarization technologies; an in-depth analysis applying various analytical techniques to a larger data volume would therefore be helpful in the future.
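The extractive step described above can be illustrated with a minimal frequency-based sketch (a generic scoring heuristic for illustration only; the paper's actual summarizer and features are not specified here):

```python
from collections import Counter

def extractive_summary(sentences, k=2):
    """Keep the k sentences whose words are most frequent in the article,
    preserving their original order (a simple extractive heuristic)."""
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scored = [(sum(freq[w.lower()] for w in s.split()) / len(s.split()), i)
              for i, s in enumerate(sentences)]
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return [sentences[i] for _, i in top]

article = [
    "The government announced a new statistics policy today.",
    "Critics claim the policy relies on fabricated statistics.",
    "The weather in the capital was sunny.",
    "Officials denied that the statistics were fabricated.",
]
summary = extractive_summary(article, k=2)  # feed this to the detector
```

A detection model would then be trained on `summary` instead of the full text; in the study, LR benefited slightly from this substitution while BPN and SVM were largely unaffected.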

COVID_19 fake news and real news discrimination system (코로나19 가짜뉴스와 진짜뉴스 판별 시스템)

  • Lee, Jimin;Lee, Jisun;Woo, Jiyoung
    • Proceedings of the Korean Society of Computer Information Conference, 2022.01a, pp.411-412, 2022
  • In this paper, we use datasets of COVID-19 news and COVID-19 fake news to predict the probability that an input news article is fake. Keywords such as COVID-19, president, government, fake, and media appeared with high frequency in the bodies of fake news articles. Based on these keywords, we built a Naive Bayes model and applied it to develop a web page that screens out fake news.
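The keyword-based Naive Bayes step can be sketched as follows (the keyword list comes from the abstract, rendered as English stand-ins; the training data, smoothing, and thresholds are illustrative assumptions):

```python
import math
from collections import Counter

# English stand-ins for the high-frequency Korean keywords in the abstract
KEYWORDS = ["covid19", "president", "government", "fake", "media"]

def train_nb(docs, labels):
    """Count, per class, how many documents contain each keyword."""
    counts = {c: Counter() for c in set(labels)}
    totals = Counter(labels)
    for doc, c in zip(docs, labels):
        for kw in KEYWORDS:
            if kw in doc:
                counts[c][kw] += 1
    return counts, totals

def prob_fake(doc, counts, totals):
    """Bernoulli Naive Bayes posterior probability that `doc` is fake."""
    scores = {}
    for c in counts:
        logp = math.log(totals[c] / sum(totals.values()))   # class prior
        for kw in KEYWORDS:
            p = (counts[c][kw] + 1) / (totals[c] + 2)       # Laplace smoothing
            logp += math.log(p if kw in doc else 1 - p)
        scores[c] = logp
    m = max(scores.values())
    odds = {c: math.exp(s - m) for c, s in scores.items()}
    return odds["fake"] / sum(odds.values())

docs = ["the government hides covid19 facts, media is fake",
        "president signs covid19 relief bill",
        "fake media lies about the government",
        "health agency updates covid19 guidance"]
labels = ["fake", "real", "fake", "real"]
counts, totals = train_nb(docs, labels)
p = prob_fake("government media spread fake covid19 news", counts, totals)
```

A web page such as the one described would simply call `prob_fake` on the submitted article body and display the resulting probability.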


Research Analysis in Automatic Fake News Detection (자동화기반의 가짜 뉴스 탐지를 위한 연구 분석)

  • Jwa, Hee-Jung;Oh, Dong-Suk;Lim, Heui-Seok
    • Journal of the Korea Convergence Society, v.10 no.7, pp.15-21, 2019
  • Research on detecting fake information gained considerable interest after the 2016 US presidential election. Information from unknown sources is produced in the form of news, and its rapid spread is fueled by public interest in stimulating and provocative issues. In addition, the wide use of mass communication platforms such as social network services makes this phenomenon worse. The Poynter Institute created the International Fact-Checking Network (IFCN) to provide guidelines for fact-checking by skilled professionals and released a "Code of Ethics" for fact-checking agencies. However, this type of approach is costly because of the large number of experts required to test the authenticity of each article. Therefore, research on automated fake news detection technology that can identify fake news efficiently is gaining attention. In this paper, we investigate fake news detection systems and research that are developing rapidly, mainly thanks to recent advances in deep learning. We also organize the shared tasks and training corpora released in various forms, so that researchers can easily participate in this field, which deserves substantial research effort.

FakedBits- Detecting Fake Information on Social Platforms using Multi-Modal Features

  • Sharma, Dilip Kumar; Singh, Bhuvanesh; Agarwal, Saurabh; Kim, Hyunsung; Sharma, Raj
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.1, pp.51-73, 2023
  • Social media play a significant role in communicating information across the globe: connecting with loved ones, getting the news, sharing ideas, and so on. However, some people use social media to spread fake information, which has a harmful impact on society. Therefore, minimizing fake news and detecting it are two primary challenges that need to be addressed. This paper presents a multi-modal deep learning technique to address these challenges. The proposed model can process both visual and textual features and can therefore detect fake information in visual and textual data. We used EfficientNetB0 for detecting counterfeit images and a sentence transformer for textual learning. Feature embedding is performed in individual channels, while fusion is done at the last classification layer; this late fusion is applied intentionally to mitigate the noise generated by multiple modalities. Extensive experiments are conducted, and performance is evaluated against state-of-the-art methods on three real-world benchmark datasets: MediaEval (Twitter), Weibo, and Fakeddit. The results reveal that the proposed model outperformed the state-of-the-art methods, achieving accuracies of 86.48%, 82.50%, and 88.80% on MediaEval (Twitter), Weibo, and Fakeddit, respectively.
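The late-fusion idea, with each modality embedded in its own channel and joined only at the final classifier, can be sketched with toy vectors (the real model uses EfficientNetB0 and a sentence transformer; the dimensions and weights below are purely illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def classify(text_emb, image_emb, w_text, w_image, bias=0.0):
    """Late fusion: keep per-channel embeddings separate and concatenate
    them only at the final classification layer (linear head + sigmoid)."""
    fused = list(text_emb) + list(image_emb)     # fusion happens here, at the end
    weights = list(w_text) + list(w_image)
    logit = bias + sum(x * w for x, w in zip(fused, weights))
    return sigmoid(logit)                        # probability the post is fake

p = classify(text_emb=[1.0, -0.5], image_emb=[0.2],
             w_text=[0.8, 0.4], w_image=[1.5])
```

Because the channels stay independent until this last step, noise in one modality does not corrupt the feature learning of the other, which is the motivation the abstract gives for late fusion.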

Techno Populism and Algorithmic Manipulation of News in South Korea

  • Yoon, Sunny
    • Journal of Contemporary Eastern Asia, v.18 no.2, pp.33-48, 2019
  • The current Moon Jae-in administration in South Korea is facing serious challenges as a result of a scandal involving the manipulation of news online. Staff in Moon's camp are suspected of manipulating public opinion by creating millions of fake news comments online, contributing to Moon's election as president. This South Korean political scandal raises a number of theoretical issues with regard to new platform technologies and media manipulation. First, the incident exposes the technological limits of blocking news manipulation, partly because of the nature of social media and partly because of the nature of contemporary technology. Contemporary social media are often monopolistic in nature: with the majority of people using the same platforms, they are likely to be subject to forms of media manipulation. Second, the Korean case of news manipulation demonstrates a unique cultural aspect of Korean society. News comments and readers' replies have become a major channel of alternative news in Korea. This phenomenon is often designated "reply journalism", since people are as interested in reading the replies of ordinary readers as in reading the news reports themselves. News replies are considered indicators of public opinion and are seen as affecting the trias politica in Korean society. Third, the Korean incident of news manipulation implicates a new form of populism in the 21st century and the nature of democratic participation. This article aims to explicate key issues in media manipulation by considering wider technological, cultural, and political aspects of the South Korean news media context.

User-Customized News Service by use of Social Network Analysis on Artificial Intelligence & Bigdata

  • KANG, Jangmook;LEE, Sangwon
    • International journal of advanced smart convergence, v.10 no.3, pp.131-142, 2021
  • Recently, services that provide customized news to subscribers have become active. In this study, we design a customized news service system based on deep-learning analysis of Social Network Service (SNS) activity, delivering real news while avoiding fake news. In other words, the core of this study concerns the delivery methods and devices for providing customized news services based on analysis of users' SNS activities. The method consists of five steps. In step 1, SNS site access records are received from user terminals. In step 2, SNS sites are searched based on these access records to obtain user profile information and SNS activity information. In step 3, the user's propensity is analyzed from the profile and activity information. In step 4, user-tailored news is selected through a news search based on the propensity analysis. Finally, in step 5, the customized news is sent to the user terminal. This study should be of great help to news service providers seeking to increase their number of subscribers.
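The five-step flow can be sketched as a pipeline of stub functions (all function names, data shapes, and sample values below are illustrative, not from the paper):

```python
def receive_sns_access_log(terminal_id):            # step 1
    """Receive SNS access records from the user terminal (stubbed)."""
    return {"terminal": terminal_id, "sites": ["sns.example.com"]}

def crawl_sns(access_log):                          # step 2
    """Search the recorded SNS sites for profile and activity info (stubbed)."""
    profile = {"age_group": "30s"}
    activity = {"liked_topics": ["economy", "technology"]}
    return profile, activity

def analyze_propensity(profile, activity):          # step 3
    """Derive the user's propensity from profile and activity."""
    return set(activity["liked_topics"])

def search_news(propensity):                        # step 4
    """Select user-tailored news matching the propensity."""
    catalog = {"economy": "Rates held steady", "sports": "Cup final tonight"}
    return [head for topic, head in catalog.items() if topic in propensity]

def deliver(terminal_id, news):                     # step 5
    """Send the customized news back to the user terminal."""
    return {"to": terminal_id, "articles": news}

def recommend_news(terminal_id):
    """Run steps 1-5 end to end."""
    access_log = receive_sns_access_log(terminal_id)
    profile, activity = crawl_sns(access_log)
    propensity = analyze_propensity(profile, activity)
    news = search_news(propensity)
    return deliver(terminal_id, news)

result = recommend_news("terminal-01")
```

Each stub would be replaced by the study's real component (the deep-learning propensity analyzer in step 3 being the substantive one); the skeleton only fixes the data flow between the five steps.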

Strategy Design to Protect Personal Information on Fake News based on Bigdata and Artificial Intelligence

  • Kang, Jangmook;Lee, Sangwon
    • International Journal of Internet, Broadcasting and Communication, v.11 no.2, pp.59-66, 2019
  • The emergence of new IT technologies and convergence industries such as artificial intelligence, big data, and the Internet of Things is another opportunity for South Korea, which has established itself as one of the world's top IT powerhouses. On the other hand, privacy concerns that may arise in the process of using such technologies raise the task of harmonizing the development of new industries with the protection of personal information. In response, the government clearly presented the criteria for de-identification measures of personal information and the scope of use of de-identified information needed to ensure that big data can be safely utilized within the framework of the current Personal Information Protection Act. By removing such uncertainties, it strives to promote corporate investment and industrial development while ensuring that the protection of people's personal information and human rights is not neglected. This study discusses a strategy for de-identifying personal information based on the analysis of fake news. Under the strategies derived here, information that has undergone appropriate de-identification measures is assumed not to be personal information and can therefore be used for big data analysis. De-identified information can thus be safely utilized and managed through administrative and technical safeguards that prevent re-identification, considering the possibility of re-identification due to technological development and data growth.

Harmful Disinformation in Southeast Asia: "Negative Campaigning", "Information Operations" and "Racist Propaganda" - Three Forms of Manipulative Political Communication in Malaysia, Myanmar, and Thailand

  • Radue, Melanie
    • Journal of Contemporary Eastern Asia, v.18 no.2, pp.68-89, 2019
  • When comparing media freedom in Malaysia, Myanmar, and Thailand, so-called "fake news" appears as a threat to a deliberative (online) public sphere in all three diverse contexts. However, "racist propaganda", "information operations", and "negative campaigning" may be more accurate terms for these forms of systematic manipulative political communication. The three cases show forms of disinformation in under-researched contexts and thereby expand the often Western-focused discourses on hate speech and fake news. Additionally, the analysis shows that harmful disinformation disseminated online originates from differing contextual trajectories and is not merely an "online phenomenon". Drawing on an analysis of connotative context factors, this explorative comparative study enables an understanding of different forms of harmful disinformation in Malaysia, Myanmar, and Thailand. The connotative context factors were inductively inferred from 32 expert interviews, which provide explanations for the formation of political communication (control) mechanisms.

Learning Algorithms in AI System and Services

  • Jeong, Young-Sik;Park, Jong Hyuk
    • Journal of Information Processing Systems, v.15 no.5, pp.1029-1035, 2019
  • In recent years, artificial intelligence (AI) services have become essential for extending human capabilities in various fields, such as face recognition for security and weather prediction. Existing AI services utilize various learning algorithms, such as classification, regression, and deep learning, to increase accuracy and efficiency. Nonetheless, these services face many challenges, such as fake news spreading on social media, stock selection and volatility delay in stock prediction systems, and inaccurate movie recommendation systems. In this paper, various algorithms are presented to mitigate these issues in different systems and services. Convolutional neural network algorithms with a word-embedding model are used to detect fake news in Korean. Approaches based on k-cliques and data mining increase accuracy in personalized recommendation services and address stock selection and volatility delay in stock prediction. Other algorithms, such as multi-level fusion processing, address the lack of real-time databases.
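The convolutional detection step, a filter sliding over a word-embedding sequence followed by max-pooling, can be reduced to its core operation (toy dimensions; the paper's architecture and Korean word embeddings are not reproduced here):

```python
def conv1d_maxpool(embeddings, kernel):
    """Slide `kernel` (a window of weight vectors) over the token-embedding
    sequence, take a dot product at each position, and max-pool the results."""
    k, dim = len(kernel), len(embeddings[0])
    feats = [sum(kernel[j][d] * embeddings[i + j][d]
                 for j in range(k) for d in range(dim))
             for i in range(len(embeddings) - k + 1)]
    return max(feats)

# Three tokens embedded in 2 dimensions, one bigram filter
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
filter_w = [[1.0, 0.0], [0.0, 1.0]]
feature = conv1d_maxpool(tokens, filter_w)
```

A real CNN text classifier applies many such filters of varying widths and feeds the pooled features to a dense classification layer.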

A Study on Korean Fake news Detection Model Using Word Embedding (워드 임베딩을 활용한 한국어 가짜뉴스 탐지 모델에 관한 연구)

  • Shim, Jae-Seung;Lee, Jaejun;Jeong, Ii Tae;Ahn, Hyunchul
    • Proceedings of the Korean Society of Computer Information Conference, 2020.07a, pp.199-202, 2020
  • This paper proposes a method for improving the performance of fake news detection models by incorporating word-embedding techniques. Previous Korean fake news detection studies have mainly relied on models based on the sparse representation TF-IDF (term frequency-inverse document frequency). From the perspective of fake news detection, however, TF-IDF is limited in capturing the linguistic characteristics of news; in particular, it cannot structurally reflect linguistic features that emerge from context. We therefore built a detection model that incorporates contextual information through text preprocessing with Word2vec, a dense-representation word-embedding technique, and compared it against a TF-IDF-based detection model. The results confirmed that the proposed Word2vec-based model performed better.
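The TF-IDF baseline that the proposed Word2vec model is compared against can be sketched in a few lines (whitespace tokenization on toy English documents; the study's Korean preprocessing is not reproduced):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Sparse TF-IDF document vectors: term frequency weighted by
    log inverse document frequency over a fixed vocabulary."""
    tokenized = [d.split() for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    vocab = sorted(df)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[w] / len(toks) * math.log(n / df[w]) for w in vocab])
    return vocab, vecs

docs = ["fake news spreads fast",
        "real news reports facts",
        "fake claims spread online"]
vocab, vecs = tfidf_vectors(docs)
```

A Word2vec-based counterpart would instead represent each document as, for example, the average of dense pretrained word vectors, which is how contextual regularities learned from a corpus enter the detection model.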
