• Title/Summary/Keyword: Convolutional

Fake News Detection on YouTube Using Related Video Information (관련 동영상 정보를 활용한 YouTube 가짜뉴스 탐지 기법)

  • Junho Kim;Yongjun Shin;Hyunchul Ahn
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.19-36 / 2023
  • As advances in information and communication technology have made it easier for anyone to produce and disseminate information, a new problem has emerged: fake news, false information intentionally shared to mislead people. Initially spread mainly through text, fake news has gradually evolved and is now distributed in multimedia formats. Since its founding in 2005, YouTube has become the world's leading video platform, used by people worldwide, but it has also become a major channel for fake news, causing social problems. Various researchers have worked on detecting fake news on YouTube. Detection approaches divide into content-based and background-information-based methods, and content-based approaches dominate both conventional fake news research and YouTube-specific studies. This study instead proposes a detection method based on background information: it utilizes the related video information YouTube provides alongside the original video. Specifically, the original video and its related videos are vectorized with Doc2vec, an embedding technique, and the resulting vectors are classified by a convolutional neural network (CNN). The empirical analysis shows that the proposed method predicts fake news on YouTube better than the existing content-based approach. By helping to curb the spread of fake news on YouTube, where false information propagates quickly, the proposed method contributes to a safer and more reliable information environment.
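
A minimal sketch of the kind of pipeline this abstract describes, assuming gensim's Doc2Vec for embedding each video's text and a small 1D CNN in PyTorch as the classifier; the toy corpus, vector sizes, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch: embed the original and related videos' text with Doc2vec,
# then classify the stacked vectors with a small CNN. The toy corpus, sizes,
# and hyperparameters are assumptions, not the paper's settings.
import numpy as np
import torch
import torch.nn as nn
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each "document" stands in for one video's title/description text.
corpus = [
    "breaking news miracle cure discovered",
    "official statement from the health authority",
    "you will not believe what happens next",
    "peer reviewed study explains the findings",
]
tagged = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(corpus)]
d2v = Doc2Vec(vector_size=64, min_count=1, epochs=40)
d2v.build_vocab(tagged)
d2v.train(tagged, total_examples=d2v.corpus_count, epochs=d2v.epochs)

def embed_videos(texts):
    """Infer one Doc2vec vector per video text."""
    return torch.from_numpy(np.stack([d2v.infer_vector(t.split()) for t in texts]))

class RelatedVideoCNN(nn.Module):
    """1D CNN over the sequence [original video, related video 1, ..., related video k]."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(emb_dim, 32, kernel_size=2), nn.ReLU(),
                                  nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Linear(32, 2)       # fake vs. genuine

    def forward(self, x):                # x: (batch, n_videos, emb_dim)
        x = x.transpose(1, 2)            # -> (batch, emb_dim, n_videos)
        return self.fc(self.conv(x).squeeze(-1))

# One "sample": an original video plus three related videos.
sample = embed_videos(corpus).unsqueeze(0)   # (1, 4, 64)
print(RelatedVideoCNN()(sample).shape)       # torch.Size([1, 2])
```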

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task are limited in that they cannot utilize an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. It effectively fuses these three heterogeneous features into a global multimodal context map with a point-wise convolutional neural network module. Lastly, the model adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides the model to learn the navigation policy efficiently. Through quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
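
A minimal PyTorch sketch of the point-wise (1×1) convolutional fusion the abstract describes, assuming three spatial context maps of the same resolution; the channel sizes are illustrative and not the MCFMO configuration.

```python
# Illustrative sketch: fuse three heterogeneous context maps with a
# point-wise (1x1) convolution. Channel sizes are assumptions.
import torch
import torch.nn as nn

class PointwiseFusion(nn.Module):
    def __init__(self, c_vis=32, c_sem=16, c_goal=8, c_out=64):
        super().__init__()
        # kernel_size=1: each spatial cell is fused independently across channels.
        self.fuse = nn.Sequential(
            nn.Conv2d(c_vis + c_sem + c_goal, c_out, kernel_size=1),
            nn.ReLU(),
        )

    def forward(self, vis_map, sem_map, goal_map):
        x = torch.cat([vis_map, sem_map, goal_map], dim=1)  # concat on channels
        return self.fuse(x)                                  # global multimodal map

B, H, W = 2, 48, 48
fused = PointwiseFusion()(torch.randn(B, 32, H, W),
                          torch.randn(B, 16, H, W),
                          torch.randn(B, 8, H, W))
print(fused.shape)  # torch.Size([2, 64, 48, 48])
```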

Development of Deep Learning Structure to Secure Visibility of Outdoor LED Display Board According to Weather Change (날씨 변화에 따른 실외 LED 전광판의 시인성 확보를 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.3 / pp.340-344 / 2023
  • In this paper, we propose a deep learning structure that secures the visibility of an outdoor LED display board under changing weather. The proposed technique automatically adjusts the LED luminance according to weather changes using an imaging device and deep learning. To do so, image data of the flattened background region are first preprocessed and then used to train a convolutional network, yielding a deep learning model that can classify the weather. The applied network uses residual learning, which reduces the difference between the input and output of each block and thus lets training retain the characteristics of the initial input. A controller then recognizes the weather and adjusts the luminance of the outdoor LED display board accordingly: when the surroundings become brighter, the luminance is raised so the board remains clearly visible, and when the surroundings become darker, the luminance is lowered, because light scattering would otherwise reduce visibility. Applying the proposed method, a certified luminance measurement test under changing weather confirmed that the visibility of the outdoor LED display board was secured.
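
A minimal sketch of the two pieces the abstract describes, assuming PyTorch: a small residual weather classifier and a rule mapping the predicted class to an LED luminance level. The class set and luminance values are assumptions, not the paper's calibration.

```python
# Illustrative sketch: a residual classifier for weather plus a simple
# class-to-luminance rule. Classes and luminance values are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the block learns the difference between input and output."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(self.body(x) + x)

class WeatherNet(nn.Module):
    def __init__(self, n_classes=3):  # e.g. clear / overcast / dark (assumed classes)
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.blocks = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16, n_classes))
    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Hypothetical mapping: brighter surroundings -> higher LED luminance (cd/m^2).
LUMINANCE = {0: 6000, 1: 3500, 2: 1200}

frame = torch.randn(1, 3, 64, 64)           # preprocessed background patch
weather = WeatherNet()(frame).argmax(1).item()
print("set LED luminance to", LUMINANCE[weather])
```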

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.355-364 / 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of attacks with existing pattern- or signature-based intrusion detection methods, so research on anomaly detection using data-driven artificial intelligence is growing. Supervised anomaly detection is hard to apply in real environments because it requires sufficient labeled data for training; research has therefore been active on unsupervised methods that learn from normal data and detect anomalies by finding patterns in the data itself. Accordingly, this study aims to extract latent vectors that preserve useful sequence information from sequential log data and to build an anomaly detection model on the extracted latent vectors. Word2Vec was used to create a dense vector representation capturing the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the sequences expressed as dense vectors. Three autoencoder variants were developed: a denoising autoencoder based on the GRU (Gated Recurrent Unit) recurrent network, which suits sequence data; an autoencoder based on a one-dimensional convolutional neural network, intended to mitigate the limited short-term memory a GRU can exhibit; and an autoencoder combining the GRU with one-dimensional convolution. The experiments used the time-series-based NGIDS (Next Generation IDS Dataset) data. The autoencoder combining GRU and one-dimensional convolution was more efficient than the GRU-only or convolution-only models in terms of the training time needed to extract useful latent patterns, and it showed stable anomaly detection performance with smaller fluctuations.
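
A minimal PyTorch sketch of an autoencoder that combines a 1D convolution with a GRU over embedded event sequences and flags anomalies by reconstruction error; the layer sizes, noise level, and threshold rule are assumptions, not the study's model.

```python
# Illustrative sketch: denoising autoencoder combining Conv1d and GRU over
# Word2Vec-style sequence embeddings; anomalies are flagged by reconstruction
# error. Sizes and the threshold rule are assumptions.
import torch
import torch.nn as nn

EMB, HID, SEQ = 32, 64, 50   # embedding size, hidden size, sequence length

class ConvGRUAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(EMB, HID, kernel_size=3, padding=1)  # local patterns
        self.enc = nn.GRU(HID, HID, batch_first=True)              # sequence context
        self.dec = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, EMB)

    def forward(self, x):                      # x: (batch, SEQ, EMB)
        noisy = x + 0.1 * torch.randn_like(x)  # denoising: corrupt the input
        h = self.conv(noisy.transpose(1, 2)).transpose(1, 2)
        _, state = self.enc(h)                 # latent summary of the sequence
        dec_in = state.transpose(0, 1).repeat(1, SEQ, 1)
        y, _ = self.dec(dec_in)
        return self.out(y)                     # reconstruction of the clean input

model = ConvGRUAutoencoder()
seqs = torch.randn(8, SEQ, EMB)                # stand-in for embedded log sequences
recon = model(seqs)
score = ((recon - seqs) ** 2).mean(dim=(1, 2)) # per-sequence reconstruction error
print((score > score.mean() + 2 * score.std()).nonzero())  # assumed anomaly rule
```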

Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon;Seonmi Kim;Namki Choi
    • Journal of the Korean Academy of Pediatric Dentistry / v.50 no.3 / pp.263-276 / 2023
  • The purpose of this study was to evaluate the performance of a You Only Look Once (YOLO) model for object detection of proximal caries in periapical radiographs of children. A total of 2,016 periapical radiographs of the primary dentition were selected from the M6 database as the learning material, of which 1,143 were labeled as proximal caries by an experienced dentist using an annotation tool. After converting the annotations into a training dataset, YOLO was trained on it using a single convolutional neural network (CNN) model. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the precision-recall curve, and average precision (AP, the area under the precision-recall curve) were calculated on the 187-image test dataset to evaluate the object detection model. The CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, a recall of 0.94, a specificity of 0.97, a precision of 0.82, an NPV of 0.96, and an F1-score of 0.81; the AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
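
The detection metrics reported above can all be derived from confusion-matrix counts; the short sketch below shows those formulas with made-up counts (the numbers are illustrative, not the study's data).

```python
# Illustrative sketch: the reported metrics derived from confusion-matrix counts.
# The counts passed in below are made up; only the formulas reflect the abstract.
def detection_metrics(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    recall      = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    npv         = tn / (tn + fn)          # negative predictive value
    f1          = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, recall=recall, specificity=specificity,
                precision=precision, npv=npv, f1=f1)

print(detection_metrics(tp=94, fp=21, tn=640, fn=6))
```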

Deep learning algorithms for identifying 79 dental implant types (79종의 임플란트 식별을 위한 딥러닝 알고리즘)

  • Hyun-Jun Kong;Jin-Yong Yoo;Sang-Ho Eom;Jun-Hyeok Lee
    • Journal of Dental Rehabilitation and Applied Science / v.38 no.4 / pp.196-203 / 2022
  • Purpose: This study aimed to evaluate the accuracy and clinical usability of a deep learning identification model for 79 dental implant types. Materials and Methods: A total of 45,396 implant fixture images were collected from panoramic radiographs of patients who received implant treatment between 2001 and 2020 at 30 dental clinics. The collected images covered 79 implant types from 18 manufacturers. EfficientNet and Meta Pseudo Labels algorithms were used: for EfficientNet, the EfficientNet-B0 and EfficientNet-B4 submodels; for Meta Pseudo Labels, two models that differ in widen factor. Top-1 accuracy was measured for EfficientNet, and top-1 and top-5 accuracy for Meta Pseudo Labels. Results: EfficientNet-B0 and EfficientNet-B4 both showed a top-1 accuracy of 89.4. Meta Pseudo Labels 1 showed a top-1 accuracy of 87.96, and Meta Pseudo Labels 2, with the increased widen factor, showed 88.35. For top-5 accuracy, Meta Pseudo Labels 1 scored 97.90, which was 0.11 points higher than the 97.79 of Meta Pseudo Labels 2. Conclusion: All four deep learning models used for implant identification in this study showed close to 90% accuracy. To increase the clinical applicability of deep learning for implant identification, it will be necessary to collect a larger amount of data and to develop a fine-tuned algorithm for implant identification.
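
A minimal sketch, assuming torchvision's EfficientNet-B0 as a stand-in backbone (not the authors' implementation), of a 79-class classification head and the top-1 / top-5 accuracy measures quoted in the abstract; the random tensors merely stand in for fixture crops and labels.

```python
# Illustrative sketch: EfficientNet-B0 (torchvision) with a 79-class head,
# plus top-1 / top-5 accuracy. Inputs and weights here are placeholders.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights=None)                     # pretrained weights optional
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 79)

@torch.no_grad()
def topk_accuracy(logits, labels, k):
    topk = logits.topk(k, dim=1).indices                  # (batch, k) class ids
    return (topk == labels.unsqueeze(1)).any(dim=1).float().mean().item()

images = torch.randn(4, 3, 224, 224)                      # stand-in fixture crops
labels = torch.randint(0, 79, (4,))
logits = model(images)
print("top-1:", topk_accuracy(logits, labels, 1),
      "top-5:", topk_accuracy(logits, labels, 5))
```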

Development of Image Classification Model for Urban Park User Activity Using Deep Learning of Social Media Photo Posts (소셜미디어 사진 게시물의 딥러닝을 활용한 도시공원 이용자 활동 이미지 분류모델 개발)

  • Lee, Ju-Kyung;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.50 no.6 / pp.42-57 / 2022
  • This study aims to create a basic model that classifies, using deep learning, the activity photos urban park users share on social media. Photos related to urban parks were collected through a Naver search and used for the classification model. Based on the indicators of Naturalness, Potential Attraction, and Activity, which can be used to evaluate the characteristics of urban parks, 21 classification categories were created. Urban park photos shared on Naver were collected by category, and annotated datasets were created. A custom CNN model and a transfer learning model based on a pre-trained CNN were designed, trained on the collected photo dataset, and analyzed. The Xception transfer learning model, which demonstrated the best performance, was selected as the urban park user activity image classification model and evaluated with several evaluation indicators. This study is meaningful in that it builds an AI-based index for evaluating the characteristics of urban parks from user-shared photos on social media. The deep learning classification model mitigates the limitations of manual classification and can efficiently classify large volumes of urban park photos, so it can be a useful method for the monitoring and management of city parks in the future.
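
A minimal sketch of Xception transfer learning for the 21 activity categories, using tf.keras; the frozen backbone, dropout rate, and training settings are assumptions rather than the study's configuration.

```python
# Illustrative sketch: Xception backbone with a new 21-class head.
# Layer sizes and training settings are assumptions.
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False                       # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(21, activation="softmax"),  # 21 park-activity classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
```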

Hate Speech Detection Using Modified Principal Component Analysis and Enhanced Convolution Neural Network on Twitter Dataset

  • Majed, Alowaidi
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.112-119 / 2023
  • The Internet, originally used for networking computers and communications, has evolved continuously and is now the backbone of many web services, including social media. The concept of social networking, which started in the early 1990s, has grown with the internet, and Social Networking Sites (SNSs) have become an important element of internet usage, mainly due to the services they provide. Twitter and Facebook have become the primary means by which most individuals keep in touch with others and carry on substantive conversations. These sites allow posting of photos and videos and support audio and video storage that can be shared among users. Although attractive, these provisions have also created problems, such as the posting of offensive material. Users of SNSs have their share in promoting hate through their words or speech, which is difficult to curtail once uploaded. Hence, this article outlines a process for extracting user reviews from the Twitter corpus in order to identify instances of hate speech. Through the use of Modified Principal Component Analysis (MPCA) and an Enhanced Convolutional Neural Network (ECNN), we identify instances of hate speech in the text. With natural language processing (NLP), a fully autonomous system for assessing syntax and meaning can be established. There is a strong emphasis on pre-processing, feature extraction, and classification. Normalization cleanses the text by removing extra spaces, punctuation, and stop words. Feature extraction then operates on the normalized text using the MPCA algorithm, which takes a set of related features and selects those that are most informative about the given dataset. The proposed classification method is then applied to detect instances of hate speech or abusive language. It is argued that the ECNN is superior to other methods for identifying hateful content online: it can take in massive amounts of data and quickly return accurate results, especially for larger datasets. As a result, the proposed MPCA+ECNN algorithm improves not only the F-measure but also accuracy, precision, and recall.
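
A minimal sketch of the pipeline's shape only: normalization, TF-IDF features, standard PCA as a stand-in for the paper's MPCA, and a small 1D CNN as a stand-in for the ECNN. The texts, labels, and sizes are made up.

```python
# Illustrative sketch of the pipeline shape: normalize -> features -> PCA -> CNN.
# Standard PCA and a plain CNN stand in for the paper's MPCA and ECNN.
import re
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["I hate this group of people", "What a lovely day in the park",
         "They should all disappear", "Great match last night"]
labels = torch.tensor([1, 0, 1, 0])          # 1 = hateful (toy labels)

def normalize(t):
    t = re.sub(r"[^\w\s]", " ", t.lower())   # strip punctuation
    return re.sub(r"\s+", " ", t).strip()    # collapse extra spaces

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform([normalize(t) for t in texts]).toarray()
X = PCA(n_components=3).fit_transform(X)     # keep the most informative components
X = torch.tensor(X, dtype=torch.float32).unsqueeze(1)   # (batch, 1, n_components)

cnn = nn.Sequential(nn.Conv1d(1, 8, kernel_size=2), nn.ReLU(),
                    nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(8, 2))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-2)
for _ in range(100):                         # tiny training loop on the toy data
    opt.zero_grad()
    loss = nn.functional.cross_entropy(cnn(X), labels)
    loss.backward()
    opt.step()
print(cnn(X).argmax(1))                      # predicted labels for the toy texts
```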

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering / v.27 no.1 / pp.3-12 / 2022
  • Image deblurring aims to remove image blur, which can arise while shooting from object motion, camera shake, defocus, and so forth. With the rise in popularity of smartphones, carrying a portable digital camera daily has become common, so image deblurring techniques have become more significant. Image deblurring was originally studied with traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on restoration quality, so their speed makes them hard to use in real situations. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that operates in real time at HD resolution. In addition, we improved the training and inference process so that performance could be increased without significantly affecting speed, and speed could be increased without significantly affecting performance. As a result, our algorithm achieves real-time performance, processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent quality relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
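
A minimal sketch of how the two figures quoted above (PSNR and frames per second at 1280×720) are typically measured, using an arbitrary stand-in network in PyTorch; the network and tensors are placeholders, not the paper's model or data.

```python
# Illustrative sketch: measuring throughput (FPS) at HD resolution and PSNR
# for a stand-in deblurring network. Network and tensors are placeholders.
import time
import torch
import torch.nn as nn

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))   # stand-in deblurring net
blurry = torch.rand(1, 3, 720, 1280)                    # one HD (1280x720) frame
sharp = torch.rand(1, 3, 720, 1280)                     # stand-in ground truth

with torch.no_grad():
    model(blurry)                                       # warm-up
    start = time.perf_counter()
    for _ in range(10):
        restored = model(blurry)
    fps = 10 / (time.perf_counter() - start)

print(f"throughput: {fps:.2f} FPS, PSNR: {psnr(restored, sharp):.2f} dB")
```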

Short-Term Precipitation Forecasting based on Deep Neural Network with Synthetic Weather Radar Data (기상레이더 강수 합성데이터를 활용한 심층신경망 기반 초단기 강수예측 기술 연구)

  • An, Sojung;Choi, Youn;Son, MyoungJae;Kim, Kwang-Ho;Jung, Sung-Hwa;Park, Young-Youn
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.43-45 / 2021
  • Short-term quantitative precipitation forecasting (QPF) is socially and economically important for preventing damage from severe weather. Recently, many studies have applied deep neural networks (DNNs) to short-term QPF. These studies require sophisticated pre-processing, because mishandling the varied and vast meteorological datasets lowers QPF performance. In particular, for more accurate prediction of non-linear precipitation trends, the dataset needs to be handled carefully based on a physical and dynamical understanding of the data. This paper therefore proposes the following approaches: i) refining and combining the major factors related to precipitation development (weather radar, terrain, air temperature, and so on) to construct training data for precipitation pattern analysis; and ii) producing predicted precipitation fields with a convolutional LSTM (ConvLSTM) network. The proposed algorithm was evaluated on rainfall events in 2020; it performed well on the magnitude and intensity of precipitation and clearly predicted non-linear precipitation patterns. The algorithm can be useful as a forecasting tool for preventing damage from severe weather.
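
A minimal sketch of a ConvLSTM that maps a short sequence of gridded input fields to the next precipitation field, using tf.keras.layers.ConvLSTM2D; the grid size, sequence length, channel meanings, and random arrays are assumptions, not the study's configuration or data.

```python
# Illustrative sketch: ConvLSTM nowcasting of the next precipitation field from
# a short sequence of combined input fields. Shapes and data are placeholders.
import numpy as np
import tensorflow as tf

T, H, W, C = 6, 64, 64, 4   # 6 past frames; channels e.g. radar, terrain, temperature

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False,
                               input_shape=(T, H, W, C)),        # summarize the sequence
    tf.keras.layers.Conv2D(1, kernel_size=1, activation="relu"), # rain rate >= 0
])
model.compile(optimizer="adam", loss="mse")

past = np.random.rand(2, T, H, W, C).astype("float32")     # stand-in input fields
next_frame = np.random.rand(2, H, W, 1).astype("float32")  # stand-in target field
model.fit(past, next_frame, epochs=1, verbose=0)
print(model.predict(past, verbose=0).shape)                # (2, 64, 64, 1)
```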
