• Title/Summary/Keyword: Deep Learning Dataset

Search results: 776

Image Segmentation of Fuzzy Deep Learning using Fuzzy Logic (퍼지 논리를 이용한 퍼지 딥러닝 영상 분할)

  • Jongjin Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.5 / pp.71-76 / 2023
  • In this paper, we propose a fuzzy U-Net, a fuzzy deep learning model that applies fuzzy logic to improve deep learning-based image segmentation. Fuzzy modules based on fuzzy logic were combined with U-Net, a deep learning model with excellent segmentation performance, and various types of fuzzy modules were simulated. The fuzzy module of the proposed model learns the intrinsic, complex rules that relate image feature maps to the corresponding segmentation results. The superiority of the proposed method was demonstrated by applying it to dental CBCT data. The simulations show that the ADD-RELU fuzzy module structure, which uses an addition skip connection in the proposed fuzzy U-Net, performs best, reaching 0.7928 on the test dataset.
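
The abstract does not give implementation details, but the ADD-RELU structure it names suggests a decoder stage in which upsampled features and encoder skip features are combined by element-wise addition followed by a ReLU, with a small learned fuzzy-style gating module on the skip path. A minimal PyTorch sketch under those assumptions (module names such as FuzzyGate are hypothetical, not from the paper):

```python
import torch
import torch.nn as nn

class FuzzyGate(nn.Module):
    """Hypothetical fuzzy-style gating: maps features to [0, 1]
    'membership' values and rescales the skip connection with them."""
    def __init__(self, channels):
        super().__init__()
        self.membership = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),  # membership degrees in [0, 1]
        )

    def forward(self, x):
        return x * self.membership(x)

class AddReluDecoderBlock(nn.Module):
    """Decoder stage combining upsampled and skip features by
    addition followed by ReLU (the 'ADD-RELU' arrangement)."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels * 2, channels, kernel_size=2, stride=2)
        self.gate = FuzzyGate(channels)
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep, skip):
        fused = torch.relu(self.up(deep) + self.gate(skip))  # addition skip connection
        return self.conv(fused)

# Example: fuse a 64-channel skip map with a 128-channel deeper map.
block = AddReluDecoderBlock(channels=64)
deep = torch.randn(1, 128, 32, 32)
skip = torch.randn(1, 64, 64, 64)
print(block(deep, skip).shape)  # torch.Size([1, 64, 64, 64])
```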

Leveraging Deep Learning and Farmland Fertility Algorithm for Automated Rice Pest Detection and Classification Model

  • Hussain. A;Balaji Srikaanth. P
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.959-979 / 2024
  • Rice pest identification is essential in modern agriculture for the health of rice crops. As global rice consumption rises, yields and quality must be maintained. Various methodologies have been employed to identify pests, encompassing sensor-based technologies, deep learning, and remote sensing models. Visual inspection by professionals and farmers remains essential, but integrating technology such as satellites, IoT-based sensors, and drones enhances efficiency and accuracy. A computer vision system processes images to detect pests automatically, giving real-time data for proactive and targeted pest management. With this motive in mind, this research provides a novel farmland fertility algorithm (FFA) with a deep learning-based automated rice pest detection and classification (FFADL-ARPDC) technique. The FFADL-ARPDC approach classifies rice pests from rice plant images. Before processing, FFADL-ARPDC removes noise and enhances contrast using bilateral filtering (BF). Rice crop images are then processed with the NASNetLarge deep learning architecture to extract image features. The FFA is used for hyperparameter tuning to optimise the performance of NASNetLarge, which aids in enhancing classification performance. Using an Elman recurrent neural network (ERNN), the model accurately categorises 14 types of pests. The FFADL-ARPDC approach is thoroughly evaluated on a benchmark dataset available in a public repository. With an accuracy of 97.58%, the FFADL-ARPDC model exceeds existing pest detection methods.
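
As a rough illustration of the preprocessing and feature-extraction stages described above (bilateral filtering followed by NASNetLarge features), here is a minimal sketch using OpenCV and Keras; the FFA hyperparameter search and the Elman RNN classifier are not reproduced, the file path is hypothetical, and a plain dense head stands in for the paper's classifier:

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.nasnet import NASNetLarge, preprocess_input

# Bilateral filtering: smooth noise while preserving pest/leaf edges.
img = cv2.imread("rice_leaf.jpg")            # hypothetical input image path
img = cv2.bilateralFilter(img, 9, 75, 75)    # diameter 9, color/space sigma 75

# NASNetLarge as a frozen feature extractor (ImageNet weights).
backbone = NASNetLarge(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=(331, 331, 3))

x = cv2.resize(img, (331, 331))
x = preprocess_input(x.astype(np.float32)[np.newaxis, ...])
features = backbone.predict(x)               # shape: (1, 4032)
print(features.shape)

# In the paper these features feed an Elman RNN over 14 pest classes,
# with NASNetLarge hyperparameters tuned by the farmland fertility
# algorithm; a simple dense classifier stands in here.
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(14, activation="softmax"),
])
```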

Unsupervised feature learning for classification

  • Abdullaev, Mamur;Alikhanov, Jumabek;Ko, Seunghyun;Jo, Geun Sik
    • Proceedings of the Korean Society of Computer Information Conference / 2016.07a / pp.51-54 / 2016
  • In computer vision, and especially in image processing, it has become popular to apply deep convolutional networks for supervised learning. Convolutional networks have shown state-of-the-art results in classification, object recognition, and detection, as well as semantic segmentation. However, supervised learning has two major disadvantages: it requires a huge amount of labeled data to reach high accuracy, and training on that much data takes a long time. Unsupervised learning, on the other hand, can address these problems much more cheaply. In this paper we show an efficient way to learn features for classification in an unsupervised way. The network is trained layer-wise with backpropagation and learns features from unlabeled data. Our approach shows better results on the Caltech-256 and STL-10 datasets.
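
The abstract describes greedy layer-wise training of features from unlabeled images with backpropagation. One common way to realize this is to train each convolutional layer as part of a small autoencoder, freeze it, and move to the next layer; the PyTorch sketch below illustrates that generic idea, not the authors' exact procedure:

```python
import torch
import torch.nn as nn

def train_layer(encoder_layer, decoder_layer, data, epochs=5, lr=1e-3):
    """Train one encoder/decoder pair to reconstruct its input,
    then return the encoder as a frozen feature extractor."""
    params = list(encoder_layer.parameters()) + list(decoder_layer.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in data:                         # unlabeled mini-batches
            recon = decoder_layer(encoder_layer(x))
            loss = loss_fn(recon, x)
            opt.zero_grad(); loss.backward(); opt.step()
    for p in encoder_layer.parameters():       # freeze the learned layer
        p.requires_grad = False
    return encoder_layer

# Toy unlabeled data: 3-channel 32x32 images (stand-in for Caltech-256 / STL-10).
data = [torch.randn(16, 3, 32, 32) for _ in range(10)]

# Layer 1: learn 32 filters by reconstruction.
enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
dec1 = nn.Conv2d(32, 3, 3, padding=1)
enc1 = train_layer(enc1, dec1, data)

# Layer 2: trained on layer-1 features of the same unlabeled data.
feat1 = [enc1(x) for x in data]
enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
dec2 = nn.Conv2d(64, 32, 3, padding=1)
enc2 = train_layer(enc2, dec2, feat1)

# The stacked encoders now produce features for a supervised classifier.
features = enc2(enc1(data[0]))
print(features.shape)  # torch.Size([16, 64, 32, 32])
```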

Cloud Task Scheduling Based on Proximal Policy Optimization Algorithm for Lowering Energy Consumption of Data Center

  • Yang, Yongquan;He, Cuihua;Yin, Bo;Wei, Zhiqiang;Hong, Bowei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1877-1891 / 2022
  • As a part of cloud computing technology, algorithms for cloud task scheduling have an important influence on cloud computing in data centers. In our earlier work, we proposed DeepEnergyJS, which was designed based on the original policy gradient reinforcement learning algorithm, and we verified its effectiveness through simulation experiments. In this study, we use the Proximal Policy Optimization (PPO) algorithm to update DeepEnergyJS to DeepEnergyJSV2.0. First, we verify the convergence of the PPO algorithm on the Alibaba Cluster Data V2018 dataset. Then we compare it with the policy gradient reinforcement learning algorithm in terms of convergence rate, converged value, and stability. The results indicate that PPO performed better on the training and test data sets than the policy gradient algorithm, as well as other general heuristic algorithms such as First Fit, Random, and Tetris. DeepEnergyJSV2.0 achieves about 7.814% better energy efficiency than DeepEnergyJS.
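
For reference, the core piece of PPO that distinguishes it from the vanilla policy gradient used in DeepEnergyJS is the clipped surrogate objective. A minimal, generic PyTorch sketch of that loss (not tied to the paper's scheduler state/action encoding):

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO (returned as a loss to minimize).

    log_probs_new: log pi_theta(a|s) under the current policy
    log_probs_old: log pi_theta_old(a|s) from the rollout policy (detached)
    advantages:    advantage estimates for the sampled actions
    """
    ratio = torch.exp(log_probs_new - log_probs_old)            # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))           # negate for gradient descent

# Toy usage with random rollout statistics.
new_lp = torch.randn(64, requires_grad=True)
old_lp = new_lp.detach() + 0.05 * torch.randn(64)
adv = torch.randn(64)
loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()
print(float(loss))
```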

Comparison of Fine-Tuned Convolutional Neural Networks for Clipart Style Classification

  • Lee, Seungbin;Kim, Hyungon;Seok, Hyekyoung;Nang, Jongho
    • International Journal of Internet, Broadcasting and Communication / v.9 no.4 / pp.1-7 / 2017
  • Clipart is artificial visual content created with tools such as Illustrator to highlight information, and the style of a clipart image plays a critical role in determining how it looks. However, previous studies on clipart focused only on object recognition [16], segmentation, and retrieval of clipart images using hand-crafted image features. Recently, some clipart classification studies based on style similarity using CNNs have been proposed; however, they used different CNN models and experimented with different benchmark datasets, so it is very hard to compare their performance. This paper presents an experimental analysis of clipart classification based on style similarity with two well-known CNN models (Inception ResNet V2 [13] and VGG-16 [14]) and transfer learning on the same benchmark dataset (Microsoft Style Dataset 3.6K). From this experiment, we find that the accuracy of Inception ResNet V2 is better than that of VGG for clipart style classification because of its deeper architecture and parallel convolution maps of various sizes. We also find that end-to-end training can improve accuracy by more than 20% in both CNN models.
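
A minimal Keras sketch of the transfer-learning setup the comparison implies (ImageNet-pretrained backbone, new classification head, then optional end-to-end fine-tuning); the number of style classes and the dataset loading are placeholders, since the abstract does not specify them:

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2

NUM_STYLES = 10  # placeholder: the actual number of style classes is not given in the abstract

base = InceptionResNetV2(weights="imagenet", include_top=False,
                         pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # stage 1: train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_STYLES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # dataset loading omitted

# Stage 2: end-to-end fine-tuning, which the paper reports improves
# accuracy by more than 20% over the frozen-backbone setting.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```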

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3567-3582 / 2020
  • Human activities are often affected by weather conditions, so automatic weather recognition is meaningful for traffic alerting, driving assistance, and intelligent traffic. With the boost of deep learning and AI, deep convolutional neural networks (CNNs) are utilized to identify weather situations. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions with two CNN5 branches. The global and local features are then merged by a Concat operation, and the weather image is finally classified by a Softmax classifier. In addition, a medium-scale dataset of 6,185 outdoor weather images, named WeatherDataset-6, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN achieves the best results on both datasets, with average recognition accuracies of 94.35% and 95.81% respectively, which is superior to classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. With further improvement, the method is also expected to work well for images taken at night.
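
A rough Keras sketch of the three-branch arrangement described above (ResNet50 on the full image, two small convolutional branches on the top and bottom halves, features concatenated before a Softmax layer); the "CNN5" branch depth, filter counts, and the six-class output below are assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def small_branch(x, name):
    """Stand-in for the paper's 'CNN5' local branch (exact layout assumed)."""
    for filters in (32, 64, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                          name=f"{name}_conv{filters}")(x)
        x = layers.MaxPooling2D(2, name=f"{name}_pool{filters}")(x)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)

inputs = layers.Input(shape=(224, 224, 3))

# Global branch: ResNet50 features of the whole image.
resnet = tf.keras.applications.ResNet50(weights="imagenet", include_top=False)
global_feat = layers.GlobalAveragePooling2D()(resnet(inputs))

# Local branches: sky (top half) and ground (bottom half) regions.
sky = layers.Cropping2D(cropping=((0, 112), (0, 0)))(inputs)
ground = layers.Cropping2D(cropping=((112, 0), (0, 0)))(inputs)
sky_feat = small_branch(sky, "sky")
ground_feat = small_branch(ground, "ground")

# Concatenate global and local features, then classify (6 weather classes
# assumed from the WeatherDataset-6 name).
merged = layers.Concatenate()([global_feat, sky_feat, ground_feat])
outputs = layers.Dense(6, activation="softmax")(merged)

model = tf.keras.Model(inputs, outputs, name="three_channel_cnn_sketch")
model.summary()
```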

Bias & Hate Speech Detection Using Deep Learning: Multi-channel CNN Modeling with Attention (딥러닝 기술을 활용한 차별 및 혐오 표현 탐지 : 어텐션 기반 다중 채널 CNN 모델링)

  • Lee, Wonseok;Lee, Hyunsang
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1595-1603 / 2020
  • Online defamation incidents, such as in Internet news comments on portal sites, SNS, and community sites, have been increasing in recent years. Bias and hate expressions threaten online service users in various forms, including invasion of privacy, personal attacks, and defamation. In the past few years, academia and industry have approached this problem in various ways. The purpose of this study is to build a dataset and experiment with deep learning classification models for detecting various bias expressions as well as hate expressions. The dataset was annotated with 7 labels that were cross-checked by 10 annotators. Each of the 7 classes in a dataset of about 137,111 Korean Internet news comments is binary-classified and analyzed with deep learning techniques. The proposed technique is a multi-channel CNN model with attention. In the experiments, it achieved a weighted average F1 score of 70.32%.
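
A minimal Keras sketch of what an attention-based multi-channel CNN text classifier of this kind typically looks like (parallel convolutions with different kernel widths plus additive attention pooling, one binary output per label); vocabulary size, sequence length, and channel widths are placeholders, not the authors' settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 30000, 100, 128  # placeholders, not from the paper

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# Multi-channel: parallel 1D convolutions with different kernel widths.
channels = [layers.Conv1D(64, k, padding="same", activation="relu")(emb)
            for k in (3, 4, 5)]
features = layers.Concatenate()(channels)          # (batch, MAX_LEN, 192)

# Simple additive attention pooling over token positions.
scores = layers.Dense(1, activation="tanh")(features)
weights = layers.Softmax(axis=1)(scores)            # attention weight per token
context = layers.Dot(axes=1)([weights, features])   # weighted sum -> (batch, 1, 192)
context = layers.Flatten()(context)

# One binary output per run: the paper trains a separate binary
# classifier for each of the 7 labels.
output = layers.Dense(1, activation="sigmoid")(context)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```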

Comparative evaluation of deep learning-based building extraction techniques using aerial images (항공영상을 이용한 딥러닝 기반 건물객체 추출 기법들의 비교평가)

  • Mo, Jun Sang;Seong, Seon Kyeong;Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.157-165 / 2021
  • Recently, as the spatial resolution of satellite and aerial images has improved, various studies using remotely sensed data with high spatial resolution have been conducted. In particular, since building extraction is essential for creating digital thematic maps, high accuracy of the building extraction results is required. In this manuscript, building extraction models were generated using SegNet, U-Net, FC-DenseNet, and HRNetV2, which are representative semantic segmentation models in deep learning, and the building extraction results were evaluated. Training datasets for building extraction were generated from aerial orthophotos containing various buildings, and the evaluation was conducted in three areas. First, model performance was evaluated on a region adjacent to the training dataset; in addition, the applicability of the models was evaluated on a region different from the training dataset. As a result, the F1 score of HRNetV2 showed the best values in terms of both model performance and applicability. Through this study, the possibility of creating and modifying the building layer in digital maps was confirmed.
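
Since the comparison is reported in terms of the F1 score of the extracted building masks, here is a small, generic sketch of how that metric is computed for binary segmentation outputs (illustrative code, not from the paper):

```python
import numpy as np

def segmentation_f1(pred_mask, true_mask):
    """F1 score for binary building masks given as 0/1 arrays."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

# Toy example with random 512x512 masks.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(512, 512))
true = rng.integers(0, 2, size=(512, 512))
print(round(segmentation_f1(pred, true), 4))
```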

A Dataset of Ground Vehicle Targets from Satellite SAR Images and Its Application to Detection and Instance Segmentation (위성 SAR 영상의 지상차량 표적 데이터 셋 및 탐지와 객체분할로의 적용)

  • Park, Ji-Hoon;Choi, Yeo-Reum;Chae, Dae-Young;Lim, Ho;Yoo, Ji Hee
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.1 / pp.30-44 / 2022
  • The advent of deep learning-based algorithms has facilitated research on target detection from synthetic aperture radar (SAR) imagery. While most of it concentrates on detecting ships with open SAR ship datasets and aircraft in SAR scenes of airports, there is relatively little research on detecting SAR ground vehicle targets, where adverse factors such as high false alarm rates, low signal-to-clutter ratios, and multiple targets in close proximity are expected to degrade performance. In this paper, a dataset of ground vehicle targets acquired from TerraSAR-X (TSX) satellite SAR images is presented. Then, both detection and instance segmentation are carried out simultaneously on this dataset with the deep learning-based Mask R-CNN. Finally, the paper discusses future research directions for further improving the detection of SAR ground vehicle targets.
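
The detection-plus-instance-segmentation setup described above maps naturally onto torchvision's Mask R-CNN implementation. A minimal sketch of adapting such a model to a single ground-vehicle class (the paper's actual class list, image size, and training loop are not reproduced; the dummy target below is illustrative only):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + ground vehicle (single target class assumed)

# COCO-pretrained Mask R-CNN with its heads replaced for the SAR vehicle task.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, NUM_CLASSES)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, NUM_CLASSES)

# Training expects a list of image tensors and per-image target dicts with
# 'boxes', 'labels', and 'masks'; a dummy example is shown here.
model.train()
images = [torch.rand(3, 512, 512)]
mask = torch.zeros(1, 512, 512, dtype=torch.uint8)
mask[0, 100:140, 100:150] = 1
targets = [{
    "boxes": torch.tensor([[100.0, 100.0, 150.0, 140.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1]),
    "masks": mask,
}]
loss_dict = model(images, targets)   # returns classification/box/mask losses
print({k: float(v) for k, v in loss_dict.items()})
```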

Language-based Classification of Words using Deep Learning (딥러닝을 이용한 언어별 단어 분류 기법)

  • Zacharia, Nyambegera Duke;Dahouda, Mwamba Kasongo;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.411-414 / 2021
  • Deep learning has become an extremely important technology in education today. It is used especially in natural language processing, where word-representation vectors play a critical role. However, some low-resource languages, such as Swahili, which is spoken in East and Central Africa, do not benefit from this. Natural language processing (NLP) is a field of artificial intelligence in which systems and computational algorithms are built that can automatically understand, analyze, manipulate, and potentially generate human language. After discovering that some African languages lack proper representation in language processing, and are even described as low-resource languages because of inadequate data for NLP, we decided to study the Swahili language. Language modeling with neural networks requires adequate data to guarantee quality word representations, which are important for NLP tasks, yet most African languages have no data for such processing. The main aim of this project is the classification of words in English, Swahili, and Korean, with a particular emphasis on the low-resource Swahili language. Finally, we create our own dataset, preprocess the data using a Python script, formulate the syllabic alphabet, and develop an English, Swahili, and Korean word analogy dataset.
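
As a generic illustration of word-level language classification of the kind the project describes (English vs. Swahili vs. Korean), here is a small character n-gram baseline with scikit-learn; the example words are illustrative and not from the authors' dataset, and a neural model as in the paper would replace the logistic regression:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (not the authors' dataset).
words = ["school", "water", "friend", "shule", "maji", "rafiki", "학교", "물", "친구"]
labels = ["en", "en", "en", "sw", "sw", "sw", "ko", "ko", "ko"]

# Character n-grams capture orthography, which differs strongly across
# English, Swahili, and Korean (Hangul) words.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(words, labels)

# Classify new words; with a realistically sized dataset the predicted
# language labels become reliable.
print(clf.predict(["chakula", "teacher", "컴퓨터"]))
```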