• Title/Summary/Keyword: Deep Learning

Multi-type Image Noise Classification by Using Deep Learning

  • Waqar Ahmed;Zahid Hussain Khand;Sajid Khan;Ghulam Mujtaba;Muhammad Asif Khan;Ahmad Waqas
    • International Journal of Computer Science & Network Security / v.24 no.7 / pp.143-147 / 2024
  • Image noise classification is a classical problem in image processing, machine learning, deep learning, and computer vision. In this paper, image noise classification is performed using deep learning with the Keras library of TensorFlow. A total of 6,900 images were selected from the Kaggle database, and a labeled dataset of noisy images of multiple types (Salt & Pepper, Gaussian, and Sinusoidal noise) was generated from the non-noisy images with the help of Matlab. The data were partitioned into training and test sets to train and evaluate the classification model. Among deep neural networks, a CNN (Convolutional Neural Network) was used because it learns deep, hidden patterns and features in the images to be classified; this deep learning of features and patterns makes CNNs outperform classical methods in many classification problems.
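
A minimal sketch of the kind of Keras/TensorFlow CNN classifier the abstract describes is shown below; the input shape, layer sizes, and training call are illustrative assumptions rather than the authors' configuration.

```python
# Minimal Keras CNN for three-class image noise classification
# (Salt & Pepper, Gaussian, Sinusoidal). Shapes and hyperparameters
# are illustrative assumptions, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3             # salt & pepper, gaussian, sinusoidal
IMG_SHAPE = (128, 128, 1)   # assumed grayscale patches

def build_noise_classifier():
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_noise_classifier()
    model.summary()
    # model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```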

Predicting bond strength of corroded reinforcement by deep learning

  • Tanyildizi, Harun
    • Computers and Concrete / v.29 no.3 / pp.145-159 / 2022
  • In this study, extreme learning machine and deep learning models were devised to estimate the bond strength of corroded reinforcement in concrete. Six inputs and one output were used: the compressive strength, concrete cover, bond length, steel type, steel bar diameter, and corrosion level were selected as input variables, and the measured bond strength was the output variable. Analysis of variance (ANOVA) was used to determine the effect of the input variables on the bond strength of corroded reinforcement in concrete. The prediction results were compared with the experimental results and with each other. The extreme learning machine and deep learning models estimated the bond strength with 99.81% and 99.99% accuracy, respectively; thus the deep learning model estimated the bond strength of corroded reinforcement with higher accuracy than the extreme learning machine model. The ANOVA results showed that the corrosion level was the input variable that most affects the bond strength of corroded reinforcement in concrete.
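
The setup above (six tabular inputs, one continuous output) maps naturally onto a small deep regression network. The sketch below is a hedged illustration only; the layer widths, normalization, and synthetic data are assumptions, not the paper's extreme learning machine or deep model.

```python
# Deep regression sketch: six inputs (compressive strength, concrete cover,
# bond length, steel type, bar diameter, corrosion level) mapped to one
# output (bond strength). Architecture and data are illustrative only.
import numpy as np
from tensorflow.keras import layers, models

def build_bond_strength_model(norm: layers.Normalization, n_features: int = 6):
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        norm,                             # feature scaling, adapted on training data
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                  # predicted bond strength
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

if __name__ == "__main__":
    # Synthetic placeholder data, only to show the call pattern.
    X = np.random.rand(100, 6).astype("float32")
    y = np.random.rand(100, 1).astype("float32")
    norm = layers.Normalization()
    norm.adapt(X)
    model = build_bond_strength_model(norm)
    model.fit(X, y, epochs=2, verbose=0)
    print(model.predict(X[:3], verbose=0))
```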

A Comparative Analysis of Deep Learning Frameworks for Image Learning (이미지 학습을 위한 딥러닝 프레임워크 비교분석)

  • Jong-min Kim;Dong-Hwi Lee
    • Convergence Security Journal / v.22 no.4 / pp.129-133 / 2022
  • Deep learning frameworks are still evolving, and various frameworks exist; typical examples include TensorFlow, PyTorch, and Keras. Each framework applies its own optimization models when classifying images through image learning. In this paper, we carry out image learning using TensorFlow and PyTorch, the frameworks most widely used in deep learning-based image recognition, and compare and analyze the results obtained in this process to determine which framework is better optimized for the task.
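
For context, a like-for-like comparison of this kind typically starts from equivalent model definitions in both frameworks. The sketch below shows one such pairing; the architecture and input size are assumptions, not the models compared in the paper.

```python
# Equivalent small CNN definitions in TensorFlow/Keras and PyTorch, as a
# starting point for a like-for-like framework comparison. Sizes are
# illustrative assumptions only.
import tensorflow as tf
import torch
import torch.nn as nn

# --- TensorFlow / Keras definition ---
def keras_cnn(num_classes: int = 10):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),   # 32x32x3 -> 30x30x32
        tf.keras.layers.MaxPooling2D(),                     # -> 15x15x32
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# --- PyTorch definition of the same architecture ---
class TorchCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),   # 32x32x3 -> 30x30x32
            nn.ReLU(),
            nn.MaxPool2d(2),                   # -> 15x15x32
        )
        self.classifier = nn.Linear(32 * 15 * 15, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```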

Deep Learning Description Language for Referring to Analysis Model Based on Trusted Deep Learning (신뢰성있는 딥러닝 기반 분석 모델을 참조하기 위한 딥러닝 기술 언어)

  • Mun, Jong Hyeok;Kim, Do Hyung;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.133-142 / 2021
  • With the recent advancements of deep learning, domains such as smart homes, healthcare, and intelligent transportation systems utilize its functionality to provide high-quality services for vehicle detection, emergency situation detection, and controlling energy consumption. To provide reliable services in such sensitive systems, deep learning models are required to have high accuracy. To develop a deep learning model for analyzing the aforementioned services, developers should utilize state-of-the-art deep learning models that have already been verified for high accuracy, and they can verify the accuracy of a referenced model by validating it on the dataset. For this validation, the developer needs structural information to document and apply deep learning models, including metadata such as the learning dataset, network architecture, and development environment. In this paper, we propose a description language that represents the network architecture of a deep learning model along with the metadata necessary to develop it. Through the proposed description language, developers can easily verify the accuracy of a referenced deep learning model. Our experiments demonstrate the application scenario of a deep learning description document that focuses on license plate recognition for the detection of illegally parked vehicles.
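
As a purely hypothetical illustration of the kind of metadata such a description document would carry (learning dataset, network architecture, development environment), the sketch below serializes an example record as JSON; the field names and values are invented placeholders and do not reproduce the description language proposed in the paper.

```python
# Hypothetical model description record with the metadata categories named
# in the abstract. All field names and values are invented placeholders.
import json

model_description = {
    "task": "license_plate_recognition",       # application scenario from the abstract
    "dataset": {
        "name": "example-plate-dataset",       # placeholder name
        "split": {"train": 0.8, "val": 0.1, "test": 0.1},
    },
    "architecture": {
        "type": "CNN",
        "layers": [
            {"op": "conv2d", "filters": 32, "kernel": 3, "activation": "relu"},
            {"op": "max_pool", "size": 2},
            {"op": "dense", "units": 128, "activation": "relu"},
            {"op": "softmax_output", "classes": 36},
        ],
    },
    "environment": {"framework": "TensorFlow", "version": "2.x"},
}

if __name__ == "__main__":
    # A developer referencing this model could read the document and rebuild
    # or validate the network against the listed dataset and environment.
    print(json.dumps(model_description, indent=2))
```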

Digital Twin and Visual Object Tracking using Deep Reinforcement Learning (심층 강화학습을 이용한 디지털트윈 및 시각적 객체 추적)

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Object tracking in hardware applications has become increasingly demanding, as trackers must cope with varied and unpredictable environments using versatile algorithms. In this paper, we propose a virtual city environment built with AirSim (Aerial Informatics and Robotics Simulation, CityEnvironment) and apply a DQN (Deep Q-Network) model, a deep reinforcement learning approach, inside that environment. The proposed object tracking DQN observes the environment by taking the continuous images rendered by the virtual simulation system as input and controls the operation of a virtual drone. The deep reinforcement learning model is pre-trained on various existing continuous image sets; since these sets are image data of real environments and objects, the virtual environment and the moving objects in it are implemented in 3D for tracking.
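
A minimal sketch of the DQN ingredients described above (a convolutional Q-network over simulator frames and epsilon-greedy action selection) follows; the action set, network size, and frame resolution are assumptions, not the paper's configuration.

```python
# Minimal DQN sketch for an image-based tracking agent: a convolutional
# Q-network and an epsilon-greedy action rule. All sizes are assumptions.
import random
import torch
import torch.nn as nn

NUM_ACTIONS = 6  # assumed discrete drone controls (forward/back/left/right/up/down)

class QNetwork(nn.Module):
    def __init__(self, num_actions: int = NUM_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # Q-values, one per action

def select_action(q_net: QNetwork, obs: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(obs.unsqueeze(0)).argmax(dim=1).item())

if __name__ == "__main__":
    q_net = QNetwork()
    frame = torch.rand(3, 84, 84)   # placeholder camera frame from the simulator
    print(select_action(q_net, frame, epsilon=0.1))
```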

Comparison Analysis of Deep Learning-based Image Compression Approaches (딥 러닝 기반 이미지 압축 기법의 성능 비교 분석)

  • Yong-Hwan Lee;Heung-Jun Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.129-133 / 2023
  • Image compression is a fundamental technique in digital image processing that reduces storage requirements and allows files to be transmitted efficiently. Recently, many deep learning techniques with promising results have been proposed in the image compression field. Since many image compression techniques suffer from artifact problems, this paper experimentally compares two deep learning approaches to verify how well they address these problems: a deep autoencoder and a deep convolutional neural network (CNN). In terms of peak signal-to-noise ratio and root mean square error, the results show that the deep autoencoder method has more advantages than the deep CNN approach.
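
Since the comparison is reported in terms of peak signal-to-noise ratio and root mean square error, the sketch below shows how those metrics can be computed between an original image and its reconstruction, together with a tiny stand-in autoencoder; none of it reproduces the models evaluated in the paper.

```python
# RMSE and PSNR between an original image and its reconstruction, plus a
# tiny convolutional autoencoder as an illustrative stand-in.
import numpy as np
from tensorflow.keras import layers, models

def rmse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return float(np.sqrt(np.mean((original - reconstructed) ** 2)))

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 1.0) -> float:
    err = rmse(original, reconstructed)
    return float(20 * np.log10(max_val / err)) if err > 0 else float("inf")

def tiny_autoencoder(shape=(64, 64, 1)):
    return models.Sequential([
        layers.Input(shape=shape),
        layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),    # encoder
        layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
        layers.Conv2D(1, 3, padding="same", activation="sigmoid"),             # reconstruction
    ])

if __name__ == "__main__":
    x = np.random.rand(64, 64, 1).astype("float32")
    ae = tiny_autoencoder()
    x_hat = ae.predict(x[None, ...], verbose=0)[0]
    print("RMSE:", rmse(x, x_hat), "PSNR:", psnr(x, x_hat))
```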

Deep Learning Structure Suitable for Embedded System for Flame Detection (불꽃 감지를 위한 임베디드 시스템에 적합한 딥러닝 구조)

  • Ra, Seung-Tak;Lee, Seung-Ho
    • Journal of IKEEE / v.23 no.1 / pp.112-119 / 2019
  • In this paper, we propose a deep learning structure suitable for embedded systems. The flame detection process of the proposed structure consists of four steps: flame area detection using a flame color model, flame image classification using a deep learning structure specialized for flame color, N×N cell separation of the detected flame area, and flame image classification using a deep learning structure specialized for flame shape. First, only flame-colored pixels are extracted from the input image and labeled to detect the flame area. Second, the detected flame area is fed to the color-specialized deep learning structure and is classified as a flame image only if the probability of the flame class at the output is greater than 75%. Third, the detected flame regions of images that scored 75% or less in the preceding step are divided into N×N cells. Fourth, the N×N cells are fed to the shape-specialized deep learning structure, each cell is judged individually, and the image is classified as a flame image if more than 50% of the cells are classified as flame. To verify the effectiveness of the proposed structure, we experimented with a flame database from ImageNet. Experimental results show that the proposed deep learning structure has an average resource occupancy of 29.86% and a fast flame detection time of 8 seconds. The flame detection rate was on average 0.95% lower than that of existing deep learning structures, but this is the result of making the structure lightweight for application to embedded systems. Therefore, the deep learning structure for flame detection proposed in this paper is suitable for embedded systems.
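
A control-flow sketch of the four-step pipeline is given below. The 75% and 50% thresholds follow the abstract; the flame-color rule, the per-cell decision threshold, and the placeholder CNNs are assumptions for illustration only.

```python
# Control-flow sketch of the four-step flame detection pipeline.
import numpy as np

def flame_color_mask(image: np.ndarray) -> np.ndarray:
    """Placeholder flame-color model: bright, red-dominant pixels (assumed rule)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (r > 0.6) & (r > g) & (g > b)

def split_cells(region: np.ndarray, n: int):
    """Split the detected flame region into an n x n grid of cells."""
    h, w = region.shape[:2]
    return [region[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            for i in range(n) for j in range(n)]

def detect_flame(image: np.ndarray, color_cnn, shape_cnn, n: int = 4) -> bool:
    mask = flame_color_mask(image)
    if not mask.any():
        return False
    region = image * mask[..., None]                 # step 1: flame-colored area
    if color_cnn(region) > 0.75:                     # step 2: color-specialized CNN
        return True
    cells = split_cells(region, n)                   # step 3: N x N cell split
    votes = sum(shape_cnn(c) > 0.5 for c in cells)   # step 4: shape-specialized CNN per cell
    return votes > 0.5 * len(cells)                  # flame if >50% of cells vote flame

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    dummy_cnn = lambda x: float(x.mean())  # placeholder for the two trained CNNs
    print(detect_flame(img, color_cnn=dummy_cnn, shape_cnn=dummy_cnn))
```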

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong;Wu, Xiao-Jun;Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5427-5445 / 2019
  • While deep neural networks have achieved remarkable performance in representation learning, supervised deep models such as convolutional neural networks usually require a huge amount of labeled training data. In this paper, we propose a new representation learning method, generative adversarial network (GAN)-based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are three principal avenues for increasing the representation learning capability of neural networks, and we focus on combining all three. To this end, we adopt a GAN for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for solving subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate the superiority of the proposed method compared to traditional unsupervised learning methods.
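
A structural sketch of the bagging idea follows: several convolutional autoencoders, each trained on a bootstrap sample drawn from a mixture of real and GAN-generated images, with their latent codes concatenated as the final representation. The GAN is assumed to already exist, and the architectures, sizes, and aggregation by concatenation are illustrative assumptions rather than the paper's GAN-BDCAE.

```python
# Bagged convolutional autoencoders trained on a mixture of real and
# GAN-generated images. Architectures and aggregation are assumptions.
import numpy as np
from tensorflow.keras import layers, models

def conv_autoencoder(shape=(32, 32, 1), latent_dim=32):
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Flatten()(x)
    z = layers.Dense(latent_dim, name="latent")(x)
    x = layers.Dense(16 * 16 * 16, activation="relu")(z)
    x = layers.Reshape((16, 16, 16))(x)
    out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    return ae, models.Model(inp, z)

def train_bagged_encoders(real_x, gan_x, n_members=3, epochs=1):
    pool = np.concatenate([real_x, gan_x])   # mixture of real and GAN-generated data
    encoders = []
    for _ in range(n_members):
        idx = np.random.choice(len(pool), size=len(pool), replace=True)  # bootstrap sample
        ae, enc = conv_autoencoder()
        ae.fit(pool[idx], pool[idx], epochs=epochs, verbose=0)           # reconstruction training
        encoders.append(enc)
    return encoders

def bagged_embedding(encoders, x):
    """Concatenate the latent codes of all ensemble members (assumed aggregation)."""
    return np.concatenate([enc.predict(x, verbose=0) for enc in encoders], axis=1)
```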

Image analysis technology with deep learning for monitoring the tidal flat ecosystem -Focused on monitoring the Ocypode stimpsoni Ortmann, 1897 in the Sindu-ri tidal flat - (갯벌 생태계 모니터링을 위한 딥러닝 기반의 영상 분석 기술 연구 - 신두리 갯벌 달랑게 모니터링을 중심으로 -)

  • Kim, Dong-Woo;Lee, Sang-Hyuk;Yu, Jae-Jin;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.24 no.6 / pp.89-96 / 2021
  • In this study, a deep learning image analysis model was established and validated for AI-based monitoring of the tidal flat ecosystem, focusing on the marine protected species Ocypode stimpsoni and its habitat. The data were collected with an unmanned aerial vehicle, and the U-Net model was applied as the deep learning model. The trained model reached an accuracy of about 0.76 for Ocypode stimpsoni and about 0.8 for their burrows, the higher of the two. By feeding orthomosaic images of the entire study area into the trained model and analyzing the distribution of crabs and burrows, it was confirmed that 1,943 Ocypode stimpsoni and 2,807 burrows were distributed in the study area. This study confirms the possibility of using deep learning image analysis technology for monitoring the tidal flat ecosystem, and the approach is expected to be usable in tidal flat ecosystem monitoring by expanding the monitoring sites and target species in the future.
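
A compact U-Net-style sketch for this kind of segmentation task is shown below; the depth, filter counts, input size, and class set are assumptions, not the study's model.

```python
# Compact U-Net-style segmentation network with two skip connections.
# Sizes and the assumed class set are illustrative only.
from tensorflow.keras import layers, models

def small_unet(shape=(128, 128, 3), num_classes=3):  # background + crab + burrow (assumed)
    inp = layers.Input(shape=shape)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)   # bottleneck
    u2 = layers.UpSampling2D()(b)
    u2 = layers.Concatenate()([u2, c2])                               # skip connection
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D()(c3)
    u1 = layers.Concatenate()([u1, c1])                               # skip connection
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(c4)     # per-pixel classes
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

if __name__ == "__main__":
    small_unet().summary()
```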

Automated ground penetrating radar B-scan detection enhanced by data augmentation techniques

  • Donghwi Kim;Jihoon Kim;Heejung Youn
    • Geomechanics and Engineering / v.38 no.1 / pp.29-44 / 2024
  • This research investigates the effectiveness of data augmentation techniques in the automated analysis of B-scan images from ground-penetrating radar (GPR) using deep learning. Despite growing interest in automating GPR data analysis and advances in deep learning for image classification and object detection, many deep learning-based GPR studies have been limited by the availability of large, diverse GPR datasets. Data augmentation is widely used in deep learning to improve model performance. In this study, we applied four data augmentation techniques (geometric transformation, color-space transformation, noise injection, and kernel filtering) to GPR datasets obtained from a testbed. Deep learning models for GPR data analysis were developed using three architectures (Faster R-CNN ResNet, SSD ResNet, and EfficientDet) with transfer learning. Data augmentation significantly enhanced model performance in all cases, with the mAP and AR of the Faster R-CNN ResNet model increasing by approximately 4%, reaching a maximum mAP (Intersection over Union = 0.5:1.0) of 87.5% and a maximum AR of 90.5%. These results highlight the importance of data augmentation in improving the robustness and accuracy of deep learning models for GPR B-scan analysis, and the enhanced detection capabilities contribute to more reliable subsurface investigations in geotechnical engineering.
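
The four augmentation families named above can be illustrated on a single B-scan as in the sketch below; the parameter values and the use of NumPy/SciPy are assumptions for illustration, not the augmentation settings of the study.

```python
# The four augmentation families applied to a GPR B-scan stored as a 2-D
# array with values in [0, 1]. Parameter values are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def geometric(bscan: np.ndarray) -> np.ndarray:
    """Geometric transformation: horizontal flip along the scan direction."""
    return np.fliplr(bscan)

def intensity(bscan: np.ndarray, gain: float = 1.2, offset: float = 0.05) -> np.ndarray:
    """Color-space (intensity) transformation: brightness/contrast shift."""
    return np.clip(bscan * gain + offset, 0.0, 1.0)

def noise_injection(bscan: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """Noise injection: additive Gaussian noise."""
    return np.clip(bscan + np.random.normal(0.0, sigma, bscan.shape), 0.0, 1.0)

def kernel_filter(bscan: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Kernel filtering: Gaussian blur."""
    return gaussian_filter(bscan, sigma=sigma)

def augment(bscan: np.ndarray) -> list:
    """Return the original B-scan plus one sample from each augmentation family."""
    return [bscan, geometric(bscan), intensity(bscan),
            noise_injection(bscan), kernel_filter(bscan)]

if __name__ == "__main__":
    sample = np.random.rand(256, 512)   # placeholder B-scan
    print([a.shape for a in augment(sample)])
```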