• Title/Summary/Keyword: convolutional network

Search Results: 1,671

Comparison of Convolutional Neural Network Models for Image Super Resolution

  • Jian, Chen;Yu, Songhyun;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.63-66 / 2018
  • Recently, convolutional neural network (CNN) models have been very successful at single-image super-resolution. Residual learning improves training stability and network performance in CNNs. In this paper, we compare four convolutional neural network models for super-resolution (SR) that learn a nonlinear mapping from a low-resolution (LR) input image to a high-resolution (HR) target image. The four models are a general CNN model, a global residual learning CNN model, a local residual learning CNN model, and a CNN model with both global and local residual learning. Experimental results show that performance is strongly affected by how skip connections are attached to the basic CNN network, and the network trained with only global residual learning achieves the highest performance among the four models in both objective and subjective evaluations. (A minimal sketch of global residual learning follows this entry.)

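A minimal sketch of the global residual learning idea described above, in PyTorch: the network predicts only the residual between the interpolated LR input and the HR target, and a global skip connection adds the input back at the output. The channel width, depth, and single-channel input are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GlobalResidualSRNet(nn.Module):
    """SR network with a global skip connection: output = input + predicted residual."""
    def __init__(self, channels=64, num_layers=8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # x is the interpolated LR image; the body learns only the missing high-frequency detail
        return x + self.body(x)

# usage: y = GlobalResidualSRNet()(torch.randn(1, 1, 64, 64))
```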

Breast Cancer Classification Using Convolutional Neural Network

  • Alshanbari, Eman;Alamri, Hanaa;Alzahrani, Walaa;Alghamdi, Manal
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.101-106 / 2021
  • Breast cancer is the leading cause of cancer deaths in women, and identifying the type of breast cancer at an early stage can help prevent the dangers of later stages. The performance of deep learning depends on a large amount of labeled data. This paper presents a convolutional neural network for classifying breast cancer images as benign or malignant. Our network contains 11 layers and ends with a softmax output. The experiments use the public BreakHis dataset, and the proposed method outperforms state-of-the-art methods. (A minimal sketch of a small CNN classifier of this kind follows this entry.)
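A minimal sketch of a small CNN image classifier ending in a softmax over two classes (benign/malignant). The layer counts, channel widths, and 224x224 RGB input are illustrative assumptions; the paper's exact 11-layer configuration is not reproduced here.

```python
import torch
import torch.nn as nn

class BreastCancerCNN(nn.Module):
    """Small CNN that classifies an image as benign or malignant."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        logits = self.classifier(self.features(x).flatten(1))
        return torch.softmax(logits, dim=1)  # class probabilities

# usage: probs = BreastCancerCNN()(torch.randn(1, 3, 224, 224))
```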

Performance Improvement Method of Convolutional Neural Network Using Combined Parametric Activation Functions (결합된 파라메트릭 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Ko, Young Min;Li, Peng Hang;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.371-380 / 2022
  • Convolutional neural networks are widely used to process data arranged in a grid, such as images. A general convolutional neural network consists of convolutional layers and fully connected layers, and each layer contains a nonlinear activation function. This paper proposes a combined parametric activation function to improve the performance of convolutional neural networks. The combined parametric activation function is created by summing parametric activation functions whose parameters convert the scale and location of the activation function. Various nonlinear intervals can be created depending on the parameters that convert multiple scales and locations, and the parameters can be learned in the direction that minimizes the loss function computed on the given input data. Testing convolutional neural networks that use the combined parametric activation function on the MNIST, Fashion MNIST, CIFAR10, and CIFAR100 classification problems confirmed better performance than other activation functions. (A minimal sketch of such a combined parametric activation follows this entry.)
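A minimal sketch of a combined parametric activation, assuming the combination is a sum of base activations (here ReLU) whose input location and output scale are learnable parameters trained together with the network weights; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class CombinedParametricActivation(nn.Module):
    """Sum of K base activations, each with learnable scale and location parameters."""
    def __init__(self, k=2):
        super().__init__()
        self.in_scale = nn.Parameter(torch.ones(k))      # scales the input (location/width)
        self.in_shift = nn.Parameter(torch.zeros(k))     # shifts the input (location)
        self.out_scale = nn.Parameter(torch.ones(k) / k) # scales the output

    def forward(self, x):
        out = 0.0
        for a, b, c in zip(self.in_scale, self.in_shift, self.out_scale):
            out = out + c * torch.relu(a * x + b)
        return out

# the parameters are learned jointly with the network weights by minimizing the loss
# usage: act = CombinedParametricActivation(); y = act(torch.randn(8, 16))
```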

Facial Expression Classification Using Deep Convolutional Neural Network

  • Choi, In-kyu;Ahn, Ha-eun;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.485-492 / 2018
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. The proposed structure provides general classification performance for any environment or subject. For this purpose, we collect a variety of databases and organize them into six expression classes: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully connected layers. The experimental results show good classification performance compared to the state of the art in cross-validation and cross-database experiments. Compared with other conventional models, the proposed structure is also confirmed to achieve superior classification performance with less execution time. (A minimal sketch of this kind of tunable CNN follows this entry.)
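A minimal sketch of a six-class expression CNN whose convolutional feature-map counts and fully connected widths are exposed as constructor arguments, so they can be tuned as the abstract describes. The concrete sizes and the grayscale input are illustrative assumptions, not the paper's final configuration.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Six-class facial expression classifier with tunable feature-map and FC sizes."""
    def __init__(self, feature_maps=(32, 64, 128), fc_nodes=256, num_classes=6):
        super().__init__()
        layers, in_ch = [], 1  # grayscale face crops
        for out_ch in feature_maps:
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Sequential(
            nn.Linear(in_ch * 16, fc_nodes), nn.ReLU(), nn.Linear(fc_nodes, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# a grid search over feature_maps / fc_nodes mirrors the structure tuning described above
```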

Traffic Light Recognition Using a Deep Convolutional Neural Network (심층 합성곱 신경망을 이용한 교통신호등 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.21 no.11 / pp.1244-1253 / 2018
  • The color of a traffic light is sensitive to varying illumination conditions; in particular, the hue information is lost when the lighting area becomes oversaturated. This paper proposes a traffic light recognition method that is robust to these illumination variations. The method consists of two steps: traffic light detection and recognition. The first step, traffic light detection, uses only intensity and saturation, deferring the use of hue information until the second step, which recognizes the signal of the traffic light. We utilize a deep learning technique in the second step, designing a deep convolutional neural network (DCNN) composed of three convolutional layers and two fully connected layers. Twelve video clips were used to evaluate the performance of the proposed method. Experimental results show a traffic light detection precision of 93.9%, a recall of 91.6%, and a recognition accuracy of 89.4%. Considering that the maximum distance between the camera and the traffic lights is 70 m, the results show that the proposed method is effective. (A minimal sketch of the two-step pipeline follows this entry.)
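A minimal sketch of the two-step idea, assuming an OpenCV-style HSV conversion for the detection stage (saturation and intensity only) and a small three-conv / two-FC classifier for the recognition stage; the thresholds, class list, and layer sizes are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def detect_candidates(bgr_frame, s_thresh=100, v_thresh=150):
    """Step 1: find bright, saturated blobs using only saturation and intensity (no hue)."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = ((hsv[..., 1] > s_thresh) & (hsv[..., 2] > v_thresh)).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

class SignalDCNN(nn.Module):
    """Step 2: classify a cropped candidate region (hue finally enters via the RGB input)."""
    def __init__(self, num_classes=4):  # e.g. red / yellow / green / not-a-light
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, x):
        return self.net(x)
```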

Deep learning convolutional neural network algorithms for the early detection and diagnosis of dental caries on periapical radiographs: A systematic review

  • Musri, Nabilla;Christie, Brenda;Ichwan, Solachuddin Jauhari Arief;Cahyanto, Arief
    • Imaging Science in Dentistry / v.51 no.3 / pp.237-242 / 2021
  • Purpose: The aim of this study was to analyse and review deep learning convolutional neural networks for detecting and diagnosing early-stage dental caries on periapical radiographs. Materials and Methods: This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies published from 2015 to 2021 under the keywords (deep convolutional neural network) AND (caries), and (deep learning caries) AND (convolutional neural network) AND (caries), were systematically reviewed. Results: When dental caries is improperly diagnosed, the lesion may eventually invade the enamel, dentin, and pulp tissue, leading to loss of tooth function. Rapid and precise detection and diagnosis are vital for implementing appropriate prevention and treatment of dental caries. Radiography and intraoral images are considered to play a vital role in detecting dental caries; nevertheless, studies have shown that 20% of suspicious areas are mistakenly diagnosed as dental caries using this technique, so diagnosis via radiography alone, without an objective assessment, is inaccurate. Identifying caries with a deep convolutional neural network-based detector enables the operator to distinguish changes in the location and morphological features of dental caries lesions. Deep learning algorithms have broader and deeper layers and are continually being developed, remarkably enhancing their precision in detecting and segmenting objects. Conclusion: Clinical applications of deep learning convolutional neural networks in the dental field have shown significant accuracy in detecting and diagnosing dental caries, and these models hold promise in supporting dental practitioners to improve patient outcomes.

Crack Detection in Tunnel Using Convolutional Encoder-Decoder Network (컨볼루셔널 인코더-디코더 네트워크를 이용한 터널에서의 균열 검출)

  • Han, Bok Gyu;Yang, Hyeon Seok;Lee, Jong Min;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.6 / pp.80-89 / 2017
  • Classical approaches to crack detection rely on experienced inspection professionals who annotate crack patterns manually. Because the results depend on each inspector's personal, subjective experience, objectivity is hard to guarantee. To address this issue, automated crack detection methods have been proposed, but they are sensitive to image noise: depending on the quality of the acquired image, noise degrades overall performance. In this paper, we propose a crack detection method using a convolutional encoder-decoder network to overcome these weaknesses. Its performance is significantly improved over previous methods in terms of recall, precision, and F-measure. (A minimal sketch of a convolutional encoder-decoder follows this entry.)
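A minimal sketch of a convolutional encoder-decoder that maps an image to a per-pixel crack probability map; the depth and channel widths are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CrackEncoderDecoder(nn.Module):
    """Encoder downsamples the image; decoder upsamples back to a per-pixel crack mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1))  # 1-channel crack logit map

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))  # crack probability per pixel

# usage: mask = CrackEncoderDecoder()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```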

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video frame. In addition, two deep convolutional neural networks are used to extract the temporal and spatial facial features in the video. The spatial convolutional neural network extracts spatial information features from each static expression frame of the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple expression frames. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are fed to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods. (A minimal sketch of the fusion-plus-SVM stage follows this entry.)
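A minimal sketch of the multiplicative fusion of spatial and temporal feature vectors followed by SVM classification, assuming the two CNN streams have already produced fixed-length features; the feature dimensions, sample counts, and random stand-in data are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

def multiplicative_fusion(spatial_feat, temporal_feat):
    """Element-wise product of the two stream features (assumes equal dimensionality)."""
    return spatial_feat * temporal_feat

# illustrative stand-ins for features extracted by the spatial and temporal CNNs
rng = np.random.default_rng(0)
spatial = rng.normal(size=(200, 512))    # 200 training clips, 512-d spatial features
temporal = rng.normal(size=(200, 512))   # matching 512-d optical-flow features
labels = rng.integers(0, 6, size=200)    # six expression classes

fused = multiplicative_fusion(spatial, temporal)
clf = SVC(kernel="rbf").fit(fused, labels)                           # SVM on fused features
pred = clf.predict(multiplicative_fusion(spatial[:5], temporal[:5])) # classify new clips
```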

SDCN: Synchronized Depthwise Separable Convolutional Neural Network for Single Image Super-Resolution

  • Muhammad, Wazir;Hussain, Ayaz;Shah, Syed Ali Raza;Shah, Jalal;Bhutto, Zuhaibuddin;Thaheem, Imdadullah;Ali, Shamshad;Masrour, Salman
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.17-22 / 2021
  • Recently, image super-resolution techniques based on convolutional neural networks (CNNs) have achieved remarkable performance in digital image processing applications and computer vision tasks. Stacking convolutional layers on top of each other allows more complex network architectures, but it also uses more memory in terms of the number of parameters and introduces the vanishing gradient problem during training. Furthermore, earlier single image super-resolution approaches used interpolation as a pre-processing stage to upscale the low-resolution image into the HR image; this design is simple but not effective, and it introduces unwanted pixels (noise) into the reconstructed HR image. In this paper, the authors propose a novel single image super-resolution architecture based on synchronized depthwise separable convolution with a Dense Skip Connection Block (DSCB). In addition, unlike existing SR methods that rely on a single path, the proposed method uses synchronized paths to generate the SISR image. Extensive quantitative and qualitative experiments show that our method (SDCN) achieves promising improvements over other state-of-the-art methods. (A minimal sketch of a depthwise separable convolution block follows this entry.)
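A minimal sketch of a depthwise separable convolution block (a depthwise convolution followed by a 1x1 pointwise convolution), the building block the abstract refers to; the synchronized-path and dense-skip arrangement of SDCN itself is not reproduced here.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

# versus a standard 3x3 conv, the parameter count drops from 3*3*in_ch*out_ch
# to 3*3*in_ch + in_ch*out_ch
# usage: y = DepthwiseSeparableConv(64, 64)(torch.randn(1, 64, 32, 32))
```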

Performance Comparison of Convolution Neural Network by Weight Initialization and Parameter Update Method (가중치 초기화 및 매개변수 갱신 방법에 따른 컨벌루션 신경망의 성능 비교)

  • Park, Sung-Wook;Kim, Do-Yeon
    • Journal of Korea Multimedia Society / v.21 no.4 / pp.441-449 / 2018
  • Deep learning has been used for various types of processing centered on image recognition. One of the core algorithms of deep learning, the convolutional neural network, is a deep neural network specialized for image recognition. In this paper, we use a convolutional neural network to classify forest insects and propose an optimization method. Experiments were carried out by combining two weight initialization methods and six parameter update methods. Among the 12 combinations, Xavier initialization with SGD showed the highest performance, with an accuracy of 82.53%. From this, we conclude that the latest learning algorithms, which complement the disadvantages of earlier parameter update methods, do not necessarily lead to higher performance than existing methods in every application environment. (A minimal sketch of pairing weight initializations with optimizers follows this entry.)
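A minimal sketch of sweeping weight-initialization and parameter-update (optimizer) combinations in PyTorch, along the lines of the comparison described above; the tiny model, the optimizer list, and the learning rates are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

def make_model(init):
    """Tiny CNN whose conv/linear weights are re-initialized with the chosen scheme."""
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, 10))
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            init(m.weight)
    return model

inits = {"xavier": nn.init.xavier_uniform_, "he": nn.init.kaiming_uniform_}
optimizers = {
    "sgd": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "adam": lambda p: torch.optim.Adam(p, lr=0.001),
    "rmsprop": lambda p: torch.optim.RMSprop(p, lr=0.001),
}

for init_name, init in inits.items():
    for opt_name, make_opt in optimizers.items():
        model = make_model(init)
        optimizer = make_opt(model.parameters())
        # ... train and evaluate each (initialization, optimizer) pair, then compare accuracies
        print(init_name, opt_name, optimizer.__class__.__name__)
```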