• Title/Summary/Keyword: Depthwise Separable convolution

Multithreaded and Overlapped Systolic Array for Depthwise Separable Convolution (깊이별 분리 합성곱을 위한 다중 스레드 오버랩 시스톨릭 어레이)

  • Jongho Yoon;Seunggyu Lee;Seokhyeong Kang
    • Transactions on Semiconductor Engineering / v.2 no.1 / pp.1-8 / 2024
  • When processing depthwise separable convolution, low utilization of processing elements (PEs) is one of the main challenges for a systolic array (SA). In this study, we propose a new SA architecture that maximizes throughput in depthwise convolution. The proposed SA additionally performs the subsequent pointwise convolution on PEs that would otherwise be idle during the depthwise computation, increasing utilization, and then uses the freed PEs to accelerate the remaining pointwise convolution. Consequently, the proposed 128x128 SA achieves 4.05x and 1.75x speedups and reduces energy consumption by 66.7% and 25.4% compared to a basic SA and RiSA, respectively, on MobileNetV3.
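
As background for the depthwise/pointwise split that the proposed systolic array schedules, here is a minimal PyTorch sketch of a depthwise separable convolution; the layer sizes and module names are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """Depthwise separable convolution: a per-channel (depthwise) 3x3
        convolution followed by a 1x1 (pointwise) convolution."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            # groups=in_ch makes each filter see only its own input channel
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                       padding=1, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    x = torch.randn(1, 32, 56, 56)           # example activation map
    y = DepthwiseSeparableConv(32, 64)(x)    # -> torch.Size([1, 64, 56, 56])

The depthwise stage offers far less data reuse than a standard convolution, which is why a plain systolic array leaves most PEs idle during it; overlapping the pointwise stage onto those idle PEs is the utilization gain the paper targets.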

SDCN: Synchronized Depthwise Separable Convolutional Neural Network for Single Image Super-Resolution

  • Muhammad, Wazir;Hussain, Ayaz;Shah, Syed Ali Raza;Shah, Jalal;Bhutto, Zuhaibuddin;Thaheem, Imdadullah;Ali, Shamshad;Masrour, Salman
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.17-22 / 2021
  • Recently, image super-resolution techniques based on convolutional neural networks (CNNs) have achieved remarkable performance in digital image processing and computer vision tasks. Stacking convolutional layers allows more complex network architectures to be designed, but it also increases memory use in terms of the number of parameters and introduces the vanishing gradient problem during training. Furthermore, earlier single image super-resolution approaches used interpolation as a pre-processing stage to upscale the low-resolution image to the HR size; these designs are simple but not effective, and they introduce unwanted artifacts (noise) into the reconstructed HR image. In this paper, the authors propose a novel single image super-resolution architecture based on synchronized depthwise separable convolution with a Dense Skip Connection Block (DSCB). In addition, unlike existing SR methods that rely on a single path, the proposed method uses synchronized paths to generate the SISR image. Extensive quantitative and qualitative experiments show that the proposed method (SDCN) achieves promising improvements over other state-of-the-art methods.
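
A rough sketch of the kind of building block the abstract describes, a depthwise separable convolution combined with a dense-style skip connection, is shown below in PyTorch; it is not the paper's DSCB, and the names and channel counts are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class DSConvSkipBlock(nn.Module):
        """Depthwise separable conv whose output is concatenated with its
        input (a dense-style skip connection)."""
        def __init__(self, channels, growth):
            super().__init__()
            self.dw = nn.Conv2d(channels, channels, 3, padding=1,
                                groups=channels, bias=False)
            self.pw = nn.Conv2d(channels, growth, 1, bias=False)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.act(self.pw(self.dw(x)))
            return torch.cat([x, out], dim=1)   # channels grow by `growth`

    x = torch.randn(1, 64, 48, 48)
    y = DSConvSkipBlock(64, 32)(x)              # -> torch.Size([1, 96, 48, 48])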

LiDAR Sensor based Object Classification System for Delivery Robot Applications (배달 로봇 응용을 위한 LiDAR 센서 기반 객체 분류 시스템)

  • Woo-Jin Park;Jeong-Gyu Lee;Chae-woon Park;Yunho Jung
    • Journal of IKEEE / v.28 no.3 / pp.375-381 / 2024
  • In this paper, we propose a lightweight object classification system using a LiDAR sensor for delivery service robots. The 3D point cloud data is encoded into a 2D pseudo-image using a Pillar Feature Network (PFN) and then passed through a lightweight classification network based on depthwise separable convolutional neural networks (DS-CNN). The implementation results show that the designed classification network has 9.08K parameters and 3.49M multiply-accumulate (MAC) operations while achieving a classification accuracy of 94.94%.
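
To make parameter and MAC figures like those reported concrete, the following sketch counts parameters and convolution MACs for a toy DS-CNN classifier; the layer sizes, input resolution, and class count are assumptions for illustration, not the network from the paper.

    import torch
    import torch.nn as nn

    # Toy DS-CNN classifier head (sizes are illustrative).
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1, bias=False),               # standard conv stem
        nn.Conv2d(16, 16, 3, padding=1, groups=16, bias=False),   # depthwise
        nn.Conv2d(16, 32, 1, bias=False),                         # pointwise
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 4),                                         # 4 object classes
    )

    params = sum(p.numel() for p in model.parameters())

    macs = 0
    def count_conv_macs(module, inputs, output):
        global macs
        # MACs per conv = kernel ops per output element x number of output elements
        k = module.weight.numel() // module.out_channels
        macs += k * output.numel()

    hooks = [m.register_forward_hook(count_conv_macs)
             for m in model.modules() if isinstance(m, nn.Conv2d)]
    model(torch.randn(1, 1, 32, 32))
    for h in hooks:
        h.remove()

    print(f"{params/1e3:.2f}K parameters, {macs/1e6:.2f}M conv MACs")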

Light weight architecture for acoustic scene classification (음향 장면 분류를 위한 경량화 모형 연구)

  • Lim, Soyoung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics / v.34 no.6 / pp.979-993 / 2021
  • Acoustic scene classification (ASC) categorizes an audio file by the environment in which it was recorded, and has long been studied in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges. In this study, we address a constraint that ASC faces in real-world applications: the model must have low complexity. We compared several models that apply lightweight techniques. First, a baseline CNN model was proposed using log mel-spectrogram, delta, and delta-delta features. Second, depthwise separable convolutions and linear bottleneck inverted residual blocks were applied to the convolutional layers, and quantization was applied to the models to obtain low-complexity variants. The low-complexity models performed similarly to or slightly worse than the baseline, but the model size was reduced substantially, from 503 KB to 42.76 KB.
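
A minimal PyTorch sketch of the linear bottleneck inverted residual block mentioned in the abstract is given below; the channel count and expansion factor are illustrative assumptions, and the paper's quantization step is not shown.

    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        """MobileNetV2-style block: 1x1 expand -> 3x3 depthwise -> 1x1 linear
        projection, with a residual connection when shapes match."""
        def __init__(self, ch, expand=4):
            super().__init__()
            hidden = ch * expand
            self.block = nn.Sequential(
                nn.Conv2d(ch, hidden, 1, bias=False),
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, ch, 1, bias=False),
                nn.BatchNorm2d(ch),   # linear bottleneck: no activation here
            )

        def forward(self, x):
            return x + self.block(x)

    x = torch.randn(1, 24, 64, 64)   # e.g. a log mel-spectrogram feature map
    print(InvertedResidual(24)(x).shape)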

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima;Yushuang Xu;Minmin Du;Meng Gao;Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.861-880 / 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architectures drive improvements in segmentation performance. Existing convolutional neural networks suffer from overly simplified learned knowledge and high model complexity. To address this issue, we propose a road scene semantic segmentation algorithm based on multi-task collaborative learning. First, a depthwise separable atrous spatial pyramid pooling module is proposed to reduce model complexity. Second, a collaborative learning framework involving saliency detection is proposed, and a joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road and natural scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared to methods with excellent performance, the proposed method has significant advantages in segmenting fine targets and boundaries.
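
A hedged sketch of a depthwise separable atrous spatial pyramid pooling module, the first component the abstract describes, is shown below; the dilation rates and channel counts are assumptions, and the image-level pooling branch of a full ASPP is omitted for brevity.

    import torch
    import torch.nn as nn

    class DSAtrousBranch(nn.Module):
        """One ASPP branch: dilated depthwise 3x3 followed by pointwise 1x1."""
        def __init__(self, in_ch, out_ch, dilation):
            super().__init__()
            self.dw = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                dilation=dilation, groups=in_ch, bias=False)
            self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)

        def forward(self, x):
            return self.pw(self.dw(x))

    class DSASPP(nn.Module):
        """Parallel branches with different dilation rates, fused by a 1x1 conv."""
        def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList(DSAtrousBranch(in_ch, out_ch, r) for r in rates)
            self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False)

        def forward(self, x):
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

    print(DSASPP(256, 64)(torch.randn(1, 256, 33, 33)).shape)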

A novel MobileNet with selective depth multiplier to compromise complexity and accuracy

  • Chan Yung Kim;Kwi Seob Um;Seo Weon Heo
    • ETRI Journal / v.45 no.4 / pp.666-677 / 2023
  • In the last few years, convolutional neural networks (CNNs) have demonstrated good performance on various computer vision problems. However, because CNNs exhibit high computational complexity, signal processing is typically performed on the server side. To reduce the computational complexity of CNNs for edge computing, lightweight architectures such as MobileNet have been proposed. Although MobileNet is lighter than other CNN models, it commonly achieves lower classification accuracy. Hence, to balance complexity and accuracy, additional hyperparameters for adjusting the model size have recently been proposed. However, significantly increasing the number of parameters makes models dense and unsuitable for devices with limited computational resources. In this study, we propose a novel MobileNet architecture in which the number of parameters is increased adaptively according to the importance of the feature maps. We show that the proposed network achieves better classification accuracy with fewer parameters than the conventional MobileNet.
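
For context on the hyperparameter the paper builds on, the sketch below shows the uniform width (depth) multiplier used by the original MobileNet, where every layer's channel count is scaled by a single factor; the base widths are assumptions. The paper's contribution is to make this scaling selective per feature map rather than uniform.

    # Uniform MobileNet-style width scaling (illustrative base widths).
    def scale_channels(channels, alpha, divisor=8):
        """Scale a channel count by alpha, rounded to a hardware-friendly multiple."""
        return max(divisor, int(channels * alpha + divisor / 2) // divisor * divisor)

    base_widths = [32, 64, 128, 128, 256, 256, 512]
    for alpha in (1.0, 0.75, 0.5, 0.25):
        print(alpha, [scale_channels(c, alpha) for c in base_widths])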

Deep Learning-Based Plant Health State Classification Using Image Data (영상 데이터를 이용한 딥러닝 기반 작물 건강 상태 분류 연구)

  • Ali Asgher Syed;Jaehawn Lee;Alvaro Fuentes;Sook Yoon;Dong Sun Park
    • Journal of Internet of Things and Convergence / v.10 no.4 / pp.43-53 / 2024
  • Tomatoes are rich in nutrients such as lycopene, β-carotene, and vitamin C, but they often suffer from biological and environmental stressors, resulting in significant yield losses. Traditional manual plant health assessment is error-prone and inefficient for large-scale production. To address this, we collected a comprehensive dataset covering the entire life span of tomato plants, annotated with five health states (1 to 5). Our study introduces an attention-enhanced DS-ResNet architecture with channel-wise attention and grouped convolution, refined with new training techniques. The model achieved an overall accuracy of 80.2% under 5-fold cross-validation, demonstrating its robustness in classifying the health states of tomato plants.
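
A minimal sketch of channel-wise attention (squeeze-and-excitation style) applied after a grouped convolution, the two ingredients the abstract names, is shown below in PyTorch; the reduction ratio, group count, and channel sizes are illustrative assumptions, not the paper's DS-ResNet.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style gate: global average pool, bottleneck
        MLP, and a per-channel sigmoid weight."""
        def __init__(self, ch, reduction=16):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
                nn.Linear(ch // reduction, ch), nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.gate(x).view(x.size(0), -1, 1, 1)
            return x * w

    block = nn.Sequential(
        nn.Conv2d(64, 64, 3, padding=1, groups=4, bias=False),  # grouped convolution
        nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        ChannelAttention(64),
    )
    print(block(torch.randn(1, 64, 28, 28)).shape)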

A hybrid deep neural network compression approach enabling edge intelligence for data anomaly detection in smart structural health monitoring systems

  • Tarutal Ghosh Mondal;Jau-Yu Chou;Yuguang Fu;Jianxiao Mao
    • Smart Structures and Systems / v.32 no.3 / pp.179-193 / 2023
  • This study explores an alternative to the existing centralized process for data anomaly detection in modern Internet of Things (IoT)-based structural health monitoring (SHM) systems. An edge intelligence framework is proposed for early detection and classification of various data anomalies, enhancing the quality of acquired data before transmission to a central system. State-of-the-art deep neural network pruning techniques are investigated and compared with the aim of significantly reducing network size so that the model can run efficiently on resource-constrained edge devices such as wireless smart sensors. Further, depthwise separable convolution (DSC) is employed; its integration with advanced structural pruning methods exhibits superior compression capability. Finally, quantization-aware training (QAT) is adopted for faster processing and lower memory and power consumption. The proposed edge intelligence framework will ultimately reduce network overload and latency, enabling intelligent self-adaptation strategies for dealing with faulty sensors in a timely manner, minimizing wasteful use of power, memory, and other resources in wireless smart sensors, increasing efficiency, and reducing maintenance costs for modern smart SHM systems. This study presents the theoretical foundation of the proposed framework; validation through actual field trials is left to future work.
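
Of the three compression ingredients the abstract lists (pruning, DSC, and QAT), the sketch below illustrates only one: L1-norm structured pruning of the pointwise filters in a depthwise separable layer, using PyTorch's pruning utilities. The layer sizes and pruning amount are assumptions, and this is not the paper's exact pipeline.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Sequential(
        nn.Conv2d(64, 64, 3, padding=1, groups=64, bias=False),  # depthwise
        nn.Conv2d(64, 128, 1, bias=False),                       # pointwise
    )

    # Zero out 50% of the pointwise output filters with the smallest L1 norm.
    prune.ln_structured(layer[1], name="weight", amount=0.5, n=1, dim=0)
    prune.remove(layer[1], "weight")   # bake the mask into the weight tensor

    zeroed = (layer[1].weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
    print(f"{zeroed}/128 pointwise filters pruned")
    print(layer(torch.randn(1, 64, 32, 32)).shape)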

Automatic Detection of Dead Trees Based on Lightweight YOLOv4 and UAV Imagery

  • Yuanhang Jin;Maolin Xu;Jiayuan Zheng
    • Journal of Information Processing Systems / v.19 no.5 / pp.614-630 / 2023
  • Dead trees significantly affect forest production and the ecological environment and constrain the sustainable development of forests. A lightweight YOLOv4 dead tree detection algorithm based on unmanned aerial vehicle images is proposed to address the limitations of current dead tree detection, which relies mainly on inefficient, unsafe, and easy-to-miss manual inspection. An improved logarithmic transformation method was developed for data pre-processing to reveal tree features in shadowed regions. In the model structure, the original CSPDarkNet-53 backbone feature extraction network was replaced by MobileNetV3, and some standard convolutional blocks in the original network were replaced by depthwise separable convolution blocks. The ReLU6 activation function replaced the original LeakyReLU activation function to make the network more robust to low-precision computation. The K-means++ clustering method was also integrated to generate anchor boxes better suited to the dataset. The experimental results show that the improved algorithm achieves an accuracy of 97.33%, higher than other methods, and that its detection speed exceeds that of YOLOv4, improving both the efficiency and accuracy of detection.
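
The anchor-box step mentioned in the abstract can be sketched as follows: cluster ground-truth box width/height pairs with k-means++ initialization and use the centroids as anchors. The data here is random stand-in data, and note that many YOLO pipelines cluster with a 1-IoU distance rather than the plain Euclidean distance used by scikit-learn's KMeans.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    wh = rng.uniform(low=8, high=256, size=(500, 2))   # (width, height) pairs in pixels

    kmeans = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(wh)
    anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
    print(np.round(anchors, 1))   # 9 anchors sorted by area, as consumed by YOLO heads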