• Title/Summary/Keyword: convolution layer

Search Results: 138

Tactile Sensor-based Object Recognition Method Robust to Gripping Conditions Using Fast Fourier Convolution Algorithm (고속 푸리에 합성곱을 이용한 파지 조건에 강인한 촉각센서 기반 물체 인식 방법)

  • Huh, Hyunsuk;Kim, Jeong-Jung;Koh, Doo-Yoel;Kim, Chang-Hyun;Lee, Seungchul
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.365-372 / 2022
  • Accurate object recognition is important for precise and accurate manipulation. To enhance recognition performance, various types of sensors can be used. In general, data acquired from sensors have a high sampling rate, so RNN-based models have commonly been used to handle and analyze such time-series sensor data. However, RNN-based models suffer from an excessive number of parameters. CNN-based models can also analyze time-series input data, but their receptive field in the early layers is small, so the architecture must be deeper and heavier to extract useful global features. Thus, traditional RNN-based and CNN-based models require a huge number of learnable parameters. A recent study shows that Fast Fourier Convolution (FFC) can overcome these limitations: the operator extracts global features from the first hidden layer, so it can be used effectively for feature extraction from sensor data with a high sampling rate. In this paper, we propose an algorithm that recognizes objects using tactile sensor data and an FFC model. To verify the proposed model, data were acquired from 11 types of objects; pressure, current, and position signals were collected while the gripper grasped the objects with random force. As a result, accuracy improves from 84.66% to 91.43% when the proposed FFC-based model is used instead of a traditional model.
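
The key property of FFC described in this abstract is that a Fourier-domain transform gives even the first hidden layer a global receptive field. Below is a minimal PyTorch sketch of such a spectral block for 1-D sensor channels; the channel counts, block layout, and the 11-class head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralBlock1d(nn.Module):
    """FFT -> pointwise conv on real/imag spectra -> inverse FFT.

    Every output position mixes information from the whole sequence,
    so the receptive field is global even in the first hidden layer.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels * 2, channels * 2, kernel_size=1)
        self.bn = nn.BatchNorm1d(channels * 2)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, length)
        length = x.shape[-1]
        spec = torch.fft.rfft(x, dim=-1)       # complex spectrum
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.act(self.bn(self.conv(spec)))
        real, imag = spec.chunk(2, dim=1)
        return torch.fft.irfft(torch.complex(real, imag), n=length, dim=-1)

class TactileFFCClassifier(nn.Module):
    """Illustrative classifier for pressure/current/position channels."""
    def __init__(self, in_channels=3, hidden=32, num_classes=11):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.spectral = SpectralBlock1d(hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        h = self.spectral(self.stem(x))
        return self.head(h.mean(dim=-1))       # global average pooling

if __name__ == "__main__":
    logits = TactileFFCClassifier()(torch.randn(4, 3, 1000))
    print(logits.shape)                        # torch.Size([4, 11])
```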

A Real-Time Hardware Design of CNN for Vehicle Detection (차량 검출용 CNN 분류기의 실시간 처리를 위한 하드웨어 설계)

  • Bang, Ji-Won;Jeong, Yong-Jin
    • Journal of IKEEE / v.20 no.4 / pp.351-360 / 2016
  • Recently, machine learning algorithms, especially deep learning-based algorithms, have been receiving attention due to their high classification performance. Among these algorithms, the Convolutional Neural Network (CNN) is known to be efficient for the image processing tasks used in Advanced Driver Assistance Systems (ADAS). However, it is difficult to achieve real-time CNN processing in a vehicle embedded environment because of the repeated operations contained in each layer of the CNN. In this paper, we propose a hardware accelerator that reduces the execution time of the CNN by parallelizing the repeated operations such as convolution. A Xilinx ZC706 evaluation board is used to verify the performance of the proposed accelerator. For 36×36 input images, the hardware execution time of the CNN is 2.812 ms at a 100 MHz clock frequency, showing that the hardware can run in real time.
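
The "repeated operations" that such an accelerator parallelizes are the multiply-accumulate loops of the convolution layer. The plain NumPy sketch below spells out that loop nest for a single-channel input; it only illustrates which loops are independent and can be mapped onto parallel hardware units, and does not reproduce the authors' RTL design (the 36×36 input size follows the abstract; the filter count is an assumption).

```python
import numpy as np

def conv2d_single_channel(image, kernels, stride=1):
    """Naive convolution loop nest (valid padding, one input channel).

    The three outer loops (output channel, output row, output column) are
    independent of one another, which is what a hardware accelerator exploits
    by mapping them onto parallel multiply-accumulate units.
    """
    num_kernels, k, _ = kernels.shape
    h, w = image.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.zeros((num_kernels, out_h, out_w), dtype=image.dtype)

    for oc in range(num_kernels):          # independent -> parallel units
        for oy in range(out_h):            # independent -> parallel units
            for ox in range(out_w):        # independent -> parallel units
                acc = 0.0
                for ky in range(k):        # accumulation (adder tree in HW)
                    for kx in range(k):
                        acc += image[oy * stride + ky, ox * stride + kx] \
                               * kernels[oc, ky, kx]
                out[oc, oy, ox] = acc
    return out

if __name__ == "__main__":
    img = np.random.rand(36, 36).astype(np.float32)   # 36x36 input as in the paper
    filt = np.random.rand(8, 3, 3).astype(np.float32) # filter count is illustrative
    print(conv2d_single_channel(img, filt).shape)     # (8, 34, 34)
```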

Memory data layout and DMA transfer technique research for efficient data transfer of CNN accelerator (CNN 가속기의 효율적인 데이터 전송을 위한 메모리 데이터 레이아웃 및 DMA 전송기법 연구)

  • Cho, Seok-Jae;Park, Sungkyung;Park, Chester Sungchung
    • Journal of IKEEE / v.24 no.2 / pp.559-569 / 2020
  • AI applications based on CNN, one of the deep learning algorithms, use off-chip memory to store the data of the convolution layers. DMA can reduce the processor load on every data transfer. It can also reduce application performance degradation by changing the order in which convolution-layer data are transferred to the global buffer of the accelerator. For the basic layout with contiguous memory addresses, SG-DMA showed about a 3.4-fold performance improvement in DMA presetting compared with ordinary DMA, and for the ideal layout with non-contiguous memory addresses, ordinary DMA was about 1,396 cycles faster than SG-DMA. Experiments showed that a suitable combination of memory data layout and DMA type can reduce the DMA preset load by about 86 percent.
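
The underlying issue is that the accelerator's global buffer consumes data tile by tile, while off-chip memory stores the feature map row-major; whether a plain DMA burst suffices or a scatter-gather list is needed depends on the layout. The NumPy sketch below shows such a layout transformation; the tile size and tensor shape are assumptions, and the paper's SG-DMA descriptor handling is not modeled.

```python
import numpy as np

def to_tiled_layout(fmap, tile_h, tile_w):
    """Rearrange a row-major (C, H, W) feature map so that each tile the
    accelerator consumes is contiguous in memory.

    With this layout a plain DMA engine can fetch one tile per contiguous
    burst; with the original row-major layout, the same tile is scattered
    across many short strided segments and needs a scatter-gather list.
    """
    c, h, w = fmap.shape
    assert h % tile_h == 0 and w % tile_w == 0
    tiles = fmap.reshape(c, h // tile_h, tile_h, w // tile_w, tile_w)
    # New order: (tile row, tile col, channel, row in tile, col in tile)
    tiles = tiles.transpose(1, 3, 0, 2, 4)
    return np.ascontiguousarray(tiles)

if __name__ == "__main__":
    fmap = np.arange(1 * 8 * 8, dtype=np.int32).reshape(1, 8, 8)
    tiled = to_tiled_layout(fmap, tile_h=4, tile_w=4)
    # Each tiled[i, j] is now one contiguous 4x4 block ready for a single burst.
    print(tiled.shape)          # (2, 2, 1, 4, 4)
    print(tiled[0, 0, 0])       # top-left tile of the original map
```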

Research on a handwritten character recognition algorithm based on an extended nonlinear kernel residual network

  • Rao, Zheheng;Zeng, Chunyan;Wu, Minghu;Wang, Zhifeng;Zhao, Nan;Liu, Min;Wan, Xiangkui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.413-435 / 2018
  • Although the accuracy of handwritten character recognition based on deep networks has been shown to be superior to that of traditional methods, using an overly deep network significantly increases the time consumed by parameter training. For this reason, this paper takes both training time and recognition accuracy into consideration and proposes a novel handwritten character recognition algorithm with a newly designed network structure based on an extended nonlinear kernel residual network. The network is not extremely deep, and its main design points are as follows: (1) an unsupervised a priori algorithm for intra-class clustering, which makes the subsequent network training more targeted; (2) an intermediate convolution model with a pre-processed width level of 2; (3) a composite residual structure with multi-level quick links; and (4) a Dropout layer added after parameter optimization. The algorithm shows superior results on MNIST and SVHN, two benchmark character recognition datasets, and achieves better recognition accuracy and higher recognition efficiency than other deep structures with the same number of layers.
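
Design items (3) and (4) above, a composite residual structure with multi-level quick links followed by a Dropout layer, can be sketched roughly as the PyTorch block below. Kernel sizes, channel width, and the dropout rate are assumptions; the paper's extended nonlinear kernel formulation is not reproduced.

```python
import torch
import torch.nn as nn

class CompositeResidualBlock(nn.Module):
    """Rough sketch of a residual block with a multi-level quick link.

    Two stacked conv stages plus a skip connection from the block input,
    followed by Dropout, loosely mirroring design items (3)-(4) above.
    """
    def __init__(self, channels, p_drop=0.5):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.stage2 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.drop = nn.Dropout(p_drop)
        self.act = nn.ReLU()

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        # Multi-level quick link: the output sees both the block input
        # and the intermediate stage, not just a single shortcut.
        out = self.act(x + h1 + h2)
        return self.drop(out)

if __name__ == "__main__":
    block = CompositeResidualBlock(channels=16)
    x = torch.randn(2, 16, 28, 28)        # MNIST-sized maps, 16 channels assumed
    print(block(x).shape)                 # torch.Size([2, 16, 28, 28])
```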

AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3729-3749 / 2021
  • At present, salient object detection (SOD) based on deep convolutional networks has achieved impressive performance. However, it remains challenging to make full use of the multi-scale information in the extracted features and to choose an appropriate feature fusion method for processing the feature maps. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel connection feature enhancement module (PFEM) for each feature extraction layer, which improves feature density by connecting different dilated convolution branches in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then, adjacent-layer features with similar degrees of abstraction but different characteristics are fused through the adjacency auxiliary module (AAM) to eliminate feature ambiguity and noise. In addition, to refine the features effectively and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the adjacency auxiliary module (AAM), which concatenates the features of adjacent layers, extracts their spatial attention, and then combines them with the output of AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as saliency prediction maps for multi-level joint supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms similar previous methods.
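
A rough PyTorch sketch of the PFEM idea, parallel dilated convolution branches fused and reweighted by a channel attention flow, is given below. The branch count, dilation rates, and the squeeze-and-excitation-style attention used here are assumptions rather than the authors' exact module.

```python
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    """Parallel dilated conv branches fused and reweighted by channel attention."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels), nn.ReLU())
            for d in dilations])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        # Channel attention (squeeze-and-excitation style; an assumption here).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(feats)
        return fused * self.attn(fused)   # reweight channels by global context

if __name__ == "__main__":
    block = ParallelDilatedBlock(channels=64)
    print(block(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```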

Deconvolution Pixel Layer Based Semantic Segmentation for Street View Images (디컨볼루션 픽셀층 기반의 도로 이미지의 의미론적 분할)

  • Wahid, Abdul;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.515-518 / 2019
  • Semantic segmentation has remained a challenging problem in the field of computer vision. Given the immense power of Convolutional Neural Network (CNN) models, many complex problems in computer vision have been solved, and semantic segmentation, the task of assigning each pixel of an image to a category, has seen prolific results with the help of convolutional neural networks. We propose a convolutional neural network model that uses a fully convolutional network with deconvolutional pixel layers. The goal is to create a hierarchy of features in which the fully convolutional model does the primary learning and the later deconvolutional model visually segments the target image. The proposed approach creates direct links among several adjacent pixels in the resulting feature maps. It also preserves spatial features such as corners and edges in images, thereby adding accuracy to the resulting outputs. We test our algorithm on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) street view dataset. Our method achieves an mIoU accuracy of 92.04%.
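
As a rough illustration of the encoder-decoder idea above, a fully convolutional stage that learns features followed by deconvolutional (transposed convolution) layers that recover per-pixel output, a minimal PyTorch sketch follows. Channel counts, depth, and the class count are assumptions; the authors' full architecture and KITTI training setup are not reproduced.

```python
import torch
import torch.nn as nn

class TinyFCNDeconv(nn.Module):
    """Minimal fully convolutional encoder + deconvolutional decoder."""
    def __init__(self, num_classes=19):           # class count is illustrative
        super().__init__()
        self.encoder = nn.Sequential(              # downsample by 4 overall
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(              # upsample back to input size
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))       # per-pixel class scores

if __name__ == "__main__":
    model = TinyFCNDeconv()
    scores = model(torch.randn(1, 3, 128, 384))    # street-view-like aspect ratio
    print(scores.shape)                             # torch.Size([1, 19, 128, 384])
```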

Prediction of Wind Power Generation using Deep Learning (딥러닝을 이용한 풍력 발전량 예측)

  • Choi, Jeong-Gon;Choi, Hyo-Sang
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.2 / pp.329-338 / 2021
  • This study predicts the amount of wind power generation for rational operation planning of wind power generation and for the capacity calculation of an ESS. For forecasting, we present a method that predicts wind power generation by combining a physical approach and a statistical approach. The factors affecting wind power generation are analyzed and variables are selected; by collecting historical data of the selected variables, the amount of wind power generation is predicted using deep learning. The model used is a hybrid model that combines a bidirectional long short-term memory (LSTM) network and a convolutional neural network (CNN). To evaluate prediction performance, this hybrid model is compared with a model based on the MLP (Multi-Layer Perceptron) algorithm, and the prediction errors of both are presented.
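
A minimal PyTorch sketch of a CNN + bidirectional LSTM hybrid of the kind described above is shown below. The number of input variables, window length, and layer sizes are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class CNNBiLSTMForecaster(nn.Module):
    """1-D CNN feature extractor followed by a bidirectional LSTM regressor."""
    def __init__(self, num_features=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # local patterns per window
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden * 2, 1)           # predicted generation

    def forward(self, x):                              # x: (batch, time, features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # back to (batch, time, 32)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                   # use the last time step

if __name__ == "__main__":
    model = CNNBiLSTMForecaster()
    x = torch.randn(8, 24, 5)      # e.g. 24 hourly steps of 5 weather variables
    print(model(x).shape)          # torch.Size([8, 1])
```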

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding gradient vanishing or explosion as the network deepens; consequently, model optimisation becomes easier. Additional convolutional layers are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with a densely connected feature-reuse network, the proposed algorithm optimises feature reuse to avoid the performance fluctuations caused by feature redundancy. Experimental results on the IsoGD gesture dataset and the Gesture dataset show that the proposed algorithm offers fast convergence and high accuracy.
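
To make the described pipeline concrete, small convolution kernels for local detail, a shallow weight-sharing residual stage, and a fully connected softmax classifier, a compact PyTorch sketch follows. Input size, channel widths, and the class count are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowResidualGestureNet(nn.Module):
    """Small-kernel stem, one shallow residual stage, FC softmax classifier."""
    def __init__(self, num_classes=249):            # class count is an assumption
        super().__init__()
        self.stem = nn.Sequential(                  # 3x3 kernels for local detail
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.res_conv = nn.Sequential(              # shallow residual stage
            nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32))
        self.classifier = nn.Sequential(            # cascaded FC layers -> class scores
            nn.Flatten(), nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, num_classes))

    def forward(self, x):
        h = self.stem(x)
        h = torch.relu(h + self.res_conv(h))        # identity skip keeps gradients flowing
        h = F.adaptive_avg_pool2d(h, 1)             # pool to (batch, 32, 1, 1)
        return self.classifier(h)                   # logits; apply softmax for probabilities

if __name__ == "__main__":
    net = ShallowResidualGestureNet()
    print(net(torch.randn(2, 3, 112, 112)).shape)   # torch.Size([2, 249])
```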

An Efficient FPGA Based TDC Accelerator for Deconvolutional Neural Networks (효율적인 DCNN 연산을 위한 FPGA 기반 TDC 가속기)

  • Jang, Hyerim;Moon, Byungin
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.457-458 / 2021
  • Among deep learning algorithms, the DCNN (DeConvolutional Neural Network) shows excellent performance in various fields such as image upscaling, generation, and restoration. Because a DCNN processes large amounts of data in parallel, it benefits from a hardware implementation. Recent research on DCNN hardware architectures proposed the TDC (Transforming the Deconvolutional layer into the Convolutional layer) algorithm, which converts deconvolution filters into convolution filters to resolve the overlapping-sum problem. However, because TDC runs on a CPU (Central Processing Unit), it is difficult to optimize the computation, and the use of external memory consumes additional power. This paper therefore proposes an FPGA-based TDC hardware architecture that can operate at low power. The proposed architecture not only uses few resources, and can therefore run at low power, but is also designed as a parallel processing structure and achieves fast computation.
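
For context on the TDC idea, the reference identity it builds on is that a deconvolution (transposed convolution) can be computed as an ordinary convolution over a zero-inserted input with a flipped, channel-swapped kernel, so each output pixel comes from a single convolution pass and the overlapping-sum problem disappears. The PyTorch sketch below verifies that identity numerically; it is not the TDC filter-splitting itself, which additionally eliminates the inserted zeros.

```python
import torch
import torch.nn.functional as F

def deconv_as_conv(x, weight, stride=2, padding=1):
    """Reference 'deconvolution as convolution': zero-insert the input, pad,
    then run an ordinary convolution with the flipped, channel-swapped kernel.

    TDC goes further and removes the inserted zeros by splitting the filter
    into several small convolutions, but this is the result it must reproduce.
    """
    in_ch, out_ch, k, _ = weight.shape
    n, _, h, w = x.shape
    # 1) insert (stride - 1) zeros between neighbouring input pixels
    up = x.new_zeros(n, in_ch, (h - 1) * stride + 1, (w - 1) * stride + 1)
    up[:, :, ::stride, ::stride] = x
    # 2) ordinary convolution with the spatially flipped kernel
    w_conv = weight.transpose(0, 1).flip([2, 3])
    return F.conv2d(up, w_conv, padding=k - 1 - padding)

if __name__ == "__main__":
    x = torch.randn(1, 4, 8, 8)
    w = torch.randn(4, 2, 3, 3)           # (in_channels, out_channels, kH, kW)
    ref = F.conv_transpose2d(x, w, stride=2, padding=1)
    ours = deconv_as_conv(x, w, stride=2, padding=1)
    print(torch.allclose(ref, ours, atol=1e-5))   # True
```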

Visual Classification of Wood Knots Using k-Nearest Neighbor and Convolutional Neural Network (k-Nearest Neighbor와 Convolutional Neural Network에 의한 제재목 표면 옹이 종류의 화상 분류)

  • Kim, Hyunbin;Kim, Mingyu;Park, Yonggun;Yang, Sang-Yun;Chung, Hyunwoo;Kwon, Ohkyung;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / v.47 no.2 / pp.229-238 / 2019
  • Various wood defects occur during tree growth or wood processing. Thus, for practical use of wood, it is necessary to assess its quality objectively against the usage requirements by accurately classifying its defects. However, manual visual grading and species classification can vary with subjective decisions; therefore, computer-vision-based image analysis is required for the objective evaluation of wood quality and for speeding up wood production. In this study, SIFT+k-NN and CNN models were used to implement a model that automatically classifies knots, and their accuracy was analyzed. Toward this end, a total of 1,172 knot images of various shapes from five domestic conifer species were used for training and validation. For the SIFT+k-NN model, SIFT was used to extract features from the knot images and k-NN was used for classification, achieving an accuracy of up to 60.53% with a k-index of 17. The CNN model comprised 8 convolution layers and 3 hidden layers, and its maximum accuracy was 88.09% after 1,205 epochs, higher than that of the SIFT+k-NN model. Moreover, when the number of images differed greatly across knot types, SIFT+k-NN tended to learn a bias toward the knot types with more images, whereas the CNN model showed no drastic bias regardless of the difference in image counts. Therefore, the CNN model showed better performance in knot classification, and wood knot classification by the CNN model is expected to be sufficiently accurate for practical application.
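
A rough PyTorch sketch of a CNN with 8 convolution layers and 3 fully connected layers, the shape of the model described above, is given below. The input resolution, channel widths, pooling placement, and class count are assumptions; only the layer counts follow the abstract.

```python
import torch
import torch.nn as nn

def make_knot_cnn(num_classes=6, in_size=64):
    """CNN with 8 convolution layers and 3 fully connected layers.

    Channel widths, the 64x64 input size, and the class count are
    illustrative assumptions; only the layer counts follow the paper.
    """
    convs, channels = [], 3
    widths = [16, 16, 32, 32, 64, 64, 128, 128]       # 8 convolution layers
    for i, width in enumerate(widths):
        convs += [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        if i % 2 == 1:                                 # halve resolution every 2 layers
            convs.append(nn.MaxPool2d(2))
        channels = width
    feat_size = in_size // 16                          # four 2x poolings
    return nn.Sequential(
        *convs, nn.Flatten(),
        nn.Linear(channels * feat_size * feat_size, 256), nn.ReLU(),  # FC layer 1
        nn.Linear(256, 64), nn.ReLU(),                                # FC layer 2
        nn.Linear(64, num_classes))                                   # FC layer 3 (class scores)

if __name__ == "__main__":
    model = make_knot_cnn()
    print(model(torch.randn(4, 3, 64, 64)).shape)      # torch.Size([4, 6])
```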