• Title/Summary/Keyword: Pooling Algorithm


Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not only through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
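
The following minimal NumPy sketch (not from the paper) illustrates the pooling idea the abstract describes: each unit in a 2 × 2 max-pooling layer summarizes a small patch of the convolutional output, simplifying the information while keeping the strongest response. The function name and the sample feature map are invented for illustration.

```python
import numpy as np

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    """Downsample a 2-D feature map by taking the maximum of each 2x2 block."""
    h, w = feature_map.shape
    h, w = h - h % 2, w - w % 2                  # drop an odd trailing row/column
    blocks = feature_map[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

conv_output = np.array([[1., 3., 2., 0.],
                        [4., 6., 1., 2.],
                        [0., 2., 7., 5.],
                        [1., 3., 4., 8.]])
print(max_pool_2x2(conv_output))                 # [[6. 2.] [3. 8.]]
```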

Optimal Band Selection Techniques for Hyperspectral Image Pixel Classification using Pooling Operations & PSNR (초분광 이미지 픽셀 분류를 위한 풀링 연산과 PSNR을 이용한 최적 밴드 선택 기법)

  • Chang, Duhyeuk; Jung, Byeonghyeon; Heo, Junyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.141-147 / 2021
  • In this paper, in order to improve the utilization of the feature information in large hyperspectral datasets on embedded systems, the dimensionality of the neural network input is reduced by applying a band selection algorithm within each subset of bands, which cuts down on complex computations. Between feature extraction and feature selection techniques, feature selection aims to find the optimal number of bands suited to a dataset, regardless of wavelength range, and to improve time and performance more than other algorithms. In the experiments, the required time was reduced to between 1/3 and 1/9 of that of other band selection techniques, while performance measured with a k-nearest-neighbor classifier improved by more than 4%. Although real-time hyperspectral data analysis is still difficult, the results confirm the possibility of improvement.
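
As a rough, hypothetical sketch of the subset-wise selection the abstract outlines, the snippet below pools each subset of bands into a single reference image and keeps the band with the highest PSNR against that reference. The subset size, the mean-pooling step, and the selection rule are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def select_bands(cube: np.ndarray, subset_size: int = 10) -> list:
    """cube: (bands, height, width) hyperspectral image with values scaled to [0, 1]."""
    selected = []
    for start in range(0, cube.shape[0], subset_size):
        subset = cube[start:start + subset_size]
        reference = subset.mean(axis=0)                  # pooled representative of the subset
        scores = [psnr(band, reference) for band in subset]
        selected.append(start + int(np.argmax(scores)))  # keep the most representative band
    return selected

print(select_bands(np.random.rand(40, 64, 64)))          # one band index per subset
```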

A POOLED DISPATCHING STRATEGY FOR AUTOMATED GUIDED VEHICLES IN PORT CONTAINER TERMINALS

  • Bae, Jong Wook; Kim, Kap Hwan
    • Management Science and Financial Engineering / v.6 no.2 / pp.47-67 / 2000
  • It is discussed how to assign delivery tasks to automated guided vehicles (AGVs) serving multiple container cranes in automated container terminals. The primary goal of dispatching AGVs is to complete all the loading and discharging operations as early as possible, and the secondary goal is to minimize the total travel distance of the AGVs. It is assumed that AGVs are not dedicated to a specific container crane but are shared among multiple cranes. A mathematical formulation is developed, and a heuristic algorithm is suggested to obtain a near-optimal solution within a reasonable amount of computational time. The single-cycle and dual-cycle operations in both the seaside and the landside operations are analyzed. The effects of pooling AGVs for multiple container cranes on the performance of the entire AGV system are also analyzed through a numerical experiment.
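
As a toy illustration of the pooling idea (not the paper's heuristic), the sketch below assigns each new delivery task to whichever AGV in the shared pool can reach the task location soonest; the quay positions and vehicle speed are invented for the example.

```python
def dispatch(task_location: float, agv_positions: dict, speed: float = 1.0) -> str:
    """Return the id of the pooled AGV with the shortest travel time to the task."""
    return min(agv_positions, key=lambda agv: abs(agv_positions[agv] - task_location) / speed)

fleet = {"AGV1": 0.0, "AGV2": 120.0, "AGV3": 45.0}    # positions along the quay (m)
print(dispatch(100.0, fleet))                         # -> "AGV2"
```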

A Deep Learning-Based Image Semantic Segmentation Algorithm

  • Chaoqun, Shen; Zhongliang, Sun
    • Journal of Information Processing Systems / v.19 no.1 / pp.98-108 / 2023
  • This paper attempts to design a segmentation method based on fully convolutional networks (FCN) and an attention mechanism. The first five layers of the Visual Geometry Group (VGG) 16 network serve as the encoding part of the semantic segmentation network, with convolutional layers used in place of pooling to reduce the loss of extracted image feature information. The up-sampling and deconvolution unit of the FCN is then used as the decoding part of the semantic segmentation network. In the deconvolution process, the skip structure is used to fuse different levels of information, and the attention mechanism is incorporated to reduce accuracy loss. Finally, the segmentation results are obtained through pixel-level classification. The results show that our method outperforms the comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIOU).
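
A minimal PyTorch-style sketch of the decoder step described above is given below: a feature map is up-sampled with a transposed convolution (deconvolution) and fused with the corresponding encoder feature through a skip connection. The layer names and channel sizes are assumptions for illustration, not the authors' exact network.

```python
import torch
import torch.nn as nn

class SkipDecoderBlock(nn.Module):
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)  # deconvolution / up-sampling
        self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                                   # 2x spatial up-sampling
        x = torch.cat([x, skip], dim=1)                  # skip structure: fuse encoder features
        return torch.relu(self.fuse(x))

block = SkipDecoderBlock(in_ch=512, skip_ch=256, out_ch=256)
out = block(torch.randn(1, 512, 16, 16), torch.randn(1, 256, 32, 32))
print(out.shape)                                         # torch.Size([1, 256, 32, 32])
```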

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima; Yushuang Xu; Minmin Du; Meng Gao; Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.861-880 / 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architectures drive improvements in segmentation performance. Existing convolutional neural networks suffer from over-simplified learned knowledge and high model complexity. To address this issue, we propose a road scene semantic segmentation algorithm based on multi-task collaborative learning. Firstly, a depthwise separable atrous spatial pyramid pooling module is proposed to reduce model complexity. Secondly, a collaborative learning framework involving saliency detection is proposed, and a joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road and natural scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared to methods with excellent performance, the proposed method shows significant advantages in the segmentation of fine targets and boundaries.
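
The joint loss mentioned above can be sketched as follows, using the common homoscedastic-uncertainty weighting in which each task loss is scaled by a learned variance term; the task names and this particular formulation are assumptions for illustration rather than the authors' exact definition.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Joint loss for two tasks (e.g., segmentation and saliency detection)."""
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))    # learned log(sigma^2) per task

    def forward(self, task_losses):
        total = torch.zeros(())
        for log_var, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-log_var) * loss + log_var   # (1/sigma^2) * L + log sigma^2
        return total

criterion = UncertaintyWeightedLoss()
seg_loss, sal_loss = torch.tensor(1.3), torch.tensor(0.4)        # placeholder task losses
print(criterion([seg_loss, sal_loss]))
```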

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor (딥러닝 알고리즘과 2D Lidar 센서를 이용한 이미지 분류)

  • Lee, Junho; Chang, Hyuk-Jun
    • Journal of IKEEE / v.23 no.4 / pp.1302-1308 / 2019
  • This paper presents an approach for classifying images constructed from position data acquired by a 2D Lidar sensor with a convolutional neural network (CNN). Lidar sensors have been widely used in unmanned devices owing to their advantages in terms of data accuracy and robustness against geometric distortion and light variations. A CNN consists of one or more convolutional and pooling layers and has shown satisfactory performance for image classification. In this paper, different types of CNN architectures based on two training methods, gradient descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method has two variants based on how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm. In addition, the LM algorithm with more frequent Hessian matrix approximation shows a smaller error than the other LM variant.
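
For reference, the two weight-update rules being compared can be written in their standard textbook form, sketched below with NumPy; the learning rate, damping factor, and Jacobian values are illustrative, not the authors' settings. For a sum-of-squares error with residual vector e and Jacobian J, gradient descent uses the gradient J^T e directly, while Levenberg-Marquardt additionally uses J^T J as an approximation of the Hessian.

```python
import numpy as np

def gd_step(J: np.ndarray, e: np.ndarray, eta: float = 0.01) -> np.ndarray:
    return -eta * (J.T @ e)                              # delta_w = -eta * gradient

def lm_step(J: np.ndarray, e: np.ndarray, mu: float = 0.1) -> np.ndarray:
    H = J.T @ J + mu * np.eye(J.shape[1])                # damped Hessian approximation
    return -np.linalg.solve(H, J.T @ e)                  # delta_w = -(J^T J + mu I)^-1 J^T e

J = np.array([[1.0, 2.0], [0.5, 1.5], [2.0, 0.5]])       # 3 residuals, 2 weights
e = np.array([0.2, -0.1, 0.4])
print(gd_step(J, e), lm_step(J, e))
```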

Quality grading of Hanwoo (Korean native cattle breed) sub-images using convolutional neural network

  • Kwon, Kyung-Do; Lee, Ahyeong; Lim, Jongkuk; Cho, Soohyun; Lee, Wanghee; Cho, Byoung-Kwan; Seo, Youngwook
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1109-1122 / 2020
  • The aim of this study was to develop a marbling classification and prediction model using small parts of sirloin images based on a deep learning algorithm, namely, a convolutional neural network (CNN). Samples were purchased from a commercial slaughterhouse in Korea, images for each grade were acquired, and the total images (n = 500) were assigned according to their grade number: 1++, 1+, 1, and both 2 & 3. The image acquisition system consists of a DSLR camera with a polarization filter to remove diffusive reflectance and two light sources (55 W). To correct the distorted original images, a radial correction algorithm was implemented. Color images of Hanwoo sirloins (mixed with feeder cattle, steer, and calf) were divided into sub-images of 161 × 161 pixels to train the marbling prediction model. In this study, the convolutional neural network (CNN) has four convolution layers and yields prediction results according to the marbling grades (1++, 1+, 1, and 2 & 3). Every layer uses a rectified linear unit (ReLU) as an activation function, and max-pooling is used to extract the edge between fat and muscle and to reduce the variance of the data. Prediction accuracy was measured using accuracy and the kappa coefficient computed from a confusion matrix. We summed the predictions of the sub-images and determined the total average prediction accuracy. Training accuracy was 100% and test accuracy was 86%, indicating comparably good performance of the CNN. This study demonstrates the potential of predicting the marbling grade using color images and a convolutional neural network algorithm.
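
The accuracy and kappa coefficient mentioned above can both be derived from a confusion matrix, as in the small sketch below; the 4 × 4 matrix is an invented example for the four marbling grades, not the study's results.

```python
import numpy as np

def accuracy_and_kappa(cm: np.ndarray):
    n = cm.sum()
    observed = np.trace(cm) / n                                   # observed agreement (accuracy)
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # agreement expected by chance
    kappa = (observed - expected) / (1 - expected)
    return float(observed), float(kappa)

cm = np.array([[20,  3,  1,  0],       # rows: true grade, columns: predicted grade
               [ 2, 18,  4,  0],
               [ 1,  3, 19,  2],
               [ 0,  1,  2, 24]])
print(accuracy_and_kappa(cm))          # approximately (0.81, 0.747)
```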

Semantic Image Segmentation Combining Image-level and Pixel-level Classification (영상수준과 픽셀수준 분류를 결합한 영상 의미분할)

  • Kim, Seon Kuk; Lee, Chil Woo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1425-1430 / 2018
  • In this paper, we propose a CNN-based deep learning algorithm for semantic segmentation of images. In order to improve the accuracy of semantic segmentation, we combine pixel-level object classification and image-level object classification. The image-level object classification is used to accurately detect the characteristics of an image, and the pixel-level object classification is used to indicate which object area each pixel belongs to. The proposed network structure consists of three parts: a part for extracting the features of the image, a part for outputting the final result at the resolution of the original image, and a part for performing image-level object classification. Separate loss functions are used for image-level and pixel-level classification: image-level object classification uses KL-divergence, and pixel-level object classification uses cross-entropy. In addition, the network combines feature-extraction layers with decoding layers of matching resolution in order to recover the positional information of features and the object boundary information lost due to the pooling operation.
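
A hedged PyTorch sketch of the two-part loss described above is shown below: KL-divergence for the image-level class distribution and cross-entropy for the pixel-level labels. The tensor shapes and the simple summation of the two terms are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_loss(image_logits, image_target_dist, pixel_logits, pixel_labels):
    # image-level term: KL-divergence between predicted log-probabilities and a target distribution
    image_loss = F.kl_div(F.log_softmax(image_logits, dim=1), image_target_dist,
                          reduction="batchmean")
    # pixel-level term: cross-entropy over per-pixel class logits
    pixel_loss = F.cross_entropy(pixel_logits, pixel_labels)
    return image_loss + pixel_loss

loss = combined_loss(torch.randn(2, 5),                          # 2 images, 5 classes
                     torch.softmax(torch.randn(2, 5), dim=1),    # image-level target distribution
                     torch.randn(2, 5, 32, 32),                  # per-pixel class logits
                     torch.randint(0, 5, (2, 32, 32)))           # per-pixel ground-truth labels
print(loss)
```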

CutPaste-Based Anomaly Detection Model using Multi Scale Feature Extraction in Time Series Streaming Data

  • Jeon, Byeong-Uk; Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2787-2800 / 2022
  • An aging society increases emergency situations involving the elderly living alone as well as a variety of social crimes. In order to prevent them, techniques to detect emergency situations through voice are actively researched. This study proposes a CutPaste-based anomaly detection model using multi-scale feature extraction in time series streaming data. In the proposed method, an audio file is converted into a spectrogram, which makes it possible to use algorithms designed for image data, such as a CNN. After that, multi-scale feature extraction is applied: three feature maps drawn from adaptive pooling layers with different kernel sizes are merged. By considering various types of anomaly, including point anomalies, contextual anomalies, and collective anomalies, the limitations of conventional anomaly models are addressed. Finally, CutPaste-based anomaly detection is conducted. Since the model is trained through self-supervised learning, it is possible to detect a diversity of emergency situations as anomalies without labeling. Therefore, the proposed model overcomes the limitations of a conventional model that classifies only labelled emergency situations. The proposed model is also evaluated to perform better than a conventional anomaly detection model.
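
The multi-scale feature extraction step can be sketched as below, where the same spectrogram feature map is passed through adaptive pooling layers with different output sizes and the results are merged into one descriptor; the three scales and tensor sizes are invented for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

pools = [nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4)]        # three pooling scales

def multi_scale_features(feature_map: torch.Tensor) -> torch.Tensor:
    """feature_map: (batch, channels, height, width) from a CNN over the spectrogram."""
    scales = [pool(feature_map).flatten(start_dim=1) for pool in pools]
    return torch.cat(scales, dim=1)                          # merged multi-scale descriptor

x = torch.randn(8, 64, 40, 40)
print(multi_scale_features(x).shape)                         # torch.Size([8, 1344])
```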

A Network Intrusion Security Detection Method Using BiLSTM-CNN in Big Data Environment

  • Hong Wang
    • Journal of Information Processing Systems / v.19 no.5 / pp.688-701 / 2023
  • Conventional methods for network intrusion detection systems (NIDS) cannot measure the trend of intrusion detection targets effectively, which leads to low detection accuracy. In this study, a NIDS method based on a deep neural network in a big-data environment is proposed. Firstly, the entire framework of the NIDS model is constructed in two stages, with feature reduction and anomaly probability output at the core of the two stages. Subsequently, a convolutional neural network is built that encompasses a down-sampling layer and a characteristic extractor consisting of convolution layers, and the correlation of inputs is modeled by introducing bidirectional long short-term memory. Finally, after the convolution layer, a pooling layer is added to sample the required features according to different sampling rules, which promotes the overall performance of the NIDS model. The proposed NIDS method is compared with three other methods on two databases through simulation experiments. The results demonstrate that the proposed model is superior to the other three NIDS methods on the two databases in terms of precision, accuracy, F1-score, and recall, which are 91.64%, 93.35%, 92.25%, and 91.87%, respectively. The proposed algorithm is significant for improving the accuracy of NIDS.
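
A hedged sketch of the BiLSTM-CNN arrangement the abstract describes is given below: a convolution layer with down-sampling extracts local features from each traffic record, a bidirectional LSTM models the correlation between inputs, and a final pooling step summarizes the sequence before the anomaly-probability output. Layer sizes and the overall wiring are illustrative stand-ins, not the paper's exact model.

```python
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=3, padding=1)        # characteristic extractor
        self.down = nn.MaxPool1d(2)                                   # down-sampling layer
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)                          # anomaly probability output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.down(torch.relu(self.conv(x.unsqueeze(1))))          # (batch, 32, num_features // 2)
        out, _ = self.bilstm(x.transpose(1, 2))                       # correlation across the sequence
        pooled = out.mean(dim=1)                                      # pooling over the sequence
        return torch.sigmoid(self.head(pooled))

model = BiLSTMCNN(num_features=40)
print(model(torch.randn(16, 40)).shape)                               # torch.Size([16, 1])
```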