• Title/Summary/Keyword: Convolutional

Search Results: 2,195

Correcting Misclassified Image Features with Convolutional Coding

  • Mun, Ye-Ji;Kim, Nayoung;Lee, Jieun;Kang, Je-Won
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.11a
    • /
    • pp.11-14
    • /
    • 2018
  • The aim of this study is to rectify misclassified image features and enhance the performance of image classification tasks by incorporating a channel coding technique widely used in telecommunication. Specifically, the proposed algorithm employs the error-correcting mechanism of convolutional coding combined with convolutional neural networks (CNNs), the state-of-the-art image classifiers. We develop an encoder and a decoder to exploit the error-correcting capability of the convolutional coding. In the encoder, the label values of the image data are converted to convolutional codes that are used as target outputs of the CNN, and the network is trained to minimize the Euclidean distance between the target output codes and the actual output codes. To correct misclassified features, the outputs of the network are decoded through the trellis structure with the Viterbi algorithm before the final prediction is determined. This paper demonstrates that the proposed architecture improves the performance of the neural network compared to the traditional one-hot encoding method.

  • PDF
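
The encoder/decoder idea in the abstract above can be illustrated with a small Python sketch. This is not the authors' code: the rate-1/2 (7,5) generator polynomials, the 4-bit label width, and the brute-force minimum-Euclidean-distance decoding (which, for these short terminated codewords, returns the same answer a trellis/Viterbi search would compute more efficiently) are all assumptions for illustration.

```python
# Minimal sketch (not the authors' code): class labels are convolutionally
# encoded and used as CNN regression targets; a prediction is decoded back to a
# label by minimum Euclidean distance over the candidate codewords.
# Assumptions: rate-1/2 (7,5) generators, constraint length 3, 4-bit labels.
import numpy as np

G = [0b111, 0b101]   # generator polynomials (7, 5) in octal
K = 3                # constraint length

def conv_encode(bits):
    """Encode a 0/1 sequence with the rate-1/2 code, zero-terminated."""
    bits = list(bits) + [0] * (K - 1)            # flush the encoder
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in G:
            out.append(bin(state & g).count("1") % 2)
    return np.array(out, dtype=float)

def label_codeword(label, n_bits=4):
    """Map an integer class label to its convolutional codeword (the CNN target)."""
    bits = [(label >> i) & 1 for i in reversed(range(n_bits))]
    return conv_encode(bits)

def decode(output, n_classes=10, n_bits=4):
    """Pick the class whose codeword is closest (in Euclidean distance) to the output."""
    cands = np.stack([label_codeword(c, n_bits) for c in range(n_classes)])
    return int(np.argmin(((cands - output) ** 2).sum(axis=1)))

# A noisy network output is usually still decoded to the correct class.
noisy = label_codeword(7) + np.random.normal(0.0, 0.2, size=12)
print(decode(noisy))   # 7 in the typical case
```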

Comparative Study of Ship Image Classification using Feedforward Neural Network and Convolutional Neural Network

  • Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.221-227
    • /
    • 2024
  • In autonomous navigation systems, the need for fast and accurate image processing using deep learning and advanced sensor technologies is paramount. These systems rely heavily on the ability to process and interpret visual data swiftly and precisely to ensure safe and efficient navigation. Despite the critical importance of such capabilities, there has been a noticeable lack of research specifically focused on ship image classification for maritime applications. This gap highlights the necessity for more in-depth studies in this domain. In this paper, we aim to address this gap by presenting a comprehensive comparative study of ship image classification using two distinct neural network models: the Feedforward Neural Network (FNN) and the Convolutional Neural Network (CNN). Our study involves the application of both models to the task of classifying ship images, utilizing a dataset specifically prepared for this purpose. Through our analysis, we found that the Convolutional Neural Network demonstrates significantly more effective performance in accurately classifying ship images compared to the Feedforward Neural Network. The findings from this research are significant as they can contribute to the advancement of core source technologies for maritime autonomous navigation systems. By leveraging the superior image classification capabilities of convolutional neural networks, we can enhance the accuracy and reliability of these systems. This improvement is crucial for the development of more efficient and safer autonomous maritime operations, ultimately contributing to the broader field of autonomous transportation technology.
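
For readers unfamiliar with the two model families compared above, the following Keras sketch contrasts a feedforward (fully connected) classifier with a convolutional one. The ship dataset, the 64x64 RGB input size, and the layer widths are hypothetical; the abstract does not specify the actual architectures.

```python
# Compact, hypothetical Keras sketch of the two model families compared in the
# paper; all shapes and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fnn(input_shape=(64, 64, 3), n_classes=2):
    """Feedforward network: fully connected layers on the flattened pixels."""
    return models.Sequential([
        layers.Flatten(input_shape=input_shape),
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def build_cnn(input_shape=(64, 64, 3), n_classes=2):
    """Convolutional network: exploits local image structure before classifying."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Both models would be compiled and trained on the same ship-image dataset, e.g.:
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```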

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithm enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, that can make the gradient extremely unstable and hard to learn from. It has been possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
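
The three ideas the abstract attributes to CNNs, local receptive fields, shared weights, and pooling, can be made concrete with a short NumPy sketch. This is illustrative only; the image size and the kernel are arbitrary.

```python
# Illustrative-only NumPy sketch of the three CNN building blocks described in
# the abstract: a local receptive field (the 3x3 window), shared weights (one
# kernel reused at every position), and pooling (2x2 max pooling).
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over every local receptive field (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)  # same weights everywhere
    return out

def max_pool2x2(feature_map):
    """Keep the maximum of each non-overlapping 2x2 block (simplifies the map)."""
    h2, w2 = feature_map.shape[0] // 2, feature_map.shape[1] // 2
    blocks = feature_map[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2)
    return blocks.max(axis=(1, 3))

image = np.random.rand(8, 8)                # toy 8x8 "image"
kernel = np.array([[1., 0., -1.]] * 3)      # one shared 3x3 kernel (edge-like filter)
features = conv2d_valid(image, kernel)      # 6x6 feature map
pooled = max_pool2x2(features)              # 3x3 after pooling
print(features.shape, pooled.shape)
```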

Design of New Channel Codes for Speed Up Coding Procedure (코딩 속도향상을 위한 채널 코드의 설계)

  • 공형윤;이창희
    • Proceedings of the IEEK Conference
    • /
    • 2000.06a
    • /
    • pp.5-8
    • /
    • 2000
  • In this paper, we present a new channel coding method, called MLC (Multi-Level Codes), for error detection and correction in digital wireless communication. The MLC coding method uses the same coding procedure as convolutional coding, but it is distinguished from existing convolutional coding in that the code word is generated from multi-level information data (an M-ary signal) and in the speed of the coding procedure. Through computer simulation, we analyze the performance of the suggested coding method compared to the convolutional coding method, for the modulo-operation case and the non-binary coding procedure case respectively, under various channel environments.

  • PDF
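
The abstract does not give the exact MLC construction, but the general idea of a convolutional-style encoder that operates on multi-level (M-ary) symbols using modulo arithmetic can be sketched as follows; the tap polynomials and memory length are assumptions, not the paper's design.

```python
# Illustrative sketch only (not the paper's exact MLC construction): a rate-1/2,
# convolutional-style encoder operating directly on M-ary symbols, where the XOR
# of a binary encoder is replaced by addition modulo M.
def mary_conv_encode(symbols, M=4, taps=((1, 1, 1), (1, 0, 1))):
    """Encode a sequence of symbols in {0, ..., M-1} with a memory-2 encoder."""
    state = [0, 0]                         # two previous input symbols
    out = []
    for s in symbols:
        window = [s] + state               # current symbol plus memory
        for g in taps:                     # one output symbol per generator
            out.append(sum(c * x for c, x in zip(g, window)) % M)
        state = [s] + state[:-1]           # shift the register
    return out

print(mary_conv_encode([3, 1, 0, 2], M=4))   # 8 output symbols for 4 inputs (rate 1/2)
```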

A Channel Coding of Variable Rate with Interleaver Punctured Serially Concatenated Convolutional Codes (IP-SCCC에 의한 가변 부호율의 채널 부호화)

  • 이연문;조경식;정차근
    • Proceedings of the IEEK Conference
    • /
    • 2000.06a
    • /
    • pp.17-20
    • /
    • 2000
  • This paper addresses a novel algorithm for variable-rate channel coding with interleaver punctured convolutional codes for wireless communication. In order to increase the coding performance and achieve a variable channel coding rate, a serially concatenated convolutional coding scheme is applied. In this paper, we characterize the effect of interleaver puncturing on the effectiveness of the proposed scheme, and some simulation results are presented in which an additive Gaussian noise channel model is assumed.

  • PDF
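
Variable-rate coding by puncturing, the general mechanism the abstract builds on, deletes mother-code bits according to a repeating pattern. The sketch below shows only this generic step; the paper's specific interleaver puncturing of the serially concatenated code is not reproduced.

```python
# Generic sketch of rate variation by puncturing; not the IP-SCCC scheme itself.
def puncture(coded_bits, pattern):
    """Keep coded_bits[i] only where the repeating pattern has a 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)] == 1]

mother = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 12 bits from a rate-1/2 mother code (6 info bits)
p_3_4 = [1, 1, 1, 0, 0, 1]                      # keep 4 of every 6 bits
print(puncture(mother, p_3_4))                  # 8 coded bits remain: 6/8 = rate 3/4
```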

DC-free error correcting codes based on convolutional codes (길쌈부호를 이용한 무직류 오류정정부호)

  • 이수인;김정구;주언경
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.32A no.5
    • /
    • pp.24-30
    • /
    • 1995
  • A new class of DC-free error correcting codes based on convolutional codes is proposed together with its performance analysis. The proposed codes can be encoded and decoded using conventional convolutional encoders and decoders with slight modifications. The codes have a spectral null at DC and are capable of correcting errors. The DC-free error correcting codes are especially well suited for applications in high-speed channels.

  • PDF
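
The "null at DC" property mentioned above is commonly checked through the running digital sum (RDS) of the bipolar code sequence, which must stay bounded. The following sketch verifies that property on a toy word; it is not the proposed code itself.

```python
# Hedged illustration of the DC-free property, not the proposed code: a sequence
# has a spectral null at DC when the running digital sum (RDS) of its bipolar
# (+1/-1) representation stays bounded.
def running_digital_sum(bits):
    """Cumulative sum of the +/-1 representation of a bit sequence."""
    rds, trace = 0, []
    for b in bits:
        rds += 1 if b == 1 else -1
        trace.append(rds)
    return trace

balanced_word = [1, 0, 0, 1, 1, 0, 1, 0]      # a balanced codeword candidate
print(running_digital_sum(balanced_word))     # stays near zero and returns to 0
```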

MSE of Dual-k Convolutional Codes for an AWGN Channel with Rayleigh Fading (Rayleigh Fading AWGN채널에 대한 Dual-K길쌈부호의 평균자승오차)

  • 문상재
    • Proceedings of the Korean Institute of Communication Sciences Conference
    • /
    • 1986.04a
    • /
    • pp.1-3
    • /
    • 1986
  • We are concerned with transmitting numerical source data from {0, 1, 2, ..., 2^k - 1} through a channel coding system. The rate 1/v dual-k convolutional code with orthogonal MFSK modulation and Viterbi decoding is employed for the implementation of the channel coding system. The mean square error of the dual-k convolutional code is evaluated for the numerical source transmitted over an additive white Gaussian noise channel with Rayleigh fading.

  • PDF
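
As a rough, hypothetical illustration of a mean-square-error figure for M-ary source symbols (not the paper's analytical evaluation), one can Monte-Carlo-estimate the MSE under an assumed symbol error probability and a uniform error distribution:

```python
# Hypothetical illustration only: Monte-Carlo estimate of the mean square error
# for source symbols in {0, ..., 2**k - 1}, assuming each decoded symbol is wrong
# with probability p_e and, when wrong, is uniform over the other symbol values.
import random

def estimate_mse(k=3, p_e=0.01, n=50_000):
    M = 2 ** k
    se = 0.0
    for _ in range(n):
        s = random.randrange(M)                              # source symbol
        r = s if random.random() > p_e else random.choice(
            [v for v in range(M) if v != s])                 # decoder output
        se += (r - s) ** 2
    return se / n

print(estimate_mse())
```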

Design of watermarking processor based on convolutional neural network (Convolutional Neural Network 기반의 워터마킹 프로세서의 설계)

  • Lee, Jae-Eun;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.106-107
    • /
    • 2020
  • In this paper, we propose the architecture of a watermarking processor based on a convolutional neural network for real-time intellectual property protection of live broadcast video that is distributed as soon as it is captured. The proposed watermarking processor optimizes a pre-processing network and an embedding network and is fabricated as an ASIC chip. Since it is based on a CNN, which is widely used in deep learning applications that take images as input, it can be regarded as a general deep learning accelerator design.

  • PDF
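
A heavily hedged software sketch of the two-stage structure named in the abstract (a pre-processing network for the watermark and an embedding network that inserts it into the frame) is given below. The paper describes an ASIC implementation; every layer type, shape, and size here is an assumption for illustration only.

```python
# Heavily hedged sketch; not the paper's design. All layer choices are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

frame = layers.Input(shape=(128, 128, 3), name="host_frame")
mark = layers.Input(shape=(32, 32, 1), name="watermark")

# Pre-processing network: expand the watermark to the frame resolution.
w = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(mark)
w = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(w)

# Embedding network: fuse watermark features with the frame and add a small residual.
x = layers.Concatenate()([frame, w])
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
residual = layers.Conv2D(3, 3, padding="same", activation="tanh")(x)
watermarked = layers.Add()([frame, residual])

embedder = Model([frame, mark], watermarked)
embedder.summary()
```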

Introduction to convolutional neural network using Keras; an understanding from a statistician

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.6
    • /
    • pp.591-610
    • /
    • 2019
  • Deep learning is one of the machine learning methods that finds features in huge data sets using non-linear transformations. It is now commonly used for supervised learning in many fields. In particular, the convolutional neural network (CNN) has been the leading technique for image classification since 2012. For users who consider deep learning models for real-world applications, Keras is a popular API for neural networks written in Python that can also be used in R. We examine the parameter estimation procedures of deep neural networks and the structures of CNN models from basic to advanced techniques. We also identify some crucial steps in CNNs that can improve image classification performance on the CIFAR10 dataset using Keras. We found that several stacks of convolutional layers and batch normalization could improve prediction performance. We also compared image classification performance with other machine learning methods, including K-Nearest Neighbors (K-NN), Random Forest, and XGBoost, on both the MNIST and CIFAR10 datasets.
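
In the spirit of the abstract's finding that stacked convolutional layers with batch normalization help on CIFAR-10, here is a minimal Keras sketch. It is not the authors' exact model; the depths, widths, and epoch count are assumptions.

```python
# Minimal Keras sketch: stacked convolutional layers + batch normalization on
# CIFAR-10. Architecture details are assumptions, not the authors' exact model.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```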

Analysis Performance of Convolutional Code and Turbo code Using The Semi-Random Interleaver (길쌈부호와 세미 랜덤 인터리버를 사용한 터보코드의 성능분석)

  • 홍성원
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.6
    • /
    • pp.1184-1189
    • /
    • 2001
  • In this paper, the performance of a turbo code using the semi-random interleaver proposed in reference [11] is analyzed. Its performance is compared with that of the Viterbi-decoded convolutional code used in current mobile communication systems, with the constraint length fixed. The results show that, at BER = 10^-4 and a constraint length of K = 5, the turbo code gains about Eb/N0 = 4.7 dB over the convolutional code.

  • PDF
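
The "semi-random interleaver" family commonly used with turbo codes is the S-random interleaver, sketched below; the specific interleaver of reference [11] analyzed in the paper may differ.

```python
# Hedged sketch of an S-random ("semi-random") interleaver, a common turbo-code
# construction; the interleaver of reference [11] may differ in detail.
import random

def s_random_interleaver(n, s, max_tries=10_000):
    """Permutation of range(n) in which each index differs by more than s
    (in value) from each of the s indices placed just before it."""
    for _ in range(max_tries):
        remaining = list(range(n))
        random.shuffle(remaining)
        perm, ok = [], True
        for _ in range(n):
            for idx, cand in enumerate(remaining):
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(remaining.pop(idx))
                    break
            else:
                ok = False      # dead end, retry with a fresh shuffle
                break
        if ok:
            return perm
    raise RuntimeError("could not build an S-random interleaver; try a smaller s")

print(s_random_interleaver(n=32, s=3))
```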