• Title/Summary/Keyword: SOFTMAX

Search Results: 71

Adaptive Obstacle Avoidance Algorithm using Classification of 2D LiDAR Data (2차원 라이다 센서 데이터 분류를 이용한 적응형 장애물 회피 알고리즘)

  • Lee, Nara;Kwon, Soonhwan;Ryu, Hyejeong
    • Journal of Sensor Science and Technology
    • /
    • v.29 no.5
    • /
    • pp.348-353
    • /
    • 2020
  • This paper presents an adaptive method to avoid obstacles in various environmental settings, using a two-dimensional (2D) LiDAR sensor for mobile robots. While conventional reaction-based smooth nearness diagram (SND) algorithms use a fixed safety distance criterion, the proposed algorithm autonomously changes the safety criterion considering the obstacle density around the robot. A fixed safety criterion for the whole SND obstacle avoidance process can induce inefficient motion controls in terms of travel distance and action smoothness. We applied a multinomial logistic regression algorithm, softmax regression, to classify 2D LiDAR point clouds into seven obstacle structure classes. The trained model recognizes the current obstacle density situation from newly obtained 2D LiDAR data. Through this classification, the robot adaptively modifies the safety distance criterion according to changes in its environment. We experimentally verified that the motion controls generated by the proposed adaptive algorithm were smoother and more efficient than those of the conventional SND algorithms.
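
As a minimal sketch of the classification step described in this abstract, a softmax over the class scores of a trained multinomial logistic regression model can be computed as follows. The seven-class scores below are hypothetical, not taken from the paper:

```python
import math

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for seven obstacle-structure classes
scores = [2.1, 0.3, -1.0, 0.5, 1.7, -0.2, 0.0]
probs = softmax(scores)
predicted_class = probs.index(max(probs))    # class with the highest probability
```

The predicted class would then select the safety distance criterion appropriate to that obstacle-density situation.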

A Study on the Characteristics of a series of Autoencoder for Recognizing Numbers used in CAPTCHA (CAPTCHA에 사용되는 숫자데이터를 자동으로 판독하기 위한 Autoencoder 모델들의 특성 연구)

  • Jeon, Jae-seung;Moon, Jong-sub
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.25-34
    • /
    • 2017
  • An autoencoder is a type of deep learning model whose input and output layers have the same dimensionality, and which effectively extracts and restores the characteristics of an input vector using the constraints of its hidden layer. In this paper, we propose autoencoder-based methods that remove the natural background image, which acts as noise in the CAPTCHA, and recover only the numerical image, by applying various autoencoder models to a region where one CAPTCHA digit and a natural background are mixed. The suitability of the reconstructed image is verified by feeding the output of the autoencoder into the softmax function. We also compared the proposed methods with an existing method and showed that ours are superior.

Reinforcement learning packet scheduling using UCB (UCB를 이용한 강화학습 패킷 스케줄링)

  • Kim, Dong-Hyun;Kim, Min-Woo;Lee, Byung-Jun;Kim, Kyung-Tae;Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.45-46
    • /
    • 2019
  • This paper proposes an efficient packet scheduling scheme using the Upper Confidence Bound (UCB). Unlike conventional approaches such as e-greedy, which select the action that maximizes the reinforcement-learning reward, the proposed UCB-based reinforcement-learning packet scheduling scheme additionally considers how many times each action has been selected in each state. This enables more efficient exploration in reinforcement learning. Computer simulations show that the proposed UCB-based packet scheduling scheme achieves improved accuracy compared to packet scheduling schemes based on the conventional e-greedy and softmax policies.
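
The action-selection rule this abstract describes can be sketched with the standard UCB1 formula; the per-action reward estimates and counts below are hypothetical, and this is not the paper's exact implementation:

```python
import math

def ucb1_select(values, counts, total_plays, c=2.0):
    """Pick the action maximizing estimated value plus an exploration bonus."""
    best, best_score = 0, float("-inf")
    for a, (v, n) in enumerate(zip(values, counts)):
        if n == 0:
            return a                         # try every action at least once
        score = v + math.sqrt(c * math.log(total_plays) / n)
        if score > best_score:
            best, best_score = a, score
    return best

# Hypothetical per-action mean rewards and selection counts
values = [0.4, 0.6, 0.5]
counts = [10, 2, 0]
action = ucb1_select(values, counts, total_plays=12)  # untried action chosen first
```

Rarely tried actions receive a large bonus term, which is the extra "selection count" consideration the abstract contrasts with e-greedy.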


Facial Manipulation Detection with Transformer-based Discriminative Features Learning Vision (트랜스포머 기반 판별 특징 학습 비전을 통한 얼굴 조작 감지)

  • Van-Nhan Tran;Minsu Kim;Philjoo Choi;Suk-Hwan Lee;Hoanh-Su Le;Ki-Ryong Kwon
    • Annual Conference of KIPS
    • /
    • 2023.11a
    • /
    • pp.540-542
    • /
    • 2023
  • Due to the serious issues posed by facial manipulation technologies, many researchers are becoming increasingly interested in the identification of face forgeries. The majority of existing face forgery detection methods leverage the powerful data-adaptation ability of neural networks to derive distinguishing traits. These deep learning-based detection methods frequently treat the detection of fake faces as a binary classification problem and employ the softmax loss to supervise CNN training. However, the features learned with the softmax loss alone are insufficiently discriminative. To overcome these limitations, in this study we introduce a novel discriminative feature learning method based on the Vision Transformer architecture. Additionally, a separation-center loss is designed to compress the intra-class variation of original faces while enhancing inter-class differences in the embedding space.

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose the development of a deep learning structure to improve the quality of polygonal containers. The deep learning structure consists of a convolution layer, a bottleneck layer, fully connected layers, and a softmax layer. The convolution layer obtains a feature image by performing a 3x3 convolution on the input image, or on the feature image of the previous layer, with several feature filters. The bottleneck layer selects only the optimal features among those in the feature image extracted through the convolution layer: it reduces the number of channels with a 1x1 convolution followed by ReLU, and then performs a 3x3 convolution with ReLU. The global average pooling operation performed after the bottleneck layer reduces the size of the feature image by keeping only the average response of each feature map. The fully connected stage outputs its result through six fully connected layers. The softmax layer multiplies the values of the input-layer nodes by the weights to the target nodes and converts the resulting sums into values between 0 and 1 through an activation function. After training is completed, the recognition process classifies non-circular glass bottles by performing image acquisition with a camera, position detection, and non-circular glass bottle classification using deep learning, as in the training process. To evaluate the performance of the proposed structure, an experiment at an authorized testing institute showed 99% good/defective discrimination accuracy, comparable to the best reported results. Inspection time averaged 1.7 seconds, which is within the operating time standards of production processes using non-circular machine vision systems. Therefore, the effectiveness of the deep learning structure proposed in this paper to improve the quality of polygonal containers was proven.
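
As a rough illustration of the global average pooling and softmax steps this abstract describes (the feature-map values below are made up and this is not the paper's network):

```python
import math

def global_average_pool(feature_maps):
    """Reduce each 2D feature map to its single average value."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def softmax(scores):
    """Convert pooled values into probabilities between 0 and 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two hypothetical 2x2 feature maps -> one pooled value each
maps = [[[1.0, 3.0], [5.0, 7.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
pooled = global_average_pool(maps)           # one average per feature map
probs = softmax(pooled)                      # values in (0, 1), summing to 1
```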

Multi-labeled Domain Detection Using CNN (CNN을 이용한 발화 주제 다중 분류)

  • Choi, Kyoungho;Kim, Kyungduk;Kim, Yonghe;Kang, Inho
    • Proceedings of the Korean Language Information Society Conference
    • /
    • 2017.10a
    • /
    • pp.56-59
    • /
    • 2017
  • Using a CNN (Convolutional Neural Network), we perform the multi-label utterance topic classification task with both a multi-labeling method and a cluster method, and evaluate performance by applying MSE (Mean Square Error), softmax cross-entropy, and sigmoid cross-entropy to each method. The network tokenizes input at the syllable level and takes as input a sequence with part-of-speech information added to each token, together with named-entity information obtained from the Naver DB. Experiments show that the best performance, F1 0.9873, is obtained when the problem is transformed with the cluster method and the network is trained with the sigmoid as the output-layer activation function and the cross-entropy cost function.
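
The sigmoid cross-entropy objective used for the multi-label setup in this abstract treats each topic as an independent binary decision, unlike softmax, which forces a single class. A minimal sketch with hypothetical labels and logits:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_cross_entropy(logits, labels):
    """Independent binary cross-entropy per topic (multi-label, not softmax)."""
    loss = 0.0
    for z, y in zip(logits, labels):
        p = sigmoid(z)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss / len(logits)

# Hypothetical: an utterance tagged with topics 0 and 2 out of 4
labels = [1, 0, 1, 0]
logits = [2.0, -1.5, 0.8, -2.2]
loss = sigmoid_cross_entropy(logits, labels)
predicted = [1 if sigmoid(z) > 0.5 else 0 for z in logits]
```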


Deep Neural Networks Learning based on Multiple Loss Functions for Both Person and Vehicles Re-Identification (사람과 자동차 재인식이 가능한 다중 손실함수 기반 심층 신경망 학습)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.891-902
    • /
    • 2020
  • Re-Identification (Re-ID) is one of the most popular research topics in the field of computer vision due to its variety of applications. To achieve high re-identification performance, recent methods have developed deep learning-based networks specialized for either persons or vehicles only. However, most current methods are difficult to use in real-world applications that require re-identification of both persons and vehicles at the same time. To overcome this limitation, this paper proposes a deep neural network learning method that combines the triplet and softmax losses to improve performance and re-identify people and vehicles simultaneously. Combining the softmax loss with the triplet loss makes it possible to learn fine-grained differences between identities (IDs). In addition, weights are introduced to avoid biasing toward one loss when combining them. We used the Market-1501 and DukeMTMC-reID datasets, which are frequently used to evaluate person re-identification, and evaluated vehicle re-identification on the VeRi-776 and VehicleID datasets. Since the proposed method is not designed around a neural network specialized for a specific object, it can re-identify both persons and vehicles simultaneously. To demonstrate this, an experiment was performed using a combined person and vehicle re-identification dataset.
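
A minimal sketch of weighting a triplet loss together with a softmax (cross-entropy) loss, as this abstract describes. The embeddings, margin, logits, and 0.5/0.5 weights below are purely illustrative, not the paper's actual values:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull the positive closer than the negative by at least the margin."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

def softmax_ce(logits, target):
    """Cross-entropy of the softmax over ID logits for the true identity."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target]

# Hypothetical embeddings and ID logits for one training sample
anchor, positive, negative = [0.1, 0.9], [0.8, 0.2], [0.2, 0.85]
t = triplet_loss(anchor, positive, negative)
s = softmax_ce([3.0, 0.5, -1.0], target=0)
combined = 0.5 * t + 0.5 * s             # illustrative weighting of the two losses
```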

Comparison of Gradient Descent for Deep Learning (딥러닝을 위한 경사하강법 비교)

  • Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.2
    • /
    • pp.189-194
    • /
    • 2020
  • This paper analyzes the gradient descent method, the one most used for learning neural networks. Learning means updating parameters so that the loss function, which quantifies the difference between actual and predicted values, reaches its minimum. The gradient descent method uses the slope of the loss function to update the parameters to minimize error, and is currently used in libraries that provide the best deep learning algorithms. However, these algorithms are provided in the form of a black box, making it difficult to identify the advantages and disadvantages of the various gradient descent methods. This paper analyzes the characteristics of the stochastic gradient descent method, the momentum method, the AdaGrad method, and the Adadelta method, which are currently used gradient descent methods. The experiments used the modified National Institute of Standards and Technology (MNIST) dataset, which is widely used to verify neural networks. The network has two hidden layers: the first with 500 neurons and the second with 300. The activation function of the output layer is the softmax function, and the rectified linear unit function is used for the input and hidden layers. The loss function is the cross-entropy error.
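
To make two of the compared methods concrete, here is a toy comparison (our own illustration, not the paper's experiment) of plain gradient descent versus the momentum method on the one-dimensional loss f(x) = x², whose minimum is at x = 0:

```python
def grad(x):
    return 2.0 * x                       # derivative of f(x) = x**2

def sgd(x, lr=0.1, steps=50):
    """Plain gradient descent: step directly against the gradient."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum(x, lr=0.1, beta=0.9, steps=50):
    """Momentum: a velocity term accumulates past gradients."""
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x += v
    return x

x_sgd = sgd(5.0)
x_mom = momentum(5.0)
# Both trajectories converge toward the minimum at x = 0;
# momentum overshoots and oscillates before settling.
```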

Deep Learning based HEVC Double Compression Detection (딥러닝 기술 기반 HEVC로 압축된 영상의 이중 압축 검출 기술)

  • Uddin, Kutub;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1134-1142
    • /
    • 2019
  • Detection of double compression is one of the most efficient ways of verifying the authenticity of videos. Many methods have been introduced to detect HEVC double compression with different coding parameters. However, HEVC double compression detection under the same coding environment is still a challenging task in video forensics. In this paper, we introduce a novel method based on the frame partitioning information in intra prediction mode for detecting double compression under the same coding environment. We propose to extract statistical features and Deep Convolutional Neural Network (DCNN) features from the difference of the partitioning pictures, including Coding Unit (CU) and Transform Unit (TU) information. Finally, a softmax layer is integrated to classify the videos into single and double compression by combining the statistical and DCNN features. Experimental results show the effectiveness of the statistical and DCNN features, with an average accuracy of 87.5% on the WVGA dataset and 84.1% on the HD dataset.

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.234-242
    • /
    • 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image is a SIFT feature point. The dataset for this network consists of the DIV2K dataset cut into 33×33 patches, and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of the RobHess SIFT features extracted by setting the octave (scale) to 0, the sigma to 1.6, and the intervals to 3. Based on VGG-16, we construct increasingly deep networks with 13, 23, and 33 convolution layers, and experiment with changing the method of increasing the image scale. The result of using the sigmoid function as the activation function of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also high extraction repeatability for distorted images.