• Title/Summary/Keyword: Pretrained

Feature Extraction on a Periocular Region and Person Authentication Using a ResNet Model (ResNet 모델을 이용한 눈 주변 영역의 특징 추출 및 개인 인증)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.22 no.12 / pp.1347-1355 / 2019
  • Deep learning approaches based on convolutional neural networks (CNNs) have been studied extensively in the field of computer vision. However, periocular feature extraction using a CNN has not been well studied, because it is practically impossible to collect a large volume of biometric data. This study uses a ResNet model trained on the ImageNet dataset. To overcome the problem of insufficient training data, we focus on training a multi-layer perceptron (MLP) with a simple structure rather than training a CNN with a complex structure. The method first extracts features using the pretrained ResNet model, reduces the feature dimension by principal component analysis (PCA), and then trains an MLP classifier. Experimental results on the public periocular dataset UBIPr show that the proposed method is effective for person authentication using the periocular region. In particular, it has the advantage that it can be applied directly to other biometric traits.
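
A minimal sketch of the pipeline this abstract describes, assuming a frozen ImageNet-pretrained ResNet-50 from torchvision and scikit-learn for the PCA and MLP stages; the backbone depth, PCA dimension, and MLP size are illustrative guesses rather than the paper's settings, and `X_train`/`y_train`/`X_test`/`y_test` are placeholder variables.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Frozen ResNet backbone; drop the final classification layer.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return resnet(batch).numpy()               # (N, 2048) feature vectors

# X_train / X_test: lists of periocular PIL images; y_*: identity labels.
feats_train = extract_features(X_train)
pca = PCA(n_components=128).fit(feats_train)    # reduce 2048 -> 128 dims
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
clf.fit(pca.transform(feats_train), y_train)
print(clf.score(pca.transform(extract_features(X_test)), y_test))
```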

A Study on Modified MLP Learning using Pretrained RBM (RBM 선행학습을 이용한 개선 MLP 학습에 관한 연구)

  • Kim, Tae-Hun;Lee, Yill-Byung
    • Proceedings of the Korean Information Science Society Conference / 2007.06c / pp.380-384 / 2007
  • Learning with a multi-layer perceptron (MLP) has the advantage that nonlinear classification is possible despite its simple structure. However, because it relies on the error backpropagation algorithm, it is time-consuming, and the possibility of convergence to an unwanted result cannot be ruled out. These problems arise from the strong dependence on the initial settings: if the network is initialized close to a good solution, learning performance is good, but if it is initialized far from a good solution, learning performance drops sharply. This paper proposes a modified MLP learning algorithm that, before the main training over all layers of the MLP, performs layer-wise pretraining using a restricted Boltzmann machine (RBM) and uses the resulting weights and biases as the initialization data for the main MLP training. Using this method not only speeds up MLP training but also prevents convergence to unwanted local solutions, improving overall learning performance.
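
A minimal sketch of the proposed scheme, assuming scikit-learn's BernoulliRBM for the greedy layer-wise pretraining and a small PyTorch MLP that takes the RBM weights and biases as its initialization; the layer sizes, the `X_train` placeholder, and the class count are illustrative.

```python
import torch
import torch.nn as nn
from sklearn.neural_network import BernoulliRBM

sizes = [784, 256, 64]        # visible -> hidden layer widths (assumed)
n_classes = 10                # placeholder output size
X = X_train                   # (N, 784) numpy inputs scaled to [0, 1]

# Greedy layer-wise RBM pretraining: each RBM's hidden output feeds the next.
rbms = []
for n_hidden in sizes[1:]:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=20)
    X = rbm.fit_transform(X)
    rbms.append(rbm)

# MLP whose hidden layers are initialized from the pretrained RBMs.
mlp = nn.Sequential(
    nn.Linear(sizes[0], sizes[1]), nn.Sigmoid(),
    nn.Linear(sizes[1], sizes[2]), nn.Sigmoid(),
    nn.Linear(sizes[2], n_classes),
)
with torch.no_grad():
    for i, rbm in enumerate(rbms):
        layer = mlp[2 * i]
        # components_ is (n_hidden, n_visible), matching nn.Linear's weight.
        layer.weight.copy_(torch.tensor(rbm.components_, dtype=torch.float32))
        layer.bias.copy_(torch.tensor(rbm.intercept_hidden_, dtype=torch.float32))
# ...then fine-tune mlp with the usual cross-entropy + backprop loop.
```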

On-line model compensation using noise masking effect for robust speech recognition (잡음 차폐를 이용한 온라인 모델 보상)

  • Jung Gue-Jun;Cho Hoon-Young;Oh Yung-Hwan
    • Proceedings of the KSPS conference / 2003.05a / pp.215-218 / 2003
  • In this paper, we apply PMC (parallel model combination) to a speech recognition system on-line. As a representative model-based noise compensation technique, PMC compensates for environmental mismatch by combining pretrained clean-speech models with noise information estimated in real time. This approach is very effective for compensating extreme environmental mismatch, but its heavy computational cost makes it unsuitable for on-line systems. To reduce the computational cost and apply PMC on-line, we use a noise masking effect, whereby the energy in a frequency band is dominated either by the clean speech energy or by the noise energy, in the model compensation process. Experiments on artificially produced noisy speech data confirm that the proposed technique is fast and effective for on-line model compensation.
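
To make the masking idea concrete, here is a small numpy illustration (our own, not the authors' code) on log filter-bank energies: an exact PMC-style combination log-adds the clean and noise terms, while the masking approximation keeps only the dominant source per band, which is far cheaper. Real PMC operates on HMM means mapped between cepstral and spectral domains; that machinery is omitted here.

```python
import numpy as np

def pmc_log_add(clean_mean, noise_mean):
    """Exact log-domain combination of two log-energy vectors."""
    return np.log(np.exp(clean_mean) + np.exp(noise_mean))

def pmc_masked(clean_mean, noise_mean):
    """Masking approximation: the dominant source masks the other,
    so each band is just the element-wise maximum (much cheaper)."""
    return np.maximum(clean_mean, noise_mean)

clean = np.array([2.0, 5.0, 1.0, 4.0])  # log energies of a clean-speech mean
noise = np.array([3.5, 1.0, 3.0, 0.5])  # on-line estimated noise log energies
print(pmc_log_add(clean, noise))         # approx [3.70, 5.02, 3.13, 4.03]
print(pmc_masked(clean, noise))          # [3.5, 5.0, 3.0, 4.0]
```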

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal / v.43 no.6 / pp.1038-1048 / 2021
  • We propose an end-to-end neural coreference resolution model for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates, based on the head-final characteristic of the Korean language, and to learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representations from transformers (BERT) in natural language processing tasks, we employ BERT in the proposed model to create word representations based on contextual information. The experimental results indicate that the proposed model achieves state-of-the-art performance in Korean coreference resolution.
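
A rough PyTorch sketch, under our own assumptions, of the core mechanism the abstract describes: BERT encodes the document, the head tokens of the candidate nouns are gathered, and an attention (pointer) distribution over the candidate positions is computed for each noun. The class name, the multilingual BERT checkpoint, and the projection layers are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HeadPointer(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.query = nn.Linear(hidden, hidden)  # noun acting as a mention
        self.key = nn.Linear(hidden, hidden)    # candidate antecedents

    def forward(self, input_ids, attention_mask, noun_positions):
        # (batch, seq_len, hidden) contextual representations
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        # noun_positions: (batch, n_nouns) indices of noun head tokens
        nouns = h[torch.arange(h.size(0)).unsqueeze(1), noun_positions]
        q, k = self.query(nouns), self.key(nouns)
        scores = q @ k.transpose(-1, -2)         # (batch, n_nouns, n_nouns)
        # each noun points at its referenced entity among the nouns
        return scores.softmax(dim=-1)
```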

Ensemble Learning Based on Tumor Internal and External Imaging Patch to Predict the Recurrence of Non-small Cell Lung Cancer Patients in Chest CT Image (흉부 CT 영상에서 비소세포폐암 환자의 재발 예측을 위한 종양 내외부 영상 패치 기반 앙상블 학습)

  • Lee, Ye-Sel;Cho, A-Hyun;Hong, Helen
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.373-381 / 2021
  • In this paper, we propose a classification model based on a convolutional neural network (CNN) for predicting 2-year recurrence in non-small cell lung cancer (NSCLC) patients from preoperative chest CT images. Based on a region of interest (ROI) defined over the tumor's internal and external area, the input images consist of an intratumoral patch, a peritumoral patch, and a peritumoral texture patch that focuses on the texture information of the peritumoral region. Each patch is used to train an AlexNet pretrained on ImageNet, in order to explore the usefulness and performance of the various patches. Additionally, ensemble learning over the networks trained on the individual patches is used to analyze the performance of different patch combinations. Among all results, the ensemble model with the intratumoral and peritumoral patches achieved the best performance (ACC = 98.28%, Sensitivity = 100%, NPV = 100%).
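
A hedged sketch of the ensemble described above, assuming torchvision's ImageNet-pretrained AlexNet with a replaced final layer and simple soft voting over the per-patch softmax outputs; the fine-tuning loop and the third (texture) branch are omitted, and all names are ours.

```python
import torch
import torchvision.models as models

def make_patch_net(num_classes=2):
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    net.classifier[6] = torch.nn.Linear(4096, num_classes)  # new head
    return net

# One network per input representation described in the paper.
intratumoral_net = make_patch_net()
peritumoral_net = make_patch_net()
# ...each is fine-tuned on its own patch dataset (omitted here).

@torch.no_grad()
def ensemble_predict(intra_patch, peri_patch):
    """Average the per-patch softmax probabilities (soft voting)."""
    p1 = intratumoral_net(intra_patch).softmax(dim=-1)
    p2 = peritumoral_net(peri_patch).softmax(dim=-1)
    return ((p1 + p2) / 2).argmax(dim=-1)  # 0: no recurrence, 1: recurrence
```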

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / v.44 no.3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to a knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires only a small amount of training data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that of an NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best recognition accuracy for extracting SPO tuples from NL sentences even though it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the training data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
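
The following is our own illustrative PyTorch reading of a self-attention multilayered Bi-LSTM tagger for SPO recognition; the paper's dependency-grammar input features are abstracted away as precomputed embeddings, and the layer sizes and tag inventory are assumptions.

```python
import torch
import torch.nn as nn

class MBiLSTMTagger(nn.Module):
    def __init__(self, input_dim, hidden_dim=256, num_layers=3, num_tags=4):
        super().__init__()
        # multilayered bidirectional LSTM over token features
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                          batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # S / P / O / other

    def forward(self, x):               # x: (batch, seq_len, input_dim)
        h, _ = self.lstm(x)
        h, _ = self.attn(h, h, h)       # self-attention over LSTM states
        return self.out(h)              # per-token SPO tag logits
```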

Dense Retrieval using Pretrained RoBERTa with Augmented Query (증강된 질문을 이용한 RoBERTa 기반 Dense Passage Retrieval)

  • Jun-Bum Park;Beomseok Hong;Wonseok Choi;Youngsub Han;Byoung-Ki Jeon;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2022.10a / pp.141-145 / 2022
  • In a multi-document-grounded dialogue system, the response component must begin by retrieving the document most relevant to the question from among several documents in order to generate a correct answer. Recent studies, including the DialDoc 2022 Shared Task[1], use the Dense Passage Retrieval (DPR)[2] model for the document retrieval step of dialogue systems, and methods such as re-ranking and hard negative sampling have been studied to improve retriever performance. In this paper, to use the given data efficiently when the amount of document-grounded dialogue data is small or limited, we present a method that uses a generative model to generate questions based on the entities of a document and augments the existing data with them. The experimental results show performance improvements of 0.96 to 1.56 in the MRR metric and of 1.2 to 1.57 in the R@1 metric.
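
A hedged sketch of the augmentation idea, using generic Hugging Face pipelines as stand-ins for the paper's models: entities are extracted from a passage, a generation model produces a question per entity, and the (question, passage) pairs are added as extra positives for retriever training. The NER and flan-t5 checkpoints and the prompt format are our placeholders.

```python
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def augment_queries(passage, max_new=3):
    """Generate entity-grounded questions and pair them with the passage."""
    entities = {e["word"] for e in ner(passage)}
    new_pairs = []
    for entity in list(entities)[:max_new]:
        prompt = f"Generate a question about {entity}: {passage}"
        question = generator(prompt, max_new_tokens=32)[0]["generated_text"]
        new_pairs.append((question, passage))  # positive pair for DPR
    return new_pairs

# train_pairs += augment_queries(doc) for each doc in the corpus.
```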

Real-Time Arbitrary Face Swapping System For Video Influencers Utilizing Arbitrary Generated Face Image Selection

  • Jihyeon Lee;Seunghoo Lee;Hongju Nam;Suk-Ho Lee
    • International Journal of Internet, Broadcasting and Communication / v.15 no.2 / pp.31-38 / 2023
  • This paper introduces a real-time face swapping system that enables video influencers to swap their faces with arbitrary generated face images of their choice. The system is implemented as a Django-based server that uses a REST request to communicate with the generative model, specifically a pretrained Stable Diffusion model. Once generated, the face image is displayed on the front page so that the influencer can decide whether or not to use it by clicking the accept button. If they choose to use it, both their face and the generated face are sent to the landmark extraction module, and the extracted landmarks are then used to swap the faces. To minimize the fluctuation of the landmarks over time, which can cause instability or jitter in the output, a temporal filtering step is added. Furthermore, to increase processing speed, the system works on a reduced set of the extracted landmarks.
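
The abstract's temporal filtering step could look like the following exponential-moving-average smoother over landmark coordinates; this is a minimal sketch under our own assumptions, and the filter form, coefficient, and function names are not taken from the paper.

```python
import numpy as np

class LandmarkSmoother:
    def __init__(self, alpha=0.6):
        self.alpha = alpha   # higher alpha -> follows new frames faster
        self.state = None    # smoothed (n_landmarks, 2) array

    def update(self, landmarks):
        """Blend the new frame's landmarks with the running estimate."""
        pts = np.asarray(landmarks, dtype=np.float64)
        if self.state is None:
            self.state = pts
        else:
            self.state = self.alpha * pts + (1 - self.alpha) * self.state
        return self.state

smoother = LandmarkSmoother()
# per frame: swap_faces(frame, smoother.update(extract_landmarks(frame)))
```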

HeavyRoBERTa: Pretrained Language Model for Heavy Industry (HeavyRoBERTa: 중공업 특화 사전 학습 언어 모델)

  • Lee, Jeong-Doo;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.602-604 / 2021
  • Recently, pretrained language models have been applied to various downstream tasks in natural language processing and have improved their performance. However, a language model pretrained on a general corpus does not perform well on tasks in specialized domains such as heavy industry. To address this problem, this paper proposes HeavyRoBERTa, a RoBERTa-based language model specialized for the heavy industry domain and pretrained on a heavy-industry corpus, which improves performance both in perplexity on the heavy-industry corpus and on a zero-shot synonym extraction task.
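
Domain-adaptive pretraining of the kind described here is commonly done by continuing masked-language-model training on the domain corpus; below is a hedged sketch using Hugging Face transformers, where the roberta-base checkpoint, the heavy_industry.txt corpus file, and all hyperparameters are placeholders rather than the paper's setup.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Plain-text domain corpus, tokenized for masked language modeling.
corpus = load_dataset("text", data_files={"train": "heavy_industry.txt"})
tokenized = corpus["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="heavy-roberta", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()  # the adapted model can then be evaluated via perplexity
```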

Korean Dependency Parsing using Pretrained Language Model and Specific-Abstraction Encoder (사전 학습 모델과 Specific-Abstraction 인코더를 사용한 한국어 의존 구문 분석)

  • Kim, Bongsu;Whang, Taesun;Kim, Jungwook;Lee, Saebyeok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.98-102 / 2020
  • Dependency parsing is a natural language processing task that predicts the dependency relations between the words (eojeols) of an input sentence. Recently, dependency parsing models based on pretrained models such as BERT have shown high performance. In this paper, for additional performance improvement, we train ALBERT and ELECTRA language models with morphological analysis and BPE applied, and use them in the encoding step. We also propose a dependency parsing model that adds two transformer encoder stacks to abstract the specific features of dependent and head eojeols. Experimental results show that the proposed model achieves UAS 94.77 and LAS 94.06 on the Sejong corpus.
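
Our rough PyTorch reading of the specific-abstraction idea: the shared pretrained-encoder output passes through two separate transformer encoder stacks, one abstracting dependent-eojeol features and one head-eojeol features, before bilinear arc scoring. The KoELECTRA checkpoint, stack depth, and scorer are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SpecificAbstractionParser(nn.Module):
    def __init__(self, plm_name="monologg/koelectra-base-v3-discriminator"):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)
        hidden = self.plm.config.hidden_size
        layer = nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
        # two separate stacks (TransformerEncoder deep-copies the layer)
        self.dep_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.W = nn.Parameter(torch.empty(hidden, hidden))
        nn.init.xavier_uniform_(self.W)

    def forward(self, input_ids, attention_mask):
        h = self.plm(input_ids=input_ids,
                     attention_mask=attention_mask).last_hidden_state
        dep = self.dep_encoder(h)    # dependent-specific abstraction
        head = self.head_encoder(h)  # head-specific abstraction
        # bilinear arc scores: entry (i, j) scores token j as head of token i
        return dep @ self.W @ head.transpose(-1, -2)  # (batch, n, n)
```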
