• Title/Summary/Keyword: Deep Learning Convergence Research (딥러닝 융합연구)

Search Results: 451

Multi-Dimensional Emotion Recognition Model of Counseling Chatbot (상담 챗봇의 다차원 감정 인식 모델)

  • Lim, Myung Jin;Yi, Moung Ho;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.10 no.4
    • /
    • pp.21-27
    • /
    • 2021
  • Recently, the importance of counseling has been increasing due to the "Corona Blue" caused by COVID-19. In addition, with the growth of non-face-to-face services, research on chatbots as a new counseling medium is being actively conducted. In non-face-to-face counseling through a chatbot, accurately understanding the client's emotions is most important. However, since there is a limit to recognizing emotions only from the sentences written by the client, it is necessary to recognize the dimensional emotions embedded in the sentences for more accurate emotion recognition. Therefore, in this paper, we propose a multi-dimensional emotion recognition model in which word vectors, generated by training a Word2Vec model on source data corrected according to its characteristics, and sentence-level VAD (Valence, Arousal, Dominance) values are learned with a deep learning algorithm. Comparing three deep learning models to verify the usefulness of the proposed model, the attention model showed the best performance with an R-squared of 0.8484.
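The abstract above combines Word2Vec word vectors with sentence-level VAD regression through an attention model. A minimal sketch of the general idea follows; the vector dimensions, the single query vector, and the linear head are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def attention_pool(word_vecs, query):
    # score each word vector against a learned query, then softmax
    scores = word_vecs @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ word_vecs  # attention-weighted sentence vector

rng = np.random.default_rng(0)
sentence = rng.normal(size=(6, 50))   # 6 words, 50-dim Word2Vec-style vectors
query = rng.normal(size=50)
pooled = attention_pool(sentence, query)

# a linear head then maps the pooled vector to the 3 VAD scores
W_vad = rng.normal(size=(50, 3))
vad_pred = pooled @ W_vad
```

In a trained model, `query` and `W_vad` would be learned jointly by minimizing a regression loss against annotated VAD values.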

Deep Video Stabilization via Optical Flow in Unstable Scenes (동영상 안정화를 위한 옵티컬 플로우의 비지도 학습 방법)

  • Bohee Lee;Kwangsu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.115-127
    • /
    • 2023
  • Video stabilization is a camera technology whose importance is gradually increasing as the personal media market grows. For deep learning-based video stabilization, existing methods collect pairs of videos before and after stabilization, but creating such synchronized data takes a lot of time and effort. Recently, to solve this problem, unsupervised learning methods using only unstable video data have been proposed. In this paper, we propose a network that learns a stabilized trajectory from unstable video alone, without paired unstable and stable videos, using a convolutional autoencoder, one of the unsupervised learning structures. Optical flow data is used as the network input and output, and the optical flow is mapped onto grid units to simplify the network and minimize noise. In addition, to generate a stabilized trajectory with unsupervised learning, we define a loss function that smooths the input optical flow data. Through comparison of the results, we confirmed that the network learns as intended by the loss function.
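The smoothing idea described above can be illustrated with a toy function; the grid shape and the mean-squared temporal difference used here are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def smoothness_loss(flow_seq):
    """Penalize frame-to-frame changes in a grid of optical flow vectors.

    flow_seq: array of shape (T, H, W, 2), per-frame flow mapped to a grid.
    The value shrinks as the implied camera trajectory becomes smoother.
    """
    temporal_diff = flow_seq[1:] - flow_seq[:-1]
    return float(np.mean(temporal_diff ** 2))

rng = np.random.default_rng(1)
shaky = rng.normal(size=(10, 4, 4, 2))   # jittery flow grid
steady = np.zeros((10, 4, 4, 2))         # perfectly stable trajectory
```

Minimizing such a loss pushes the network's output flow toward a smooth trajectory without ever seeing a stabilized ground-truth video.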

A Model of Recursive Hierarchical Nested Triangle for Convergence from Lower-layer Sibling Practices (하위 훈련 성과 융합을 위한 순환적 계층 재귀 모델)

  • Moon, Hyo-Jung
    • Journal of Digital Contents Society
    • /
    • v.19 no.2
    • /
    • pp.415-423
    • /
    • 2018
  • In recent years, computer-based learning, such as machine learning and deep learning, has been attracting attention. These methods start learning from the lowest level and propagate results to the highest level to compute a final output. The research literature has shown that systematic learning and growth can yield good results. However, compared with the many and varied research attempts, systematic growth models themselves are hard to find. To this end, this paper proposes the TNT (Transitive Nested Triangle) model, a growth-and-fusion model that can be used in various contexts. It is a recursive model in which functions formed through geometric shapes build an organic hierarchical relationship, and the results are reused as they grow and converge toward the top, an analytical approach we call 'Horizontal Sibling Merges and Upward Convergence'. The model is applicable in many areas; in this study, we focus on explaining the TNT model itself.

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.979-995
    • /
    • 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on a statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such threshold-based methods from detecting low-intensity fires and achieving generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNN), and U-Net have been applied to active fire detection. Therefore, this study proposed an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques with data from the Advanced Himawari Imager and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance further improved after using weighted loss, equal sampling, and image augmentation to address data imbalance, yielding F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that timely responses facilitated by the SOTA deep learning-based approach for active fire detection will effectively mitigate the damage caused by wildfires.
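One of the imbalance countermeasures mentioned above, a weighted loss, can be sketched as a positively weighted binary cross-entropy; the paper does not specify its exact weighting scheme, so this formulation is an assumption:

```python
import numpy as np

def weighted_bce(y_true, y_prob, pos_weight=1.0):
    """Binary cross-entropy with extra weight on the rare 'fire' class."""
    eps = 1e-7
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    per_pixel = -(pos_weight * y_true * np.log(y_prob)
                  + (1.0 - y_true) * np.log(1.0 - y_prob))
    return float(per_pixel.mean())

y_true = np.array([1.0, 0.0, 0.0, 0.0])   # one fire pixel among background
y_prob = np.array([0.3, 0.1, 0.2, 0.1])   # model badly misses the fire pixel
plain = weighted_bce(y_true, y_prob, pos_weight=1.0)
weighted = weighted_bce(y_true, y_prob, pos_weight=5.0)
```

With `pos_weight` above 1, the gradient signal from missed fire pixels dominates, counteracting the fact that fire pixels are vastly outnumbered by background.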

Robust RGB image-based gait analysis in various environment (다양한 환경에 강건한 RGB 영상 기반 보행 분석)

  • Ahn, Ji-min;Jeung, Gyeo-wun;Shin, Dong-in;Won, Geon;Park, Jong-beom
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.441-443
    • /
    • 2018
  • This paper deals with the analysis of leg motion using RGB images. We use RGB images as gait analysis elements by applying the BMC (Background Models Challenge) method and by combining an object recognition segmentation algorithm with a pose detection algorithm. Gait analysis incorporating images is expected to serve as a parameter for classifying gait patterns and recognizing abnormal gait.


Development of Illegal parking prevention system using Image Recognition (영상인식 기술을 이용한 불법 주차 방지 시스템 개발)

  • Lee, Tae-Hun;Lee, Min-Gyo;Kim, Jae-Yoon;Yoo, Hongseok
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.293-294
    • /
    • 2019
  • This paper proposes an illegal parking prevention system to support smooth electric vehicle (EV) charging, as part of EV-derived IT convergence services. Under Korean EV-related law, a non-electric vehicle that illegally parks in front of an EV charging station is subject to a fine. Accordingly, when a non-electric vehicle parks, the proposed system activates a warning light to alert the driver. The system applies deep learning-based image recognition software. We analyzed the recognition success rate under various illumination conditions and confirmed that recognition is unreliable at night, depending on the ambient light level. In future work, we plan to add supplementary LEDs to mitigate the drop in recognition rate caused by insufficient light.


Privacy Protection using Adversarial AI Attack Techniques (적대적 AI 공격 기법을 활용한 프라이버시 보호)

  • Beom-Gi Lee;Hyun-A Noh;Yubin Choi;Seo-Young Lee;Gyuyoung Lee
    • Annual Conference of KIPS
    • /
    • 2023.11a
    • /
    • pp.912-913
    • /
    • 2023
  • As AI models for image processing advance, the problem of personal information leakage is accelerating. Although AI provides convenience in many areas of life, deep learning techniques are vulnerable to adversarial examples, leaving individuals exposed in terms of security. In this study, after training a ResNet18 neural network on face images, we use a Shadow Attack to deliberately degrade the model's classification accuracy on input images, so that the recognition rate of unauthorized images is lowered, and we demonstrate its performance through experiments.
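Shadow Attack itself is a specific published method; as a generic illustration of how an adversarial perturbation degrades a classifier, here is an FGSM-style sketch on a toy logistic model (the names, sizes, and the substitution of FGSM for Shadow Attack are all illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    # gradient of the logistic loss with respect to x is (p - y) * w
    p = sigmoid(w @ x)
    grad = (p - y) * w
    # step in the sign of the gradient, staying in the valid pixel range
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(2)
w = rng.normal(size=8)        # toy classifier weights
x = rng.uniform(size=8)       # toy "image" in [0, 1]
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.1)
```

The perturbed input looks nearly identical to the original (each pixel moves by at most `eps`) yet scores lower for its true class, which is the mechanism the paper exploits for privacy protection.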

Development of a YOLOv8-Based Sashimi Image Recognition Mobile Application (YOLOv8 기반의 회 이미지 인식 모바일 애플리케이션 개발)

  • Jane Park;Youngseob Lim;Minhee Kang;Injun Kim;Yongju Cho
    • Annual Conference of KIPS
    • /
    • 2024.10a
    • /
    • pp.416-417
    • /
    • 2024
  • In this study, we developed a mobile application that can recognize various kinds of sashimi using the YOLOv8 model. When a user photographs a plate of assorted sashimi, the trained deep learning model processes the image and identifies the kinds of sashimi in it. This paper presents the system design, implementation process, and performance evaluation results of the application, focusing on the feature that allows users to check recognition results in real time.

Efficient Data Preprocessing Scheme for Audio Deep Learning in Solar-Powered IoT Edge Computing Environment (태양 에너지 수집형 IoT 엣지 컴퓨팅 환경에서 효율적인 오디오 딥러닝을 위한 데이터 전처리 기법)

  • Yeon-Tae Yoo;Chang-Han Lee;Seok-Mun Heo;Na-Kyung You;Ki-Hoon Kim;Chan-Seo Lee;Dong-Kun Noh
    • Annual Conference of KIPS
    • /
    • 2023.05a
    • /
    • pp.81-83
    • /
    • 2023
  • Because solar-harvesting IoT devices are periodically recharged by sunlight, it is more important to use the harvested energy as usefully as possible than to minimize energy consumption. Meanwhile, for reasons such as data confidentiality and privacy, response time, and cost, research on edge AI, which performs machine learning near the data source rather than in the cloud, is also active; one such line of work provides various AI applications in an IoT edge computing environment using audio data collected by multiple IoT devices. In much of the related work, however, the energy-constrained IoT devices only transmit sensed data to the edge server (IoT server), and the entire AI pipeline, including data preprocessing, runs on the edge server. This not only overloads the edge server but also overloads the network, since data unnecessary for training and inference is transmitted to the server as-is. Conversely, delegating all preprocessing to each IoT device causes another problem: device blackout time increases due to energy shortage. In this paper, each IoT device decides whether to perform preprocessing according to its energy state, mitigating the increase in device blackout time while alleviating the problems of the server-centric edge AI environment (edge server and network overload). In the proposed scheme, an IoT device estimates the surplus energy beyond what it needs for basic operation and, only when surplus energy is available, uses it to run the preprocessing steps on the device, namely, determining whether a sound is a collection target and removing noise, before transmitting to the server. The preprocessing location (IoT device or edge server) is thus decided energy-adaptively, without affecting the device's blackout time.
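The core decision in the scheme above, preprocessing on the device only when surplus energy exists, can be sketched as follows; the function name, parameters, and the surplus-energy formula are illustrative assumptions, not the paper's model:

```python
def choose_preprocess_site(battery_j, reserve_j, harvest_j, base_load_j,
                           preprocess_cost_j):
    """Return where audio preprocessing should run for the next period.

    battery_j: currently stored energy; reserve_j: energy the device must
    keep for basic operation; harvest_j / base_load_j: expected harvest and
    baseline consumption over the period; preprocess_cost_j: energy needed
    for target-sound filtering and denoising on the device (all in joules).
    """
    surplus = (battery_j - reserve_j) + (harvest_j - base_load_j)
    if surplus >= preprocess_cost_j:
        return "device"       # preprocess locally, send reduced data
    return "edge-server"      # send raw audio, server preprocesses
```

Because the reserve is never touched, a wrong estimate at worst sends raw audio to the server; it never pushes the device into blackout.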

A Study on Lightweight CNN-based Interpolation Method for Satellite Images (위성 영상을 위한 경량화된 CNN 기반의 보간 기술 연구)

  • Kim, Hyun-ho;Seo, Doochun;Jung, JaeHeon;Kim, Yongwoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.2
    • /
    • pp.167-177
    • /
    • 2022
  • Obtaining satellite image products from the imagery transmitted to the ground station involves many image pre- and post-processing steps. During this processing, geometric correction is essential when converting level 1R images to level 1G images; an interpolation method is inevitably used for the correction, and the quality of the level 1G images is determined by the accuracy of that interpolation. It is also crucial for the level processor that the interpolation algorithm be fast. In this paper, we propose a lightweight CNN-based interpolation method for the geometric correction performed when converting level 1R images to level 1G. The proposed method doubles the resolution of satellite images and uses a lightweight deep convolutional neural network for fast processing. In addition, we propose a feature map fusion method that improves the image quality of the multispectral (MS) bands using panchromatic (PAN) band information. Compared with existing deep learning-based interpolation methods, images obtained with the proposed method improved the quantitative peak signal-to-noise ratio (PSNR) by about 0.4 dB for the PAN image and about 4.9 dB for the MS image. We also confirmed that the time required to produce an image at twice the resolution of a 36,500×36,500 input (PAN image size) improved by about 1.6 times compared with the existing deep learning-based interpolation method.
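The PSNR figures quoted above follow the standard definition, which can be computed as below (a routine sketch, assuming 8-bit imagery with a peak value of 255):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64)
                   - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")   # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.full((4, 4), 128.0)
degraded = ref + 1.0          # off by one grey level everywhere -> MSE = 1
```

A 0.4 dB or 4.9 dB gain in this metric corresponds to a measurable reduction in mean squared reconstruction error against the reference image.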