• Title/Summary/Keyword: Image Dataset


An Efficient Text Detection Model using Bidirectional Feature Fusion (양방향 특징 결합을 이용한 효율적 문자 탐지 모델)

  • Lim, Seong-Taek;Choi, Hoeryeon;Lee, Hong-Chul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.67-68 / 2021
  • Conventional object detection applies bounding-box regression, but text is an object subject to severe distortion and deformation, so U-net-style image segmentation is often used instead. Accordingly, recent text detection research has concentrated on deep neural network models, which show higher accuracy than statistical models. This study proposes a text detection model that uses a bidirectional feature fusion technique with image segmentation. Because segmentation-based methods are memory-inefficient, a lightweight network is applied in the feature extraction stage. In addition, a bidirectional feature fusion module, which has achieved strong results in object detection, is added to the U-net structure so that the extracted features are fused effectively. Experiments on a synthetic text dataset confirmed that the text detection performance of the proposed model improves on the conventional U-net segmentation approach.
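The bidirectional feature fusion the abstract refers to is commonly implemented as a weighted sum of feature maps from different pyramid levels. The sketch below shows the fast-normalized fusion used in BiFPN-style modules, not the authors' actual code; `fuse_features` and its inputs are hypothetical and assume the maps have already been resized to a common shape.

```python
import numpy as np

def fuse_features(features, weights, eps=1e-4):
    """Fast normalized fusion: a weighted sum of same-shaped feature
    maps, with non-negative weights normalized to sum to one."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU keeps weights non-negative
    w = w / (w.sum() + eps)                                # normalize to a convex combination
    return sum(wi * f for wi, f in zip(w, features))

# Two 4x4 "feature maps" from different pyramid levels (already resized to match).
top_down = np.ones((4, 4))
bottom_up = np.zeros((4, 4))
fused = fuse_features([top_down, bottom_up], weights=[3.0, 1.0])
```

In a real network the weights would be learnable parameters; here they are fixed constants for illustration.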


Deep Clustering Based on Vision Transformer(ViT) for Images (이미지에 대한 비전 트랜스포머(ViT) 기반 딥 클러스터링)

  • Hyesoo Shin;Sara Yu;Ki Yong Lee
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.363-365 / 2023
  • This paper proposes a deep clustering technique based on the Vision Transformer (ViT), which emerged as attention mechanisms were applied to image processing, in order to overcome one of ViT's limitations. ViT uses only Transformers, converting patches of the input image into vectors for learning; since it does not use a convolutional neural network (CNN), it places no restriction on input image size and shows high performance. However, it is difficult to train on small datasets. The proposed deep clustering technique first passes the input images through an embedding model to extract embedding vectors and performs clustering, then updates the embedding vectors to reflect the clustering result, thereby improving the clusters, and repeats this process. Experiments confirmed that this improves the ViT model's ability to capture general patterns and yields more accurate clustering results.
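The embed-cluster-update loop the abstract describes can be sketched with a toy alternation between k-means assignment and an embedding update. This is a minimal stand-in, assuming fixed 2-D embeddings instead of a trained ViT; the function names are hypothetical, and pulling points toward their centers substitutes for retraining the embedding model on cluster labels.

```python
import numpy as np

def kmeans_assign(x, centers):
    # assign each embedding to its nearest cluster center
    d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def deep_cluster_step(x, centers, lr=0.5):
    """One round of the embed -> cluster -> update loop: cluster the
    embeddings, then pull each embedding toward its assigned center
    (a stand-in for retraining the embedding model on cluster labels)."""
    labels = kmeans_assign(x, centers)
    x = x + lr * (centers[labels] - x)     # update embeddings toward their clusters
    for k in range(len(centers)):          # recompute the cluster centers
        if (labels == k).any():
            centers[k] = x[labels == k].mean(axis=0)
    return x, centers, labels

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
centers = np.array([[0.5, 0.5], [2.5, 2.5]])
for _ in range(5):
    x, centers, labels = deep_cluster_step(x, centers)
```

After a few rounds the two synthetic groups separate cleanly into the two clusters.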

Training of a Siamese Network to Build a Tracker without Using Tracking Labels (샴 네트워크를 사용하여 추적 레이블을 사용하지 않는 다중 객체 검출 및 추적기 학습에 관한 연구)

  • Kang, Jungyu;Song, Yoo-Seung;Min, Kyoung-Wook;Choi, Jeong Dan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.274-286 / 2022
  • Multi-object tracking has long been studied in computer vision and plays a critical role in applications such as autonomous driving and driver assistance. Multi-object tracking techniques generally consist of a detector that detects objects and a tracker that follows the detected objects. Various publicly available datasets allow a detector model to be trained without much effort. However, relatively few public datasets exist for training a tracker model, and building one's own tracker dataset takes far longer than building a detector dataset. Hence, the detector is often developed separately from the tracker module, but the separate tracker must then be adjusted whenever the detector model changes. This study proposes a system that trains a model performing detection and tracking simultaneously using only detector training datasets. In particular, a Siamese network with augmentation is used to compose the detector and tracker. Experiments on public datasets verify that the proposed algorithm yields a real-time multi-object tracker comparable to state-of-the-art tracker models.
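At inference time, a Siamese-style tracker associates detections across frames by comparing their appearance embeddings. The sketch below shows only that generic association step with greedy cosine-similarity matching; it is not the authors' system (which trains detection and tracking jointly), and `associate` and its toy embeddings are illustrative assumptions.

```python
import numpy as np

def associate(prev_emb, curr_emb, threshold=0.5):
    """Greedy matching of detections across frames by cosine similarity
    of their (Siamese) appearance embeddings."""
    prev = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
    curr = curr_emb / np.linalg.norm(curr_emb, axis=1, keepdims=True)
    sim = prev @ curr.T                      # pairwise cosine similarity
    matches = {}
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        if sim[i, j] >= threshold:
            matches[i] = j                   # track i continues as current detection j
    return matches

# Two tracks from the previous frame, two detections in the current frame
# (the objects have swapped order in the detection list).
prev_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
curr_emb = np.array([[0.0, 0.9], [0.9, 0.1]])
matches = associate(prev_emb, curr_emb)
```

Production trackers typically replace the greedy loop with Hungarian assignment and add motion cues, but the similarity computation is the same.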

The Automated Scoring of Kinematics Graph Answers through the Design and Application of a Convolutional Neural Network-Based Scoring Model (합성곱 신경망 기반 채점 모델 설계 및 적용을 통한 운동학 그래프 답안 자동 채점)

  • Jae-Sang Han;Hyun-Joo Kim
    • Journal of The Korean Association For Science Education / v.43 no.3 / pp.237-251 / 2023
  • This study explores the possibility of automated scoring of scientific graph answers by designing an automated scoring model using convolutional neural networks and applying it to students' kinematics graph answers. The researchers prepared 2,200 answers, divided into 2,000 training data and 200 validation data. Additionally, 202 student answers were divided into 100 training data and 102 test data. First, in designing the automated scoring model and validating its performance, the model was optimized for graph image classification using the answer dataset prepared by the researchers. Next, the model was trained with various types of training datasets and used to score the student test dataset. The performance of the automated scoring model improved as the training data increased in amount and diversity. Finally, compared to human scoring, the accuracy was 97.06%, the kappa coefficient was 0.957, and the weighted kappa coefficient was 0.968. On the other hand, for answer types not included in the training data, scoring was almost identical among human scorers, whereas the automated scoring model scored inaccurately.
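The kappa coefficient reported above measures rater agreement corrected for chance. A minimal sketch of Cohen's kappa, with made-up labels rather than the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters corrected for the
    agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)        # chance agreement
              for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative human-vs-model scores (1 = correct, 0 = incorrect).
human = [1, 1, 0, 0, 1, 0, 1, 1]
model = [1, 1, 0, 0, 1, 0, 1, 0]
kappa = cohens_kappa(human, model)
```

The weighted variant used in the paper additionally penalizes disagreements by their distance on an ordinal score scale.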

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, the performance of CNN algorithms has reached the level of adjusting to the source data itself. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only the environment-specific information of each image but even identical information is represented by different features, which can degrade the performance of an image analysis model. Therefore, this paper proposes a method to improve the performance of an image color constancy model based on adversarial learning that uses image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the lighting value. An experiment on 7,022 images taken in heterogeneous environments showed that the proposed methodology outperforms existing methods in terms of angular error.
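The angular error used as the evaluation metric here is the standard color-constancy measure: the angle between the estimated and ground-truth illumination vectors in RGB space. A small sketch of that metric (the function name and the sample vectors are illustrative, not from the paper):

```python
import numpy as np

def angular_error_deg(est, truth):
    """Angular error between an estimated and a ground-truth
    illumination vector, in degrees (standard color-constancy metric)."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    cos = est @ truth / (np.linalg.norm(est) * np.linalg.norm(truth))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# A badly off illumination estimate versus a pure-red ground truth.
err = angular_error_deg([1.0, 1.0, 0.0], [1.0, 0.0, 0.0])
```

Because the metric depends only on direction, it is insensitive to the overall brightness of the estimate, which is why it is preferred over Euclidean distance for illumination estimation.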

A Study on data pre-processing for rainfall estimation from CCTV videos (CCTV 영상 기반 강수량 산정을 위한 데이터 전처리 방안 연구)

  • Byun, Jongyun;Jun, Changhyun;Lee, Jinwook;Kim, Hyeonjun;Cha, Hoyoung
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.167-167 / 2022
  • Discussion of data quality management has continued in recent big-data research. In particular, deep learning techniques used for image processing and analysis have the advantage of enabling unsupervised learning by extracting data features through classification and pattern recognition, yet they face limits in volume, variety, velocity, and veracity when handling big data. This study proposes data preprocessing methods that improve prediction accuracy and performance in developing a rainfall estimation model based on CCTV videos. CCTV cameras were installed at four AWS sites near Seoul (Gimpo-Janggi, Hanam-Deokpung, Gangdong, Seongnam) and at the Chung-Ang University site, and up to nine months of video were collected to develop a deep learning model for rainfall estimation. Algorithms for background separation, illumination adjustment, region setting, data augmentation, and outlier classification were developed to preprocess the dataset itself, and the results were compared and analyzed against existing observations. Applying the proposed preprocessing methods reduced the root mean square error (RMSE), the metric chosen to evaluate the rainfall model's prediction accuracy, by about 30%. The results confirm the feasibility of rainfall estimation from CCTV video data and, in particular, suggest criteria for the preprocessing methods needed when developing deep learning models.
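The RMSE metric used to quantify the roughly 30% improvement is straightforward to compute. A minimal sketch with made-up rainfall values, not the study's observations:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between estimated and observed rainfall."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Illustrative hourly rainfall estimates (mm) vs. AWS observations.
before = rmse([2.0, 4.0, 6.0], [1.0, 5.0, 5.0])
```

In the study's setting, the same metric would be computed once for the model trained on raw frames and once after preprocessing, and the two values compared.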


Recent advances in few-shot learning for image domain: a survey (이미지 분석을 위한 퓨샷 학습의 최신 연구동향)

  • Ho-Sik Seok
    • Journal of IKEEE / v.27 no.4 / pp.537-547 / 2023
  • In many domains, lack of data inhibits the adoption of advanced machine learning models. Recently, Few-Shot Learning (FSL) has been actively studied to tackle this problem. Utilizing prior knowledge obtained through observations on related domains, FSL achieves significant performance with only a few samples. In this paper, we present a survey on FSL in terms of data augmentation, embedding and metric learning, and meta-learning. In addition to notable research, we also introduce major benchmark datasets. FSL is widely adopted in various domains, but we focus on image analysis in this paper.

A Licence Plate Recognition System using Hadoop (하둡을 이용한 번호판 인식 시스템)

  • Park, Jin-Woo;Park, Ho-Hyun
    • Journal of IKEEE / v.21 no.2 / pp.142-145 / 2017
  • The current trend in image processing is toward high quality and high resolution, and the size and amount of image data are increasing exponentially with the development of information and communication technology. Thus, license plate recognition on a single processor cannot handle the increasing data volume. This paper proposes a license plate recognition system using the distributed processing framework Hadoop. Using Hadoop's SequenceFile format, each mapper performs license plate recognition on the image data in one data block. Experimental results show that license plate recognition with 16 data nodes achieves a speedup of up to 14.7 times compared with one data node. On large datasets, recognition performance remains robust even as the number of data nodes increases.
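The data-parallel pattern the abstract describes (one mapper per block of images) can be mimicked outside Hadoop with a thread pool. This is a hypothetical sketch of the pattern only: `recognize_plate` is a stand-in that fabricates its output, whereas the paper's mappers run actual plate recognition inside Hadoop over SequenceFile blocks.

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_plate(image_block):
    """Stand-in for per-block plate recognition: each 'mapper' receives
    one block of image IDs and returns the plates it recognized."""
    return [f"plate-{img_id}" for img_id in image_block]

# Split the image IDs into blocks, one block per mapper, then map in parallel
# and flatten the per-block results, preserving order.
image_ids = list(range(12))
blocks = [image_ids[i:i + 4] for i in range(0, len(image_ids), 4)]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = [p for block_out in pool.map(recognize_plate, blocks) for p in block_out]
```

The block size plays the role of the HDFS block: larger blocks amortize per-task overhead, smaller blocks balance load across nodes.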

A Comparative Study on Artificial Intelligence Model Performance between Image and Video Recognition in the Fire Detection Area (화재 탐지 영역의 이미지와 동영상 인식 사이 인공지능 모델 성능 비교 연구)

  • Jeong Rok Lee;Dae Woong Lee;Sae Hyun Jeong;Sang Jeong
    • Journal of the Society of Disaster Information / v.19 no.4 / pp.968-975 / 2023
  • Purpose: The false-positive rate for flames and smoke is high when detecting fires; we propose a method and dataset for recognizing and classifying fire situations to reduce the false detection rate. Method: Using video as learning data, the characteristics of fire situations were extracted and applied to a classification model. For evaluation, the performance of YOLOv8 and SlowFast was compared and analyzed using the fire dataset of the National Information Society Agency (NIA). Result: YOLO's detection performance varies sensitively with the influence of the background, and it failed to properly detect fires when the fire was too large or too small. Since SlowFast learns along the time axis of the video, we confirmed that it detects fires excellently even in situations where the shape of an atypical object cannot be clearly inferred because the surroundings are blurry or bright. Conclusion: The fire detection rate was better when using a video-based artificial intelligence detection model rather than image data.

AMD Identification from OCT Volume Data Acquired from Heterogeneous OCT Machines using Deep Convolutional Neural Network (이종의 OCT 기기로부터 생성된 볼륨 데이터로부터 심층 컨볼루션 신경망을 이용한 AMD 진단)

  • Kwon, Oh-Heum;Jung, Yoo Jin;Kwon, Ki-Ryong;Song, Ha-Joo
    • Database Research / v.34 no.3 / pp.124-136 / 2018
  • There has been active research on using neural networks to analyze OCT images and make medical decisions. One requirement for these approaches to be promising is that the trained network generalize to new devices without a substantial loss of performance. In this paper, we use a deep convolutional neural network to distinguish AMD patients from normal ones. The network was trained on a dataset generated from one OCT device, and we observed significant performance degradation when it was applied to a new dataset obtained from a different OCT device. To overcome this degradation, we propose an image normalization method that segments OCT images to identify the retina area and aligns the images so that the retina region lies horizontally. We experimentally evaluated the proposed method, and the experiment confirmed a significant performance improvement.
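The alignment step described above can be illustrated with a crude column-wise flattening: estimate the retina surface in each column and shift columns so the surface becomes horizontal. This is a toy sketch, assuming the brightest pixel per column marks the retina, which substitutes for the paper's actual segmentation.

```python
import numpy as np

def flatten_retina(oct_image):
    """Align an OCT B-scan so the retina lies horizontally: take the
    brightest row in each column as a crude retina-surface estimate and
    shift every column so those rows line up at a common baseline."""
    surface = oct_image.argmax(axis=0)       # per-column retina row
    target = int(np.median(surface))         # common baseline row
    aligned = np.zeros_like(oct_image)
    for col in range(oct_image.shape[1]):
        aligned[:, col] = np.roll(oct_image[:, col], target - surface[col])
    return aligned

# Tilted synthetic "retina": a bright line whose row drifts across columns.
img = np.zeros((10, 5))
for col in range(5):
    img[3 + col, col] = 1.0
aligned = flatten_retina(img)
```

Normalizing geometry this way removes a device-dependent nuisance factor before classification, which is the intuition behind the reported cross-device improvement.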