• Title/Summary/Keyword: detection model


YOLO, EAST : Comparison of Scene Text Detection Performance, Using a Neural Network Model (YOLO, EAST: 신경망 모델을 이용한 문자열 위치 검출 성능 비교)

  • Park, Chan Yong;Lim, Young Min;Jeong, Seung Dae;Cho, Young Heuk;Lee, Byeong Chul;Lee, Gyu Hyun;Kim, Jin Wook
    • KIPS Transactions on Software and Data Engineering, v.11 no.3, pp.115-124, 2022
  • In this paper, the YOLO and EAST models are tested to analyze their text-area detection performance on real-world and ordinary text images. Earlier YOLO models, up to and including YOLOv3, have been known to underperform in detecting text areas in given images, but the recently released YOLOv4 and YOLOv5 achieve promising performance in detecting text areas in various images. Experimental results show that both the YOLOv4 and YOLOv5 models can be expected to see wide use for text detection in the field of scene text recognition.

Alarm program through image processing based on Machine Learning (ML 기반의 영상처리를 통한 알람 프로그램)

  • Kim, Deok-Min;Chung, Hyun-Woo;Park, Goo-Man
    • Proceedings of the Korean Society of Broadcast Engineers Conference, fall, pp.304-307, 2021
  • Diverse research and development is under way to make machine learning (ML) technology practical and usable for general users. In particular, the processing speed of the computing units in personal computers and mobile devices has increased markedly in recent years, bringing ML ever closer to everyday life. Many tools and libraries that provide ML solutions and applications have been released; among them, we used Mediapipe, developed and distributed by Google. Mediapipe currently supports development on Android, iOS, C++, Python, JavaScript, and Coral, with support for more environments planned. Based on the Mediapipe framework, our team researched and developed an alarm program that provides convenience to general users through machine-learning-based image processing. Mediapipe detects the body as landmarks; we used the scikit-learn machine learning library to train and model specific postures from these landmarks so that they can serve as trigger conditions for particular alarm features. scikit-learn is readily available in development-environment packages such as Anaconda, a distribution that bundles libraries commonly used with Python for tasks such as data analysis and plotting. Accordingly, in building the ML-based image-processing alarm program, we used the tkinter GUI toolkit included with Python, together with OpenCV, a programming library for real-time computer vision originally developed by Intel, and other components to construct the environment.
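The landmark-to-posture step described above can be sketched with scikit-learn alone. The landmark vectors below are synthetic stand-ins for Mediapipe's 33 pose landmarks, and the two posture classes are hypothetical examples, not the ones used in the paper:

```python
# Sketch of the posture-classification step: landmark vectors are
# flattened into features and fed to a scikit-learn model. The landmarks
# here are synthetic stand-ins, and the two classes are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each sample: 33 landmarks x (x, y, z) -> a 99-dimensional vector.
# Class 0 = "upright", class 1 = "slumped" (the hypothetical alarm pose).
n = 100
upright = rng.normal(loc=0.0, scale=0.1, size=(n, 99))
slumped = rng.normal(loc=0.5, scale=0.1, size=(n, 99))
X = np.vstack([upright, slumped])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)

# The alarm logic would branch on a per-frame prediction like this one.
pred = int(clf.predict(X_te[:1])[0])
```

With well-separated synthetic clusters the classifier should score near 1.0; on real landmark data the separation, features, and model choice would need tuning.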


Development of a Face Mask Detector (딥러닝 기반 마스크 미 착용자 검출 기술)

  • Lee, Hanseong;Hwang, Chanwoong;Kim, Jongbeom;Jang, Dohyeon;Lee, Hyejin;Im, Dongju;Jung, Soonki
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.11a, pp.270-272, 2020
  • This paper studies the application of deep learning to automate COVID-19 quarantine measures. With COVID-19 and its containment being one of the most important issues of 2020, many people are paying attention to artificial intelligence (AI), an emerging field of IT. As COVID-19 has made wearing a mask a necessity rather than a choice, a model is needed to enforce it. By applying deep learning object detection to the video devices already present throughout daily life, real-time automated quarantine monitoring can be implemented at a reasonable cost. In this paper, we conducted research toward implementing this using object-recognition open-source code publicly available on the internet, and we also investigated how to secure the required datasets.


Design of Household Trash Collection Robot using Deep Learning Object Recognition (딥러닝 객체 인식을 이용한 가정용 쓰레기 수거 로봇 설계)

  • Ju-hyeon Lee;Dong-myung Kim;Byeong-chan Choi;Woo-jin Kim;Kyu-ho Lee;Jae-wook Shin;Tae-sang Yun;Kwang Sik Youn;Ok-Kyoon Ha
    • Proceedings of the Korean Society of Computer Information Conference, 2023.01a, pp.113-114, 2023
  • Household trash collection takes place at night or in the early morning, so safety accidents involving sanitation workers and noise problems caused by collection vehicles occur frequently. This paper presents the design of a household trash collection robot that uses deep-learning-based image recognition to detect and collect standard volume-based garbage bags. The proposed robot autonomously drives through a designated area and detects household garbage bags using a trained model and a camera mounted on the robot. Based on depth information and 2D coordinates obtained from the camera, it predicts the target position of a bag designated for pickup relative to the robot arm and controls the arm joints to collect the bag. The robot can assist sanitation workers during trash collection, serving to secure worker safety and reduce noise.
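The step that turns a detection's 2D pixel coordinates plus camera depth into a target position for the arm is, in essence, pinhole back-projection. A minimal sketch, assuming made-up camera intrinsics (fx, fy, cx, cy are illustrative values, not from the paper):

```python
# A detected bag's pixel coordinates plus the camera depth value are
# back-projected into 3D camera-frame coordinates with the pinhole model.
def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at `depth` meters to camera-frame XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

fx = fy = 600.0        # assumed focal lengths in pixels
cx, cy = 320.0, 240.0  # assumed principal point
# Hypothetical detection centered at pixel (400, 300), 1.2 m away.
x, y, z = pixel_to_camera_xyz(400, 300, 1.2, fx, fy, cx, cy)
# x = (400-320)*1.2/600 = 0.16 m right, y = 0.12 m down, z = 1.2 m forward
```

A real system would add the camera-to-arm extrinsic transform before commanding the joints.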


Construction of LiDAR Dataset for Autonomous Driving Considering Domestic Environments and Design of Effective 3D Object Detection Model (국내 주행환경을 고려한 자율주행 라이다 데이터 셋 구축 및 효과적인 3D 객체 검출 모델 설계)

  • Jin-Hee Lee;Jae-Keun Lee;Joohyun Lee;Je-Seok Kim;Soon Kwon
    • IEMEK Journal of Embedded Systems and Applications, v.18 no.5, pp.203-208, 2023
  • Recently, with the growing interest in the field of autonomous driving, many researchers have been focusing on developing autonomous driving software platforms. In particular, we have concentrated on developing 3D object detection models that can improve real-time performance. In this paper, we introduce a self-constructed 3D LiDAR dataset specific to domestic driving environments and propose a VariFocal-based CenterPoint model for 3D object detection, with improved performance over previous models. Furthermore, we present experimental results comparing the performance of 3D object detection modules using our self-built and public datasets. As the results show, our model, trained on a large self-constructed dataset, successfully detects large vehicles and small objects such as motorcycles and pedestrians, which previous models often failed to detect. Consequently, the proposed model shows a performance improvement of about 1.0 mAP over the previous model.
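The abstract names a VariFocal-based CenterPoint; the following is a NumPy sketch of the varifocal loss itself, following the published formulation (IoU-aware quality targets for positives, focally down-weighted negatives), not the authors' exact implementation:

```python
# Varifocal loss sketch: positives use a BCE weighted by the IoU-aware
# target q; negatives are down-weighted by a focal factor alpha * p^gamma.
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0):
    """Per-element varifocal loss.

    p: predicted classification score in (0, 1)
    q: target -- IoU-aware quality score for positives, 0 for negatives
    """
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    positive = q > 0
    return np.where(
        positive,
        -q * (q * np.log(p) + (1.0 - q) * np.log(1.0 - p)),  # q-weighted BCE
        -alpha * p**gamma * np.log(1.0 - p),                  # focal negatives
    )

preds = np.array([0.9, 0.1])
targets = np.array([0.8, 0.0])  # first anchor positive with IoU target 0.8
loss = varifocal_loss(preds, targets)
```

Low-scoring negatives contribute almost nothing to the total, which is the point of the focal weighting.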

Domain Adaptive Fruit Detection Method based on a Vision-Language Model for Harvest Automation (작물 수확 자동화를 위한 시각 언어 모델 기반의 환경적응형 과수 검출 기술)

  • Changwoo Nam;Jimin Song;Yongsik Jin;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications, v.19 no.2, pp.73-81, 2024
  • Recently, mobile manipulators have been utilized in the agriculture industry for weed removal and harvest automation. This paper proposes a domain-adaptive fruit detection method for harvest automation that utilizes OWL-ViT, an open-vocabulary object detection model. Such a vision-language model can detect objects based on text prompts and can therefore be extended to detect objects of undefined categories. In developing deep learning models for real-world problems, constructing a large-scale labeled dataset is time-consuming and relies heavily on human effort. To reduce this labor-intensive workload, we utilized a large-scale public dataset as the source domain and employed a domain adaptation method: adversarial learning was conducted between a domain discriminator and the feature extractor to reduce the gap between the feature distributions of the source domain and our target domain data. We collected a target domain dataset in a real-like environment and conducted experiments to demonstrate the effectiveness of the proposed method. In experiments, the domain adaptation method improved the AP50 metric from 38.88% to 78.59% for detecting objects within a range of 2 m, and we achieved a manipulation success rate of 81.7%.
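The AP50 metric reported above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A self-contained sketch of that matching criterion, with illustrative boxes that are not from the paper:

```python
# IoU-based matching behind the AP50 metric: a detection is a true
# positive at threshold 0.5 if its IoU with a ground-truth box >= 0.5.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)  # hypothetical detected fruit box
gt = (20, 10, 50, 50)    # hypothetical ground-truth box
score = iou(pred, gt)    # intersection 1200, union 1600 -> 0.75
is_tp_at_50 = score >= 0.5
```

Full AP50 additionally sorts detections by confidence and integrates precision over recall; this shows only the per-box test.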

Applications of Artificial Intelligence in Mammography from a Development and Validation Perspective (유방촬영술에서 인공지능의 적용: 알고리즘 개발 및 평가 관점)

  • Ki Hwan Kim;Sang Hyup Lee
    • Journal of the Korean Society of Radiology, v.82 no.1, pp.12-28, 2021
  • Mammography is the primary imaging modality for breast cancer detection; however, a high level of expertise is needed for its interpretation. To overcome this difficulty, artificial intelligence (AI) algorithms for breast cancer detection have recently been investigated. In this review, we describe the characteristics of AI algorithms compared with conventional computer-aided diagnosis software and share our thoughts on the best methods to develop and validate such algorithms. Additionally, several AI algorithms introduced for triaging screening mammograms, breast density assessment, and prediction of breast cancer risk are described. Finally, we emphasize the need for interest and guidance from radiologists regarding AI research in mammography, considering the possibility that AI will soon be introduced into clinical practice.

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology, v.55 no.5, pp.551-561, 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and they can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the feature extraction analyses is to identify the circular features of building remains and the linear features of ancient roads and fences. Feature extraction was implemented with the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image produced by the Canny algorithm to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset, with each polygon assigned either to a class for the buried relics or to a class for the background field. With a Random Forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into those of the buried relics and those of the background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can yield consistent analysis results for planning excavation processes.
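The Canny-plus-Hough step above would typically be run with OpenCV's implementation; the voting idea behind the Hough transform can be sketched in NumPy on a toy edge image (not GPR data): each edge pixel votes for every line rho = x·cos(t) + y·sin(t) passing through it, and the accumulator peak is the dominant line.

```python
# Minimal Hough line transform: accumulate votes over (rho, theta) bins
# and return the peak, i.e. the strongest line in normal form.
import numpy as np

def hough_peak(edge_img, n_theta=180):
    """Return (rho, theta) of the strongest line in a binary edge image."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, thetas[t_idx]

# A horizontal edge along y = 5 should peak near theta = pi/2, rho = 5.
img = np.zeros((20, 20), dtype=bool)
img[5, :] = True
rho, theta = hough_peak(img)
```

The per-sector parameter sensitivity the abstract mentions corresponds to choices like the bin resolution and the peak threshold here.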

A Real-time Hand Pose Recognition Method with Hidden Finger Prediction (은닉된 손가락 예측이 가능한 실시간 손 포즈 인식 방법)

  • Na, Min-Young;Choi, Jae-In;Kim, Tae-Young
    • Journal of Korea Game Society, v.12 no.5, pp.79-88, 2012
  • In this paper, we present a real-time hand pose recognition method that provides an intuitive user interface through hand poses or movements, without a keyboard or mouse. First, the areas of the right and left hands are segmented from the depth camera image and noise removal is performed. Then, the rotation angle and centroid point of each hand area are calculated. Subsequently, a circle is expanded at regular intervals from the centroid of the hand to detect the joint points and end points of the fingers, obtained as the midway points of the hand boundary crossings. Lastly, the calculated hand information is matched against the hand model of the previous frame, and the recognized hand model is updated for the next frame. This method enables hidden fingers to be predicted from the previous frame's hand model, using temporal coherence across consecutive frames. In experiments on various two-handed poses with hidden fingers, the accuracy was over 95% and the performance exceeded 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
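The circle-expansion step described above can be sketched as follows: sample a circle around the hand centroid, find the angular runs that fall inside the hand mask, and take each run's midpoint as a finger crossing. The mask, centroid, and radius below are toy values, not the paper's data:

```python
# Sample a circle around the centroid and return the angular midpoints of
# the arcs lying inside the binary hand mask -- the "midway points of the
# hand boundary crossings" used to locate finger joints and tips.
import numpy as np

def circle_crossings(mask, center, radius, n_samples=360):
    """Angular midpoints of circle arcs that fall inside a binary mask."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((center[0] + radius * np.cos(angles)).astype(int),
                 0, mask.shape[1] - 1)
    ys = np.clip((center[1] + radius * np.sin(angles)).astype(int),
                 0, mask.shape[0] - 1)
    inside = mask[ys, xs]
    midpoints = []
    i = 0
    while i < n_samples:
        if inside[i] and not inside[i - 1]:   # run start (wraps at i = 0)
            j = i
            while inside[(j + 1) % n_samples]:
                j += 1
            midpoints.append(angles[(i + (j - i) // 2) % n_samples])
            i = j + 1
        else:
            i += 1
    return midpoints

# Toy "finger": a vertical strip crossed by a circle around (12, 30),
# giving one crossing near the top and one near the bottom of the circle.
mask = np.zeros((60, 25), dtype=bool)
mask[:, 10:14] = True
mids = circle_crossings(mask, center=(12, 30), radius=10)
```

Repeating this for several radii yields the joint and end points along each finger, as the abstract describes.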

Effect of Electrical Stimulation using ABR and ECochG Analysis based on Jastreboff Tinnitus Model (Jastreboff 이명 모델에서의 ABR과 ECochG 신호분석을 통한 전기자극의 효과)

  • 임재중;김경식;김남균;전병훈
    • Journal of Biomedical Engineering Research, v.20 no.4, pp.471-477, 1999
  • Many studies have examined whether electrical stimulation can be used for diagnosis and treatment of auditory system impairment. Unfortunately, there were no standard methods or theoretical background for choosing stimulus conditions, owing to the limited understanding of how electrical stimulation is transmitted through the auditory pathway. This research was conducted to observe the effect of electrical stimulation on tinnitus-induced animals. Nine guinea pigs were used for the experiment and divided into two groups: five animals in the experimental group (A) and four in the control group (B). Experimental conditions were divided into four steps: before tinnitus induction, and 1, 6, and 12 hours after tinnitus induction using salicylate, based on the Jastreboff model. In each condition, ABR and ECochG were obtained, and autocorrelation coefficients were calculated from waveforms normalized by their rms values. The sum of the autocorrelation coefficients was extracted as a parameter to observe changes between before and after electrical stimulation. As a result, ABR parameter values increased rapidly 6 hours after tinnitus induction, then gradually returned to the initial state. On the other hand, when electrical stimulation was applied, parameter values did not change compared with the initial state. Parameter values of ECochG showed that the effect of electrical stimulation appeared 12 hours after tinnitus induction. It was concluded that electrical stimulation of the tinnitus-induced model changes the correlation coefficients of the ABR and ECochG waveforms.
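The parameter extraction described above (rms normalization, autocorrelation coefficients, and their sum) can be sketched in NumPy. The waveform below is a synthetic sine, not an ABR or ECochG recording, and the sampling rate and lag range are assumptions:

```python
# Normalize a waveform by its rms value, compute autocorrelation
# coefficients, and sum them into a single parameter, as described above.
import numpy as np

def rms_normalize(x):
    """Scale a waveform to unit root-mean-square amplitude."""
    return x / np.sqrt(np.mean(x ** 2))

def autocorr_coeffs(x, max_lag):
    """Autocorrelation coefficients for lags 1..max_lag (lag 0 == 1)."""
    x = x - np.mean(x)
    full = np.correlate(x, x, mode="full")
    mid = len(x) - 1
    return full[mid + 1 : mid + 1 + max_lag] / full[mid]

fs = 1000.0                     # assumed sampling rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
wave = rms_normalize(np.sin(2 * np.pi * 5.0 * t))
param = float(np.sum(autocorr_coeffs(wave, max_lag=50)))
```

Tracking `param` across the experimental steps (before induction, 1/6/12 hours after) is what the abstract's comparison rests on.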
