• Title/Summary/Keyword: 검출 모델 (detection model)

Search Results: 1,728

Collision Detection Simulation Using LOD In Virtual Environment of Geometrically Multi and Complex Bodies (복잡한 다수의 기하학적 물체들로 이루어진 가상 환경에서 단계적인 상세를 이용한 충돌 검출 시뮬레이션)

  • 이경명;한창호 / Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.630-632 / 1998
  • In a large-scale virtual environment, objects should feel more like real objects by accompanying realistic imagery with dynamic effects such as collision detection. In particular, collision detection in a virtual world must be computed in real time. As the amount of geometric data composing virtual environments has grown enormously, collision detection has come to be regarded as a bottleneck that must be overcome. This paper proposes a compromise method that uses level of detail (LOD) to represent virtual objects realistically while partially resolving the collision detection bottleneck in virtual environments composed of many complex 3D geometric models, and implements an LOD-based virtual environment using a bunny model containing a large amount of geometric data.

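The trade-off described above, using coarser geometry for objects that matter less so that collision tests stay real-time, can be illustrated with a minimal sketch. The distance thresholds and the bounding-sphere broad phase are illustrative assumptions, not details from the paper:

```python
def select_lod(distance, thresholds):
    """Pick a level of detail for an object: 0 = finest mesh.
    thresholds is an ascending list of switch-over distances."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # coarsest level beyond the last threshold

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Cheap broad-phase test usable at the coarsest LOD:
    treat each object as its bounding sphere."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    return dist_sq <= (radius_a + radius_b) ** 2
```

Only object pairs that pass the cheap bounding-sphere test would then be promoted to a finer LOD for exact polygon-level tests.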

Method for improving hair detection for hair loss diagnosis in Phototrichogram (모발 정밀검사에서 탈모 진단을 위한 머리카락 검출 개선 방법)

  • Bomin Kim;Byung-Cheol Park;Sang-Il Choi / Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.89-90 / 2023
  • This paper proposes a method to assist in diagnosing a patient's hair loss by detecting hairs and tracking changes in hair count, using scalp photographs of the patient taken at regular intervals during phototrichogram examination. Color Slicing is applied to the scalp photographs used in a previously proposed hair detection method, so that the pixel values of the images are composed consistently. In addition, we propose a model that detects hairs more effectively by using a Swin Transformer and the HTC (Hybrid Task Cascade) model, a deep-learning-based image segmentation technique.

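The Color Slicing step mentioned above (keeping only pixels near a reference color and flattening everything else to a neutral value) can be sketched as follows; the dark-hair reference color, radius, and fill value are illustrative assumptions, not values from the paper:

```python
def color_slice(pixels, target, radius, fill=(128, 128, 128)):
    """Keep pixels within a Euclidean RGB distance `radius` of `target`;
    replace all other pixels with a neutral fill value."""
    radius_sq = radius ** 2
    sliced = []
    for pixel in pixels:
        dist_sq = sum((c - t) ** 2 for c, t in zip(pixel, target))
        sliced.append(pixel if dist_sq <= radius_sq else fill)
    return sliced

# hypothetical dark-hair reference color for illustration
HAIR_COLOR = (20, 20, 20)
```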

The Method of Vanishing Point Estimation in Natural Environment using RANSAC (RANSAC을 이용한 실외 도로 환경의 소실점 예측 방법)

  • Weon, Sun-Hee;Joo, Sung-Il;Choi, Hyung-Il / Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.53-62 / 2013
  • This paper proposes a method for automatically predicting the vanishing point for the purpose of detecting the road region in natural images. The proposed method stably detects the vanishing point in a road environment by analyzing the dominant orientation of the image and predicting the vanishing point at the position where the image's feature components are concentrated. In the first stage, the image is partitioned into sub-blocks, an edge sample is selected randomly within each sub-block, and RANSAC is applied for line fitting in order to analyze each sub-block's dominant orientation. Once the dominant orientation has been detected for all blocks, we proceed to the second stage: line samples are selected randomly and RANSAC is applied to fit an intersection point; the cost of each intersection model arising from the lines is measured, and the vanishing point is predicted as the average point of the intersection model with the highest cost. Lastly, quantitative and qualitative analyses verify the performance in various situations and demonstrate the efficiency of the proposed vanishing point detection algorithm.
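The second stage can be illustrated with a simplified single-stage sketch: sample pairs of fitted lines, intersect them, and score each candidate intersection by how many lines pass near it. The tolerance and iteration count are assumed values, and lines are represented as ax + by + c = 0:

```python
import random

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with ax + by + c = 0."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines, no intersection
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

def point_line_distance(point, line):
    a, b, c = line
    return abs(a * point[0] + b * point[1] + c) / (a * a + b * b) ** 0.5

def ransac_vanishing_point(lines, iterations=200, tolerance=2.0, seed=0):
    """Return the candidate intersection supported by the most lines."""
    rng = random.Random(seed)
    best_cost, best_point = -1, None
    for _ in range(iterations):
        candidate = intersect(*rng.sample(lines, 2))
        if candidate is None:
            continue
        # cost = number of lines passing within `tolerance` of the candidate
        cost = sum(1 for l in lines if point_line_distance(candidate, l) < tolerance)
        if cost > best_cost:
            best_cost, best_point = cost, candidate
    return best_point
```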

Multimodal approach for blocking obscene and violent contents (멀티미디어 유해 콘텐츠 차단을 위한 다중 기법)

  • Baek, Jin-heon;Lee, Da-kyeong;Hong, Chae-yeon;Ahn, Byeong-tae / Journal of Convergence for Information Technology / v.7 no.6 / pp.113-121 / 2017
  • With the development of IT technology, harmful multimedia content is spreading widely, and obscene and violent content in particular has a negative impact on children. In this paper, we therefore propose a multimodal approach for blocking obscene and violent video content. The approach contains two modules, one detecting obsceneness and one detecting violence. The obsceneness module uses a model that detects obsceneness based on adult and racy scores. The violence module uses two models: a blood detection model based on the RGB region, and a motion extraction model exploiting the observation that violent actions show larger magnitude and direction changes. From the results of these three models, the approach judges whether the content is harmful, which can contribute to blocking obscene and violent content that is distributed indiscriminately.
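The blood detection model based on the RGB region could, in its simplest form, look like the sketch below; the channel thresholds and ratios are assumptions for illustration, not the paper's values:

```python
def is_blood_pixel(r, g, b):
    """Heuristic: a pixel is blood-like when red clearly dominates."""
    return r > 90 and r > 1.5 * g and r > 1.5 * b

def blood_ratio(image):
    """Fraction of blood-like pixels; image is a 2D list of (r, g, b) tuples."""
    pixels = [p for row in image for p in row]
    hits = sum(1 for p in pixels if is_blood_pixel(*p))
    return hits / len(pixels)
```

A frame could then be flagged as violent when `blood_ratio` exceeds some threshold, in combination with the motion-magnitude cue described above.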

Optimal Parameter Extraction based on Deep Learning for Premature Ventricular Contraction Detection (심실 조기 수축 비트 검출을 위한 딥러닝 기반의 최적 파라미터 검출)

  • Cho, Ik-sung;Kwon, Hyeog-soong / Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1542-1550 / 2019
  • Previous studies on arrhythmia classification have sought to improve classification accuracy with methods such as neural networks and fuzzy logic. Deep learning, which uses the error backpropagation algorithm to overcome the limit on the number of hidden layers that constrained earlier neural networks, is now most frequently used for arrhythmia classification. To apply a deep learning model to an ECG signal, an optimal model and parameters must be selected. In this paper, we propose a deep-learning-based method for extracting optimal parameters. For this purpose, the R wave is detected in the noise-filtered ECG signal, and QRS and RR interval segments are modeled. The weights are then learned by supervised deep learning, and the model is evaluated on verification data. The detection rate of the R wave and the classification rate of PVC are evaluated on the MIT-BIH arrhythmia database. The performance results indicate an average of 99.77% for R wave detection and 97.84% for PVC classification.
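The R-wave detection step that precedes the QRS/RR segmentation can be sketched with a simple threshold-and-local-maximum rule; the threshold and sampling rate below are assumptions, and the paper's preprocessing is more involved than this:

```python
def detect_r_peaks(signal, threshold):
    """Indices of samples that exceed the threshold and are local maxima."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

def rr_intervals(peak_indices, sampling_rate):
    """RR intervals in seconds between consecutive R peaks."""
    return [(b - a) / sampling_rate for a, b in zip(peak_indices, peak_indices[1:])]
```

The resulting RR intervals (and the QRS windows around each detected peak) would then form the segments fed to the deep learning model.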

Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor (k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술)

  • Bang, Hyuntae;Yu, Byeongjun;Jeon, Haemin / Journal of the Computational Structural Engineering Institute of Korea / v.35 no.5 / pp.259-266 / 2022
  • In this study, we developed a deep learning and vision sensor-based assembly performance evaluation method for prefabricated steel structures. The assembly parts were segmented using a modified version of the receptive field block convolution module, inspired by the eccentric function of the human visual system. The quality of the assembly was evaluated by detecting the bolt holes in the segmented assembly part and calculating the bolt hole positions. To validate the performance of the evaluation, models of standard and defective assembly parts were produced using a 3D printer. The assembly part segmentation network was trained on the 3D model images captured from a vision sensor. The bolt hole positions in the segmented assembly image were calculated using image processing techniques, and the assembly performance evaluation using the k-nearest neighbor algorithm was verified. The experimental results show that the assembly parts were segmented with high precision, and that the assembly performance, based on the positions of the bolt holes in the detected assembly part, was evaluated with a classification error of less than 5%.
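The k-nearest neighbor decision over detected bolt hole positions can be sketched as follows. The feature choice (offsets of each hole from its nominal position) and the value of k are illustrative assumptions, not the paper's setup:

```python
def knn_classify(training_set, query, k=3):
    """training_set: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training_set, key=lambda item: distance(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

Each training vector here would hold the measured offsets of the bolt holes from their design positions, labelled "normal" or "defective".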

Layered Object Detection using Gaussian Mixture Learning for Complex Environment (혼잡한 환경에서 가우시안 혼합 모델을 이용한 계층적 객체 검출)

  • Lee, Jin-Hyeong;Kim, Heon-Gi;Jo, Seong-Won;Kim, Jae-Min / Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.435-438 / 2007
  • A widely used method for obtaining an accurate background model for detecting moving objects is the Gaussian mixture model. The Gaussian mixture model uses probabilistic learning, but this approach fails to model the background accurately when the background itself is moving or when a moving object comes to a stop. This paper proposes a learning method that models a cluttered background probabilistically and updates it into a more accurate background through hierarchical processing of objects.

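The per-pixel Gaussian mixture update the abstract refers to can be sketched as a simplified Stauffer-Grimson-style model; the learning rate, mode count, match threshold, and initial variance are assumed values:

```python
class PixelGMM:
    """Per-pixel Gaussian mixture background model (simplified sketch)."""

    def __init__(self, max_modes=3, alpha=0.05, match_sigmas=2.5, init_var=30.0):
        self.modes = []  # each mode: [weight, mean, variance]
        self.max_modes = max_modes
        self.alpha = alpha              # learning rate
        self.match_sigmas = match_sigmas
        self.init_var = init_var

    def update(self, x):
        """Update with intensity x; True if x matched an existing mode."""
        for mode in self.modes:
            weight, mean, var = mode
            if abs(x - mean) <= self.match_sigmas * var ** 0.5:
                mode[0] = weight + self.alpha * (1.0 - weight)
                mode[1] = mean + self.alpha * (x - mean)
                mode[2] = var + self.alpha * ((x - mean) ** 2 - var)
                self._normalize()
                return True
        # no match: treat as foreground and replace the weakest mode
        new_mode = [self.alpha, x, self.init_var]
        if len(self.modes) < self.max_modes:
            self.modes.append(new_mode)
        else:
            weakest = min(range(len(self.modes)), key=lambda i: self.modes[i][0])
            self.modes[weakest] = new_mode
        self._normalize()
        return False

    def _normalize(self):
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total
```

Note how a pixel of a stopped object steadily gains weight and is eventually absorbed into the background, one of the failure modes the layered update in the paper is meant to address.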

An Application Model for Clustering in Water Sensor Data Mining (수질센서 데이터 마이닝을 위한 클러스터링 적용 모델)

  • Kweon, Daehyeon;Cho, Soosun / Annual Conference of KIPS / 2009.11a / pp.29-30 / 2009
  • Sensor data mining technology is a key component of intelligent USN middleware that provides integrated and predictive information for decision making. This paper introduces an application model of clustering, a representative data mining technique, for developing a water-quality sensor data mining system. Through the clustering in the application model, detection of data outliers at intermediate nodes and detection of data changes over time at the host become possible.
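The outlier-detection use of clustering described above can be sketched with a 1-D k-means over sensor readings, flagging values that fall into very small clusters. The pH-like values, k, and minimum cluster size are illustrative assumptions:

```python
def kmeans_1d(values, k=2, iterations=20):
    """Plain 1-D k-means; initial centroids at the extremes."""
    centroids = [min(values), max(values)] if k == 2 else sorted(values)[:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def anomalies(values, centroids, min_cluster_size=2):
    """Readings whose cluster holds fewer than min_cluster_size members."""
    labels = [min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
              for v in values]
    counts = {label: labels.count(label) for label in set(labels)}
    return [v for v, label in zip(values, labels) if counts[label] < min_cluster_size]
```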

Detection of Faces with Partial Occlusions using Statistical Face Model (통계적 얼굴 모델을 이용한 부분적으로 가려진 얼굴 검출)

  • Seo, Jeongin;Park, Hyeyoung / Journal of KIISE / v.41 no.11 / pp.921-926 / 2014
  • Face detection refers to the process of extracting facial regions from an input image; it can improve the speed and accuracy of recognition or authorization systems and has diverse applications. Since conventional works try to detect faces based on the whole face shape, their detection performance can be degraded by occlusions caused by accessories or body parts. In this paper we propose a method combining local feature descriptors and probabilistic modeling in order to detect partially occluded faces effectively. In the training stage, we represent an image as a set of local feature descriptors and estimate a statistical model of normal faces. Given a test image, we find the region most similar to a face using the face model constructed in the training stage. Experimental results on a benchmark data set confirm the effectiveness of the proposed method in detecting partially occluded faces.
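The combination of local descriptors and a statistical face model can be sketched as below, with descriptors reduced to scalars for brevity: fit a Gaussian to descriptors from training faces, then score a candidate region robustly so that a few occluded patches cannot veto it. The numbers and the top-half trimming rule are illustrative assumptions, not the paper's model:

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean/variance of training descriptors."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, max(var, 1e-6)  # floor the variance for stability

def log_likelihood(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def region_score(descriptors, model):
    """Average log-likelihood of the best-matching half of the patches,
    so occluded patches are trimmed away rather than dominating the score."""
    scores = sorted(log_likelihood(d, *model) for d in descriptors)
    kept = scores[len(scores) // 2:]
    return sum(kept) / len(kept)
```

Sliding this score over candidate regions and keeping the maximum would give the most face-like region even when part of it is covered.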

Real-Time Face Detection, Tracking and Tilted Face Image Correction System Using Multi-Color Model and Face Feature (복합 칼라모델과 얼굴 특징자를 이용한 실시간 얼굴 검출 추적과 기울어진 얼굴보정 시스템)

  • Lee Eung-Joo / Journal of Korea Multimedia Society / v.9 no.4 / pp.470-481 / 2006
  • In this paper, we propose a real-time face detection, tracking, and tilted face image correction system using a multi-color model and facial feature information. The proposed system detects face candidates using the YCbCr and YIQ color models, detects faces using vertical and horizontal projection, and tracks faces using Hausdorff matching. It also corrects tilted faces by correcting the tilted eye features. Experiments were performed on 110 test images and show good performance: the proposed algorithm is robust for real-time face detection and tracking under changing external conditions and for recognizing tilted faces. The face detection and tilted face correction rates were 92.27% and 92.70%, respectively, and the proposed algorithm achieved a 90.0% recognition rate.

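The vertical and horizontal projection step used above to localize a face candidate can be sketched as follows, operating on a binary skin mask produced by the color-model stage; the mask and minimum-count parameter are illustrative:

```python
def projections(mask):
    """Row and column sums of a binary (0/1) skin mask."""
    row_proj = [sum(row) for row in mask]
    col_proj = [sum(row[j] for row in mask) for j in range(len(mask[0]))]
    return row_proj, col_proj

def face_bounding_box(mask, min_count=1):
    """Bounding box (x_min, y_min, x_max, y_max) spanning the rows and
    columns whose projections reach min_count skin pixels; None if empty."""
    row_proj, col_proj = projections(mask)
    ys = [i for i, v in enumerate(row_proj) if v >= min_count]
    xs = [j for j, v in enumerate(col_proj) if v >= min_count]
    if not ys or not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```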