• Title/Summary/Keyword: camera-based recognition (카메라 기반 인식)


Implementation of fast facial image detecting system based on GPU (GPU 기반 고속 얼굴 영역 검출 구현)

  • Lee, Seong-Yeon; Park, Seong-Mo; Kim, Jong-Nam
    • Annual Conference of KIPS / 2009.04a / pp.130-131 / 2009
  • Face region detection is a technique used across many industrial and academic fields, including face recognition and face restoration. High-speed face detection typically relies on high-performance hardware or fast algorithms; in this paper, we implement a high-speed face detection system using CUDA, a GPU-based programming technique. Existing face detection systems have had difficulty achieving high-speed detection because of processing-speed limits, and running them fast required expensive system components, placing a burden on users. However, implementing a face detection system with the GPGPU technologies that graphics chipset makers such as nVidia are releasing allows better performance at a lower price. This paper therefore implements a face detection system using nVidia's CUDA, one such general-purpose GPU technology. Experimental results confirm that the GPU-based system detects faces faster than a CPU-based system. The proposed method can be applied, for example, as a high-speed surveillance-camera server on systems equipped with an nVidia graphics card.
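
The abstract describes the system only at a high level; as a rough illustration of the face-detection stage it accelerates, here is a minimal CPU-side Haar-cascade sketch with OpenCV in Python. The camera index, cascade file, and display loop are assumptions, not the authors' CUDA implementation.

```python
# Minimal CPU baseline for Haar-cascade face detection with OpenCV.
# The paper's contribution is moving this kind of sliding-window search onto
# the GPU with CUDA; this sketch only illustrates the detection step itself.
import cv2

# Pre-trained frontal-face cascade shipped with OpenCV (path assumes the
# standard cv2.data layout).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # e.g. a surveillance camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```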

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만; 박영민; 윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and the manual creation of such 3D models is time consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image captured under camera translation. The paper is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates each region's depth relative to that average. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that humans perceive.

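As a loose illustration of the depth-from-motion-parallax idea in the abstract above (under camera translation, larger image motion implies a closer region), the sketch below uses OpenCV dense optical flow as a stand-in for the paper's full-search block matching; the frame files, block size, and normalization are assumptions.

```python
# Illustrative sketch of depth from motion parallax for a translating camera:
# image motion magnitude is roughly inversely proportional to scene depth.
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_t0.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_t1.png"), cv2.COLOR_BGR2GRAY)

# Dense motion field (dx, dy) per pixel; stands in for full-search block matching.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude = np.linalg.norm(flow, axis=2)

# Relative depth: larger motion -> smaller value -> closer region.
rel_depth = 1.0 / (magnitude + 1e-3)

# Label each region relative to the average frame depth, as in the abstract;
# a coarse grid of blocks stands in for segmented regions.
avg_depth = rel_depth.mean()
h, w = rel_depth.shape
block = 16
labels = {}
for y in range(0, h - block + 1, block):
    for x in range(0, w - block + 1, block):
        region = rel_depth[y:y + block, x:x + block].mean()
        labels[(y, x)] = "near" if region < avg_depth else "far"
```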

A Study on AR-supported Generative FashionNet (증강현실(AR) 기반의 생성형 FashionNet에 관한 연구)

  • Min-Yung Yu; Jae-Chern Yoo
    • Annual Conference of KIPS / 2024.05a / pp.851-853 / 2024
  • This paper proposes a generative FashionNet that lets users virtually try on clothing suited to their body shape and preferences, using pose estimation and body-shape recognition algorithms built on the MediaPipe library and OpenCV. Specifically, the user's body pose is first estimated from an appearance image captured by a web camera, and the user's body contour is detected with OpenCV code. A virtual fitting image is then generated by fitting virtual clothing selected from a virtual wardrobe database onto the user's body contour. In particular, the proposed FashionNet pre-calibrates, through preliminary experiments, the body proportions corresponding to the user-to-camera distance, so that clothing sizes are automatically fitted to the user's body regardless of how far the user stands from the camera. In addition, to make it easy to select clothing items from the virtual wardrobe database, an augmented reality (AR) technique is applied in which the various functions of FashionNet are operated through interaction between on-screen menu buttons and the user's pose gestures. Through various virtual fitting experiments using the virtual wardrobe database, we confirmed the effectiveness and potential of FashionNet, for example allowing users to virtually try on desired clothing online and make purchase decisions accordingly.
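
A minimal sketch of the two perception steps the abstract names, MediaPipe pose keypoints and an OpenCV body contour, is shown below; the webcam index, Otsu thresholding, and the shoulder-width scaling cue are illustrative assumptions rather than the authors' pipeline.

```python
# Rough sketch: (1) body-pose keypoints with MediaPipe Pose,
# (2) body contour with OpenCV.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)
with mp_pose.Pose(static_image_mode=False) as pose:
    ok, frame = cap.read()
    if ok:
        # 1) Pose keypoints (shoulders, hips, ...) in normalized coordinates.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            left_sh = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
            right_sh = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
            shoulder_px = abs(left_sh.x - right_sh.x) * frame.shape[1]
            # Shoulder width in pixels could drive garment scaling,
            # independent of camera distance (the paper pre-calibrates this).

        # 2) Body contour via simple foreground thresholding + findContours.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        body = max(contours, key=cv2.contourArea) if contours else None
cap.release()
```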

A Study on the Design and Implementation of a Camera-Based 6DoF Tracking and Pose Estimation System (카메라 기반 6DoF 추적 및 포즈 추정 시스템의 설계 및 구현에 관한 연구)

  • Do-Yoon Jeong; Hee-Ja Jeong; Nam-Ho Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.5 / pp.53-59 / 2024
  • This study presents the design and implementation of a camera-based 6DoF (6 Degrees of Freedom) tracking and pose estimation system. In particular, we propose a method for accurately estimating the positions and orientations of all fingers of a user utilizing a 6DoF robotic arm. The system is developed using the Python programming language, leveraging the Mediapipe and OpenCV libraries. Mediapipe is employed to extract keypoints of the fingers in real-time, allowing for precise recognition of the joint positions of each finger. OpenCV processes the image data collected from the camera to analyze the finger positions, thereby enabling pose estimation. This approach is designed to maintain high accuracy despite varying lighting conditions and changes in hand position. The proposed system's performance has been validated through experiments, evaluating the accuracy of hand gesture recognition and the control capabilities of the robotic arm. The experimental results demonstrate that the system can estimate finger positions in real-time, facilitating precise movements of the 6DoF robotic arm. This research is expected to make significant contributions to the fields of robotic control and human-robot interaction, opening up various possibilities for future applications. The findings of this study will aid in advancing robotic technology and promoting natural interactions between humans and robots.
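
The following is a minimal sketch of the MediaPipe Hands fingertip-extraction stage described above; mapping the keypoints to 6DoF robot-arm commands is application-specific and omitted, and the camera index and confidence threshold are assumptions.

```python
# Fingertip keypoint extraction with MediaPipe Hands.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
FINGERTIPS = [4, 8, 12, 16, 20]      # thumb..pinky tip landmark indices

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    ok, frame = cap.read()
    if ok:
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            h, w = frame.shape[:2]
            lm = results.multi_hand_landmarks[0].landmark
            # Pixel coordinates (plus relative depth z) for each fingertip.
            tips = [(lm[i].x * w, lm[i].y * h, lm[i].z) for i in FINGERTIPS]
            # 'tips' would then be translated into target poses for the arm.
cap.release()
```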

Development of Authentication Service Model Based Context-Awareness for Accessing Patient's Medical Information (환자 의료정보 접근을 위한 상황인식 기반의 인증서비스 모델 개발)

  • Ham, Gyu-Sung; Joo, Su-Chong
    • Journal of Internet Computing and Services / v.22 no.1 / pp.99-107 / 2021
  • With the recent establishment of a ubiquitous-based medical and healthcare environment, medical information systems that obtain situational information from various sensors are increasing. In a context-aware medical information system environment, the patient's situation can be classified as normal or emergency using situational information. In addition, medical staff can easily access patient information after simple user authentication with an ID and password through applications on smart devices. However, these authentication and patient-information access services are staff-oriented and do not fully consider the ubiquitous healthcare information system environment. In this paper, we present a context-awareness-based authentication service model that provides situation-driven authentication services to users accessing medical information, and we implement the proposed system. The system recognizes patient situations through sensors, and the authentication and authorization of medical staff proceed differently according to the patient's situation. It was implemented with wearables, biometric measurement modules, camera sensors, and similar devices to configure various environments for measuring situational information. If the patient's situation is an emergency, the medical information server sends an emergency message to the smart devices of the medical staff, and staff who receive the message authenticate through the smart-device application to access the patient's information. Once authentication is complete, medical staff are granted access to high-level medical information and can even check patient records that cannot be seen in a normal situation. The proposed system not only fully considers the ubiquitous medical information system environment, but also strengthens patient-centered security and access transparency.
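
A hypothetical sketch of the context-dependent authorization rule the abstract describes follows; the sensor fields, thresholds, and access levels are illustrative only.

```python
# Hypothetical context-dependent authorization: the access level granted after
# staff authentication depends on the patient's sensed situation.
from dataclasses import dataclass

@dataclass
class PatientContext:
    heart_rate: int
    spo2: int

def classify_situation(ctx: PatientContext) -> str:
    """Crude stand-in for the sensor-based situation recognition."""
    if ctx.heart_rate > 130 or ctx.spo2 < 90:
        return "emergency"
    return "normal"

def authorize(staff_authenticated: bool, situation: str) -> str:
    """Return the medical-information access level for an authenticated user."""
    if not staff_authenticated:
        return "none"
    # In an emergency, staff get elevated access to records that are not
    # visible under normal conditions (per the abstract).
    return "high" if situation == "emergency" else "standard"

ctx = PatientContext(heart_rate=145, spo2=88)
print(authorize(True, classify_situation(ctx)))   # -> "high"
```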

A Real-Time Pigsty Thermal Control System Based on a Video Sensor (비디오 센서 기반의 실시간 돈사 온도제어 시스템)

  • Choi, Dongwhee; Kim, Haelyeon; Kim, Heegon; Chung, Yongwha; Park, Daihee
    • Annual Conference of KIPS / 2013.05a / pp.223-225 / 2013
  • Piglets weaned from the sow at 21 or 28 days of age have weak immunity, and mortality commonly soars to 30-40%, so piglet management is recognized as one of the biggest problems for domestic pig farms. To address this, this paper proposes a system that manages piglets by installing a video camera in the weaning pen and using the acquired image information. In particular, the proposed system quickly determines from the incoming real-time video stream whether the piglets are moving; when there is no movement (sleep or rest), it extracts the portion of the floor area not occupied by piglets and judges whether they are huddled together. In other words, a camera-based temperature control system can be designed that raises the temperature when the piglets are excessively huddled and lowers it in the opposite case. We built a video-sensor-based experimental environment at a pig farm in Hamyang-gun, Gyeongsangnam-do, acquired a weaning-pen monitoring data set, and used it to develop a prototype of the proposed piglet-pen management system.
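
A rough sketch of the two video cues used above, frame differencing for motion and unoccupied floor area for huddling, is given below; the thresholds and the Otsu foreground segmentation are assumptions.

```python
# Frame-differencing motion check plus unoccupied-floor-area huddling cue.
import cv2
import numpy as np

def analyze(prev_gray, curr_gray, motion_thresh=2.0, huddle_thresh=0.6):
    # 1) Motion check: mean absolute difference between consecutive frames.
    moving = cv2.absdiff(prev_gray, curr_gray).mean() > motion_thresh
    if moving:
        return "active", None

    # 2) Resting: segment piglets (assumed brighter than the floor) and
    #    measure the unoccupied floor fraction.
    _, fg = cv2.threshold(curr_gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    empty_fraction = 1.0 - np.count_nonzero(fg) / fg.size
    # Large empty area while resting -> piglets huddled -> raise temperature.
    action = "raise_temp" if empty_fraction > huddle_thresh else "ok"
    return "resting", action
```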

A Study on the Effectiveness of the Image Recognition Technique of Augmented Reality Contents (증강현실 콘텐츠의 이미지 인식 기법 효과성 연구)

  • Suh, Dong-Hee
    • Cartoon and Animation Studies / s.41 / pp.337-356 / 2015
  • Recently, augmented reality content has been widely used in public settings such as advertisements and exhibits as well as in children's books, so the market for augmented reality content development is clearly growing. Producers of augmented reality are generally familiar with the technique in which self-produced images serve as markers through image recognition. When using image recognition, they typically rely on Qualcomm's augmented reality marker platform, since it can recognize self-produced images and three-dimensional figures at no cost. This study began when undergraduate students started applying these techniques in their content-production process. Students majoring in AR at Namseoul University applied the image recognition technique to three AR works exhibited at the Sejong Center. They created three different images and registered them in the Image Target Manager provided by Vuforia for use as markers, and they revised their image-production method through research to raise the recognition rate, since a higher recognition rate yields more stable use of augmented reality content. To reach a satisfactory rate, they compared elements such as color contrast and pattern when using the platform, and from this an effective image creation method was derived. This study aims to suggest how to produce stable content by recognizing the limitations of smart devices and by producing educational content, and to give practical help to augmented reality content developers by illustrating the application of image-recognition-based augmented reality content and its effectiveness.
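
As a hypothetical offline proxy for marker trackability (Vuforia rates targets by the number and distribution of local features), the sketch below counts ORB keypoints and measures contrast; it is not Vuforia's actual rating algorithm, and the file name is illustrative.

```python
# Rough proxy for how "trackable" a candidate marker image is: feature-rich,
# high-contrast images tend to register and track better.
import cv2

def marker_score(path: str) -> dict:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray, None)
    return {
        "keypoints": len(keypoints),      # more distinct corners helps
        "contrast": float(gray.std()),    # flat images track poorly
    }

print(marker_score("marker_candidate.png"))
```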

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il; Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.35 no.4 / pp.265-279 / 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that, however, has fallen short of our expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results, so the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling, and motion tracking. A new gesture recognition model for a set of both one-hand and two-hand gestures is then proposed based on the dynamic Bayesian network framework, which makes it easy to represent the relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained a recognition rate upwards of 99.59% with cross validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign languages.
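
A small sketch of the skin-extraction preprocessing step mentioned above follows, using a common fixed YCrCb skin-color range rather than the authors' learned skin model.

```python
# Skin-color segmentation in YCrCb as a preprocessing step before hand
# tracking; the range is a widely used heuristic, not a learned model.
import cv2
import numpy as np

def skin_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before tracking hand blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```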

Image Tracking Based Lane Departure Warning and Forward Collision Warning Methods for Commercial Automotive Vehicle (이미지 트래킹 기반 상용차용 차선 이탈 및 전방 추돌 경고 방법)

  • Kim, Kwang Soo; Lee, Ju Hyoung; Kim, Su Kwol; Bae, Myung Won; Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.2 / pp.235-240 / 2015
  • With the advancement of digital equipment, active safety systems are increasingly demanded in the market for medium and heavy-duty commercial vehicles over 4.5 tons, not only in the passenger-car market. Unlike in a passenger car, the camera in a medium or heavy-duty commercial vehicle is mounted relatively high, which is a disadvantage for lane recognition compared to passenger cars. In this work, we present a lane recognition method based on spatial-domain processing using Sobel edges, the Hough transform, and color conversion correction. We also propose a low-error method for recognizing vehicles ahead that reduces detection error by combining frontal-camera object recognition methods such as Haar-like features, AdaBoost, SVM, and template matching. Vehicle tests verify that lane recognition reliability above 98% is obtained.
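
A minimal sketch of the Sobel-edge plus Hough-transform lane-detection chain named in the abstract is shown below; the region of interest, thresholds, and high-mounted truck-camera geometry are assumptions.

```python
# Sobel edge extraction followed by a probabilistic Hough transform to find
# candidate lane-marking line segments in the lower (road) half of the image.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Horizontal-gradient Sobel edge emphasises near-vertical lane markings.
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    edges = cv2.convertScaleAbs(sobel_x)
    _, edges = cv2.threshold(edges, 80, 255, cv2.THRESH_BINARY)
    # Restrict the search to the lower half of the image (road ROI).
    h = edges.shape[0]
    roi = edges[h // 2:, :]
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines  # each line: (x1, y1, x2, y2) in ROI coordinates
```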

Design and Implementation of ontology based context-awareness platform using driver intent information (운전자 의도정보를 이용한 온톨로지 기반 지능형자동차 상황인식 플랫폼 설계 및 구현)

  • Ko, Jae-Jin; Choi, Ki-Ho
    • Journal of Advanced Navigation Technology / v.18 no.1 / pp.14-21 / 2014
  • In this paper, we devise a new ontology-based context-aware system for smart-car information in which driver intent is inferred from information about the car, the driver, and the environment as well as the driving state and driver state. The proposed system can therefore handle dynamically changing risks by adding real-time situational awareness information. We utilize camera-based image recognition technology for context-aware intelligent-vehicle driving information, and implement an information acquisition scheme over the OBD-II protocol to acquire the vehicle's information. Experiments confirm that the proposed advanced driver safety assist system outperforms the conventional system, which utilizes only vehicle, driver, and environmental information, in supporting high-speed driving, lane-departure, and emergency braking situation-awareness services.
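
A hedged sketch of acquiring vehicle state over OBD-II follows, assuming the third-party python-obd package and a connected OBD-II adapter; the chosen PIDs are illustrative.

```python
# Reading basic vehicle-state signals over OBD-II with the python-obd package
# (an assumption; the paper does not specify its acquisition software).
import obd

connection = obd.OBD()                      # auto-detects the adapter port
speed = connection.query(obd.commands.SPEED)
rpm = connection.query(obd.commands.RPM)
throttle = connection.query(obd.commands.THROTTLE_POS)

# These raw signals would feed the ontology's vehicle/driving-state concepts
# alongside the camera-based driver and environment information.
print(speed.value, rpm.value, throttle.value)
```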