• Title/Abstract/Keyword: Real-Time Computer Vision

A Novel Computer Human Interface to Remotely Pick up Moving Human's Voice Clearly by Integrating Real-time Face Tracking and Microphones Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Ken-ichi Hidai;Taketoshi Mishima
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • Proceedings of the 13th ICROS Annual Conference, 1998
    • /
    • pp.75-80
    • /
    • 1998
  • This paper proposes a novel computer human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing. It integrates real-time face tracking and sound signal processing. VWM is intended as a speech signal input method for human-computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Utilizing VWM, the agent can clearly hear its human master's voice from a distance, as if a wireless microphone were placed just in front of the master.
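
The face-tracking-plus-microphone-array idea above amounts to steering a delay-and-sum beamformer at the tracked speaker position. A minimal numpy sketch; the microphone geometry, source position, and sample rate below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def steering_delays(mic_positions, source_position, speed_of_sound=343.0):
    """Propagation delay (seconds) from the source to each microphone,
    relative to the closest microphone."""
    dists = np.linalg.norm(mic_positions - source_position, axis=1)
    delays = dists / speed_of_sound
    return delays - delays.min()

def delay_and_sum(signals, delays, sample_rate):
    """Align each channel by its integer-sample delay and average the
    channels, steering the array toward the tracked position."""
    shifts = np.round(delays * sample_rate).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([sig[s:s + n] for sig, s in zip(signals, shifts)])
    return aligned.mean(axis=0)
```

In the real system, the source position fed into `steering_delays` would come from the face tracker on every frame, so the beam follows a moving speaker.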

A Vision-Based Method to Find Fingertips in a Closed Hand

  • Chaudhary, Ankit;Vatwani, Kapil;Agrawal, Tushar;Raheja, J.L.
    • Journal of Information Processing Systems
    • /
    • Vol. 8, No. 3
    • /
    • pp.399-408
    • /
    • 2012
  • Hand gesture recognition is an important area of research in the field of Human Computer Interaction (HCI). The geometric attributes of the hand play an important role in hand shape reconstruction and gesture recognition. In particular, fingertips are among the most important attributes for detecting hand gestures and can provide valuable information from hand images. Many methods are available in the scientific literature for fingertip detection on an open hand, but results are very poor when the hand is closed. This paper presents a new method for detecting fingertips in a closed hand using corner detection and an advanced edge detection algorithm. Notably, skin color segmentation did not work for fingertip detection in a closed hand. The proposed method therefore applies Gabor filter techniques to detect edges and then applies a corner detection algorithm to locate fingertips along those edges. To check its accuracy, the method was tested on a large number of images taken with a webcam, achieving a high detection rate, and was further implemented on video to test its validity for real-time image capture. This closed-hand fingertip detection would help in controlling an electro-mechanical robotic hand via hand gestures in a natural way.
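
The Gabor-then-corner pipeline described above can be sketched in plain numpy. The kernel size, sigma, wavelength, and the simple box smoothing below are illustrative choices, not the paper's tuned parameters:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor kernel, used here to emphasise edges at
    orientation theta before corner detection."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    """Naive 'valid' 2-D convolution (no padding), for applying the filter."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def harris_response(img, kappa=0.04):
    """Harris-style corner response from finite-difference gradients;
    maxima of this map mark corner candidates (fingertips in the paper)."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2
    def box(a, r=2):  # box-window sum of the structure-tensor entries
        out = np.zeros_like(a)
        for i in range(r, a.shape[0] - r):
            for j in range(r, a.shape[1] - r):
                out[i, j] = a[i - r:i + r + 1, j - r:j + r + 1].sum()
        return out
    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    return sxx * syy - sxy**2 - kappa * (sxx + syy)**2
```

On a real frame, one would run a bank of `gabor_kernel` orientations through `convolve2d` and feed the combined edge map into `harris_response`.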

윤곽 검출용 CMOS 시각칩을 이용한 물체 추적 시스템 요소 기술 연구 (Fundamental research of the target tracking system using a CMOS vision chip for edge detection)

  • 현효영;공재성;신장규
    • Journal of Sensor Science and Technology (센서학회지)
    • /
    • Vol. 18, No. 3
    • /
    • pp.190-196
    • /
    • 2009
  • In a conventional camera system, a target tracking system consists of a camera part and an image processing part. In the field of real-time image processing, however, a vision chip for edge detection built by imitating the algorithm of the human retina is superior to conventional digital image processing systems, because the retina processes information in parallel. In this paper, we present a high-speed target tracking system using the function of the CMOS vision chip for edge detection.
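
The retina-like chip computes an edge response at every pixel simultaneously; a software analogue is a center-surround (Laplacian-like) kernel applied to the whole frame at once. A minimal sketch, where the 3×3 kernel is a generic stand-in for the chip's actual receptive field:

```python
import numpy as np

def retina_edge_map(img):
    """Center-surround response at every interior pixel, computed with
    vectorised shifts rather than a per-pixel loop, loosely imitating
    the retina's parallel processing."""
    k = np.array([[-1., -1., -1.],
                  [-1.,  8., -1.],
                  [-1., -1., -1.]])
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # 9 shifted adds instead of h*w dot products
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out
```

Flat regions cancel to zero, while brightness discontinuities produce strong responses; a tracker can then follow the centroid of the strongest edge cluster.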

위치와 각도를 인지하는 책상형 인터랙션 개발 (Tabletop Workspace with Tangible User Interface Using Infrared Vision Sense)

  • 심한수
    • Journal of Game & Entertainment (게임&엔터테인먼트 논문지)
    • /
    • Vol. 2, No. 2
    • /
    • pp.70-74
    • /
    • 2006
  • This paper proposes a system that uses infrared computer vision to track, in real time, the position and angle of objects manipulated by a user on a tabletop screen. Unlike conventional computer vision systems, whose weakness is sensitivity to ambient lighting, the proposed system is almost unaffected by it. In addition to position, the system also obtains the angle and button-click state in real time. Using this system, a Color Lab Box (컬러랩 박스) was produced.
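
Given the infrared camera's binary image of a tracked object, position and angle can be recovered from image moments. A minimal sketch; the paper's marker design and calibration are not modeled here:

```python
import numpy as np

def blob_position_and_angle(mask):
    """Centroid and principal-axis angle (radians) of a binary blob --
    the two quantities the tabletop tracker reports per object."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Second-order central moments of the blob
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle
```

An elongated marker makes the principal axis well defined, which is presumably why tangible objects on such tables carry asymmetric infrared patterns.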

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering
    • /
    • Vol. 11, No. 3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments for facial expression recognition using both frame-based and sequence-based approaches. Our method provides a 75.9% recognition rate across 8 subjects with 7 key expressions. Our approach provides a considerable step forward toward new applications including human-computer interaction, behavioral science, robotics, and games.
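
Predicting 2D landmark positions from the updated 3D face model is a pinhole-camera projection. A minimal sketch; the focal length and principal point below are hypothetical values, not the paper's calibration:

```python
import numpy as np

def project_landmarks(points_3d, rotation, translation, focal, center):
    """Project 3-D face-model landmarks into the image with a pinhole
    camera, as the tracking loop does to predict 2-D landmark positions."""
    cam = points_3d @ rotation.T + translation   # model frame -> camera frame
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)
```

In the loop described above, `rotation` and `translation` come from the 3D head tracker each frame, and the projected points seed the 2D tracker.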

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 15, No. 1
    • /
    • pp.27-34
    • /
    • 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model, which contains two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three kinds of factors, the speed, the motion direction, and the distance, are extracted for detecting unusual motion. A series of experimental results demonstrates the effectiveness and feasibility of the proposed model.
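
The three cues named above can be fused with simple normalized rules. A toy sketch; every threshold here is an illustrative stand-in for the paper's actual model, not its published parameters:

```python
def unusual_motion_score(speed, direction_change, distance,
                         speed_limit=30.0, angle_limit=1.0, near=10.0):
    """Combine the three cues from the paper (speed, motion direction,
    distance) into a [0, 1] score. Units and thresholds are illustrative."""
    s = min(speed / speed_limit, 1.0)              # fast objects score high
    d = min(abs(direction_change) / angle_limit, 1.0)  # sharp turns score high
    n = min(near / max(distance, 1e-6), 1.0)       # close objects score high
    return max(s, d, n)                            # any strong cue dominates

def is_unusual(speed, direction_change, distance, threshold=0.8):
    return unusual_motion_score(speed, direction_change, distance) >= threshold
```

Using `max` means a single alarming cue (for example, a very close object) flags the motion even when the others look normal; a weighted or fuzzy combination is an equally plausible design.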

건설 현장 CCTV 영상에서 딥러닝을 이용한 사물 인식 기초 연구 (A Basic Study on the Instance Segmentation with Surveillance Cameras at Construction Sites using Deep Learning based Computer Vision)

  • 강경수;조영운;류한국
    • Korea Institute of Building Construction: Conference Proceedings
    • /
    • Korea Institute of Building Construction, Fall 2020 Conference
    • /
    • pp.55-56
    • /
    • 2020
  • The construction industry has the highest occupational fatality and injury rates of any industry. Accordingly, safety managers monitor surveillance cameras installed at construction sites to prevent accidents in real time. However, due to the limits of human cognitive ability, it is impossible to monitor many videos simultaneously, and the fatigue of the person monitoring the cameras is very high. Thus, to help safety managers monitor work and reduce the occupational accident rate, this study investigated object recognition at construction sites through surveillance cameras. We applied instance segmentation to identify the class and location of objects and to extract their size and shape at construction sites. This research considers ways in which deep learning-based computer vision technology can be applied to safety management on a construction site.
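
A trained instance-segmentation model outputs a mask per object; extracting each object's size and shape then reduces to per-instance statistics. The classical connected-components sketch below is a stand-in for that post-processing step, not the deep model itself:

```python
from collections import deque

def label_instances(mask):
    """4-connected component labelling of a binary mask, reporting the
    area and bounding box of each instance -- the kind of size/shape
    output the study extracts per detected object."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    instances = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                q = deque([(sy, sx)]); seen[sy][sx] = True
                pixels = []
                while q:                      # flood-fill one component
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pixels]; xs = [p[1] for p in pixels]
                instances.append({"area": len(pixels),
                                  "bbox": (min(xs), min(ys), max(xs), max(ys))})
    return instances
```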

  • PDF

MediaPipe Framework를 이용한 얼굴과 손의 경혈 판별을 위한 Computer Vision 접근법 (A Computer Vision Approach for Identifying Acupuncture Points on the Face and Hand Using the MediaPipe Framework)

  • 하디;이명기;이병일
    • Korea Information Processing Society (KIPS): Conference Proceedings
    • /
    • Proceedings of the KIPS Fall Conference, 2023
    • /
    • pp.563-565
    • /
    • 2023
  • Acupuncture and acupressure apply needles or pressure to anatomical points for therapeutic benefit. The over 350 mapped acupuncture points in the human body can each treat various conditions, but anatomical variations make precisely locating these acupoints difficult. We propose a computer vision technique using the real-time hand and face tracking capabilities of the MediaPipe framework to identify acupoint locations. Our model detects anatomical facial and hand landmarks, and then maps these to corresponding acupoint regions. In summary, our proposed model facilitates precise acupoint localization for self-treatment and enhances practitioners' abilities to deliver targeted acupuncture and acupressure therapies.
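
Mapping tracked landmarks to acupoint regions can be as simple as interpolating between landmark pairs. A toy sketch: the acupoint table, landmark indices, and interpolation ratio below are illustrative assumptions, not clinically validated mappings (MediaPipe Hands indices 2 and 5 sit near the thumb and index-finger bases, roughly bracketing the LI4 region):

```python
# Hypothetical acupoint table: name -> ((landmark_a, landmark_b), ratio).
ACUPOINTS = {
    "LI4": ((2, 5), 0.5),   # illustrative placement between two landmarks
}

def locate_acupoints(landmarks, table=ACUPOINTS):
    """Interpolate each acupoint between a pair of tracked landmark
    coordinates; `landmarks` maps index -> (x, y) in image space."""
    points = {}
    for name, ((a, b), t) in table.items():
        ax, ay = landmarks[a]
        bx, by = landmarks[b]
        points[name] = (ax + t * (bx - ax), ay + t * (by - ay))
    return points
```

In the proposed system, `landmarks` would be filled from MediaPipe's per-frame hand or face landmark output, so the acupoint estimates move with the subject in real time.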

Object Detection Using Deep Learning Algorithm CNN

  • S. Sumahasan;Udaya Kumar Addanki;Navya Irlapati;Amulya Jonnala
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 5
    • /
    • pp.129-134
    • /
    • 2024
  • Object detection is an emerging technology in the field of computer vision and image processing that deals with detecting objects of a particular class in digital images. It is considered one of the most complicated and challenging tasks in computer vision. Earlier, several machine learning-based approaches such as SIFT (scale-invariant feature transform) and HOG (histogram of oriented gradients) were widely used to classify objects in an image, with a support vector machine performing the classification. The biggest challenges with these approaches are that they are too computationally intensive for real-time applications and do not work well with massive datasets. To overcome these challenges, this paper implements a deep learning-based approach, the convolutional neural network (CNN). The proposed approach accurately detects objects in an image, highlighting each object's area with a bounding box along with its detection accuracy.
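
The CNN's basic operation is a learned convolution followed by a nonlinearity, with object location read out as a bounding box over strong activations. A toy numpy sketch in which a hand-set kernel stands in for learned weights:

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' convolution followed by ReLU -- one CNN layer, with the
    kernel playing the role of learned weights."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)   # ReLU activation

def detect_bbox(feature_map, threshold):
    """Bounding box over activations above threshold, mirroring how a
    detector reports an object's location in the image."""
    ys, xs = np.nonzero(feature_map > threshold)
    if len(ys) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

A real CNN stacks many such layers with weights learned from data and adds a classification head; this sketch only shows why a convolutional feature map localizes objects naturally.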

컴퓨터 시각을 이용한 돼지 무게 예측시스템의 개발 (Development of a Pig's Weight Estimating System Using Computer Vision)

  • 엄천일;정종훈
    • Journal of Biosystems Engineering
    • /
    • Vol. 29, No. 3
    • /
    • pp.275-280
    • /
    • 2004
  • The main objective of this study was to develop and evaluate a model for estimating pig weight using computer vision, to improve management in Korean swine farms. This research was carried out in two steps: 1) finding a model that relates the projection area to the weight of a pig; 2) implementing the model in a computer vision system, consisting mainly of a monochrome CCD camera, a frame grabber, and a computer, for estimating the weight of pigs in a non-contact, real-time manner. The model was developed under the important assumption that there were no observable genetic differences among the pigs. The main results were: 1) the relationship between the projection area and the weight of pigs was W = 0.0569 × A − 32.585 (R² = 0.953), where W is the weight in kg and A is the projection area of a pig in cm²; 2) the model could estimate the weight of pigs with an error of less than 3.5%.
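
The fitted relationship is a one-line function of the projection area. A sketch applying it to an illustrative area value (the 2,500 cm² input is an example, not a figure from the paper):

```python
def estimate_pig_weight(projection_area_cm2):
    """Weight in kg from the top-view projection area in cm^2, using the
    paper's fitted linear model W = 0.0569 * A - 32.585."""
    return 0.0569 * projection_area_cm2 - 32.585

# Example: a pig whose projection area is 2,500 cm^2
w = estimate_pig_weight(2500.0)   # about 109.7 kg
```

In the deployed system, the projection area would come from thresholding and pixel-counting the camera image, calibrated from pixels to cm².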