• Title/Summary/Keyword: Hand Region Detection


Fast Human Detection Algorithm for High-Resolution CCTV Camera (고해상도 CCTV 카메라를 위한 빠른 사람 검출 알고리즘)

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.8 / pp.5263-5268 / 2014
  • This paper suggests a fast human detection algorithm that can be applied to a high-resolution CCTV camera. Human detection algorithms based on a HOG detector show high performance in image processing, but they are difficult to apply to real-time high-resolution video because extracting HOG features is slow. To resolve this problem, we suggest detecting humans in two stages: first, candidate human regions are found using background subtraction, and then the HOG detector is applied only to those candidates to distinguish humans from non-humans. This process increases the detection speed by approximately 2.5 times without any degradation in performance.
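
A minimal sketch of the two-stage idea described above, using OpenCV: a background subtractor proposes candidate foreground regions, and the comparatively slow HOG people detector runs only inside those regions. The subtractor parameters, minimum blob area, and padding are illustrative assumptions, not the paper's settings.

```python
import cv2

# Stage 1: background subtraction proposes candidate regions.
# Stage 2: the HOG detector runs only on those candidates.
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame, min_area=2000, pad=16):
    mask = bg.apply(frame)                                   # foreground mask
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    people = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        roi = frame[y0:y + h + pad, x0:x + w + pad]
        if roi.shape[0] < 128 or roi.shape[1] < 64:          # HOG window is 64x128
            continue
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        people += [(x0 + rx, y0 + ry, rw, rh) for rx, ry, rw, rh in rects]
    return people
```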

Design of Computer Vision Interface by Recognizing Hand Motion (손동작 인식에 의한 컴퓨터 비전 인터페이스 설계)

  • Yun, Jin-Hyun;Lee, Chong-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.3 / pp.1-10 / 2010
  • As various interfacing devices for computing machines are being developed, a new HCI method using hand motion input is introduced. This interface is a vision-based approach that uses a single camera to detect and track hand movements. In previous research, only skin color was used to detect and track the hand location; in our design, skin color and shape information are considered together, which improves hand detection. We propose a primary orientation edge descriptor to obtain edge information. The method uses only one hand model, so no training time is required. The system consists of a detection part and a tracking part for efficient processing, and the tracking part is quite robust to the orientation of the hand. The system is applied to recognizing handwritten numbers in script style using the DNAC algorithm. The proposed algorithm reaches an 82% recognition ratio in detecting the hand region and 90% in recognizing handwritten numbers in script style.
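
The sketch below illustrates, under stated assumptions, how skin color and edge information might be combined as the abstract describes: a YCrCb skin mask isolates candidate pixels, and a plain gradient-orientation histogram, compared against a single hand template with no training stage, stands in for the paper's primary orientation edge descriptor (whose exact definition is not reproduced here). The color bounds and matching threshold are placeholders.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    # Commonly used YCrCb skin bounds; an assumption, not the paper's values.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def orientation_histogram(gray, bins=9):
    # Generic gradient-orientation histogram weighted by gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 360), weights=mag)
    return hist / (hist.sum() + 1e-6)

def matches_hand(bgr, template_hist, threshold=0.2):
    """Compare the edge-orientation histogram of the skin region
    against a single hand template (no training stage)."""
    mask = skin_mask(bgr)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=mask)
    hist = orientation_histogram(gray)
    return np.abs(hist - template_hist).sum() < threshold
```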

Presentation control of a computer using hand motion identification rules (손동작 식별 규칙을 이용한 컴퓨터의 프레젠테이션 제어)

  • Lee, Kyu-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.9 / pp.1172-1178 / 2018
  • A system that controls computer presentations using hand motion recognition and identification is proposed. The system recognizes and identifies various types of hand motion and controls the presentation without additional control devices. To recognize hand movements, it performs face and hand region detection: the facial area is detected using a Haar classifier, and the hand region is extracted from skin color information in the HSV color model. The face area is used to determine the beginning and end of hand gestures, as well as the size and direction of motion. Various hand gestures are recognized and used to control presentations according to the proposed hand motion identification rules, which set horizontal and vertical axes from the face area. A 97.2% recognition rate was obtained in about 1,200 hand motion recognition experiments, confirming that the proposed algorithm is valid for presentation control.
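
A minimal sketch of the face and hand detection step described above, using OpenCV: a Haar cascade detects the face, and an HSV skin-color threshold proposes hand regions. The HSV bounds, the minimum blob size, and the bundled cascade file are assumptions for illustration, not the paper's settings.

```python
import cv2

# Haar cascade for the face; HSV skin threshold for hand candidates.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_and_hands(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))    # rough skin range
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep sizable skin blobs that do not overlap a detected face.
    hands = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 1500:
            continue
        if any(x < fx + fw and fx < x + w and y < fy + fh and fy < y + h
               for fx, fy, fw, fh in faces):
            continue
        hands.append((x, y, w, h))
    return faces, hands
```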

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.664-667 / 2009
  • There is a strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from the images captured by a color camera mounted above the HMD. Since skin color can vary widely due to outdoor lighting effects, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
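
The sketch below illustrates the block-linking idea from the abstract: the image is split into blocks, each block is represented by its average color, and adjacent blocks with similar averages are merged into contiguous candidate regions, with no fixed skin-color space. The block size and similarity threshold are illustrative assumptions, and the finer block subdivision at region edges is omitted.

```python
import cv2
import numpy as np

def link_blocks(bgr, block=16, max_dist=20.0):
    h, w = bgr.shape[:2]
    bh, bw = h // block, w // block
    # INTER_AREA resize approximates per-block average colors.
    avg = cv2.resize(bgr, (bw, bh), interpolation=cv2.INTER_AREA).astype(np.float32)

    labels = -np.ones((bh, bw), dtype=np.int32)
    next_label = 0
    for y in range(bh):
        for x in range(bw):
            if labels[y, x] >= 0:
                continue
            labels[y, x] = next_label
            stack = [(y, x)]
            while stack:                      # flood-fill over similar blocks
                cy, cx = stack.pop()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < bh and 0 <= nx < bw and labels[ny, nx] < 0 \
                       and np.linalg.norm(avg[cy, cx] - avg[ny, nx]) < max_dist:
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels   # contiguous block regions, to be screened as hand candidates
```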

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increase in demand for mobile augmented reality requires the development of efficient interaction technologies between the augmented virtual object and the user. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand is used as the interface of the marker-less mobile augmented reality system. To implement the marker-less augmented system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. The optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the Rotating Calipers algorithm; the extracted rectangle takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
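
A minimal sketch of the hand-as-marker step described above, using OpenCV: a YCbCr skin-color threshold isolates the hand, and the minimum-area bounding rectangle of the largest skin contour (cv2.minAreaRect, which relies on the rotating-calipers idea) stands in for a traditional AR marker. The skin-color bounds are illustrative assumptions.

```python
import cv2
import numpy as np

def hand_marker_rect(bgr):
    # Skin segmentation in YCrCb, then the minimum-area rectangle of the
    # largest skin contour plays the role of the marker.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    rect = cv2.minAreaRect(hand)          # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)         # 4 corners: pose for the virtual object
    return np.intp(corners)
```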

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.76-85 / 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has been improved by deep learning approaches such as the Region-based Convolutional Neural Network (R-CNN), which uses two stages for inference. On the other hand, one-stage detection algorithms such as the single-shot detector (SSD) and You Only Look Once (YOLO) have been developed at the expense of some accuracy and can be used for real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller sub-frames and then run inference on them with a Convolutional Neural Network (CNN) based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduced the computational requirement significantly without losing throughput or object detection accuracy.
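
A minimal sketch of the sub-frame analysis idea: the frame is split into overlapping tiles, a detector runs per tile, and the resulting boxes are shifted back into full-frame coordinates. Here detect_fn is a placeholder for any CNN detector returning (x, y, w, h, score) boxes; the 2x2 grid and overlap are assumptions, not the paper's configuration.

```python
# Split the frame into sub-frames, run the (small-input) detector per tile,
# and map detections back to full-frame coordinates.
def detect_by_subframes(frame, detect_fn, rows=2, cols=2, overlap=32):
    h, w = frame.shape[:2]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            y0 = max(r * h // rows - overlap, 0)
            y1 = min((r + 1) * h // rows + overlap, h)
            x0 = max(c * w // cols - overlap, 0)
            x1 = min((c + 1) * w // cols + overlap, w)
            for (bx, by, bw, bh, score) in detect_fn(frame[y0:y1, x0:x1]):
                boxes.append((bx + x0, by + y0, bw, bh, score))
    return boxes   # optionally followed by non-maximum suppression across tiles
```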

Deep Window Detection in Street Scenes

  • Ma, Wenguang;Ma, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.855-870 / 2020
  • Windows are key components of building facades. Detecting windows, which is crucial to 3D semantic reconstruction and scene parsing, is a challenging task in computer vision. Early methods try to solve window detection using hand-crafted features and traditional classifiers. However, these methods are unable to handle the diversity of window instances in real scenes and suffer from heavy computational costs. Recently, object detection algorithms based on convolutional neural networks have attracted much attention due to their good performance. Unfortunately, directly training them for challenging window detection cannot achieve satisfactory results. In this paper, we propose an approach for window detection. It involves an improved Faster R-CNN architecture featuring a window region proposal network, RoI feature fusion, and a context enhancement module. In addition, a post-optimization process based on the regular distribution of windows is designed to refine the detection results obtained by the improved deep architecture. Furthermore, we present a newly collected dataset, which is the largest one for window detection in real street scenes to date. Experimental results on both existing datasets and the new dataset show that the proposed method has outstanding performance.
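
For orientation only, the sketch below sets up a stock torchvision Faster R-CNN with a single window class; it is a plain baseline, not the improved architecture described above (the window region proposal network, RoI feature fusion, context enhancement, and regularity-based post-optimization are not implemented), and the model would still need to be trained on a window dataset.

```python
import torch
import torchvision

# Plain baseline: torchvision Faster R-CNN configured for one "window" class
# plus background. The weights here are untrained.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

def detect_windows(image_rgb01, score_thr=0.5):
    """image_rgb01: HxWx3 float array scaled to [0, 1]."""
    x = torch.as_tensor(image_rgb01).permute(2, 0, 1).float()
    with torch.no_grad():
        out = model([x])[0]                      # dict with boxes, labels, scores
    keep = out["scores"] > score_thr
    return out["boxes"][keep], out["scores"][keep]
```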

Robust Skin Area Detection Method in Color Distorted Images (색 왜곡 영상에서의 강건한 피부영역 탐지 방법)

  • Hwang, Daedong;Lee, Keunsoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.350-356 / 2017
  • With increasing attention to real-time body detection, active research is being conducted on human body detection based on skin color. Despite this, most existing skin detection methods utilize static skin color models and show low detection rates in images in which the colors are distorted. This study proposes a method of detecting the skin region using a fuzzy classification of a gradient map, saturation, and Cb and Cr in the YCbCr space. The proposed method first creates a gradient map, followed by a saturation map, a CbCr map, fuzzy classification, and skin region binarization, in that order. The focus of this method is to robustly detect human skin regardless of lighting, race, age, and individual differences by using features other than color alone. However, the borders between skin and non-skin regions in these features are unclear. To solve this problem, membership functions were defined by analyzing the relationships among the gradient, saturation, and color features, and 108 fuzzy rules were generated. The detection accuracy of the proposed method was 86.35%, which is 2~5% better than conventional methods.
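
The toy sketch below shows the shape of a fuzzy skin classification in the spirit of the abstract: triangular membership functions over gradient magnitude, saturation, and Cr are combined with a fuzzy AND (minimum). The breakpoints and the single example rule are made-up placeholders; none of the paper's 108 rules are reproduced.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    x = np.asarray(x, dtype=np.float32)
    left = np.clip((x - a) / max(b - a, 1e-6), 0, 1)
    right = np.clip((c - x) / max(c - b, 1e-6), 0, 1)
    return np.minimum(left, right)

def skin_likelihood(grad, sat, cr):
    # Example rule: "low gradient AND medium saturation AND skin-like Cr -> skin".
    low_grad = tri(grad, -1, 0, 40)
    mid_sat = tri(sat, 30, 100, 200)
    skin_cr = tri(cr, 130, 150, 175)
    return np.minimum(np.minimum(low_grad, mid_sat), skin_cr)

# Binarization of the skin region: skin_mask = skin_likelihood(...) > 0.5
```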

Finger Detection using a Distance Graph (거리 그래프를 이용한 손가락 검출)

  • Song, Ji-woo;Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.10 / pp.1967-1972 / 2016
  • This paper defines a distance graph for a hand region in a depth image and proposes a finger detection algorithm based on it. The distance graph expresses the hand contour as angles and Euclidean distances between the center of the palm and the contour points. Since the distance graph has local maxima at the fingertip positions, the finger points can be detected and counted. The hand contour is always divided into 360 angles, and the angles are aligned with the center of the wrist as the starting point, so the proposed algorithm can detect fingers well regardless of the size and orientation of the hand. Under some limited test conditions, the results show a recognition rate of 100% for 1~3 fingers and 98% for 4~5 fingers, and the failure cases could also be recognized by adding simple conditions.
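
A minimal sketch of the distance graph: for each of 360 angle bins around the palm center, the farthest contour point is recorded, and sufficiently prominent local maxima of that profile are taken as fingertip candidates. The prominence ratio is an illustrative assumption, and the wrist-based alignment of the starting angle is omitted.

```python
import numpy as np

# Distance graph: max contour distance per 1-degree angle bin around the palm
# center; local maxima above min_ratio * mean distance are fingertip candidates.
def fingertips_from_contour(contour_xy, palm_center, min_ratio=1.3):
    d = np.asarray(contour_xy, dtype=np.float32) - np.asarray(palm_center, dtype=np.float32)
    dist = np.hypot(d[:, 0], d[:, 1])
    ang = (np.degrees(np.arctan2(d[:, 1], d[:, 0])) + 360) % 360

    profile = np.zeros(360, dtype=np.float32)            # the distance graph
    for a, r in zip(ang.astype(int), dist):
        profile[a] = max(profile[a], r)

    nonzero = profile[profile > 0]
    if nonzero.size == 0:
        return []
    mean_r = nonzero.mean()

    tips = []
    for i in range(360):
        prev, nxt = profile[(i - 1) % 360], profile[(i + 1) % 360]
        if profile[i] >= prev and profile[i] > nxt and profile[i] > min_ratio * mean_r:
            tips.append(i)                                # angle bins of fingertips
    return tips
```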

Real-Time License Plate Detection Based on Faster R-CNN (Faster R-CNN 기반의 실시간 번호판 검출)

  • Lee, Dongsuk;Yoon, Sook;Lee, Jaehwan;Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.511-520 / 2016
  • Automatic License Plate Detection (ALPD) is a key technology for efficient traffic control. It is used to improve work efficiency in many applications such as toll payment systems and parking and traffic management. Until recently, most studies used hand-crafted image-processing features to detect license plates. These features have an advantage in speed, but their detection rate can degrade under various environmental changes. In this paper, we propose a method that combines a Faster Region-based Convolutional Neural Network (Faster R-CNN) and a conventional Convolutional Neural Network (CNN), which improves the computational speed and is robust against environmental changes. The module based on Faster R-CNN detects license plate candidate regions from images and is followed by the CNN-based module, which removes false positives from the candidates. As a result, we achieved a detection rate of 99.94% from images captured under various environments, with an average processing speed of 80 ms/image, implementing a fast and robust real-time license plate detection system.
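
The sketch below mirrors the two-stage structure described above under stated assumptions: a candidate detector (here a placeholder callable standing in for the Faster R-CNN module) proposes plate regions, and a small CNN classifier rejects false positives among the crops. The verifier architecture, crop size, and threshold are illustrative, and both stages would of course need training.

```python
import torch
import torch.nn as nn

# Small verification CNN: classifies 32x64 crops as plate / not-plate.
verifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 16, 2),        # logits for 3x32x64 input crops
)

def filter_false_positives(image, candidate_boxes, crop_fn, thr=0.5):
    """candidate_boxes come from the detection stage; crop_fn(image, box)
    must return a 3x32x64 float tensor in [0, 1]."""
    kept = []
    verifier.eval()
    with torch.no_grad():
        for box in candidate_boxes:
            crop = crop_fn(image, box).unsqueeze(0)
            prob_plate = torch.softmax(verifier(crop), dim=1)[0, 1].item()
            if prob_plate > thr:
                kept.append(box)
    return kept
```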