• Title/Summary/Keyword: Camera-based Recognition


Implementation of Hand-Gesture-Based Augmented Reality Interface on Mobile Phone (휴대폰 상에서의 손동작 기반 증강현실 인터페이스 구현)

  • Choi, Jun-Yeong;Park, Han-Hoon;Park, Jung-Sik;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.16 no.6
    • /
    • pp.941-950
    • /
    • 2011
  • With recent advances in the performance of mobile phones, many effective interfaces for them have been proposed. This paper implements a hand-gesture- and vision-based interface on a mobile phone. It assumes a natural interaction scenario in which the user holds the phone in one hand and views the other hand's palm through the phone's camera; a virtual object is then rendered on the palm and reacts to hand and finger movements. Since the interface relies only on the hand, which is familiar to every user, and requires no additional sensors or markers, users can interact freely with the virtual object anytime and anywhere without training. The implemented interface ran at 5 fps on a mobile phone (a Galaxy S2 with a dual-core processor).
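The palm-detection step such an interface needs can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the HSV skin thresholds are illustrative, and the "image" is a simple pixel dictionary rather than a camera frame.

```python
# Hedged sketch: locating a palm region by skin-color thresholding,
# a common first step in camera-based hand interfaces.
# Thresholds below are illustrative assumptions, not from the paper.

def is_skin(h, s, v):
    """Rough skin test in HSV space (h in [0, 360), s and v in [0, 1])."""
    return (h < 50 or h > 340) and 0.2 < s < 0.7 and v > 0.3

def palm_bounding_box(pixels):
    """pixels: dict {(x, y): (h, s, v)}. Returns the bounding box of
    skin-colored pixels as (xmin, ymin, xmax, ymax), or None if no
    skin pixel is found."""
    skin = [(x, y) for (x, y), hsv in pixels.items() if is_skin(*hsv)]
    if not skin:
        return None
    xs = [x for x, _ in skin]
    ys = [y for _, y in skin]
    return (min(xs), min(ys), max(xs), max(ys))
```

A virtual object would then be anchored to this box and updated every frame as the fingers move.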

A Study on Object Control in Mobile Augmented Reality Using Indoor Location Based Service (실내 위치기반 서비스를 이용한 모바일 증강현실에서의 객체 제어에 관한 연구)

  • Yoon, Chang-Pyo;Lee, Hae-Jun;Lee, Dae-Sung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.288-290
    • /
    • 2017
  • Recently, interest in and demand for Augmented Reality (AR) content have been increasing across AR application fields. When AR content is served outdoors, position information from GPS signals is generally used to control how objects are displayed on the AR screen, or a marker based on the object's image is used. Indoors, however, GPS location information is unavailable, and a marker-only service is unreliable: marker recognition becomes unstable when moving obstacles pass nearby, and information on the AR screen is not fixed at a specific position but drifts with camera movement. This paper studies an object control method that displays objects on the AR screen by combining iBeacon-based indoor location recognition with specific markers.
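The iBeacon side of such a system typically reduces to estimating distance from received signal strength and picking the nearest beacon. A minimal sketch, assuming the standard log-distance path-loss model; the beacon names and calibration constants are illustrative, not from the paper:

```python
# Hedged sketch: choosing an indoor zone from iBeacon RSSI readings
# via the log-distance path-loss model.

def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Estimated distance in meters. tx_power is the measured RSSI
    at 1 m; n is the environment path-loss exponent (both are
    calibration assumptions)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_beacon(readings):
    """readings: dict {beacon_id: rssi}. Returns the beacon id with
    the smallest estimated distance, i.e. the likely current zone."""
    return min(readings, key=lambda b: rssi_to_distance(readings[b]))
```

The selected zone would then determine which AR objects to show, with the marker providing the precise on-screen anchoring.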


Research on Drivable Road Area Recognition and Real-Time Tracking Techniques Based on YOLOv8 Algorithm (YOLOv8 알고리즘 기반의 주행 가능한 도로 영역 인식과 실시간 추적 기법에 관한 연구)

  • Jung-Hee Seo
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.19 no.3
    • /
    • pp.563-570
    • /
    • 2024
  • This paper proposes a method to recognize and track drivable lane areas to assist the driver. The core is a deep-learning-based network that predicts drivable road areas, using computer vision on images acquired in real time from a camera mounted at the center of the windshield inside the vehicle. The study develops a new model trained on data obtained directly from such cameras using the YOLO algorithm. It is intended to assist driving by visualizing the vehicle's exact position on the road, consistent with the camera image, and by displaying and tracking the drivable lane area. In the experiments, the drivable road area could be tracked in most cases, but in bad weather, such as heavy rain at night, lanes were sometimes not recognized accurately, so further improvement of model performance is needed.
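The real-time tracking part of such a pipeline is often a lightweight association step on top of per-frame detections. A minimal sketch, assuming bounding-box IoU matching (the paper does not specify its tracker, so this is an illustrative stand-in):

```python
# Hedged sketch: associating a segmented drivable-road region across
# consecutive frames by bounding-box IoU.

def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_track(prev_box, detections, threshold=0.3):
    """Return the current-frame detection that best overlaps the
    previous frame's road box, or None if the track is lost
    (e.g. lanes missed in heavy rain at night)."""
    best = max(detections, key=lambda d: iou(prev_box, d), default=None)
    return best if best and iou(prev_box, best) >= threshold else None
```

Returning None models exactly the failure case the abstract reports: when detection fails in bad weather, the tracker loses the road area.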

Eyelid Detection Algorithm Based on Parabolic Hough Transform for Iris Recognition (홍채 인식을 위한 포물 허프 변환 기반 눈꺼풀 영역 검출 알고리즘)

  • Jang, Young-Kyoon;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.94-104
    • /
    • 2007
  • Iris recognition is a biometric technology that identifies a person by the user's unique iris pattern. Images captured by conventional iris recognition cameras often suffer from eyelid occlusion, which covers iris information. Because eyelids are extraneous information that degrades recognition performance, this paper proposes a robust eyelid detection algorithm. The work has three advantages over previous approaches. First, detected eyelashes and specular reflections, which act as noise when locating the eyelid, are removed by linear interpolation. Second, candidate eyelid points are detected with a mask inside a limited eyelid search area, determined by finding where the eyelid crosses the outer iris boundary; the eyelid is then detected by a parabolic Hough transform over these candidate points. Third, unlike previous work, which ignored eyelid rotation in the iris image, the proposed parabolic Hough transform includes a rotation factor. The algorithm was tested on the CASIA database, achieving detection accuracies of 90.82% for the upper eyelid and 96.47% for the lower eyelid.
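The parabola-fitting step above can be sketched compactly. This is a minimal, unrotated version under stated assumptions: each candidate point votes for parabolas y = a(x − h)² + k over illustrative parameter grids, and the rotation factor the paper adds is omitted for brevity.

```python
# Hedged sketch: a parabolic Hough transform over candidate eyelid
# points. The cell with the most votes gives the eyelid curve.
from collections import Counter

def parabolic_hough(points, a_values, h_values, tol=1.0):
    """points: list of (x, y) edge candidates. For every (a, h)
    pair, each point implies k = y - a*(x - h)**2; the quantized
    (a, h, k) triples accumulate votes, and the best-supported
    triple is returned."""
    votes = Counter()
    for x, y in points:
        for a in a_values:
            for h in h_values:
                k = round((y - a * (x - h) ** 2) / tol) * tol
                votes[(a, h, k)] += 1
    return votes.most_common(1)[0][0]
```

With points sampled from a true parabola, all votes concentrate in one accumulator cell, which is what makes the method robust to the residual eyelash noise the paper interpolates away.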

Design of Optimized pRBFNNs-based Face Recognition Algorithm Using Two-dimensional Image and ASM Algorithm (최적 pRBFNNs 패턴분류기 기반 2차원 영상과 ASM 알고리즘을 이용한 얼굴인식 알고리즘 설계)

  • Oh, Sung-Kwun;Ma, Chang-Min;Yoo, Sung-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.6
    • /
    • pp.749-754
    • /
    • 2011
  • This study proposes the design of an optimized pRBFNNs-based face recognition system using two-dimensional images and the ASM algorithm. Existing 2-D face recognition methods are sensitive to image scale changes, position variation, and background; here, face region information obtained from the detected face region compensates for these defects. A CCD camera acquires picture frames directly, and histogram equalization partially enhances images distorted by natural or artificial illumination. The AdaBoost algorithm separates face from non-face image areas. A personal profile is built by extracting the face contour and shape with ASM (Active Shape Model), and the dimensionality of the image data is then reduced with PCA. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering; in the conclusion part, the connection weights of the RBFNNs are represented by three kinds of polynomials: constant, linear, and quadratic. The essential design parameters of the networks (learning rate, momentum coefficient, and fuzzification coefficient) are optimized by Differential Evolution. The proposed pRBFNNs are applied to a real-time face image database and evaluated in terms of output performance and recognition rate.
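The histogram-equalization preprocessing step mentioned above is simple enough to sketch in full. A minimal version, assuming the image is a flat list of 8-bit gray levels; the paper's full pipeline (AdaBoost, ASM, PCA, pRBFNNs) is of course far larger.

```python
# Hedged sketch: histogram equalization to compensate for uneven
# natural or artificial illumination before face detection.

def equalize(gray, levels=256):
    """Map each gray level through the normalized cumulative
    histogram so output intensities spread over the full range."""
    n = len(gray)
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # constant image: nothing to equalize
        return gray[:]
    return [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1))
            for g in gray]
```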

Design of a Visitor Identification System for the Front Door of an Apartment using Deep Learning (딥러닝 기반 이용한 공동주택현관문의 출입자 식별 시스템 설계)

  • Lee, Min-Hye;Mun, Hyung-Jin
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.4
    • /
    • pp.45-51
    • /
    • 2022
  • Fear of contact exists because of efforts to prevent the spread of infectious diseases such as COVID-19. With a typical common entrance door of an apartment building, entry is possible only if a resident enters a password or grants permission, which is inconvenient; COVID-19 also makes contactless entry desirable. With advances in ICT, face recognition and voice recognition technology can now identify users easily. The proposed method detects a visitor's face through a CCTV camera or a camera attached to the common entrance door, recognizes the face, and identifies the visitor as a registered resident. Based on the resident's registered information, the door can then operate contactlessly in conjunction with the elevator via the server. In particular, if face recognition fails because of a hat or mask, the visitor is identified by voice, or additional authentication of the visitor is performed based on a voice message. The system thus enables entry and exit through the front door of an apartment building without contact, without leaving fingerprint information, and without the usual inconvenience of access.

A Study on Recognition of Moving Object Crowdedness Based on Ensemble Classifiers in a Sequence (혼합분류기 기반 영상내 움직이는 객체의 혼잡도 인식에 관한 연구)

  • An, Tae-Ki;Ahn, Seong-Je;Park, Kwang-Young;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.2A
    • /
    • pp.95-104
    • /
    • 2012
  • Pattern recognition with ensemble classifiers builds a strong classifier from many weak classifiers. In this paper, features extracted from a static camera sequence are used to organize such a strong classifier. Because each weak classifier accounts for environmental factors, the combined strong classifier overcomes environmental effects. The proposed method obtains a binary foreground image by frame differencing, and boosting is used to train a crowdedness model and to recognize crowdedness from the features. The combination of weak classifiers yields a strong ensemble classifier that can exploit environment-dependent cues such as shadow and reflection. The proposed system was tested on a road sequence and a subway platform sequence from the AVSS 2007 dataset; the results show good accuracy and efficiency in complex environments.
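The frame-differencing front end described above can be sketched directly. A minimal illustration under stated assumptions: frames are flat lists of gray values from a static camera, and the threshold is an illustrative constant, not the paper's.

```python
# Hedged sketch: binary foreground by frame differencing, plus a
# crude crowdedness feature a boosted classifier could consume.

def foreground_mask(prev_frame, curr_frame, threshold=25):
    """1 where the pixel changed by more than `threshold`, else 0."""
    return [1 if abs(c - p) > threshold else 0
            for p, c in zip(prev_frame, curr_frame)]

def crowdedness(mask):
    """Fraction of moving pixels in the binary foreground mask."""
    return sum(mask) / len(mask)
```

In the paper's setting, features like this (computed per region, per frame) would feed the boosting stage that learns the crowdedness model.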

Real-Time Object Recognition Using Local Features (지역 특징을 사용한 실시간 객체인식)

  • Kim, Dae-Hoon;Hwang, Een-Jun
    • Journal of IKEEE
    • /
    • v.14 no.3
    • /
    • pp.224-231
    • /
    • 2010
  • Automatic detection of objects in images has been one of the core challenges in areas such as computer vision and pattern analysis. With the recent spread of personal mobile devices such as smartphones, this technology needs to be ported to them. Smartphones are equipped with a camera, GPS, and gyroscope and provide various services through user-friendly interfaces, but their limited system resources prevent high recognition performance. This paper proposes a new scheme that improves object recognition performance through pre-computation and simple local features. In preprocessing, several representative parts are first found from objects of similar type and classified; features are then extracted from each classified part and trained with regression functions. For a given query image, candidate representative parts are found and compared with the trained information to recognize objects. Experiments show that the proposed scheme achieves reasonable performance.
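The "compare candidate parts with trained information" step reduces to nearest-neighbor matching against a precomputed part database. A minimal sketch; the part names, descriptors, and distance cutoff are illustrative assumptions, not the paper's trained regression models.

```python
# Hedged sketch: matching a query descriptor against precomputed
# representative-part descriptors by Euclidean distance.

def dist(a, b):
    """Euclidean distance between two equal-length descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize(query, part_db, max_dist=1.0):
    """part_db: dict {part_name: descriptor}. Returns the closest
    part name, or None when nothing is near enough to count as a
    match."""
    best = min(part_db, key=lambda name: dist(query, part_db[name]))
    return best if dist(query, part_db[best]) <= max_dist else None
```

Because the database is built offline, the per-query cost on the phone is just a handful of distance computations, which is the point of the pre-computation.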

Hand-Gesture-Based Pet Robot Control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.4
    • /
    • pp.145-154
    • /
    • 2008
  • This paper proposes a pet robot control system that uses hand gesture recognition on image sequences acquired from a camera affixed to the robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected from the input images using a skin color model in HSI color space and connected component analysis. Next, hand shape and motion features are extracted from the image sequences, with hand shape used to classify meaningful gestures. The hand gesture is then recognized using HMMs (hidden Markov models), whose input is the quantized symbol sequence derived from the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. Four commands are defined for the robot: sit down, stand up, lie flat, and shake hands. Experiments show that a user can control the pet robot through the proposed system.
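The HMM recognition step above amounts to scoring the quantized symbol sequence under each gesture's model with the forward algorithm and picking the best. A minimal sketch; the toy two-state models and gesture names in the test are illustrative, not the paper's trained models.

```python
# Hedged sketch: HMM forward algorithm over a quantized hand-motion
# symbol sequence, followed by maximum-likelihood gesture selection.

def forward(seq, pi, A, B):
    """P(seq | model) for an HMM with start probabilities pi,
    transition matrix A, and emission matrix B (B[state][symbol])."""
    states = range(len(pi))
    alpha = [pi[s] * B[s][seq[0]] for s in states]
    for sym in seq[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in states) * B[s][sym]
                 for s in states]
    return sum(alpha)

def recognize_gesture(seq, models):
    """models: dict {gesture: (pi, A, B)}. Returns the gesture
    whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda g: forward(seq, *models[g]))
```

The recognized gesture name would then be mapped to one of the four robot commands (sit down, stand up, lie flat, shake hands).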


A Study on Hand-signal Recognition System in 3-dimensional Space (3차원 공간상의 수신호 인식 시스템에 대한 연구)

  • Jang, Hyo-Young;Kim, Dae-Jin;Kim, Jung-Bae;Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.103-114
    • /
    • 2004
  • This paper deals with a system capable of recognizing hand signals in 3-dimensional space, using two color cameras as input devices. Vision-based gesture recognition is known to be user-friendly because it is contact-free, but, as with other camera-based applications, it faces difficulties with complex backgrounds and varying illumination. To detect the hand region robustly from an input image under various conditions, without special gloves or markers, the paper uses previous position information and an adaptive hand color model. A hand signal is defined as a combination of two basic elements, hand pose and hand trajectory. For extensive hand-pose classification, the paper proposes a 2-stage classification method based on a 'small group concept', and also suggests a complementary feature selection method using the images from the two color cameras. The method is verified with a hand-signal application in our driving simulator.
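The 2-stage "small group" idea can be sketched as a coarse-then-fine classifier: a cheap feature first narrows the search to one group of poses, then matching runs only inside that group. The grouping feature (finger count), pose names, and templates below are illustrative assumptions, not the paper's actual features.

```python
# Hedged sketch: two-stage hand-pose classification in the spirit
# of a "small group concept" — select a group coarsely, then match
# finely within it.

def classify_pose(feature, groups):
    """feature: (finger_count, angle). groups: dict mapping a
    finger count to {pose_name: template_angle}. Stage 1 selects
    the group by finger count; stage 2 picks the nearest template
    angle inside that group. Returns None for an unknown group."""
    fingers, angle = feature
    group = groups.get(fingers)
    if group is None:
        return None
    return min(group, key=lambda pose: abs(group[pose] - angle))
```

Restricting stage 2 to a small group is what keeps the fine matching tractable as the pose vocabulary grows.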