• Title/Summary/Keyword: vision-based technology


A Study on Vision Based Gesture Recognition Interface Design for Digital TV (동작인식기반 Digital TV인터페이스를 위한 지시동작에 관한 연구)

  • Kim, Hyun-Suk;Hwang, Sung-Won;Moon, Hyun-Jung
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.257-268
    • /
    • 2007
  • The development of human-computer interfaces has relied on the development of the underlying technology. Mice and keyboards are the most popular HCI devices for personal computing. However, such device-based interfaces differ greatly from human-to-human interaction and are quite artificial. Developing more intuitive interfaces that mimic human-to-human communication has been a major research topic among HCI researchers and engineers. At the same time, TV technology has developed rapidly, the market penetration of large-screen TVs has increased sharply, and HDTV and digital TV broadcasting are being tested. These changes in the TV environment call for corresponding changes in the human-to-TV interface. A gesture-recognition interface built on a computer vision system can replace the remote-control interface because of its immediacy and intuitiveness. This research focuses on how people use their hands and arms for command gestures. A set of gestures for controlling a TV set was sampled through focus group interviews and surveys. The results of this paper can be used as a reference for designing a computer-vision-based TV interface.


3D geometric model generation based on a stereo vision system using random pattern projection (랜덤 패턴 투영을 이용한 스테레오 비전 시스템 기반 3차원 기하모델 생성)

  • Na, Sang-Wook;Son, Jeong-Soo;Park, Hyung-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2005.05a
    • /
    • pp.848-853
    • /
    • 2005
  • 3D geometric modeling of an object of interest has been intensively investigated in many fields, including CAD/CAM and computer graphics. Traditionally, CAD and geometric modeling tools are widely used to create geometric models that have nearly the same shape as real 3D objects or that satisfy the designer's intent. Recently, with the help of reverse engineering (RE) technology, we can acquire 3D point data from such objects and create geometric models that fit the scanned data more easily and quickly. In this paper, we present 3D geometric model generation based on a stereo vision system (SVS) using random pattern projection, with a triangular mesh as the resulting geometric model. To obtain reasonable results, the SVS-based model generation involves several steps: camera calibration, stereo matching, scanning from multiple views, noise handling, registration, and triangular mesh generation. To achieve reliable stereo matching, we project random patterns onto the object. Based on experiments with various random patterns, we propose several tips for improving the quality of the results, and examples are given to show their usefulness.

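Random-pattern projection works because it gives every image window a locally unique texture, so window-based correspondence has an unambiguous best match even on otherwise featureless surfaces. A minimal sketch of that idea (pure NumPy, not the authors' SVS pipeline; the one-row "scanline", window size, and disparity range are illustrative assumptions), recovering a known shift with a sum-of-squared-differences search:

```python
import numpy as np

rng = np.random.default_rng(0)

# A textureless surface would make every window look alike; a projected
# random pattern gives each window a distinctive signature.
pattern = rng.random((1, 64))        # one randomly textured scanline
true_disparity = 5
left = pattern
right = np.roll(pattern, -true_disparity, axis=1)   # simulated second view

def match_disparity(left, right, x, win=4, max_d=10):
    """Return the disparity minimizing SSD between windows on one scanline."""
    ref = left[0, x - win:x + win + 1]
    costs = [np.sum((ref - right[0, x - d - win:x - d + win + 1]) ** 2)
             for d in range(max_d + 1)]
    return int(np.argmin(costs))
```

Because the pattern is random, the SSD cost has a single sharp minimum at the true disparity; with repetitive or flat texture the cost curve would have many near-equal minima.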

Relative Navigation for Autonomous Aerial Refueling Using Infra-red based Vision Systems (자동 공중급유를 위한 적외선 영상기반 상대 항법)

  • Yoon, Hyungchul;Yang, Youyoung;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.46 no.7
    • /
    • pp.557-566
    • /
    • 2018
  • In this paper, a vision-based relative navigation system for autonomous aerial refueling is addressed. In air-to-air refueling, it is assumed that the tanker carries the drogue and the receiver carries the probe. To obtain relative information from the drogue, vision-based imaging with an infrared camera is applied. The relative information is estimated using Gaussian Least Squares Differential Correction (GLSDC) and Levenberg-Marquardt (LM) optimization, where the drogue geometry obtained through image processing is used. The two approaches proposed in this paper are analyzed through numerical simulations.
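Both GLSDC and LM iteratively solve a (damped) nonlinear least-squares problem for the relative state. As a hedged illustration only (not the paper's measurement model; the marker layout and range-style measurements are invented for the sketch), a Levenberg-Marquardt iteration estimating a 2-D relative position from distances to known geometric markers looks like:

```python
import numpy as np

# Hypothetical "marker" locations (a stand-in for drogue geometry) and
# distances measured from the true relative position [3, 4].
markers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(markers - truth, axis=1)

def lm_estimate(x0, n_iter=50, lam=1e-3):
    """Levenberg-Marquardt with a fixed damping term lam."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        d = np.linalg.norm(markers - x, axis=1)
        r = d - ranges                      # measurement residuals
        J = (x - markers) / d[:, None]      # Jacobian of d_i w.r.t. x
        H = J.T @ J + lam * np.eye(2)       # damped normal equations
        x = x - np.linalg.solve(H, J.T @ r)
    return x
```

The damping term `lam` interpolates between Gauss-Newton (small `lam`, fast near the solution) and gradient descent (large `lam`, robust far from it); practical implementations adapt it per iteration.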

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.547-554
    • /
    • 2013
  • This paper proposes a novel formation algorithm for identical robots based on object tracking using omnidirectional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multiple robots often enlarge the camera's field of view by using a stereo vision system or a vision system with a reflector instead of a general-purpose camera, whose angle of view is small. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system instead estimates the regions of the robots by applying SURF to fisheye images, which contain 360° of image information, without merging images. The system then controls the robot formation based on the moving directions and velocities of the robots, which are obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy through both simulation and experiment.
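Lucas-Kanade optical flow rests on the brightness-constancy constraint Ix·u + Iy·v + It ≈ 0 and solves for the motion (u, v) by least squares over a window. A minimal single-window version on a synthetic image pair (pure NumPy; the sinusoidal image and one-pixel rightward translation are assumptions for the demo, not the paper's fisheye imagery):

```python
import numpy as np

# Synthetic pair: the whole scene translates one pixel to the right.
y, x = np.mgrid[0:64, 0:65].astype(float)
img = np.sin(x / 3.0) + np.cos(y / 4.0)
frame1, frame2 = img[:, 1:], img[:, :-1]    # frame2(x) = frame1(x - 1)

def lucas_kanade(f1, f2):
    """Single-window Lucas-Kanade: least-squares flow over the whole patch."""
    Iy, Ix = np.gradient(f1)                # spatial gradients (central diff)
    It = f2 - f1                            # temporal derivative
    # Crop borders where np.gradient falls back to one-sided differences.
    Ix, Iy, It = (a[2:-2, 2:-2].ravel() for a in (Ix, Iy, It))
    A = np.stack([Ix, Iy], axis=1)
    u, v = np.linalg.lstsq(A, -It, rcond=None)[0]
    return u, v
```

Real trackers solve this per feature window (and pyramidally for large motions); here one global window suffices because the whole frame moves rigidly.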

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.551-556
    • /
    • 2012
  • Objective: This study examines the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it builds on advanced sensor technology and allows users to interact naturally with their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while given the three types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback produces shorter motion traces, shorter times, and faster motion than small feedback when people perform gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, whereas smaller visual feedback is recommended for situations requiring elaborate actions.

Vision-based Predictive Model on Particulates via Deep Learning

  • Kim, SungHwan;Kim, Songi
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.5
    • /
    • pp.2107-2115
    • /
    • 2018
  • Over recent years, high concentrations of particulate matter (fine dust) in South Korea have evoked considerable public health concerns. It is intractable to track and report PM10 measurements to the public on a real-time basis; even worse, such records merely amount to averaged particulate concentrations over particular regions. Under these circumstances, people are at risk from rapidly dispersing air pollution. To address this challenge, we attempt to build a deep learning model that predicts the concentration of particulates (PM10). The proposed method learns a binary decision rule from video sequences to predict in real time whether the PM10 level is harmful (above 80 µg/m³) or not. To the best of our knowledge, no vision-based PM10 measurement method has previously been proposed in atmospheric research. In experimental studies, the proposed model outperforms existing algorithms by virtue of convolutional deep learning networks. We therefore believe this vision-based predictive model has strong potential to address upcoming challenges in particulate measurement.
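The core of the abstract is a binary decision rule over camera frames. The paper itself learns this rule with convolutional networks; as a toy stand-in only (the hand-crafted edge-energy feature, the threshold, and the synthetic frames are all assumptions, not the paper's method), a haze-sensitive rule can be sketched with a Laplacian contrast measure, since particulate haze suppresses high-frequency image content:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.random((32, 32))              # high-frequency "clear day" frame
hazy = np.full((32, 32), clean.mean())    # haze flattens contrast...
hazy = hazy + 0.05 * rng.random((32, 32)) # ...leaving only faint variation

def edge_energy(frame):
    """Mean magnitude of a 3x3 Laplacian response (high-frequency content)."""
    f = frame
    lap = (f[1:-1, :-2] + f[1:-1, 2:] + f[:-2, 1:-1] + f[2:, 1:-1]
           - 4.0 * f[1:-1, 1:-1])
    return np.abs(lap).mean()

def is_harmful(frame, threshold=0.3):
    """1 if the frame looks hazy (low edge energy), else 0."""
    return int(edge_energy(frame) < threshold)
```

A learned convolutional model replaces this single fixed feature with many trained filters, but the output contract is the same: frame in, harmful/not-harmful out.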

3D VISION SYSTEM FOR THE RECOGNITION OF FREE PARKING SITE LOCATION

  • Jung, H.G.;Kim, D.S.;Yoon, P.J.;Kim, J.H.
    • International Journal of Automotive Technology
    • /
    • v.7 no.3
    • /
    • pp.361-367
    • /
    • 2006
  • This paper describes a novel stereo-vision-based localization of free parking sites, which recognizes the target position for an automatic parking system. Pixel structure classification and feature-based stereo matching extract the 3D information of the parking site in real time. The pixel structure represents the intensity configuration around a pixel, and the feature-based stereo matching uses a step-by-step investigation strategy to reduce computational load. This paper considers only parking sites delimited by markings, which are generally drawn according to relevant standards. The parking site marking is separated by a plane-surface constraint and is transformed into a bird's-eye view, on which template matching is performed to determine the location of the parking site. An obstacle depth map, generated from the disparity of adjacent vehicles, can guide the template matching by limiting its search range and orientation. The proposed method, using both the obstacle depth map and the bird's-eye view of the parking site marking, increases operation speed and robustness to visual noise by effectively limiting the search range.
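The template-matching step on the bird's-eye view can be sketched as an exhaustive window search (NumPy; the synthetic slot marking, template, and image sizes are illustrative assumptions, not the paper's data or its cost function):

```python
import numpy as np

# Synthetic bird's-eye view: a bright parking-slot marking on dark ground.
scene = np.zeros((40, 60))
scene[12:20, 25:45] = 1.0                 # marking with top-left at (12, 25)
template = np.ones((8, 20))               # template matching the marking

def match_template(scene, tpl):
    """Exhaustive SSD search; returns the top-left (row, col) of the best fit."""
    th, tw = tpl.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(scene.shape[0] - th + 1):
        for c in range(scene.shape[1] - tw + 1):
            ssd = np.sum((scene[r:r + th, c:c + tw] - tpl) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The abstract's point about the obstacle depth map corresponds to shrinking the two `range(...)` loops (and the allowed orientations) to a region the depth map says is plausibly free, which is what buys the speed and noise robustness.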

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.164-170
    • /
    • 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors and employing a robust filter for inner edge detection is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner-edge-detection filter within the region of interest. The proposed algorithm was compared with an existing algorithm in terms of the lateral-offset-based warning alarm occurrence time, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate average lane detection rates of approximately 97% in daytime and 96% at night, and the embedded system processes approximately 12 frames per second.
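An inner-edge approach pairs the falling edge of the left lane marking with the rising edge of the right marking and derives the lateral offset as the distance between the image center and the lane center. A one-row sketch under assumed marking positions and an assumed difference filter (not the paper's filter or calibration):

```python
import numpy as np

# One image row: bright lane markings on dark asphalt.
row = np.zeros(200)
row[40:48] = 1.0       # left marking
row[150:158] = 1.0     # right marking

def inner_edges(row):
    """Inner edge pair: falling edge of the left mark, rising edge of the right."""
    diff = np.diff(row)
    rising = np.where(diff > 0.5)[0]    # dark -> bright transitions
    falling = np.where(diff < -0.5)[0]  # bright -> dark transitions
    center = len(row) // 2
    left_inner = falling[falling < center].max()   # closest falling edge on the left
    right_inner = rising[rising > center].min()    # closest rising edge on the right
    return int(left_inner), int(right_inner)

def lateral_offset(row):
    """Offset of the image center from the lane center, in pixels."""
    l, r = inner_edges(row)
    return (len(row) // 2) - (l + r) // 2
```

Using the inner edge pair (rather than marking centers) keeps the measurement stable when marking width varies, which is the property a departure-warning threshold depends on.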

Vision-based support in the characterization of superelastic U-shaped SMA elements

  • Casciati, F.;Casciati, S.;Colnaghi, A.;Faravelli, L.;Rosadini, L.;Zhu, S.
    • Smart Structures and Systems
    • /
    • v.24 no.5
    • /
    • pp.641-648
    • /
    • 2019
  • The authors investigate the feasibility of applying a vision-based displacement-measurement technique to the characterization of an SMA damper recently introduced in the literature. The experimental campaign tests a steel frame on a uniaxial shaking table driven by sinusoidal signals in the frequency range from 1 Hz to 5 Hz. Three different cameras are used to collect the images: an industrial camera and two commercial smartphones. The achieved results are compared, and the camera showing the better performance is then used to test the same frame after its base isolation. U-shaped shape-memory-alloy (SMA) elements are installed as dampers at the isolation level. The accelerations of the shaking table and of the frame base are measured by accelerometers, and markers are glued on these system components as well as along the U-shaped elements serving as dampers. The different phases of the test are discussed in an attempt to obtain as much information as possible on the behavior of the SMA elements. Several tests were carried out until the thinner U-shaped element failed.
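Vision-based displacement measurement of this kind typically reduces to tracking marker centroids across frames. A minimal sketch (the synthetic blob, its size, and the single-axis motion are assumptions; a real setup also calibrates pixels to physical units using known marker spacing):

```python
import numpy as np

def marker_centroid(frame, thresh=0.5):
    """Centroid (x, y) of the thresholded bright-marker blob."""
    ys, xs = np.nonzero(frame > thresh)
    return xs.mean(), ys.mean()

# Two frames with the marker displaced 3 px horizontally.
frame_a = np.zeros((30, 30)); frame_a[10:14, 10:14] = 1.0
frame_b = np.zeros((30, 30)); frame_b[10:14, 13:17] = 1.0

dx = marker_centroid(frame_b)[0] - marker_centroid(frame_a)[0]
```

Repeating this per frame of the shaking-table video yields the displacement time history at each glued marker, which is what gets compared against the accelerometer records.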

Review for vision-based structural damage evaluation in disasters focusing on nonlinearity

  • Sifan Wang;Mayuko Nishio
    • Smart Structures and Systems
    • /
    • v.33 no.4
    • /
    • pp.263-279
    • /
    • 2024
  • With the increasing diversity of internet media, video data have become more convenient and abundant, and video-based research has advanced rapidly in recent years owing to advantages such as noncontact, low-cost data acquisition, high spatial resolution, and simultaneity. In addition, structural nonlinearity extraction has attracted increasing attention as a tool for damage evaluation. This review summarizes recent developments and applications of video-based technology for structural nonlinearity extraction and damage evaluation. The most frequently used object-detection image and video databases are first summarized, followed by suggestions for obtaining video data of structural nonlinear damage events. Technologies for linear and nonlinear system identification based on video data are then discussed. Common nonlinear damage types in disaster events and the prevailing processing algorithms are reviewed in the section on structural damage evaluation using video data uploaded to online platforms. Finally, potential research directions are proposed to address the weaknesses of current video-based nonlinear extraction technology, including nonlinear extraction for spatial data (moving beyond one-dimensional time-series data), real-time detection, and visualization.