• Title/Summary/Keyword: Vision Based Sensor


Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering / v.24 no.2 / pp.159-166 / 1999
  • A system has been developed to continuously extract real 3-D geometric feature information from 2-D images of objects fed randomly on a conveyor. Two sets of structured laser lighting were utilized, and the laser structured-light projection image was acquired by the camera, triggered by the signal of a photo-sensor mounted on the conveyor. A camera coordinate calibration matrix, which transforms 2-D image coordinates into 3-D world space coordinates, was obtained using six known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift of the laser light and the height was generated; height estimated from this correlation showed a maximum error of 0.4 mm within the same 103 mm height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.

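The abstract above does not include the calibration procedure itself, but a 2-D image to 3-D world calibration matrix from six known points is conventionally estimated with the direct linear transform (DLT). The following is a minimal sketch of that standard technique, not the authors' code; the function names are illustrative:

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 camera projection matrix from >= 6 pairs of
    3-D world points and their 2-D image projections (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector with the smallest
    # singular value (the null space of A, up to scale).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, world_pt):
    """Apply the calibration matrix and normalize by depth."""
    x = P @ np.append(world_pt, 1.0)
    return x[:2] / x[2]
```

With exact correspondences, reprojecting the six calibration points through the recovered matrix returns the original image coordinates; on real data the residual corresponds to the calibration error reported in the abstract.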

Indoor Location and Pose Estimation Algorithm using Artificial Attached Marker (인공 부착 마커를 활용한 실내 위치 및 자세 추정 알고리즘)

  • Ahn, Byeoung Min;Ko, Yun-Ho;Lee, Ji Hong
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.240-251 / 2016
  • This paper presents a real-time indoor location and pose estimation method that utilizes simple artificial markers and image analysis techniques for the purpose of warehouse automation. Conventional indoor localization methods cannot work robustly in warehouses, where severe environmental changes usually occur due to the movement of stocked goods. To overcome this problem, the proposed framework places artificial markers with different interior patterns at predefined positions on the warehouse floor. The proposed algorithm obtains marker candidate regions from a captured image by a simple binarization and labeling procedure. It then extracts marker interior pattern information from each candidate region in order to decide whether the candidate region is a true marker. The extracted interior pattern information and the outer boundary of the marker are used to estimate the location and heading angle of the localization system. Experimental results show that the proposed localization method can provide performance almost equivalent to that of a conventional method using an expensive LIDAR sensor and the AMCL algorithm.
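The binarization-and-labeling step described above can be sketched in a few lines. This is a generic 4-connected component pass under assumed parameters (brightness threshold, minimum region area), not the authors' implementation:

```python
def binarize(img, thresh):
    """img: list of rows of grayscale values -> 0/1 mask."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

def label_regions(mask, min_area=4):
    """4-connected component labeling; returns bounding boxes
    (x0, y0, x1, y1) of candidate regions >= min_area pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:          # flood-fill one component
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pix) >= min_area:   # reject tiny noise blobs
                    ys = [p[0] for p in pix]
                    xs = [p[1] for p in pix]
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Each returned box is a marker candidate whose interior pattern would then be checked, as the abstract describes.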

Moving Target Indication using an Image Sensor for Small UAVs (소형 무인항공기용 영상센서 기반 이동표적표시 기법)

  • Yun, Seung-Gyu;Kang, Seung-Eun;Ko, Sangho
    • Journal of Institute of Control, Robotics and Systems / v.20 no.12 / pp.1189-1195 / 2014
  • This paper addresses a Moving Target Indication (MTI) algorithm that can be used for small Unmanned Aerial Vehicles (UAVs) equipped with image sensors. MTI is a system (or an algorithm) that detects moving objects. The principle of the MTI algorithm is to analyze the difference between successive image frames. It is difficult to detect moving objects in images recorded from dynamic cameras attached to moving platforms, such as UAVs flying at low altitudes over a variety of terrain, since the acquired images contain two motion components: camera motion and object motion. Therefore, the motion of independent objects can be obtained only after the camera motion is thoroughly compensated via proper manipulation. In this study, the camera motion effects are removed using Wiener-filter-based image registration, one of the non-parametric methods. In addition, an image pyramid structure is adopted to reduce the computational complexity for UAVs. We demonstrate the effectiveness of our method with experimental results on outdoor video sequences.
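The compensate-then-difference principle described above can be sketched as follows. Note that the registration here uses phase correlation as a simple stand-in for the paper's Wiener-filter-based registration, and the image-pyramid speedup is omitted:

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the global translation between two frames via
    phase correlation (a stand-in for Wiener-filter registration)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(cur)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-9              # keep phase only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                   # unwrap to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def moving_target_mask(prev, cur, thresh=30):
    """Compensate camera motion, then frame-difference: pixels that
    still differ belong to independently moving objects."""
    dy, dx = estimate_shift(prev, cur)
    registered = np.roll(prev, (-dy, -dx), axis=(0, 1))
    return np.abs(cur.astype(int) - registered.astype(int)) > thresh
```

After the global (camera) motion is cancelled, only independently moving objects survive the difference, which is the core of the MTI idea in the abstract.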

A Proposal of the Olfactory Information Presentation Method and Its Application for Scent Generator Using Web Service

  • Kim, Jeong-Do;Byun, Hyung-Gi
    • Journal of Sensor Science and Technology / v.21 no.4 / pp.249-255 / 2012
  • Among the human senses, olfactory information still lacks a proper data presentation method, unlike vision and auditory information. This makes it impossible to present the sense of smell as multimedia information, which may be an exploratory field in human-computer interaction. In this paper, we propose an olfactory information presentation method, which is a way to use smell as multimedia information, and show an application for scent generation and odor display using a web service. The olfactory information can present smell characteristics such as intensity, persistence, hedonic tone, and odor description. The structure of the data format based on olfactory information can also be organized according to data types such as integer, float, char, string, and bitmap. Furthermore, it can be used for data transmission via a web service and for odor display using a scent generator. The scent generator, which can display smell information, was developed to generate 6 odors using 6 aroma solutions and a diluting solution with 14 micro-valves and a micro-pump. Through the experiment, we confirm that a remote user can receive smell information transmitted by a messenger service and request odor display from the computer-controlled scent generator. This contributes to enlarging existing virtual reality and is proposed as a standard reference method for olfactory information presentation in future multimedia technology.
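A data format carrying the four smell characteristics listed above might look like the following sketch; the field names and value ranges are illustrative assumptions, not the paper's published schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class OdorRecord:
    # Field names and ranges are illustrative, not the paper's schema
    intensity: int        # e.g. 0-5 odor intensity scale
    persistence: float    # seconds the scent should be emitted
    hedonic_tone: int     # e.g. -4 (unpleasant) .. +4 (pleasant)
    description: str      # free-text odor description

def encode(record):
    """Serialize for transmission over a web/messenger service."""
    return json.dumps(asdict(record))

def decode(payload):
    """Reconstruct the record on the scent-generator side."""
    return OdorRecord(**json.loads(payload))
```

A receiver driving the scent generator would map `description` (or an odor index) to one of the aroma solutions and use `intensity`/`persistence` to schedule the micro-valves.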

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon;Lim, Jongwoo;Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.483-489 / 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input, and successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
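The abstract credits part of the real-time performance to binary descriptors, which are matched by Hamming distance on packed bits rather than by floating-point arithmetic. A minimal illustration of that matching step (not the paper's pipeline; the threshold is an assumption):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between packed binary descriptors (uint8)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match(desc1, desc2, max_dist=40):
    """Greedy nearest-neighbour matching of two descriptor sets;
    returns (index1, index2, distance) triples under max_dist."""
    matches = []
    for i, d in enumerate(desc1):
        dists = [hamming(d, e) for e in desc2]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches
```

XOR-plus-popcount is why binary descriptors stay fast on a CPU, consistent with the abstract's claim of real-time performance without GPUs.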

Visual Sensing of the Light Spot of a Laser Pointer for Robotic Applications

  • Park, Sung-Ho;Kim, Dong Uk;Do, Yongtae
    • Journal of Sensor Science and Technology / v.27 no.4 / pp.216-220 / 2018
  • In this paper, we present visual sensing techniques that can be used to teach a robot using a laser pointer. The light spot of an off-the-shelf laser pointer is detected and its movement is tracked on consecutive images from a camera; the three-dimensional position of the spot is calculated using stereo cameras. The light spot in the image is detected based on its color, brightness, and shape. The detection results in a binary image, and morphological processing steps are performed to refine the detection. The movement of the laser spot is measured using two methods. The first is a simple method that specifies a region of interest (ROI) centered at the current location of the light spot and finds the spot within that ROI in the next image; it assumes that the spot does not move far between two consecutive images. The second method uses a Kalman filter, which has been widely employed in trajectory estimation problems. In our simulation study of various cases, Kalman filtering mostly shows better results. However, fitting the filter's system model to the pattern of the spot movement remains a problem.
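The second tracking method can be illustrated with a generic constant-velocity Kalman filter over spot centroids. The noise parameters below are arbitrary assumptions, and choosing a system model that fits the actual spot motion is exactly the difficulty the abstract notes:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=4.0):
    """Constant-velocity Kalman filter for a 2-D light-spot track.
    State: [x, y, vx, vy]; measurements: spot centroids (x, y)."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = q * np.eye(4)                  # process noise (assumed)
    R = r * np.eye(2)                  # measurement noise (assumed)
    x = np.array([measurements[0][0], measurements[0][1], 0, 0], float)
    P = 10.0 * np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R            # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return track
```

The filter's predicted position can also seed the ROI of the first method, so the two approaches are complementary rather than exclusive.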

Development of a Lane Keeping Assist System using Vision Sensor and DRPG Algorithm (비젼센서와 DRPG알고리즘을 이용한 차선 유지 보조 시스템 개발)

  • Hwang, Jun-Yeon;Huh, Kun-Soo;Na, Hyuk-Min;Jung, Ho-Gi;Kang, Hyung-Jin;Yoon, Pal-Joo
    • Transactions of the Korean Society of Automotive Engineers / v.17 no.1 / pp.50-57 / 2009
  • Lane Keeping Assist Systems (LKAS) require cooperative operation between drivers and active steering angle/torque controllers. An LKAS is proposed in this study in which the desired reference path generation (DRPG) system generates the desired path so as to minimize trajectory overshoot. Based on the reference path from the DRPG system, an optimal controller is designed to minimize a cost function. A HIL (Hardware-In-the-Loop) simulator was constructed to evaluate the proposed LKAS. A single camera is mounted on the simulator and acquires the monitor images to detect lane markers. The performance of the proposed system is evaluated on the HIL system using CarSim and MATLAB Simulink.
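The optimal controller minimizing a cost function is characteristic of LQR design. As an illustration only (the paper's vehicle model and weights are not given in the abstract), a discrete LQR gain for a toy lateral-offset model can be computed by Riccati iteration:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point Riccati iteration,
    minimizing sum of x'Qx + u'Ru subject to x+ = Ax + Bu."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy lateral model (an assumption, not the paper's vehicle model):
# state = [lateral offset, offset rate], input = steering command
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([1.0, 0.1]), R=np.array([[0.1]]))
```

Applying u = -K x to the lateral-offset error relative to the DRPG reference path drives the offset to zero without the overshoot a naive proportional law could produce.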

Mobile Robots for the Concrete Crack Search and Sealing (콘크리트 크랙 탐색 및 실링을 위한 다수의 자율주행로봇)

  • Jin, Sung-Hun;Cho, Cheol-Joo;Lim, Kye-Young
    • The Journal of Korea Robotics Society / v.11 no.2 / pp.60-72 / 2016
  • This study proposes a multi-robot system, using multiple autonomous robots, to explore concrete structures and assist in their maintenance by sealing any cracks present in the structure. The proposed system employs a new self-localization method, which is essential for autonomous robots, along with a vision system to recognize the external environment and to detect and explore cracks efficiently. Moreover, a more efficient crack search in an unknown environment becomes possible by assigning the robots to search areas divided according to the surrounding situation. More efficient operation is also realized by replacing an infeasible logical behavioral model design with only six basic behavioral strategies based on distributed control, one of the methods used to control swarm robots. Finally, this study investigates the efficiency of the proposed multi-robot system via basic sensor testing and simulation.
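As a trivial illustration of dividing a search area among the robots (the paper's situation-dependent division rule is not reproduced here), one can start from equal strips:

```python
def divide_search_area(width, height, n_robots):
    """Split a rectangular search area into equal vertical strips,
    one (x0, y0, x1, y1) region per robot. A simple stand-in for the
    paper's situation-dependent area division."""
    strip = width / n_robots
    return [(i * strip, 0.0, (i + 1) * strip, float(height))
            for i in range(n_robots)]
```

Each robot then runs its behavioral strategies (search, follow crack, seal, and so on) only inside its assigned region, avoiding redundant coverage.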

Robust Terrain Classification Against Environmental Variation for Autonomous Off-road Navigation (야지 자율주행을 위한 환경에 강인한 지형분류 기법)

  • Sung, Gi-Yeul;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology / v.13 no.5 / pp.894-902 / 2010
  • This paper presents a vision-based off-road terrain classification method that is robust against environmental variation. As a supervised classification algorithm, we applied a neural network classifier using wavelet features extracted from the wavelet transform of an image. In order to overcome the effect of overall image feature variation, we adopted environment sensors and gathered a database of training parameters according to environmental conditions. The terrain classification algorithm robust against environmental variation was implemented by choosing an optimal parameter set using environmental information. The proposed algorithm was embedded on a processor board, containing four 1 GHz PowerPC 7448 CPUs, under the VxWorks real-time operating system. In order to implement an optimal software architecture on which distributed parallel processing is possible, we measured and analyzed the data delivery time between the CPUs. The performance of the algorithm was verified by comparing classification results on real off-road images acquired under various environmental conditions for the applied classifiers and features. Experiments show that the classification results are robust under all environmental conditions.
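Wavelet features for texture classification are commonly subband energies of a wavelet transform. A one-level 2-D Haar decomposition, shown here as a generic sketch rather than the paper's exact feature set, illustrates the idea:

```python
import numpy as np

def haar2d_features(img):
    """One-level 2-D Haar wavelet transform; returns the mean energy
    of the LL, LH, HL, HH subbands as a 4-element texture feature."""
    a = np.asarray(img, float)
    # Pairwise averages (low-pass) and differences (high-pass) on rows
    rL = (a[0::2] + a[1::2]) / 2
    rH = (a[0::2] - a[1::2]) / 2
    # Then the same along columns, giving the four subbands
    LL = (rL[:, 0::2] + rL[:, 1::2]) / 2
    LH = (rL[:, 0::2] - rL[:, 1::2]) / 2
    HL = (rH[:, 0::2] + rH[:, 1::2]) / 2
    HH = (rH[:, 0::2] - rH[:, 1::2]) / 2
    return np.array([(s ** 2).mean() for s in (LL, LH, HL, HH)])
```

The resulting feature vector, computed per image patch, would be the input to the neural network classifier; directional subband energies separate terrains with different texture orientation and roughness.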

Human Activity Recognition with LSTM Using the Egocentric Coordinate System Key Points

  • Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_1 / pp.693-698 / 2021
  • As technology advances, there is an increasing need for research in the different fields where this technology is applied. One of the most researched topics in computer vision is human activity recognition (HAR), which has been widely implemented in various fields including healthcare, video surveillance, and education. We therefore present in this paper a human activity recognition system that is invariant to scale and rotation, employing Kinect depth sensors to obtain the human skeleton joints. In contrast to previous approaches that use joint angles, we propose that each limb has an angle with the X, Y, and Z axes, which we employ as feature vectors. The use of these limb angles makes our system scale invariant. We further calculate the body-relative direction in egocentric coordinates in order to provide rotation invariance. For the system parameters, we employ 8 limbs, each with its corresponding angles to the X, Y, and Z axes of the coordinate system, as feature vectors. The extracted features are trained and tested with a Long Short-Term Memory (LSTM) network, which achieves an average accuracy of 98.3%.
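The limb-to-axis angles used as features can be computed from direction cosines; a small sketch of that step (with 8 limbs this yields 24 angles per frame for the LSTM input):

```python
import math

def limb_axis_angles(p1, p2):
    """Angles (degrees) between the limb vector p1->p2 and the
    X, Y, Z axes, computed from direction cosines."""
    v = [b - a for a, b in zip(p1, p2)]
    n = math.sqrt(sum(c * c for c in v))
    # cos(theta_axis) = component / |v|, so scaling the limb
    # (a taller or closer person) leaves the angles unchanged
    return [math.degrees(math.acos(c / n)) for c in v]
```

Because only the direction of the limb vector matters, the feature is scale invariant, which is exactly the property the abstract claims for these angles.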