• Title/Summary/Keyword: stereo system


HMM-based Intent Recognition System using 3D Image Reconstruction Data (3차원 영상복원 데이터를 이용한 HMM 기반 의도인식 시스템)

  • Ko, Kwang-Enu;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.135-140
    • /
    • 2012
  • The mirror neuron system in the cerebrum handles visual-information-based imitative learning. By observing the progression of activation across the mirror neuron system's range, including partially hidden regions, the intention behind an observed action can be inferred. The goal of this paper is to apply such imitative learning to a 3D vision-based intelligent system. In our previous research, we reconstructed acquired 3D images from a stereo camera using optical flow and an unscented Kalman filter; the 3D input is a continuous image sequence that includes partially hidden regions. We use a Hidden Markov Model (HMM) to recognize the intention behind an action from the restoration of those hidden regions, since the HMM's dynamic inference over sequential input data is well suited to tasks such as hand-gesture recognition under occlusion. Building on the object-outline and feature-extraction simulations of our previous research, we generate temporally continuous feature vectors from the extracted features, apply them to the HMM, and classify hand gestures according to intention patterns. The classification results, obtained as posterior probabilities, demonstrate the accuracy of the proposed approach.
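
The gesture classification described rests on the standard HMM forward algorithm: score the observed feature sequence under each gesture model and pick the model with the highest log-likelihood. A minimal NumPy sketch follows; the two-state models, symbol alphabet, and gesture names are invented for illustration and are not the paper's trained models.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).

    obs: sequence of discrete symbol indices
    pi:  initial state probabilities, shape (S,)
    A:   state transition matrix, shape (S, S)
    B:   emission probabilities, shape (S, V)
    """
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()          # scaling factor, avoids underflow
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def classify_gesture(obs, models):
    """Pick the gesture model with the highest likelihood (uniform priors)."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

In use, one HMM would be trained per intention pattern and a new feature sequence scored against each; the posterior over models is the normalized exponential of these log-likelihoods.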

Study on a Suspension of a Planetary Exploration Rover to Improve Driving Performance During Overcoming Obstacles

  • Eom, We-Sub;Kim, Youn-Kyu;Lee, Joo-Hee;Choi, Gi-Hyuk;Sim, Eun-Sup
    • Journal of Astronomy and Space Sciences
    • /
    • v.29 no.4
    • /
    • pp.381-387
    • /
    • 2012
  • A planetary exploration rover travels to a target point in an unknown environment by the shortest path and then executes various missions, such as investigating geological and climatic conditions and searching for water or signs of life. Obstacles on the way are detected by sensors such as ultrasonic sensors, infrared sensors, stereo vision, and laser range finders. After the acquired data is transferred to the rover's main controller, the operating algorithm decides whether to overcome or avoid each obstacle. All planetary exploration rovers developed to date receive the height or width of an obstacle from such sensors and analyze it to determine whether the obstacle can be overcome; if overcoming it is judged better in terms of operating safety and power consumption, the rover generally does so. For planetary exploration tasks, it is therefore necessary to design a suspension system that enables the rover to safely overcome obstacles on the surface of an unknown environment. This study focuses on the design of a new double 4-bar-linkage suspension system for the Korea Aerospace Research Institute rover (a tentative name), currently under development at our institute, which absolutely requires obstacle-overcoming capability. Through dynamical modeling, the negative moment that degrades obstacle-overcoming capability was derived for the rocker-bogie suspension used on the US Mars exploration rovers, for an improved rocker-bogie, and for the proposed double 4-bar-linkage suspension. The negative moment of each suspension was then simulated as a function of obstacle height, and comparison of the response characteristics demonstrated the superiority of the proposed suspension system.
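
The paper's negative-moment analysis is specific to its double 4-bar linkage, but the underlying effect — the moment needed to climb grows sharply with obstacle height — can be illustrated with the classic statics of a single rigid wheel pivoting over a step. This toy model (quasi-static, rigid wheel, horizontal pull at the axle) is only a stand-in for the paper's dynamical model:

```python
import math

def pull_force_over_step(W, r, h):
    """Horizontal axle force needed to pivot a rigid wheel of radius r
    and weight W over a step of height h. Quasi-static moment balance
    about the step edge:  F * (r - h) = W * sqrt(r**2 - (r - h)**2)."""
    if not 0 <= h < r:
        raise ValueError("requires 0 <= h < r")
    lever_weight = math.sqrt(r * r - (r - h) ** 2)  # horizontal arm of W
    return W * lever_weight / (r - h)               # diverges as h -> r
```

The required force is zero on flat ground and diverges as the step height approaches the wheel radius, which is the same qualitative behavior the suspension moment simulation sweeps over.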

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1245-1254
    • /
    • 2023
  • Significant research has been conducted on W-band synthetic aperture radar (SAR) systems that use 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, the point cloud acquired from stereo cameras or LiDAR must be transformed along 6 degrees of freedom (DOF) and applied to the SAR signal processing. However, matching images acquired from different sensors is difficult because of their differing geometric structures. In this study, we present a method that extracts an optimized depth map by estimating the 6 DOF of the point cloud with a gradient-descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared with SAR images reconstructed from radar coordinates, the SAR image reconstructed using the entropy-based gradient-descent method showed a decrease of 53.2828 in mean squared error and an increase of 0.5529 in the structural similarity index.
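
The entropy-based gradient descent can be sketched in one dimension: a toy "renderer" whose output sharpens as a single parameter approaches its true value, focused by finite-difference descent on image entropy (sharper images have lower entropy). The renderer and all constants here are invented; the paper optimizes a full 6-DOF point-cloud transform against real SAR imagery.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity (sharper = lower)."""
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def autofocus(render, p0, lr=0.05, eps=1e-3, iters=300):
    """Minimize image entropy over one parameter by finite-difference
    gradient descent, standing in for the paper's 6-DOF search."""
    p = p0
    for _ in range(iters):
        g = (image_entropy(render(p + eps)) -
             image_entropy(render(p - eps))) / (2 * eps)
        p -= lr * g
    return p

def render(p):
    """Toy renderer: narrowest (sharpest) response at p = 1.5."""
    x = np.linspace(-5.0, 5.0, 201)
    sigma = 0.2 + (p - 1.5) ** 2
    return np.exp(-x ** 2 / (2.0 * sigma ** 2))
```

In the paper's setting the scalar `p` becomes a 6-vector (translation and rotation of the point cloud) and `render` is the SAR reconstruction driven by the transformed depth map.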

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important because it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system that uses one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed for both color cameras, we obtain the two views' depth maps by 3D-warping the center depth data. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches between object boundaries in the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system with multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
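
The joint bilateral filtering step — smoothing warped depth with range weights taken from the color image so that depth edges align with color edges — can be sketched as follows. This is a naive, unoptimized implementation with illustrative parameter values, not the paper's real-time version:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` using spatial weights plus range weights computed
    from the color `guide` image (both 2-D arrays of the same shape),
    so that depth discontinuities snap to color edges."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            dwin = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(gwin - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng          # joint spatial-and-range weight
            out[i, j] = (wgt * dwin).sum() / wgt.sum()
    return out
```

Because the range weight comes from the color guide rather than the depth itself, depth values are averaged only across pixels of similar color, which is what pulls misaligned depth boundaries back onto the object silhouette.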

Automation of Bio-Industrial Process via Tele-Task Command (I) - Identification and 3D Coordinate Extraction of Object - (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance across diverse tasks, lack of robustness of task results, high system cost, and loss of the operator's trust. This paper proposes a scheme to overcome the task limitations of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. The scheme has two components: efficient task sharing between the operator and the computer-controlled machine (CCM), and an efficient interface between them. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving a single camera a fixed distance horizontally, normal to the focal axis, and capturing an image at each location. The transformation matrix for camera calibration was obtained by a least-squares approach using six known pairs of corresponding points in the 2D image and 3D world space. World coordinates were then computed from the two sets of image pixel coordinates using the calibrated transformation matrices. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching it in the captured image on the touch-pad screen; a local image-processing area of a given size was then specified around the touch, and image processing was performed within that area to extract the desired features of the object. An MS Windows-based interface was developed in Visual C++ 6.0 with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
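
The calibration step described — solving a transformation matrix by least squares from six known pairs of 2D image and 3D world points — matches the classic direct linear transform (DLT). A minimal sketch assuming that formulation, with invented camera parameters:

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Direct linear transform: least-squares 3x4 projection matrix
    from >= 6 corresponding 3D world / 2D image point pairs (via SVD)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)   # null-space vector = flattened P

def project(P, X):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

With one such matrix calibrated per camera position, the 3D world coordinate of a touched object point is recovered by triangulating the two pixel coordinates against the two projection matrices, as the abstract describes.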


Multiple Camera Based Imaging System with Wide-view and High Resolution and Real-time Image Registration Algorithm (다중 카메라 기반 대영역 고해상도 영상획득 시스템과 실시간 영상 정합 알고리즘)

  • Lee, Seung-Hyun;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.4
    • /
    • pp.10-16
    • /
    • 2012
  • For high-speed visual inspection in the semiconductor industry, it is essential to acquire two-dimensional images of regions of interest with a large field of view (FOV) and high resolution simultaneously. In this paper, an imaging system is proposed that achieves high image quality in both precision and FOV; it is composed of a single lens, a beam splitter, two camera sensors, and a stereo image-grabbing board. For object images acquired simultaneously from the two camera sensors, each camera is first calibrated with Zhang's calibration method. Second, to find the mathematical mapping between the images acquired from the two views, the matching matrix from multi-view camera geometry is calculated based on the image homography. Through this homography, the two images are registered to secure a large inspection FOV. Because an inspection system using multiple images from multiple cameras needs a very fast processing unit for real-time image matching, parallel processing hardware and software such as the Compute Unified Device Architecture (CUDA) are utilized. As a result, a matched image is obtained from the two separate images in real time. Finally, the accuracy of the acquired homography is evaluated through a series of experiments, and the results show the effectiveness of the proposed system and method.
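
Homography-based registration of the kind described can be sketched with the standard DLT estimate from point correspondences. This plain NumPy version omits the point normalization, robust matching, and CUDA acceleration a production system would add:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src,
    from >= 4 point correspondences (homogeneous least squares)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the scale ambiguity

def warp_point(H, pt):
    """Map a 2-D point through H and dehomogenize."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]
```

Once H is estimated, registering the second image is a per-pixel warp through H, which is the embarrassingly parallel step that maps naturally onto CUDA.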

Intermediate View Image and its Digital Hologram Generation for a Virtual Arbitrary View-Point Hologram Service (임의의 가상시점 홀로그램 서비스를 위한 중간시점 영상 및 디지털 홀로그램 생성)

  • Seo, Young-Ho;Lee, Yoon-Hyuk;Koo, Ja-Myung;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.1
    • /
    • pp.15-31
    • /
    • 2013
  • This paper proposes a method that tracks the viewer's face and generates an intermediate-view image for the viewer's viewpoint, which is then converted into a digital hologram. Its purpose is to increase the viewing angle of digital holograms, which are attracting increasing interest. The method assumes that image information is given for the leftmost and rightmost viewpoints of the viewing angle to be covered. Stereo matching between the leftmost and rightmost depth images yields the pseudo-disparity increment per depth value. With this increment, positional information is generated from both the leftmost and rightmost viewpoints and blended to obtain the information at the desired intermediate viewpoint. The disocclusion regions that can occur in this case are defined, and an inpainting method for them is proposed. Experiments with an implementation of this method showed that the average qualities of the generated depth and RGB images were 33.83 dB and 29.5 dB, respectively, and the average execution time was 250 ms per frame. We also propose a prototype system that interactively serves digital holograms to the viewer using the proposed intermediate-view generation method. It includes data acquisition for the leftmost and rightmost viewpoints, camera calibration and image rectification, intermediate-view image generation, computer-generated hologram (CGH) generation, and reconstruction of the hologram image. The system is implemented in the LabVIEW(R) environment, with CGH generation and hologram image reconstruction implemented on GPGPUs and the other stages in software. The implemented system processes about 5 frames per second.
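
The intermediate-view synthesis described — shifting each view's pixels by a fraction of the disparity and blending the two warped results — can be sketched on a single scanline. This uses forward warping with a z-buffer and NaN holes; the disparity values are illustrative, not the paper's pseudo-disparity increments:

```python
import numpy as np

def warp_scanline(color, disparity, alpha):
    """Forward-warp one scanline toward an intermediate viewpoint at
    fraction `alpha` of the baseline: each pixel moves alpha * disparity.
    Nearer pixels (larger disparity) win conflicts; unmapped pixels stay
    NaN, marking disocclusion holes."""
    n = len(color)
    out = np.full(n, np.nan)
    best = np.full(n, -np.inf)
    for i in range(n):
        j = int(round(i + alpha * disparity[i]))
        if 0 <= j < n and disparity[i] > best[j]:
            out[j] = color[i]
            best[j] = disparity[i]
    return out

def blend_scanlines(left_warp, right_warp, alpha):
    """Alpha-blend two warped scanlines; fill each view's holes from the
    other view (holes visible in neither view would go to inpainting)."""
    return np.where(np.isnan(left_warp), right_warp,
           np.where(np.isnan(right_warp), left_warp,
                    (1 - alpha) * left_warp + alpha * right_warp))
```

Pixels that remain NaN after blending correspond to the disocclusion regions the paper defines and fills with its inpainting method.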

Development of an Intelligent Hexapod Walking Robot (지능형 6족 보행 로봇의 개발)

  • Seo, Hyeon-Se;Sung, Young-Whee
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.2
    • /
    • pp.124-129
    • /
    • 2013
  • Hexapod walking robots are superior to biped or quadruped robots in terms of walking stability, which gives them an advantage in performing intelligent tasks. In this paper, we propose a hexapod robot with one fore leg, one hind leg, two left legs, and two right legs that can perform various intelligent tasks. The robot is built with 26 motors, and its controller consists of a host PC, a DSP main controller, an AVR auxiliary controller, and a smart phone/pad. Several experiments show that the implemented robot can perform various intelligent tasks, such as walking on uneven surfaces, tracking and kicking a ball, remote control, and 3D monitoring, using data obtained from a stereo camera, infrared sensors, ultrasound sensors, and contact sensors.

Spatial Distribution of Evergreen Coniferous Dead Trees in Seoraksan National Park - In the Case of Northwestern Ridge - (설악산국립공원 상록침엽수 고사목 공간분포 특성 - 서북능선 일원을 대상으로 -)

  • Kim, Jin-Won;Park, Hong-Chul;Park, Eun-Ha;Lee, Na-Yeon;Oh, Choong-Hyeon
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.23 no.5
    • /
    • pp.59-71
    • /
    • 2020
  • Using high-resolution stereoscopic aerial images (from 2008, 2012, and 2016), we analyzed the spatial characteristics affecting evergreen-conifer die-off on the northwestern ridge of Seoraksan National Park, a major distribution area of species such as Abies nephrolepis. The number of dead trees detected in the evergreen coniferous forest (5.24㎢) was 1,223 in 2008, 2,585 in 2012, and 3,239 in 2016, for a cumulative total of 7,047 by 2016; in recent years, the number of dead trees on the northwestern ridge has increased markedly. Among the analyzed spatial factors (altitude, aspect, slope, solar radiation, and topographic wetness index), the number of dead trees increased under conditions of high altitude, steep slope, and dry soil. The spatial distribution of dead trees divided largely into two groups: high altitude with high solar radiation, and low altitude with steep slope. In conclusion, the dead evergreen conifers on the northwestern ridge of Seoraksan National Park were concentrated in locations whose spatial characteristics cause dryness.

Supervised Hybrid Control Architecture for Navigation of a Personal Robot

  • Shin, Hyun-Jong;Im, Chang-Jun;Kim, Jin-Oh;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회:학술대회논문집)
    • /
    • 2003.10a
    • /
    • pp.1178-1183
    • /
    • 2003
  • As personal robots coexist with people in a helping role while adapting to varied human lives and environments, they must accommodate home environments that change frequently and differ from home to home. In addition, personal robots may have many different kinematic configurations depending on their capabilities: some may have only a mobile base, while others may have arms and a head. The motivation of this study arises from this ill-defined home environment and the varying kinematic configurations, so its goal is to develop a general control architecture for personal robots. Three major architectures exist: deliberative, reactive, and hybrid. We found that these are applicable only to a defined environment with a fixed kinematic configuration; none could accommodate the two requirements above. As a general solution, we propose a Supervised Hybrid Architecture (SHA), which uses double layers of deliberative and reactive control, distributed control with a modular design of kinematic configurations, and a real-time Linux OS. Deliberative and reactive actions interact through corresponding arbitrators, which help the robot choose an appropriate architecture for the current situation and successfully perform a given task. The distributed control modules communicate over IEEE 1394 for easy expandability. With a personal robot platform comprising a mobile base, two arms, a head, and a pan-tilt stereo eye system, we tested the developed SHA in static as well as dynamic environments. For this application, we developed decision-making rules for selecting appropriate control methods in several navigation situations. Examples are shown to demonstrate the effectiveness.
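
The arbitration between the deliberative and reactive layers can be sketched as a supervisor that hands control to the reactive layer when range sensors report a nearby obstacle, and otherwise lets the deliberative plan run. The thresholds, command structure, and placeholder behaviors below are invented for illustration and are not the SHA's actual decision rules:

```python
from dataclasses import dataclass

@dataclass
class Command:
    v: float      # forward velocity (m/s)
    w: float      # angular velocity (rad/s)
    source: str   # which layer produced it

def deliberative_step(plan):
    """Follow the precomputed plan (placeholder: cruise toward waypoint)."""
    return Command(v=plan.get("cruise_v", 0.3), w=0.0, source="deliberative")

def reactive_step(min_obstacle_dist):
    """Simple avoid/stop reflex driven directly by range sensors."""
    if min_obstacle_dist < 0.2:
        return Command(v=0.0, w=0.0, source="reactive")   # emergency stop
    return Command(v=0.1, w=0.5, source="reactive")       # slow turn away

def arbitrate(plan, min_obstacle_dist, danger_dist=0.5):
    """Supervisor: give the reactive layer control when an obstacle enters
    the danger zone; otherwise let the deliberative layer drive."""
    if min_obstacle_dist < danger_dist:
        return reactive_step(min_obstacle_dist)
    return deliberative_step(plan)
```

A per-task arbitrator of this shape is what lets a hybrid architecture behave deliberatively in a static environment and reactively when the environment changes underneath the plan.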
