• Title/Summary/Keyword: Camera localization


Position Control of Mobile Robot for Human-Following in Intelligent Space with Distributed Sensors

  • Jin Tae-Seok;Lee Jang-Myung;Hashimoto Hideki
    • International Journal of Control, Automation, and Systems / v.4 no.2 / pp.204-216 / 2006
  • Latest advances in hardware technology and the state of the art of mobile robot and artificial intelligence research can be employed to develop autonomous, distributed monitoring systems. A mobile service robot requires the perception of its present position to coexist with humans and support them effectively in populated environments; to realize these abilities, the robot needs to keep track of relevant changes in its environment. This paper proposes a localization method for a mobile robot that uses images from distributed intelligent networked devices (DINDs) in an intelligent space (ISpace). The scheme combines the position observed by dead-reckoning sensors with the position estimated from images of a moving object, such as a walking human, to determine the location of the mobile robot. The moving object is assumed to be a point object projected onto the image plane, forming a geometric constraint equation that provides position data for the object based on the kinematics of the intelligent space. Using the a priori known path of the moving object and a perspective camera model, geometric constraint equations are derived that relate the image-frame coordinates of the moving object to the estimated position of the robot. The proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, with a Kalman filtering scheme estimating the robot's location. The approach is applied to a mobile robot in ISpace and shown to reduce the uncertainty in determining the robot's location; its performance is verified by computer simulation and experiment.
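
The abstract above fuses dead-reckoned motion with an image-derived position fix via Kalman filtering. Below is a minimal sketch of that fusion step, not the authors' implementation; the constant-velocity model, the position-only camera measurement, and the noise covariances are all assumptions.

```python
import numpy as np

# Linear Kalman filter fusing dead-reckoning odometry (prediction) with a
# camera-derived position fix (correction). State x = [px, py, vx, vy].
dt = 0.1
F = np.array([[1, 0, dt, 0],   # constant-velocity motion model (assumed)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],    # camera observes position only
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01           # process noise (dead-reckoning drift), assumed
R = np.eye(2) * 0.05           # measurement noise (image-based fix), assumed

def kf_step(x, P, z):
    """One predict/update cycle; z is the camera position measurement."""
    x = F @ x                  # predict with the motion model
    P = F @ P @ F.T + Q
    y = z - H @ x              # innovation: observed minus estimated coords
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, z=np.array([0.12, 0.03]))
```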

3D Object Recognition for Localization of Outdoor Robotic Vehicles (실외 주행 로봇의 위치 추정을 위한 3 차원 물체 인식)

  • Baek, Seung-Min;Kim, Jae-Woong;Lee, Jang-Won;Zhaojin, Lu;Lee, Suk-Han
    • Proceedings of the Korean HCI Society Conference / 2008.02a / pp.200-204 / 2008
  • In this paper, to solve the localization problem in outdoor navigation of robotic vehicles, a particle filter based 3D object recognition framework that can estimate the pose of a building or its entrance is presented. The framework fuses multiple sources of evidence and matches the object model over a sequence of images for robust recognition and pose estimation of 3D objects. The proposed approach features 1) the automatic selection and collection of an optimal set of evidence, 2) the derivation of multiple interpretations, as particles representing possible object poses in 3D space, with probabilities assigned by matching the object model against the evidence, and 3) the particle filtering of these interpretations over time with the additional evidence obtained from a sequence of images. The approach has been validated by stereo-camera based experiments in 3D object recognition and pose estimation, where a combination of photometric and geometric features is used as evidence.
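
As a rough illustration of filtering pose interpretations as particles, here is a minimal sketch; the pose parameterization, diffusion noise, and the Gaussian stand-in for the evidence likelihood are placeholders, since the paper's photometric/geometric evidence models are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle filter over object pose hypotheses (x, y, heading).
N = 500
particles = rng.uniform([-5, -5, -np.pi], [5, 5, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

def likelihood(pose, observation):
    """Placeholder evidence model: Gaussian on position error."""
    err = pose[:2] - observation
    return np.exp(-0.5 * err @ err / 0.2**2)

def pf_step(particles, weights, observation):
    # diffuse hypotheses, reweight by evidence, then resample
    particles = particles + rng.normal(0, [0.05, 0.05, 0.02], particles.shape)
    weights = weights * np.array([likelihood(p, observation) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles, weights = pf_step(particles, weights, observation=np.array([1.0, 0.5]))
estimate = particles.mean(axis=0)   # pose estimate from the particle cloud
```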


Estimation of two-dimensional position of soybean crop for developing weeding robot (제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출)

  • SooHyun Cho;ChungYeol Lee;HeeJong Jeong;SeungWoo Kang;DaeHyun Lee
    • Journal of Drive and Control / v.20 no.2 / pp.15-23 / 2023
  • In this study, the two-dimensional locations of crops were detected for automatic weeding using deep learning. To construct a dataset for soybean detection, an image-capturing system was developed using a mono camera and a single-board computer, and the system was mounted on a weeding robot to collect soybean images. The dataset was constructed by extracting regions of interest (RoIs) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that provides object localization using only image-level labels. The soybean area is visualized via a class activation map (CAM), and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming pixel coordinates to world coordinates. Evaluated against the actual positions, determined manually as pixel coordinates in the image, the performance in world coordinates was an MSE of 6.6 (X-axis) and 5.1 (Y-axis) and an RMSE of 1.2 (X-axis) and 2.2 (Y-axis). From these results, we confirmed that the center position of the soybean area derived through deep learning is sufficiently accurate for use in automatic weeding systems.
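
A hedged sketch of the post-processing stage described above: threshold a class activation map, cluster the soybean pixels into connected components, and map cluster centers from pixel to world coordinates. The planar homography H, the threshold, and the helper name are illustrative assumptions, not the paper's code.

```python
import numpy as np
import cv2

def soybean_positions(cam, H, thresh=0.5):
    """cam: HxW activation map in [0, 1]; H: 3x3 pixel-to-world homography."""
    mask = (cam >= thresh).astype(np.uint8)          # binarize the CAM
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    world = []
    for cx, cy in centroids[1:]:                     # skip background label 0
        p = H @ np.array([cx, cy, 1.0])              # assumes a ground plane
        world.append(p[:2] / p[2])
    return np.array(world)                           # Nx2 world coordinates
```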

Research on Development of Construction Spatial Information Technology, using Rover's Camera System (로버 카메라 시스템을 이용한 건설공간정보화 기술의 개발 방안 연구)

  • Hong, Sungchul;Chung, Taeil;Park, Jaemin;Shin, Hyu-Sung
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.7 / pp.630-637 / 2019
  • The scientific, economic, and industrial value of the Moon has increased as massive water ice and rare resources have been found by lunar exploration missions. Korea and the other major space agencies of the world are competitively developing in-situ resource utilization (ISRU) technology to secure future lunar resources and to construct a lunar base. To prepare for lunar construction, it is essential to develop rover-based construction spatial information technology that provides decision-aiding information during the lunar construction process. Thus, this research presented construction spatial information technology based on a rover's camera system. Specifically, a conceptual design of the rover camera system was developed for acquiring the rover's navigation images along with images of the lunar terrain and construction activity around the rover, and a reference architecture of the rover operation system was designed for computing the lunar construction spatial information. Rover localization and terrain reconstruction methods were also introduced, considering the characteristics of the lunar surface environment. The conceptual design of the construction spatial information technology still needs to be tested and validated; in a future study, the developed rover and rover operation system will therefore be applied to a lunar terrestrial analogue site for further improvement.

A Speaker Detection System based on Stereo Vision and Audio (스테레오 시청각 기반의 화자 검출 시스템)

  • An, Jun-Ho;Hong, Kwang-Seok
    • Journal of Internet Computing and Services / v.11 no.6 / pp.21-29 / 2010
  • In this paper, we propose a system that detects the current speaker among a number of users. The proposed speaker detection system based on stereo vision and audio is mainly composed of the following: position estimation of speaker candidates using a stereo camera and microphones, current-speaker detection, and speaker information acquisition on a mobile device. We use Haar-like features and the AdaBoost algorithm to detect the faces of speaker candidates with the stereo camera, and the positions of the speaker candidates are estimated by triangulation. Next, the time delay of arrival (TDOA) is estimated by cross power spectrum phase (CPSP) analysis to find the direction of the source with two microphones. Finally, we acquire the speaker's information, including position, voice, and face, by comparing the stereo camera's information with that of the two microphones. Furthermore, the proposed system includes a TCP client/server connection method for mobile service.
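
The CPSP analysis mentioned above is commonly implemented as a phase-transform-weighted cross-correlation (GCC-PHAT). A minimal sketch follows; the signal names and the bearing formula in the closing comment are assumptions, not the paper's code.

```python
import numpy as np

def tdoa_cpsp(x1, x2, fs):
    """Estimate the delay (seconds) of x2 relative to x1 via CPSP/GCC-PHAT."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12          # phase transform (whitening)
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# The source bearing then follows from the delay, the microphone spacing d,
# and the speed of sound c: angle = arcsin(c * tdoa / d) (illustrative).
```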

Development and Evaluation of Maximum-Likelihood Position Estimation with Poisson and Gaussian Noise Models in a Small Gamma Camera

  • Chung, Yong-Hyun;Park, Yong;Song, Tae-Yong;Jung, Jin-Ho;Gyuseong Cho
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.331-334 / 2002
  • It has been reported that maximum-likelihood position-estimation (MLPE) algorithms offer improved spatial resolution and linearity over the conventional Anger algorithm in gamma cameras. The purpose of this study is to evaluate the performance of two noise models, Poisson and Gaussian, in MLPE for the localization of photons in a small gamma camera (SGC) using a NaI(Tl) plate and a position-sensitive photomultiplier tube (PSPMT). The SGC consists of a single NaI(Tl) crystal, 10 cm in diameter and 6 mm thick, optically coupled to a PSPMT (Hamamatsu R3292-07). The PSPMT was read out using a resistive charge divider, which multiplexes 28(X) by 28(Y) cross-wire anodes into four channels. Poisson and Gaussian based MLPE methods were implemented using experimentally measured light response functions. The system resolutions estimated by Poisson and Gaussian based MLPE were 4.3 mm and 4.0 mm, respectively. Integral uniformities were 29.7% and 30.6%, linearities were 1.5 mm and 1.0 mm, and count rates were 1463 cps and 1388 cps for Poisson and Gaussian based MLPE, respectively. The results indicate that Gaussian based MLPE, which is convenient to implement, performs better and is more robust to statistical noise than Poisson based MLPE.
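
A minimal sketch of MLPE under the two noise models compared above: given light response functions tabulating the expected four-channel signal for each candidate position, the event position is the bin maximizing the Poisson or Gaussian log-likelihood. The response-function values below are random placeholders, not measured data.

```python
import numpy as np

def mlpe(m, lrf, model="gaussian", sigma=1.0):
    """m: observed 4-channel signal; lrf[k, i]: expected signal on channel k
    for an event at candidate position bin i. Returns the best bin index."""
    if model == "poisson":
        # log P = sum_k (m_k * log(mu_k) - mu_k), dropping constant m_k! terms
        ll = (m[:, None] * np.log(lrf) - lrf).sum(axis=0)
    else:
        # Gaussian: log P = -sum_k (m_k - mu_k)^2 / (2 * sigma^2)
        ll = -((m[:, None] - lrf) ** 2).sum(axis=0) / (2 * sigma**2)
    return np.argmax(ll)

lrf = np.random.rand(4, 100) * 50 + 1.0    # placeholder response functions
m = lrf[:, 42] + np.random.randn(4)        # simulated event at bin 42
print(mlpe(m, lrf, "poisson"), mlpe(m, lrf, "gaussian"))
```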


Landmark Recognition Method based on Geometric Invariant Vectors (기하학적 불변벡터기반 랜드마크 인식방법)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.173-182 / 2005
  • In this paper, we propose a landmark recognition method for localization during navigation that is invariant to the camera viewpoint. Features used in previous research vary with the camera viewpoint, so extracting visual landmarks for positioning from the wealth of image information is not an easy task. The proposed method has three stages: feature extraction, learning and recognition, and matching. In the feature extraction stage, we set interest areas in the image, extract corner points within them, and then obtain features that are more accurate and resistant to noise through statistical analysis of the smaller eigenvalue. In the learning and recognition stage, we form robust feature models by testing whether a feature model consisting of five corner points is invariant to viewpoint. In the matching stage, we reduce time complexity and find correspondences accurately with a matching method that uses a similarity evaluation function and the Graham search method. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate its superiority.
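
The abstract does not spell out its invariant vector; one standard viewpoint-invariant construction for five coplanar points, sketched below as a guess at the idea, is the pair of cross-ratios of 3x3 determinants of homogeneous point triples, which is unchanged under any planar projective transformation.

```python
import numpy as np

def _d(p, i, j, k):
    """Determinant of the 3x3 matrix stacking homogeneous points i, j, k."""
    return np.linalg.det(np.stack([p[i], p[j], p[k]]))

def five_point_invariants(pts):
    """pts: 5x2 array of corner points in a consistent order.
    Returns two projective invariants; every point index appears the same
    number of times in numerator and denominator, so scale factors and the
    homography determinant cancel."""
    p = np.hstack([pts, np.ones((5, 1))])   # homogeneous coordinates
    i1 = (_d(p, 3, 2, 0) * _d(p, 4, 1, 0)) / (_d(p, 3, 1, 0) * _d(p, 4, 2, 0))
    i2 = (_d(p, 3, 1, 0) * _d(p, 4, 2, 1)) / (_d(p, 3, 2, 1) * _d(p, 4, 1, 0))
    return np.array([i1, i2])
```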


Target Latitude and Longitude Detection Using UAV Rotation Angle (UAV의 회전각을 이용한 목표물 위경도 탐지 방법)

  • Shin, Kwang-Seong;Jung, Nyum;Youm, Sungkwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.107-112 / 2020
  • Recently, as the uses of drones have diversified, they are actively employed not only in surveying but also in search and rescue work. In such applications it is very important to know the location of the target or of the UAV itself. This paper proposes a target detection method using images taken from a drone. The proposed method locates the target in the drone's image and computes the target's latitude and longitude from it. The exact latitude and longitude of the target are calculated by converting distances in the image into actual ground distances using the characteristics of the pinhole camera. Experiments confirmed that the proposed method accurately identifies the latitude and longitude of the target.
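
A hedged sketch of the pinhole geometry described above: for a nadir-pointing camera at altitude h, a pixel offset from the image center maps to a ground offset of h * pixel / focal_px, which is rotated by the UAV heading and converted to latitude/longitude offsets. The frame conventions and the simple metres-to-degrees conversion are assumptions, not the paper's formulation.

```python
import math

EARTH_R = 6_378_137.0   # WGS-84 equatorial radius in metres

def target_latlon(lat, lon, heading, h, du, dv, focal_px):
    """(du, dv): target offset from the image centre in pixels;
    heading: UAV rotation angle in radians, clockwise from north."""
    forward = -h * dv / focal_px   # image 'up' assumed along flight direction
    right = h * du / focal_px
    # rotate camera-frame offsets into north/east ground offsets
    north = forward * math.cos(heading) - right * math.sin(heading)
    east = forward * math.sin(heading) + right * math.cos(heading)
    dlat = math.degrees(north / EARTH_R)
    dlon = math.degrees(east / (EARTH_R * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```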

A Study on the Image Based Auto-focus Method Considering Jittering of Airborne EO/IR (항공탑재 EO/IR의 영상떨림을 고려한 영상기반 자동 초점조절 기법 연구)

  • Kang, Myung-Ho;Kim, Sung-Jae;Koh, Yeong Jun
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.1 / pp.39-45 / 2022
  • In this paper, we propose methods to improve image-based auto-focus that compensate for drawbacks of traditional auto-focus control. When adjusting the focus, the focus window cannot be set to the same position if the camera's line of sight (LOS) drifts or shakes instead of staying on the same location; to address this issue, we applied image tracking techniques to improve the accuracy of optimal-focus localization. In addition, although the same focus value should be computed at the same focus step, different values can result from fine camera shake or from image disturbance due to atmospheric scattering; to tackle this problem, Stable Adjacency Frame Selection (SAFS) is proposed. The results of this study show that the proposed methodology finds the best focus position more accurately than traditional methods.
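
As a sketch of the ideas above: a standard image-based focus measure evaluated on the tracked focus window, plus a stand-in for SAFS that keeps only adjacent frames whose scores agree closely. The abstract does not give the actual selection rule, so the tolerance test and fallback below are assumptions.

```python
import numpy as np
import cv2

def focus_score(gray_roi):
    """Variance of the Laplacian over the tracked window: higher = sharper."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

def stable_step_score(frame_scores, tol=0.05):
    """Score one focus step from several frames: average adjacent frames
    whose scores agree within a relative tolerance, discarding frames
    corrupted by shake or atmospheric disturbance."""
    stable = [0.5 * (a + b) for a, b in zip(frame_scores, frame_scores[1:])
              if abs(a - b) <= tol * max(a, b)]
    return np.mean(stable) if stable else np.median(frame_scores)

# The best focus position is then the focus step with the maximum
# stabilized score across the sweep (argmax over steps).
```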

Scholarly Assessment of Aruco Marker-Driven Worker Localization Techniques within Construction Environments (Aruco marker 기반 건설 현장 작업자 위치 파악 적용성 분석)

  • Choi, Tae-Hun;Kim, Do-Kuen;Jang, Se-Jun
    • Journal of the Korea Institute of Building Construction / v.23 no.5 / pp.629-638 / 2023
  • This study introduces an approach to monitoring the locations of workers in indoor construction settings. While traditional methods such as GPS and NTRIP are effective for outdoor localization, their precision degrades indoors. In response, this research adopts ArUco markers: leveraging computer vision, the markers allow the distance between a worker and a marker to be measured, pinpointing the worker's current location with improved accuracy. The method was rigorously evaluated in a real-world construction scenario, assessing system stability, the influence of lighting conditions, the maximum measurable distance, and the range of recognition angles. System stability was determined by moving the camera at a uniform velocity and gauging its marker recognition performance; the impact of varying illumination on marker recognition was examined by modulating the ambient lighting; and moving the camera through the space established both the maximum distance at which markers were still recognized and the maximum angle at which they remained discernible.
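
A minimal sketch of marker-based distance measurement with OpenCV's ArUco module, in the spirit of the study above; the camera intrinsics, the marker dictionary, and the 0.15 m marker size are assumptions, and the function-style API shown predates OpenCV 4.7 (newer releases use cv2.aruco.ArucoDetector).

```python
import numpy as np
import cv2

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)            # assumed zero lens distortion
MARKER_SIZE = 0.15            # marker side length in metres (assumed)

def worker_distances(frame):
    """Return {marker id: camera-to-marker distance in metres} for one frame."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)
    if ids is None:
        return {}
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, K, dist)
    # Euclidean norm of the translation vector = camera-to-marker distance
    return {int(i): float(np.linalg.norm(t)) for i, t in zip(ids.ravel(), tvecs)}
```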