• Title/Abstract/Keyword: robotic vision


3차원 공간 맵핑을 통한 로봇의 경로 구현 (Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic)

  • 손은호;김영철;정길도
    • 제어로봇시스템학회논문지 / Vol. 14 No. 2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, the robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could make the robot move in a wrong direction and be damaged by collisions with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural-network pattern-matching techniques are applied to find the location of the robot. After the self-positioning procedure, the 2-D scene of the vision system is overlaid onto a VRML scene. This paper describes how the self-positioning is realized and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.

Autonomous vision-based damage chronology for spatiotemporal condition assessment of civil infrastructure using unmanned aerial vehicle

  • Mondal, Tarutal Ghosh;Jahanshahi, Mohammad R.
    • Smart Structures and Systems / Vol. 25 No. 6 / pp.733-749 / 2020
  • This study presents a computer vision-based approach for representing the time evolution of structural damage, leveraging a database of inspection images. Spatially incoherent but temporally sorted archival images captured by robotic cameras are exploited to represent damage evolution over a long period of time. Access to a sequence of time-stamped inspection data recording the damage growth dynamics is assumed to this end. Identification of a structural defect in the most recent inspection data set triggers an exhaustive search of the images collected during previous inspections, looking for correspondences based on spatial proximity. This is followed by a view synthesis from multiple candidate images, resulting in a single reconstruction for each inspection round. Cracks on a concrete surface are used as a case study to demonstrate the feasibility of this approach. Once the chronology is established, the damage severity is quantified at various time scales, documenting its progression through time. The proposed scheme enables the prediction of damage severity at a future point in time, providing scope for preemptive measures against imminent structural failure. On the whole, it is believed that the present study will immensely benefit structural inspectors by introducing the time dimension into the autonomous condition assessment pipeline.

햅틱스 시스템용 3D 재구성을 위한 LoG 방법과 DoG 방법의 성능 분석 (Comparison of LoG and DoG for 3D reconstruction in haptic systems)

  • 성미영;김기권
    • 한국멀티미디어학회논문지 / Vol. 15 No. 6 / pp.711-721 / 2012
  • The purpose of this study is to propose the most suitable and effective 3D reconstruction method for a stereo vision-based haptics system that can substitute for "robot vision" and "robot touch." To deliver an accurate tactile sensation for a 3D image, accurate depth information and accurate object boundary information are required from the stereo images. This study presents 3D reconstruction results obtained by combining the traditional stereo matching process with two boundary extraction methods, LoG (Laplacian of Gaussian) and DoG (Difference of Gaussian), to obtain accurate depth information from stereo images. To verify which method is more useful for applying haptic rendering, computation time and error analysis experiments were performed; when haptic rendering is added on top of visual rendering, as in this study, the DoG method, with its better noise reduction and boundary extraction performance, was judged to be more efficient. The 3D reconstruction method for stereo vision-based haptics systems proposed in this paper should be applicable to various industrial and military fields, such as research on improving the performance of mobile reconnaissance robots.
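The DoG operator compared above is simply the difference of two Gaussian-blurred copies of an image and is a standard approximation of the LoG. A minimal NumPy/SciPy sketch (the sigma ratio and threshold below are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(image, sigma=1.0, k=1.6, threshold=0.01):
    """Difference-of-Gaussian edge response.

    Subtracting a wider Gaussian blur from a narrower one approximates
    the Laplacian-of-Gaussian; large-magnitude responses mark object
    boundaries. sigma, k, and threshold here are illustrative values.
    """
    narrow = gaussian_filter(image.astype(float), sigma)
    wide = gaussian_filter(image.astype(float), k * sigma)
    response = narrow - wide
    return response, np.abs(response) > threshold

# A step edge: left half dark, right half bright.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
response, mask = dog_edges(img)
```

The response is near zero in flat regions and peaks around the intensity step, which is why the paper can threshold it to extract object boundaries before stereo matching.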

내시경 로봇의 기술동향 (Technological Trend of Endoscopic Robots)

  • 김민영;조형석
    • 제어로봇시스템학회논문지 / Vol. 20 No. 3 / pp.345-355 / 2014
  • Since the beginning of the 21st century, the emergence of innovative technologies in robotic and telepresence surgery has revolutionized minimal-access surgery and has continued to advance it in recent years. One such surgery is endoscopic surgery, in which an endoscope and endoscopic instruments are inserted into the body through small incisions or natural openings and surgical operations are carried out by a laparoscopic procedure. Due to the vast amount of development in this technology, this review article describes only the technological state of the art and trends of endoscopic robots, further limited to their key components, functional requirements, and operational procedure in surgery. In particular, it first describes technological limitations in the development of key components and then focuses on the performance required for their functions, including position control, tracking, navigation, and manipulation of the flexible endoscope body as well as its end effector. In spite of the rapid development of these functional components, endoscopic surgical robots should be much smaller, less expensive, and easier to operate, and should seamlessly integrate emerging technologies for intelligent vision and dexterous hands, not only from surgical and ergonomic points of view but also from that of safety. We believe that in these respects the medical robotic technology related to endoscopic surgery will continue to be revolutionized in the near future, sufficiently to replace almost all kinds of current endoscopic surgery. This issue remains to be addressed in other review articles.

지반형상 3차원 모델링을 위한 스테레오 비전 영상의 노이즈 제거 알고리즘 개발 (Development of the Noise Elimination Algorithm of Stereo-Vision Images for 3D Terrain Modeling)

  • 유현석;김영석;한승우
    • 한국건설관리학회논문집 / Vol. 10 No. 2 / pp.145-154 / 2009
  • Technology that automatically recognizes target objects around the work environment and effectively models the results strongly affects the performance of the developed equipment, including work quality and productivity, and is therefore an essential core technology in developing automated construction equipment. In Korea, development of an intelligent robotic excavator has been under way since 2006, including technology that uses stereo vision to model the terrain around the excavator in 3D for earthwork environments. The purpose of this study is to collect stereo images containing various earthwork environment elements and to propose a noise elimination algorithm suitable for 3D modeling of the earthwork environment, in order to effectively remove the stereo-matching noise that inevitably arises when modeling a real earthwork environment in 3D. The digital image processing technology developed in this study is also expected to be highly applicable to the development of automated equipment other than excavators that must automatically recognize its surroundings and model objects of interest in 3D.
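The abstract does not spell out the algorithm itself, but the general flavor of stereo-matching noise removal can be illustrated with a local-median outlier test on a disparity map. This is a generic sketch, not the paper's method; the window size and deviation threshold are arbitrary:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_disparity_noise(disparity, window=5, max_dev=2.0):
    """Reject isolated stereo-matching outliers.

    A pixel whose disparity deviates from its local median by more than
    max_dev is treated as matching noise and invalidated (NaN). This is
    a generic illustration, not the algorithm proposed in the paper.
    """
    local_med = median_filter(disparity, size=window)
    cleaned = disparity.astype(float).copy()
    cleaned[np.abs(disparity - local_med) > max_dev] = np.nan
    return cleaned

# Smooth synthetic "terrain" with two spike outliers.
disp = np.full((20, 20), 10.0)
disp[5, 5] = 50.0   # spurious match
disp[12, 3] = -7.0  # spurious match
clean = remove_disparity_noise(disp)
```

The invalidated pixels can then be filled by interpolation from valid neighbors before building the 3D terrain model.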

로봇 착유기를 위한 3차원 위치정보획득 시스템 (3D Image Processing System for an Robotic Milking System)

  • 김웅;권두중;서광욱;이대원
    • 한국축산시설환경학회지 / Vol. 8 No. 3 / pp.165-170 / 2002
  • This study was carried out to measure the 3D distance of a model cow teat for possible application to a Robotic Milking System (RMS). A teat recognition algorithm was developed to find the 3D distance of the model using Gonzalez's theory. Some of the results are as follows. 1. In the distance measurement experiment on the test board, as the measured length, and the length between the center of the image surface and the measured image point, became longer, the error values increased. 2. The model teat was installed and the error was measured at random positions. The error in the X and Y coordinates was less than 5 mm, and that in the Z coordinate was less than 20 mm; the error increased as the camera distance increased. 3. The equation for acquiring distance information yielded distances accurate enough for a milking robot to trace teats. The teat recognition algorithm recognized all four model cow teats well, with a processing time of about 1 second. It appears that the teat recognition algorithm could be used to determine the 3D distance of a cow teat in developing an RMS.
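The 3D distance acquisition in a two-camera setup like this typically follows standard stereo triangulation for a rectified rig, where depth comes from disparity and X, Y from back-projection through the pinhole model. A hedged sketch (the focal length, baseline, and principal point below are illustrative, not this system's calibration):

```python
def triangulate(u_left, u_right, v, focal_px, baseline_mm, cx, cy):
    """Recover a 3D point (mm) from a matched pixel pair in a rectified
    stereo rig. Depth follows Z = f * B / d, where d is the disparity;
    X and Y come from back-projecting through the pinhole model.
    The camera parameters are illustrative, not the paper's rig.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    z = focal_px * baseline_mm / disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# A point with 10 px of disparity on an f = 500 px, B = 60 mm rig.
x, y, z = triangulate(u_left=330, u_right=320, v=260, focal_px=500.0,
                      baseline_mm=60.0, cx=320.0, cy=240.0)
```

Note how depth error grows as disparity shrinks, which is consistent with the paper's observation that error increased with camera distance.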


착유 자동화를 위한 로봇 착탈 시스템 (Development of a Robotic Milking Cluster System)

  • 이대원;최동윤;김현태;이원희;권두중;이승기;한정대
    • 한국축산시설환경학회지 / Vol. 6 No. 2 / pp.113-119 / 2000
  • A robotic milking cluster system with a manipulator for an automatic milking system was designed and built so that farmers can work easily and comfortably during milking. The cluster system was composed of screws, cams, and links for power transmission, DC motors, a Quick Basic one-chip microprocessor, a vision system for image processing, and teat-cups. Software, written in Visual C++ and Quick Basic, combined image capture, image processing, and milking cluster control into one controller. The unit was made to transfer four teat-cups from four fixed points to four teats. Performance tests of the cluster unit, the fully integrated system, were conducted by attaching and detaching the teat-cups on the teats of an artificial cow. The transfer program provided a teat-cup milking loop in which the system starts and the manipulator returns to its original fixed point for milking. It transferred the teat-cups with a success rate of more than 70%. The average time to perform the milking loop was about 20 seconds.


지능형 로봇 공간을 위한 실내 측위기술 (Indoor Localization Technique for Intelligent Robotic Space)

  • 안효성;이재영;유원필;한규서
    • 전자통신동향분석 / Vol. 22 No. 2 (Serial No. 104) / pp.48-57 / 2007
  • The intelligent robotic space discussed in this article can be defined as a space in which the distinctive robot functions represented by mobility and manipulability are enhanced through a distributed sensing and distributed processing environment, enabling natural movement and manipulation. Conceptually, it consists of a virtual space, a semantic space, and a physical space. The virtual space is a platform technology for building and representing environment maps through robot-sensor fusion; the semantic space is an object-model technology for interpreting the states of robots and of the people and objects linked to them; and the physical space is an intelligent hardware space for improving intelligent mobility and robot manipulation capability. This article examines indoor localization, the most central issue in the physical space. Localization aims to determine the positions of people and objects precisely and to provide stable, reliable position information so that robots can coexist with humans. Localization technology for intelligent robots is broadly divided into coarse positioning based on wireless sensor networks and fine positioning based on RFID and robot vision. This article analyzes R&D trends in indoor positioning based on wireless sensor networks using Wi-Fi, ZigBee, and UWB, and compares the strengths and weaknesses of each technology.

카메라 기반 객체의 위치인식을 위한 왜곡제거 및 오검출 필터링 기법 (Distortion Removal and False Positive Filtering for Camera-based Object Position Estimation)

  • 진실;송지민;최지호;진용식;정재진;이상준
    • 대한임베디드공학회논문지 / Vol. 19 No. 1 / pp.1-8 / 2024
  • Robotic arms have been widely utilized in various labor-intensive industries such as manufacturing, agriculture, and food services, contributing to increased productivity. In the development of industrial robotic arms, camera sensors have many advantages due to their cost-effectiveness and small size. However, estimating object positions is a challenging problem, and it critically affects the robustness of object manipulation functions. This paper proposes a method for estimating the 3D positions of objects and applies it to a pick-and-place task. A deep learning model is utilized to detect 2D bounding boxes in the image plane, and the pinhole camera model is employed to compute the object positions. To improve the robustness of measuring the 3D positions of objects, we analyze the effect of lens distortion and introduce a false-positive filtering process. Experiments were conducted on a real-world scenario of moving medicine bottles using a camera-based manipulator. Experimental results demonstrate that the distortion removal and false-positive filtering are effective in improving the position estimation precision and the manipulation success rate.
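Distortion removal before applying the pinhole model is commonly done with the radial (Brown) distortion model, inverted by fixed-point iteration since it has no closed-form inverse. A hedged sketch of this generic step (the coefficients below are illustrative, not the paper's calibration):

```python
def distort_normalized(x, y, k1, k2):
    """Apply the radial (Brown) model to ideal normalized coordinates:
    xd = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort_normalized(xd, yd, k1, k2, iterations=10):
    """Invert the radial model by fixed-point iteration: start from the
    distorted point and repeatedly divide by the scale evaluated at the
    current estimate. Converges quickly for mild distortion."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round trip: distort a point, then recover it.
xd, yd = distort_normalized(0.3, -0.2, k1=-0.1, k2=0.01)
x, y = undistort_normalized(xd, yd, k1=-0.1, k2=0.01)
```

Only after this correction does the back-projection of a bounding-box center through the pinhole model give a reliable 3D ray for the object.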

간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법 (Sampling-based Control of SAR System Mounted on A Simple Manipulator)

  • 이아현;이주호;이주행
    • 한국CDE학회논문집 / Vol. 19 No. 4 / pp.356-367 / 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with projector-based AR techniques, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters with a robot kinematics method, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which makes it hard to apply a conventional kinematics method. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled set of camera calibrations acquired at a sufficient number of kinematic configurations over fixed joint domains. The sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematics model such as link lengths or joint orientations. Second, the computation is simple, since it just evaluates several polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate the results on an experimental RSAR system with a PCU on a simple pan-tilt arm.
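The data-driven idea, pre-sampling calibration over a joint-angle grid and evaluating a fitted B-spline surface instead of solving kinematics, can be sketched with SciPy's `RectBivariateSpline`. The sampled quantity below is a synthetic stand-in for one extrinsic parameter; the grids and values are illustrative, not the paper's data:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Pretend these are pan/tilt joint angles (degrees) at which the
# projector-camera unit was calibrated; "param" stands in for one
# extrinsic entry (e.g., a translation component in mm).
pan = np.linspace(-30.0, 30.0, 13)
tilt = np.linspace(-15.0, 15.0, 7)
P, T = np.meshgrid(pan, tilt, indexing="ij")
param = 0.5 * P + 0.1 * T + 2.0  # synthetic smooth calibration surface

# One B-spline surface per extrinsic parameter, fit to the samples.
surface = RectBivariateSpline(pan, tilt, param)

# Runtime "kinematics": evaluate the spline at an off-grid pose
# instead of solving a link/joint model or a Jacobian.
value = surface(12.5, -4.0)[0, 0]
```

In a full system one such surface would be fit for each entry of the PCU's extrinsic parameters, so updating the pose reduces to a handful of polynomial evaluations.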