• Title/Abstract/Keyword: Vision navigation

Search results: 313 (processing time: 0.022 s)

Video Road Vehicle Detection and Tracking based on OpenCV

  • Hou, Wei;Wu, Zhenzhen;Jung, Hoekyung
    • Journal of information and communication convergence engineering / Vol. 20, No. 3 / pp.226-233 / 2022
  • Video surveillance is widely used in security surveillance, military navigation, intelligent transportation, and other fields; its main research areas are pattern recognition, computer vision, and artificial intelligence. This article uses OpenCV to detect and track vehicles, monitoring by establishing an adaptive model against a stationary background. Compared with traditional vehicle detection, the approach is inexpensive, easy to install and maintain, covers a wide monitoring range, and can be deployed directly on the road. Intelligent analysis of the scene image with the CAMSHIFT tracking algorithm collects various traffic-flow parameters (including the number of vehicles over a period of time) together with the specific positions of vehicles, thereby handling vehicle drift. The system is reliable in operation and has high practical value.
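The CAMSHIFT tracker above builds on a mean-shift iteration that repeatedly moves a search window to the centroid of a probability map. A minimal pure-Python sketch of that inner loop, with a toy weight map standing in for OpenCV's histogram back-projection (all values are illustrative, not from the paper):

```python
# Minimal mean-shift iteration, the core step of CAMSHIFT tracking.
# `weights` stands in for a back-projected probability image; the
# window is (center_x, center_y, width, height).

def mean_shift(weights, window, max_iter=20, eps=0.5):
    """Shift the window to the centroid of `weights` under it."""
    cx, cy, w, h = window
    for _ in range(max_iter):
        x0, x1 = int(cx - w / 2), int(cx + w / 2)
        y0, y1 = int(cy - h / 2), int(cy + h / 2)
        m = mx = my = 0.0
        for y in range(max(y0, 0), min(y1, len(weights))):
            for x in range(max(x0, 0), min(x1, len(weights[0]))):
                wgt = weights[y][x]
                m += wgt
                mx += wgt * x
                my += wgt * y
        if m == 0:                      # window saw no probability mass
            break
        nx, ny = mx / m, my / m         # new window center
        moved = abs(nx - cx) + abs(ny - cy)
        cx, cy = nx, ny
        if moved < eps:                 # converged
            break
    return cx, cy, w, h

# Toy back-projection: a bright "vehicle" blob centered near (14, 10)
weights = [[0.0] * 20 for _ in range(20)]
for y in range(8, 13):
    for x in range(12, 17):
        weights[y][x] = 1.0

print(mean_shift(weights, (8, 8, 10, 10)))
```

The full CAMSHIFT algorithm additionally adapts the window size and orientation each frame; OpenCV's `cv2.CamShift` wraps both steps.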

자율 주행로봇을 위한 국부 경로계획 알고리즘 (A local path planning algorithm for a free-ranging mobile robot)

  • 차영엽;권대갑
    • 한국정밀공학회지 / Vol. 11, No. 4 / pp.88-98 / 1994
  • A new local path planning algorithm for free-ranging mobile robots is proposed. Since a laser range finder offers excellent angular and distance resolution, a simple local path planning algorithm is obtained with a directional weighting method for determining the heading direction of the mobile robot. The directional weighting method decides the heading direction by estimating the attractive resultant force, obtained as the directional weighting function times the range data, and by testing whether the collision-free-path and open-pathway conditions are satisfied. The effectiveness of the proposed local path planning algorithm is evaluated by computer simulation in a complex environment.
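The directional weighting idea above can be sketched in a few lines: weight each range reading by its angular deviation from the goal direction, discard blocked directions, and head toward the largest weighted range. The cosine weighting and clearance threshold here are illustrative assumptions, not the paper's exact functions:

```python
import math

# Sketch of a directional-weighting heading choice: the "attractive
# force" of each scan direction is a goal-alignment weight times the
# measured range; directions closer than `min_clearance` are treated
# as violating the collision-free condition and skipped.

def choose_heading(ranges, angles, goal_angle, min_clearance=1.0):
    best_angle, best_score = None, -1.0
    for r, a in zip(ranges, angles):
        if r < min_clearance:                         # blocked direction
            continue
        weight = max(0.0, math.cos(a - goal_angle))   # favor the goal
        score = weight * r                            # attractive force
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle

# Toy scan: obstacle dead ahead (short range at 0 rad), open to the left
angles = [-0.6, -0.3, 0.0, 0.3, 0.6]
ranges = [4.0, 5.0, 0.5, 4.5, 2.0]
print(choose_heading(ranges, angles, goal_angle=0.0))
```

With the obstacle straight ahead, the robot veers to the open reading at -0.3 rad rather than driving toward the goal direction.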


시각-언어 이동 작업을 위한 장소 미리보기 메모리 (Lookahead Place Memory for Vision-Language Navigation Tasks)

  • 오선택;김인철
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2020년도 추계학술발표대회 / pp.992-995 / 2020
  • Vision-language navigation is a task in which an agent follows a given natural-language instruction to reach a target location in a specific indoor space. Given the nature of the task, recognizing the places that appear as landmarks in the instruction is of great help in carrying it out. This paper proposes a lookahead place memory that stores the major place information making up the environment; the agent performs the task while taking the place information stored in this memory into account. Experiments in the Matterport3D simulation environment show that the approach achieves the highest performance on the R2R benchmark dataset.

카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법 (Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model)

  • 임이지;최대선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.1099-1110 / 2023
  • Perception systems for autonomous driving and robot navigation fuse multiple sensors (multi-sensor fusion) to improve performance, then carry out vision tasks such as object detection and tracking and lane detection. Deep learning models that fuse camera and LiDAR sensors are being actively studied, but deep learning models are vulnerable to adversarial attacks that tamper with the input data. Existing attacks on multi-sensor perception systems for autonomous driving focus on lowering the confidence score of an object detection model to induce missed obstacle detections, and they are limited to the targeted model. An attack on the sensor-fusion stage, by contrast, can cascade errors into every vision task that follows the fusion, a risk that must be considered; moreover, attacking the LiDAR point cloud, which is hard to inspect visually, makes the attack itself hard to detect. This study proposes an attack that degrades the accuracy of LCCNet, an image-scaling-based camera-LiDAR calibration model, by applying a scaling attack to the input LiDAR points. Experiments over scaling algorithms and attack magnitudes induced fusion errors of more than 77% on average.
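As a toy illustration of the scaling perturbation (the factor and point values are hypothetical, not the paper's attack parameters): multiplying each LiDAR point by a scale factor moves it along its viewing ray, so its projection into the camera image shifts and the learned camera-LiDAR alignment degrades.

```python
# Toy scaling perturbation on a LiDAR point cloud. The 1.1 factor is
# an arbitrary example; a real attack like the one in the paper tunes
# the scaling to maximize the calibration model's error.

def scale_points(points, factor):
    """Scale every 3D point about the sensor origin."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

cloud = [(10.0, 2.0, 0.5), (8.0, -1.0, 0.2)]
attacked = scale_points(cloud, 1.1)
print(attacked[0])
```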

DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용 (Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment)

  • 박순용;최성인;장재석;정순기;김준;채정숙
    • 제어로봇시스템학회논문지 / Vol. 15, No. 7 / pp.700-710 / 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of the unmanned vehicle is a very important task for its automatic navigation, and conventional positioning sensors may fail to work properly in real situations due to internal and external interference. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle: the registration yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result; for coarse registration, we employ a fast random-sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by pair-wise registration between range images. To reduce the accumulated error of pair-wise registration, we periodically refine the registration between the range images and the DSM. A virtual environment was established to perform several experiments with a virtual vehicle: range images were created from the DSM by modeling a real 3D sensor, and the vehicle moved along three different paths while acquiring range images. Experimental results show that the average registration error is below 1.3 m.
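The pair-wise registration step can be illustrated in 2D, where the rigid transform between two scans with known correspondences has a closed form: center both point sets, recover the rotation angle from cross/dot sums, then solve for the translation. A real pipeline also needs correspondence search (e.g. ICP) and the periodic DSM refinement described above; this sketch assumes exact correspondences:

```python
import math

# Closed-form 2D rigid registration (a Procrustes-style solution):
# find theta, tx, ty such that dst ~= R(theta) @ src + (tx, ty).

def register_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_num = s_den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy     # centered source point
        bx, by = dx - cdx, dy - cdy     # centered destination point
        s_num += ax * by - ay * bx      # cross terms -> sin(theta)
        s_den += ax * bx + ay * by      # dot terms   -> cos(theta)
    theta = math.atan2(s_num, s_den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation aligns centroids
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Synthetic check: rotate a "scan" by 30 degrees, translate by (2, -1)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
th = math.radians(30.0)
dst = [(math.cos(th) * x - math.sin(th) * y + 2.0,
        math.sin(th) * x + math.cos(th) * y - 1.0) for x, y in src]
theta, tx, ty = register_2d(src, dst)
print(math.degrees(theta), tx, ty)
```

The paper's 3D analogue replaces the angle formula with an SVD-based rotation estimate, but the structure (center, rotate, translate) is the same.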

내시경 로봇의 기술동향 (Technological Trend of Endoscopic Robots)

  • 김민영;조형석
    • 제어로봇시스템학회논문지 / Vol. 20, No. 3 / pp.345-355 / 2014
  • Since the beginning of the 21st century, innovative technologies in robotic and telepresence surgery have revolutionized minimal-access surgery and have continued to advance it in recent years. One such surgery is endoscopic surgery, in which an endoscope and endoscopic instruments are inserted into the body through small incisions or natural openings and surgical operations are carried out by a laparoscopic procedure. Given the vast amount of development in this technology, this review describes only the technological state of the art and trends of endoscopic robots, limited further to key components, their functional requirements, and operational procedures in surgery. In particular, it first describes technological limitations in the development of key components and then focuses on the performance required of their functions, including position control, tracking, navigation, and manipulation of the flexible endoscope body and its end effector. Despite rapid progress in these functional components, endoscopic surgical robots should become much smaller, less expensive, and easier to operate, and should seamlessly integrate emerging technologies for intelligent vision and dexterous manipulation, not only from surgical and ergonomic points of view but also from that of safety. We believe that in these respects medical robotic technology for endoscopic surgery will continue to advance in the near future, enough to replace almost all kinds of current endoscopic surgery. This issue remains to be addressed in other review articles.

컴퓨터 비젼을 이용한 선박 교통량 측정 및 항적 평가 (The Vessels Traffic Measurement and Real-time Track Assessment using Computer Vision)

  • 주기세;정중식;김철승;정재용
    • 해양환경안전학회지 / Vol. 17, No. 2 / pp.131-136 / 2011
  • Computing the tracks of navigating vessels and measuring traffic volume with computer vision is a useful way, from the standpoint of preventing marine accidents, to predict whether an accident may occur. In this study we developed an algorithm that recognizes vessels using computer vision techniques such as image reduction, differential operators, and maximum/minimum values, computes their real-world coordinates, and displays their real-time tracks on an electronic chart, so that possible collisions with offshore structures can be checked visually. Because the algorithm is based on region information, it complements the weakness of existing radar systems, which rely on point information.
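The differential-operator stage can be sketched as a Sobel gradient magnitude over a grayscale frame; the image-reduction and min-max stages, and the transform to chart coordinates, are omitted here, and the image data are toy values:

```python
# Sobel gradient magnitude over a grayscale image given as nested
# lists: a minimal example of the differential operator used to pick
# out vessel boundaries against the sea surface.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for j in range(3):
                for i in range(3):
                    v = img[y + j - 1][x + i - 1]
                    gx += SOBEL_X[j][i] * v
                    gy += SOBEL_Y[j][i] * v
            out[y][x] = (gx * gx + gy * gy) ** 0.5   # edge strength
    return out

# Toy frame: dark sea (0) with a bright vessel block (9)
img = [[0] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 5):
        img[y][x] = 9
grad = gradient_magnitude(img)
print(grad[2][1])   # strong response at the vessel's left boundary
```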

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete / Vol. 33, No. 5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation has relied on target objects such as markers or patterns; these marker- or pattern-based methods are often time-consuming because a target object must be installed. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. This framework streamlines the extrinsic calibration process and thereby has the potential to enhance the efficiency of CV applications and data collection at construction sites, expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
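For reference, the pinhole projection that PnP inverts: given intrinsics K and extrinsics (R, t), a 3D world point maps to pixel coordinates as below, and solvePnP-style methods recover R and t from several such 3D-2D pairs. All numbers here are toy values, not the paper's setup:

```python
# Pinhole projection of a 3D world point into 2D pixel coordinates.
# PnP inverts this mapping: given >= 4 pairs of 3D points (e.g. corners
# of a concrete form with known dimensions) and their 2D detections,
# it solves for the extrinsics R and t.

def project(point, K, R, t):
    # camera coordinates: Pc = R @ Pw + t
    xc = sum(R[0][j] * point[j] for j in range(3)) + t[0]
    yc = sum(R[1][j] * point[j] for j in range(3)) + t[1]
    zc = sum(R[2][j] * point[j] for j in range(3)) + t[2]
    u = K[0][0] * xc / zc + K[0][2]   # u = fx * x/z + cx
    v = K[1][1] * yc / zc + K[1][2]   # v = fy * y/z + cy
    return u, v

K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.1, -0.2, 2.0]                                      # camera 2 m back

# hypothetical corner of a 0.5 m form edge, world coordinates in meters
print(project((0.5, 0.0, 0.0), K, R, t))
```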

Real-Time Comprehensive Assistance for Visually Impaired Navigation

  • Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha imam
    • International Journal of Computer Science & Network Security / Vol. 24, No. 5 / pp.1-10 / 2024
  • Individuals with visual impairments face numerous challenges in their daily lives, and navigating streets and public spaces is particularly daunting. The inability to identify safe crossing locations and to assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization; in Saudi Arabia alone there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairment on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection. Two models were trained: one focused on street-crossing obstacles, the other on searching for objects. The first model was trained on a labeled dataset of 5,283 images of road obstacles and traffic signals using the YOLOv8 and YOLOv5 models, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an impressive accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.

충돌회피 및 차선추적을 위한 무인자동차의 제어 및 모델링 (Unmanned Ground Vehicle Control and Modeling for Lane Tracking and Obstacle Avoidance)

  • 유환신;김상겸
    • 한국항행학회논문지 / Vol. 11, No. 4 / pp.359-370 / 2007
  • Lane tracking and obstacle avoidance are core technologies for unmanned ground vehicle systems. This paper proposes lane-tracking and obstacle-avoidance methods based on vehicle control, modeling, and sensor experiments. First, obstacle avoidance consists of two parts: longitudinal control for acceleration and deceleration, and lateral control via steering. Each subsystem requires the vehicle's position, recognition of the surrounding environment, and fast situation-dependent processing to control the unmanned vehicle; while the control strategy is active, recognizing and avoiding obstacles on the road depends on the vehicle's speed. Second, we describe a vision-based lane-tracking method, also in two parts: a road model for lateral and longitudinal control, and the lane-tracking method itself, covering the image-processing algorithm and filtering. Finally, the proposed control and modeling methods for lane tracking and obstacle avoidance are validated through real-vehicle experiments.
