• Title/Abstract/Keyword: Camera Work

Search results: 500

지평선을 이용한 영상기반 위치 추정 방법 및 위치 추정 오차 (A Vision-based Position Estimation Method Using a Horizon)

  • 신종진;남화진;김병주
    • 한국군사과학기술학회지 / Vol. 15, No. 2 / pp.169-176 / 2012
  • GPS (Global Positioning System) is widely used for the position estimation of an aerial vehicle. However, GPS may not be available due to hostile jamming or strategic reasons. A vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without any man-made landmark, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the narrow field of view of the camera, two images with different camera views are utilized to estimate a position. The algorithm is tested using real infrared images acquired on the ground. The experimental results show that the method can be used for estimating the position of an aerial vehicle.
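
A minimal sketch of the horizon-matching idea (not the paper's algorithm, which also uses INS attitudes and two camera views): ray-march a DEM to predict the horizon elevation-angle profile at candidate positions and pick the candidate that best matches the profile extracted from the image. All names, the brute-force grid search, and the simplified geometry are assumptions for illustration.

```python
import numpy as np

def horizon_profile_from_dem(dem, cell_size, pos_rc, cam_alt, azimuths, max_range=20000.0):
    """Predict the horizon elevation angle for each azimuth by marching along the
    viewing ray over the DEM and keeping the maximum elevation angle encountered."""
    profile = np.full(len(azimuths), -np.pi / 2)
    for i, az in enumerate(azimuths):
        for r in np.arange(cell_size, max_range, cell_size):
            row = int(round(pos_rc[0] - (r / cell_size) * np.cos(az)))
            col = int(round(pos_rc[1] + (r / cell_size) * np.sin(az)))
            if not (0 <= row < dem.shape[0] and 0 <= col < dem.shape[1]):
                break
            profile[i] = max(profile[i], np.arctan2(dem[row, col] - cam_alt, r))
    return profile

def estimate_position(observed_profile, dem, cell_size, candidates, cam_alt, azimuths):
    """Pick the candidate (row, col) grid cell whose predicted horizon profile
    is closest (in mean squared error) to the horizon observed in the image."""
    errors = [np.mean((horizon_profile_from_dem(dem, cell_size, c, cam_alt, azimuths)
                       - observed_profile) ** 2) for c in candidates]
    return candidates[int(np.argmin(errors))]
```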

카메라의 동작을 보정한 장면전환 검출 (Shot Transition Detection by Compensating Camera Operations)

  • 장석우;최형일
    • 정보처리학회논문지B / Vol. 12B, No. 4 / pp.403-412 / 2005
  • In this paper, we propose a shot transition detection method that finds the boundaries between shots in video data and classifies them by type. The proposed method detects cuts (abrupt transitions) as well as fades and dissolves (gradual transitions). The images are first compensated using the camera operation information they contain, and features are then extracted from the compensated images to detect shot transitions; this prevents the various false detections that camera operations would otherwise cause. In addition, because the motion of local moving objects is removed during the compensation step, false detections caused by moving objects are also prevented. In the experiments, we compare the proposed method with existing shot transition detection methods on a variety of video data and show that it performs better.
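
As a rough sketch of the compensation idea (illustrative only; the authors' features, motion model, and thresholds are not specified here), the snippet below estimates the global camera motion between two frames with OpenCV, warps the previous frame to cancel that motion, and returns a difference score that stays low during pans or zooms but spikes at genuine cuts.

```python
import cv2
import numpy as np

def compensated_frame_difference(prev_gray, curr_gray):
    """Estimate global (camera) motion, cancel it, and score the remaining change."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return 1.0  # no texture to track: treat as maximally different
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.ravel() == 1
    if good.sum() < 10:
        return 1.0
    # Similarity transform fitted with RANSAC, which also discards local object motion.
    M, _ = cv2.estimateAffinePartial2D(pts_prev[good], pts_curr[good], method=cv2.RANSAC)
    if M is None:
        return 1.0
    warped = cv2.warpAffine(prev_gray, M, (curr_gray.shape[1], curr_gray.shape[0]))
    diff = cv2.absdiff(warped, curr_gray)
    return float(diff.mean()) / 255.0  # large values suggest a shot transition, not camera work
```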

모노 비전 기반 3차원 평행직선의 방향 추정 기법 및 파렛트 측정 응용 (A Monocular Vision Based Technique for Estimating Direction of 3D Parallel Lines and Its Application to Measurement of Pallets)

  • 김민환;변성민;김진
    • 한국멀티미디어학회논문지 / Vol. 21, No. 11 / pp.1254-1262 / 2018
  • Parallel lines appear frequently in real environments and are useful for analyzing the structure of objects or buildings. In this paper, a vision-based technique for estimating the three-dimensional direction of parallel lines is suggested; it uses a calibrated camera and is applicable to an image captured from that camera. The correctness of the technique is theoretically described and discussed. The technique is well suited to measuring the orientation of a pallet in a warehouse, because a pair of parallel lines is easily detected on the front plane of the pallet. It thereby enables a forklift with a well-calibrated camera to engage the pallet automatically; such a forklift can engage a pallet on a storage rack as well as one on the ground. The usefulness of the suggested technique for other applications is also discussed. We conducted experiments measuring a real commercial pallet at various orientations and distances and found that the technique works correctly and accurately.
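
The technique presumably rests on the standard projective-geometry fact that the vanishing point v of a family of parallel 3D lines with direction d satisfies v ~ K d for a camera with intrinsic matrix K, so d can be recovered (up to sign) as the normalized K^-1 v. A minimal sketch with purely illustrative intrinsics and pixel coordinates:

```python
import numpy as np

def direction_from_parallel_lines(K, p1, p2, q1, q2):
    """3D direction (unit vector, camera frame, up to sign) of two parallel scene
    lines observed in one calibrated image. p1,p2 and q1,q2 are homogeneous pixels."""
    l1 = np.cross(p1, p2)            # image line through p1 and p2
    l2 = np.cross(q1, q2)            # image line through q1 and q2
    v = np.cross(l1, l2)             # their intersection = vanishing point (homogeneous)
    d = np.linalg.inv(K) @ v         # v ~ K d  =>  d ~ K^-1 v
    return d / np.linalg.norm(d)

# Hypothetical intrinsics and measurements, e.g. two edges on a pallet's front face.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
d = direction_from_parallel_lines(K,
                                  [100.0, 200.0, 1.0], [500.0, 240.0, 1.0],
                                  [120.0, 320.0, 1.0], [520.0, 340.0, 1.0])
print(d)
```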

카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법 (Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model)

  • 임이지;최대선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.1099-1110 / 2023
  • Perception systems for autonomous driving and robot navigation fuse multiple sensors (multi-sensor fusion) to improve performance and then carry out vision tasks such as object detection and tracking and lane detection. Deep learning models based on the fusion of camera and LiDAR sensors are being studied actively. However, deep learning models are vulnerable to adversarial attacks that manipulate the input data. Existing attacks on multi-sensor autonomous driving perception systems focus on lowering the confidence scores of the object detection model to induce missed detections of obstacles, but they are limited in that only the target model can be attacked. An attack on the sensor fusion stage, by contrast, can cause cascading errors in the vision tasks that follow the fusion, and this risk needs to be considered. Moreover, attacking the LiDAR point cloud data, which is hard to inspect visually, makes it difficult to tell whether an attack has occurred. In this study, we propose an image-scaling-based attack method that degrades the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method applies a scaling attack to the input LiDAR points. Experiments with different scaling algorithms and attack magnitudes show that the attack induces fusion errors of more than 77% on average.
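
The abstract does not spell out the perturbation itself; as a loose stand-in (a plain geometric rescaling of the cloud, not the image-scaling attack algorithms the paper evaluates), the sketch below scales the LiDAR points about their centroid before they are fed to a fusion/calibration model such as LCCNet. The function name, scale factor, and input pipeline are assumptions.

```python
import numpy as np

def scaling_attack(points_xyz, scale=1.1, center=None):
    """Scale an (N, 3) LiDAR point cloud about a center (default: centroid).
    The cloud keeps its shape, but its projection into the camera no longer
    matches the true extrinsics, which is what degrades the fusion model."""
    pts = np.asarray(points_xyz, dtype=np.float64)
    c = pts.mean(axis=0) if center is None else np.asarray(center, dtype=np.float64)
    return (pts - c) * scale + c

# Hypothetical use: perturb the cloud before whatever projection/packing step
# the target calibration network expects as input.
cloud = np.random.default_rng(0).uniform(-10.0, 10.0, size=(2048, 3))
attacked = scaling_attack(cloud, scale=1.1)
```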

Adobe Camera Raw를 이용한 효과적인 3D 렌더 이미지 보정 (Efficient Color Correction for 3D rendered images using Adobe camera raw)

  • 윤영두;최은영
    • 만화애니메이션 연구 / No. 33 / pp.425-447 / 2013
  • With the popularization of digital cameras, research on ISP (Image Signal Processing) has become active, and many image correction applications that users can operate easily have been developed. In particular, AWB (Automatic White Balance) and auto exposure are the ISP functions that attract the most attention and contribute greatly to improving the quality of images and video. Lighting and cameras in 3D programs are modeled on the principles of real cameras and lighting, but professional functions found in real cameras, such as auto exposure and AWB, are not implemented in 3D programs. Color correction of footage requires expert knowledge of color, and the tools provided by compositing programs are more complex than ordinary photo correction applications. In particular, university students studying animation often skip the color correction step and complete their animation through rendering and compositing alone. This study therefore proposes a 3D production pipeline that applies these color correction functions to actual 3D animation production, so that even someone without expert knowledge of color can easily correct colors and raise the quality of rendered images.
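
Adobe Camera Raw itself is an interactive tool, but the AWB and auto-exposure adjustments the abstract refers to can be approximated in code. Below is a minimal gray-world white balance and global exposure correction for an 8-bit RGB render, purely to illustrate what those ISP-style functions do; it is not part of the pipeline proposed in the paper.

```python
import numpy as np

def gray_world_awb(img):
    """Gray-world automatic white balance: scale each channel so the channel
    means become equal, a rough stand-in for an ISP's AWB step."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0.0, 255.0)

def auto_exposure(img, target_mean=118.0):
    """Simple global exposure correction that pulls the mean toward mid-gray."""
    img = img.astype(np.float64)
    gain = target_mean / max(img.mean(), 1e-6)
    return np.clip(img * gain, 0.0, 255.0)
```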

반도체 자동화를 위한 빈피킹 로봇의 비전 기반 캘리브레이션 방법에 관한 연구 (A Study on Vision-based Calibration Method for Bin Picking Robots for Semiconductor Automation)

  • 구교문;김기현;김효영;심재홍
    • 반도체디스플레이기술학회지 / Vol. 22, No. 1 / pp.72-77 / 2023
  • In many manufacturing settings, including the semiconductor industry, products are completed by producing and assembling various components. Sorting and classifying randomly mixed parts takes a lot of time and labor. Recently, many efforts have been made to have robots select and assemble the correct parts from mixed parts. Automating the sorting and classification of randomly mixed components is difficult, since the various objects and the positions and attitudes of the robot and camera in 3D space need to be known. Previously, robots grasped only objects placed at specific positions, or people sorted the items directly. For a robot to pick up arbitrary objects in 3D space, bin picking technology is required, and realizing it depends on knowing the coordinate-system relationships between the robot, the grasping target, and the camera. Calibration is necessary to establish these relationships so that the robot can grasp the object recognized by the camera. Recovering the depth value from 2D images, which the 3D reconstruction in bin picking requires, is difficult. In this paper, we propose using the depth information of an RGB-D camera as the Z value in the rotation and translation transformations used for calibration. Camera calibration is carried out for accurate coordinate transformation of objects in 2D images, followed by calibration between the robot and the camera. We demonstrated the effectiveness of the proposed method through accuracy evaluations of the camera calibration and of the robot-camera calibration.
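
A minimal sketch of the role depth plays here, under assumed names and a pinhole model: the RGB-D depth value supplies the Z coordinate when a detected pixel is back-projected into the camera frame, and a 4x4 transform from the robot-camera calibration then maps that point into the robot base frame for grasping.

```python
import numpy as np

def pixel_to_camera(u, v, depth_m, K):
    """Back-project pixel (u, v) with measured depth (metres) into the camera frame."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth_m / fx,
                     (v - cy) * depth_m / fy,
                     depth_m])

def camera_to_robot(p_cam, T_base_cam):
    """Map a camera-frame point into the robot base frame using the 4x4
    homogeneous transform obtained from robot-camera (hand-eye) calibration."""
    return (T_base_cam @ np.append(p_cam, 1.0))[:3]
```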

비대면(Untact) 업무를 위한 화상인식 PCA 사용자 인증 시스템 연구 (A Study on the PCA base Face Authentication System for Untact Work)

  • 박종순;박찬길
    • 디지털산업정보학회논문지 / Vol. 16, No. 4 / pp.67-74 / 2020
  • As the information age develops, online education and non-face-to-face work are becoming common. Remote work such as tele-education and video conferencing enabled by information technology has also become widespread due to COVID-19. Unexpected information leakage can occur online when companies work remotely or hold meetings, so a system to authenticate users is needed to reduce such leakage. Among the various ways to authenticate remote-access users, this study proposes a method that identifies users with biometric face authentication. It uses principal component analysis (PCA), which extracts several characteristic features for face recognition and processes their interrelationships. The proposed method authenticates a user from the shape and features of the face using PCA and a camera connected to the computer, so it can be applied easily without any additional devices.
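
As a compact illustration of PCA-based (eigenfaces-style) face authentication of the kind described, with the component count, distance metric, and threshold chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def train_pca(face_vectors, n_components=20):
    """Fit an eigenface subspace from an (N, D) array of flattened, aligned face images."""
    mean = face_vectors.mean(axis=0)
    _, _, Vt = np.linalg.svd(face_vectors - mean, full_matrices=False)
    return mean, Vt[:n_components]           # mean face and principal axes

def project(face_vector, mean, components):
    """PCA code (feature vector) of one face."""
    return components @ (face_vector - mean)

def authenticate(probe, enrolled_codes, mean, components, threshold=2500.0):
    """Accept the probe if its PCA code is close enough to some enrolled user's code."""
    code = project(probe, mean, components)
    dists = np.linalg.norm(enrolled_codes - code, axis=1)
    return bool(dists.min() < threshold), int(dists.argmin())
```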

Improved LiDAR-Camera Calibration Using Marker Detection Based on 3D Plane Extraction

  • Yoo, Joong-Sun;Kim, Do-Hyeong;Kim, Gon-Woo
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 6 / pp.2530-2544 / 2018
  • In this paper, we propose an enhanced LiDAR-camera calibration method that extracts the marker plane from 3D point cloud information. In previous work, we estimated straight lines along each board's border to obtain the vertices. However, the errors in the point information along the z axis were not considered. These errors are caused by the effects of user selection on the board border. Because of the nature of LiDAR, the point information is separated in the horizontal direction, causing the approximated straight-line model to be erroneous. In the proposed work, we obtain each vertex by estimating a rectangle from a plane rather than intersecting straight lines, which yields the vertices more precisely than in the previous study. The advantage of using planes is that the area is easier to select and most of the point information on the board is available. We demonstrated through experiments that the proposed method obtains more accurate results than the previous method.
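
A minimal sketch of the plane-extraction step (a least-squares plane fit via SVD over the selected board points, without the outlier rejection or rectangle estimation a full implementation would need): fitting a plane uses most of the board's points and averages out the range noise that corrupts border-line fits.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set: returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = Vt[-1]                            # direction of least variance
    return normal / np.linalg.norm(normal), centroid

def project_to_plane(points, normal, centroid):
    """Snap noisy board points onto the fitted plane before estimating the
    marker rectangle and its vertices."""
    d = (points - centroid) @ normal
    return points - np.outer(d, normal)
```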

자율적인 시각 센서 피드백 기능을 갖는 원격 로보트 시스템교환 제어 (Traded control of telerobot system with an autonomous visual sensor feedback)

  • 김주곤;차동혁;김승호
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1996년도 한국자동제어학술회의논문집(국내학술편), 포항공과대학교, 포항, 24-26 Oct. 1996 / pp.940-943 / 1996
  • In teleoperation, a human operator generally controls the slave arm while watching a monitor fed by a camera installed in the working environment. Because the monitor shows only a 2-D image, the operator lacks depth information and cannot work with high accuracy. In this paper, we propose a traded control method using a visual sensor to solve this problem. With the proposed algorithm, a teleoperation system can be controlled with precision. Not only the human operator's command but also an autonomous visual-sensor feedback command is given to the slave arm, so that the current image features are made to coincide with the target image features. When the slave arm is far from the target position, the operator can clearly perceive the difference between the desired and current image features, but the computed visual-sensor command contains large errors; when the slave arm is near the target position, the situation is reversed. With this visual sensor feedback, the operator does not need to resolve the fine differences between the desired and current image features, and the proposed method achieves higher accuracy than methods without sensor feedback. The effectiveness of the proposed control method is verified through a series of experiments.
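
One way to read the traded-control idea in code (a hypothetical blending rule, not the paper's control law): weight the autonomous visual-feedback command more heavily as the image-feature error shrinks, and the operator's command more heavily when the arm is still far from the target.

```python
import numpy as np

def traded_command(human_cmd, visual_cmd, feature_error, near_scale=20.0):
    """Blend operator and visual-servoing commands by image-feature error magnitude.
    near_scale (pixels) is an illustrative tuning constant, not a value from the paper."""
    e = float(np.linalg.norm(feature_error))
    w_visual = 1.0 / (1.0 + e / near_scale)    # -> 1 near the target, -> 0 far away
    return (w_visual * np.asarray(visual_cmd, dtype=float)
            + (1.0 - w_visual) * np.asarray(human_cmd, dtype=float))
```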

DEM generation from KOMPSAT-1 Electro-Optical Camera Data

  • Kim, Taejung;Lee, Heung-Kyu
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 1998년도 Proceedings of International Symposium on Remote Sensing / pp.325-330 / 1998
  • The first Korean remote sensing satellite, the Korea Multi-Purpose Satellite (KOMPSAT-1), is going to be launched in 1999. It will carry a 7 m resolution Electro-Optical Camera (EOC) for earth observation. The primary mission of KOMPSAT-1 is to acquire stereo imagery over the Korean peninsula for the generation of 1:25,000 cartographic maps. For this mission, research is being carried out to assess the possibilities of automated or semi-automated mapping of EOC data and to develop, where necessary, the enabling tools. This paper discusses the issue of automated DEM generation from EOC data and identifies some important aspects of developing a DEM generation system for EOC data. It also presents the current status of the development work for such a system, which has focused on sensor modelling, stereo matching and DEM interpolation techniques. The performance of the system is shown with a SPOT stereo pair, and a DEM generated with commercial software is presented for comparison. The paper concludes that the proposed system produces results preferable to those of the commercial software and suggests future developments for successful DEM generation from EOC data.
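
As a toy illustration of the stereo matching and height recovery steps mentioned above (window-based normalized cross-correlation along an image row, plus the classic parallax-to-height approximation h = p / (B/H)); real pushbroom sensor modelling and DEM interpolation are far more involved, and all parameters here are placeholders:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized image windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_disparity(left, right, row, col, win=7, max_disp=64):
    """Disparity at (row, col): shift a window along the row and keep the best NCC score.
    The caller must keep the window inside both images."""
    half = win // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(ncc(ref, cand))
    return int(np.argmax(scores)) if scores else 0

def parallax_to_height(parallax_m, base_to_height_ratio):
    """Approximate terrain height from ground-projected parallax: h = p / (B/H)."""
    return parallax_m / base_to_height_ratio
```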
