• Title/Summary/Keyword: one camera

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Because a panoramic image overcomes the limited viewing angle of a single camera and offers a wide field of view, it has been studied extensively in computer vision and stereo camera applications. To generate a panorama, stitching images taken by several ordinary cameras is widely used instead of a single wide-angle camera, since it reduces image distortion. The image stitching technique creates descriptors of feature points extracted from multiple images, compares their similarities, and links the images into one image. Each feature point carries several hundred dimensions of information, so data processing time grows as more images are stitched. In particular, when a panorama is generated from images of one object taken by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step for efficient stitching of images obtained from many unspecified cameras of one object or environment: images are pre-grouped based on camera sensor information, which reduces the number of images stitched at one time and thus the data processing time. Stitching is then performed hierarchically to create one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
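
A minimal sketch of the grouping-then-hierarchical-stitching idea described in this entry follows. The metadata fields (`x`, `y`, `yaw`), the thresholds, and the `stitch_fn` callback are illustrative assumptions; the abstract does not specify which sensor fields or grouping rule the authors actually use.

```python
import math
from typing import Dict, List

def group_by_sensor(meta: List[Dict], pos_thresh_m: float = 5.0,
                    yaw_thresh_deg: float = 30.0) -> List[List[int]]:
    """Greedily group image indices whose (assumed) GPS position and compass
    yaw are close enough that the images should overlap and stitch together."""
    groups: List[List[int]] = []
    for i, m in enumerate(meta):
        for g in groups:
            ref = meta[g[0]]
            dist_m = math.dist((m["x"], m["y"]), (ref["x"], ref["y"]))
            dyaw = abs((m["yaw"] - ref["yaw"] + 180.0) % 360.0 - 180.0)
            if dist_m <= pos_thresh_m and dyaw <= yaw_thresh_deg:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def hierarchical_stitch(images, meta, stitch_fn):
    """Stitch each sensor-based group first, then stitch the group panoramas
    into one large panorama (stitch_fn is any multi-image stitcher)."""
    group_panos = [stitch_fn([images[i] for i in g])
                   for g in group_by_sensor(meta)]
    return stitch_fn(group_panos)
```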

An Algorithm for Collecting Traffic Information by Vehicle Tracking Method from CCTV Camera Images on the Highway (고속도로변 폐쇄회로 카메라 영상에서 트래킹에 의한 교통정보수집 알고리즘)

  • Lee In Jung;Min Joan Young;Jang Young Sang
    • Journal of Information Technology Applications and Management / v.11 no.4 / pp.169-179 / 2004
  • There are many inductive loop detectors under the highways in Korea, and some detectors are image-based. Almost all image detectors focus on one or two lanes of the road to measure traffic information. This paper proposes an algorithm for automatically collecting traffic information from images of CCTV cameras installed along the highway. Information counted over only one or two lanes frequently contains critical errors caused by occlusion when large vehicles pass. In this paper, we use a tracking algorithm whose detection area covers all lanes, and traffic information is collected for individual vehicles using difference images within this detection area. This tracking algorithm performs better than lane-by-lane detection. Experiments were conducted on two different real road scenes for 20 minutes each. For the experiments, images were provided by a CCTV camera installed at the Kiheung Interchange upstream of the Kyongbu highway and by video recordings at the Chungkye Tunnel. For image processing, frames were captured by a frame-grabber board at 30 frames per second, at 640×480 pixel resolution with 256 gray levels, to reduce the total amount of data to be interpreted.

  • PDF
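
A minimal sketch of the full-width difference-image detection and nearest-neighbour association described in this entry, using OpenCV; the threshold, minimum blob area, association distance, and function names are illustrative choices, not the paper's.

```python
import cv2
import numpy as np

def detect_moving_blobs(prev_gray, cur_gray, diff_thresh=30, min_area=400):
    """Threshold the difference image over the full-width detection area and
    return blob centroids (threshold and area values are illustrative)."""
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def update_tracks(tracks, detections, max_jump_px=50.0):
    """Associate each detection with the nearest existing track, so a large
    vehicle straddling two lanes is still counted as one object."""
    for det in detections:
        best, best_d = None, max_jump_px
        for t in tracks:
            d = float(np.hypot(t[-1][0] - det[0], t[-1][1] - det[1]))
            if d < best_d:
                best, best_d = t, d
        if best is not None:
            best.append(det)
        else:
            tracks.append([det])
    return tracks
```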

Opto-Mechanical Design of IGRINS Slit-viewing Camera Barrel

  • Oh, Hee-Young;Yuk, In-Soo;Park, Chan;Lee, Han-Shin;Lee, Sung-Ho;Chun, Moo-Young;Jaffe, Daniel T.
    • Bulletin of the Korean Space Science Society / 2011.04a / pp.31.2-31.2 / 2011
  • IGRINS (Immersion GRating INfrared Spectrometer) is a high-resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). The slit-viewing camera is one of the four re-imaging optics in IGRINS, together with the input relay optics and the H- and K-band spectrograph cameras. Consisting of five lenses and one Ks-band filter, the slit-viewing camera relays the infrared image of the 2′ × 2′ field around the slit to the detector focal plane. Since IGRINS is a cryogenic instrument, the lens barrel is designed to be optimized at the operating temperature of 130 K. The barrel design also aims to achieve easy alignment and assembly. We use radial and axial springs to support the lenses and lens spacers against gravity and thermal contraction. The total weight of the lens barrel is estimated to be 1.2 kg. Results from structural analysis are presented.

  • PDF

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신형회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.687-690 / 2000
  • Recently, many studies have been conducted to protect human lives and property by preventing accidents caused by carelessness or mistakes; one of these efforts is the development of autonomous vehicles. The general control method for a vision-based autonomous vehicle is to determine the navigation direction by analyzing lane images from a camera and to navigate using a proper control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm based on the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relation of the camera: after transforming from the image coordinate system to the vehicle coordinate system, a steering angle is calculated using the Ackermann angle. The second uses a neural network algorithm; it does not need the geometric relation of the camera, is easy to apply as a steering algorithm, and is the closer of the two to the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which performed much better than other methods such as conjugate gradient or gradient descent.

  • PDF
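
The first (geometric) steering method in this entry can be sketched as a flat-road back-projection followed by an Ackermann-style steering computation. The camera parameters, the pinhole/flat-ground assumptions, and the pure-pursuit form of the curvature are illustrative; the paper's exact transformation is not given in the abstract.

```python
import math

def image_to_vehicle(u, v, focal_px=800.0, cx=320.0, cy=240.0,
                     cam_height_m=1.2, pitch_rad=0.05):
    """Back-project an image point lying on the flat road into vehicle
    coordinates (x forward, y left).  Pinhole model, camera looking forward
    and pitched down by pitch_rad; all numbers are illustrative."""
    ray_x = (u - cx) / focal_px          # right
    ray_y = (v - cy) / focal_px          # down
    ray_z = 1.0                          # forward (camera optical axis)
    cp, sp = math.cos(pitch_rad), math.sin(pitch_rad)
    down = cp * ray_y + sp * ray_z       # downward component after removing pitch
    fwd = -sp * ray_y + cp * ray_z       # forward component after removing pitch
    t = cam_height_m / down              # scale so the ray meets the road plane
    return fwd * t, -ray_x * t           # (forward, left) in metres

def ackermann_steering(x_fwd, y_left, wheelbase_m=2.7):
    """Steering angle whose circular arc passes through the look-ahead point,
    using the usual Ackermann / pure-pursuit geometry."""
    curvature = 2.0 * y_left / (x_fwd ** 2 + y_left ** 2)
    return math.atan(wheelbase_m * curvature)

# Example: a lane point detected at pixel (400, 300) maps to a look-ahead point,
# from which the steering command follows.
x, y = image_to_vehicle(400, 300)
print(f"{x:.2f} m ahead, {y:.2f} m left -> steer {math.degrees(ackermann_steering(x, y)):.2f} deg")
```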

LATERAL CONTROL OF AUTONOMOUS VEHICLE USING LEVENBERG-MARQUARDT NEURAL NETWORK ALGORITHM

  • Kim, Y.-B.;Lee, K.-B.;Kim, Y.-J.;Ahn, O.-S.
    • International Journal of Automotive Technology / v.3 no.2 / pp.71-78 / 2002
  • A new control method for a vision-based autonomous vehicle is proposed to determine the navigation direction by analyzing lane information from a camera and to navigate the vehicle. In this paper, characteristic feature points are extracted from lane images using a lane recognition algorithm, and the vehicle is then controlled using a new Levenberg-Marquardt neural network algorithm. To verify the usefulness of the algorithm, another algorithm that uses the geometric relation between the camera and the vehicle is introduced for comparison: it transforms from the image coordinate system to the vehicle coordinate system and then determines steering from the Ackermann angle. The steering scheme using the Ackermann angle depends heavily on correct geometric data for the vehicle and camera, whereas the proposed neural network algorithm does not need these geometric relations and instead reflects the driving style of a human driver. The proposed method is superior to other referenced neural network algorithms, such as the conjugate gradient and gradient descent methods, in autonomous lateral control.
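
The core of a Levenberg-Marquardt trained network is the damped Gauss-Newton weight update; a generic sketch follows, with `residual_fn`/`jacobian_fn` standing in for the steering network's error vector and its Jacobian (the toy linear fit below only shows the call pattern, not the paper's network).

```python
import numpy as np

def levenberg_marquardt_step(w, residual_fn, jacobian_fn, mu=1e-2):
    """One Levenberg-Marquardt update  dw = (J^T J + mu*I)^(-1) J^T r
    for parameter vector w."""
    r = residual_fn(w)                     # residuals, shape (m,)
    J = jacobian_fn(w)                     # Jacobian dr/dw, shape (m, n)
    H = J.T @ J + mu * np.eye(w.size)      # damped approximate Hessian
    return w - np.linalg.solve(H, J.T @ r)

# Toy usage: fit y = w0*x + w1; converges in a few steps.
x = np.linspace(-1.0, 1.0, 50)
y = 0.8 * x + 0.1
residual = lambda w: (w[0] * x + w[1]) - y
jacobian = lambda w: np.stack([x, np.ones_like(x)], axis=1)
w = np.zeros(2)
for _ in range(5):
    w = levenberg_marquardt_step(w, residual, jacobian)
print(w)   # approaches [0.8, 0.1]
```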

Mathematical Modeling for the Physical Relationship between the Coordinate Systems of IMU/GPS and Camera (IMU/GPS와 카메라 좌표계간의 물리적 관계를 위한 수학적 모델링)

  • Chon, Jae-Choon;Shibasaki, R.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.6 / pp.611-616 / 2008
  • When extracting geo-referenced 3D data from cameras mounted on mobile mapping systems, one of the important factors for the accuracy of the extracted data is the alignment of the relative translation (lever-arm) and rotation (bore-sight) between the coordinate systems of the Inertial Measurement Unit (IMU)/Global Positioning System (GPS) and the cameras. Since the conventional method calculates absolute camera orientation using ground control points (GCPs), the alignment is determined in a single coordinate system (the GPS coordinate system) and fundamentally requires GCPs. We propose a mathematical model for the alignment that uses initially uncoupled camera and IMU/GPS data without GCPs.
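
The lever-arm/bore-sight relationship underlying this entry is the usual composition of the IMU/GPS pose with the mounting calibration; a small sketch follows. The symbol names (R_wi, t_wi for the IMU-to-world pose; R_ic, t_ic for the camera-to-IMU bore-sight and lever-arm) are mine and may not match the paper's notation.

```python
import numpy as np

def camera_pose_from_imu(R_wi, t_wi, R_ic, t_ic):
    """Compose the IMU/GPS pose with the mounting calibration to get the
    camera pose in the world frame:
        R_wc = R_wi @ R_ic         (bore-sight rotation)
        t_wc = t_wi + R_wi @ t_ic  (lever-arm offset rotated into the world)"""
    R_wc = R_wi @ R_ic
    t_wc = t_wi + R_wi @ t_ic
    return R_wc, t_wc

# Example: IMU level and facing north, camera mounted 0.5 m to the IMU's right.
R_wi, t_wi = np.eye(3), np.array([100.0, 200.0, 30.0])
R_ic, t_ic = np.eye(3), np.array([0.5, 0.0, 0.0])
print(camera_pose_from_imu(R_wi, t_wi, R_ic, t_ic))
```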

MULTI-POINT MEASUREMENT OF STRUCTURAL VIBRATION USING PATTERN RECOGNITION FROM CAMERA IMAGE

  • Jeon, Hyeong-Seop;Choi, Young-Chul;Park, Jin-Ho;Park, Jong-Won
    • Nuclear Engineering and Technology / v.42 no.6 / pp.704-711 / 2010
  • Modal testing requires measuring the vibration of many points, for which an accelerometer, a gap sensor, or a laser vibrometer is generally used. Conventional modal testing requires mounting these sensors at all measurement points in order to acquire the signals, which is disadvantageous because it takes considerable measurement time and effort when there are many measurement points. In this paper, we propose a method for modal testing using camera images. A camera can measure the vibration of many points at the same time, but this requires that the measurement points be classified frame by frame. While it is possible to classify the measurement points one by one, this also takes much time, so we classify multiple points using pattern recognition. The feasibility of the proposed method is verified by a beam experiment, and the experimental results show that good results can be obtained.
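
A rough sketch of camera-based multi-point vibration measurement: normalized cross-correlation template matching stands in for the paper's pattern recognition step, and an FFT of each marker's displacement history exposes the modal peaks. Function names and the use of OpenCV are assumptions for illustration.

```python
import cv2
import numpy as np

def track_markers(frames, templates):
    """Locate every marker template in every frame by normalized
    cross-correlation and return one displacement history per marker."""
    histories = [[] for _ in templates]
    for frame in frames:
        for k, tpl in enumerate(templates):
            res = cv2.matchTemplate(frame, tpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)   # best-match top-left corner
            histories[k].append(max_loc)
    return histories

def vibration_spectrum(history, fps):
    """FFT of one marker's vertical displacement, exposing the modal peaks."""
    y = np.array([p[1] for p in history], dtype=float)
    y -= y.mean()
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    return freqs, np.abs(np.fft.rfft(y))
```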

New algorithm to estimate proton beam range for multi-slit prompt-gamma camera

  • Ku, Youngmo;Jung, Jaerin;Kim, Chan Hyeong
    • Nuclear Engineering and Technology / v.54 no.9 / pp.3422-3428 / 2022
  • The prompt gamma imaging (PGI) technique is considered one of the most promising approaches for estimating the range of the proton beam in the patient and unlocking the full potential of proton therapy. In the PGI technique, a dedicated algorithm is required to estimate the range of the proton beam from the prompt gamma (PG) distribution acquired by a PGI system. In the present study, a new range estimation algorithm was developed for a multi-slit prompt-gamma camera, one type of PGI system, to estimate the range of the proton beam with high accuracy. The performance of the developed algorithm was evaluated by Monte Carlo simulations for various beam/phantom combinations. Our results show that the developed algorithm is very robust, with very high accuracy and precision for all the cases considered in the present study. The range estimation accuracy of the developed algorithm was 0.5-1.7 mm, approximately 1% of the beam range, for 1×10⁹ protons. Even for the typical number of protons in a spot (1×10⁸), the range estimation accuracy of the developed algorithm was 2.1-4.6 mm, smaller than the range uncertainties and typical safety margin, while that of the existing algorithm was 2.5-9.6 mm.
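
The abstract does not describe the new algorithm itself, so the sketch below shows only a textbook-style baseline: locating the depth at which the prompt-gamma profile falls to 50% of its maximum beyond the peak. It illustrates what a range estimation algorithm consumes and produces, not the method proposed in the paper.

```python
import numpy as np

def distal_falloff_position(depth_mm: np.ndarray, pg_counts: np.ndarray) -> float:
    """Generic distal-falloff estimate: depth where the measured prompt-gamma
    profile drops to 50% of its maximum beyond the peak (linear interpolation).
    A baseline illustration, not the paper's proposed algorithm."""
    peak = int(np.argmax(pg_counts))
    half = 0.5 * pg_counts[peak]
    for i in range(peak, len(pg_counts) - 1):
        if pg_counts[i] >= half > pg_counts[i + 1]:
            # Interpolate between the two samples straddling the half-maximum.
            frac = (pg_counts[i] - half) / (pg_counts[i] - pg_counts[i + 1])
            return float(depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i]))
    return float(depth_mm[-1])
```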

Development of precision optical system and its application (납땜 검사용 정밀 광학 장치 개발과 응용)

  • 고국원;조형석;김재선;김성권
    • 제어로봇시스템학회:학술대회논문집 / 1997.10a / pp.36-39 / 1997
  • In this paper, we describe an approach to the design of a precision optical system for visual inspection of solder joint defects of SMCs (surface mount components) on PCBs (printed circuit boards). The illumination system, consisting of three tiered LED lamps, one main camera, and four side-view cameras, is implemented to generate iso-contours on the solder joint according to the gradient of the soldered surface. We analyze LED design parameters such as the incident angle and the diameter of the LED ring to achieve uniform illumination.

  • PDF
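
The tiered-ring illumination principle mentioned in this entry can be illustrated with a simple geometric calculation: each LED ring illuminates the joint at one incidence angle, and for a specular solder surface a patch tilted by roughly half that angle reflects that ring into the top camera, which is what maps each ring to one iso-slope contour. The ring diameters and heights below are made-up example values, not the paper's.

```python
import math

def ring_incident_angle(ring_diameter_mm: float, ring_height_mm: float) -> float:
    """Incidence angle (from the vertical camera axis) of light from an LED
    ring of the given diameter mounted at the given height above the joint."""
    return math.degrees(math.atan2(ring_diameter_mm / 2.0, ring_height_mm))

# Example: three tiered rings at increasing diameters give three slope bands.
for d, h in [(40.0, 60.0), (80.0, 45.0), (120.0, 30.0)]:
    print(f"ring d={d:5.1f} mm, h={h:4.1f} mm -> incidence {ring_incident_angle(d, h):5.1f} deg")
```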

A study on approach of localization problem using landmarks (Landmark를 이용한 localization 문제 접근에 관한 연구)

  • 김태우;이쾌희
    • 제어로봇시스템학회:학술대회논문집 / 1997.10a / pp.44-47 / 1997
  • Building a reliable mobile robot - one that can navigate without failures for long periods of time - requires that the uncertainty resulting from control and sensing be bounded. This paper proposes a new mobile robot localization method using artificial landmarks. For mobile robot localization, the proposed method uses camera calibration (extrinsic parameters only). We use a FANUC Arc Mate robot to estimate the posture error; the results show that the position error is less than 1 cm and the orientation error is less than 1 degree.

  • PDF
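
A hedged sketch of landmark-based localization with extrinsic parameters only: known 3D landmark coordinates and their detected pixels are passed to a PnP solver to recover the camera (robot) pose. OpenCV's solvePnP is used here as a stand-in; the paper's actual calibration procedure is not detailed in the abstract.

```python
import cv2
import numpy as np

def localize_from_landmarks(landmarks_3d, pixels_2d, K, dist_coeffs=None):
    """Recover the camera pose (extrinsic parameters only) from known
    artificial-landmark positions and their detected image coordinates."""
    obj = np.asarray(landmarks_3d, dtype=np.float64).reshape(-1, 3)
    img = np.asarray(pixels_2d, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                # world-to-camera rotation
    cam_pos_world = (-R.T @ tvec).ravel()     # camera (robot) position in world
    return R, cam_pos_world
```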