• Title/Summary/Keyword: multiple cameras

Search Result 223

Free Moving Time-lapse dolly Design (움직임이 자유로운 Time-lapse dolly 설계)

  • Kim, Seung-Min;Kim, Heung-Il;Jeon, Seung-Woo;Hwang, Jeong-Kil;Woo, Yoonhwan
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.12 / pp.6082-6089 / 2013
  • The supply of DSLR cameras has increased recently, and their widespread use has driven the development of various DSLR video techniques. Time-lapse is an effective filming technique that produces a fine-quality video clip by taking multiple still shots at set time intervals instead of recording continuous video, as existing techniques do. Mechanical equipment (a dolly) that controls the speed of movement and the shutter therefore needs to be developed. This paper introduces a method for designing a movement system that satisfies the design conditions. In addition, this paper proves the validity of the design by performing stress analysis on the static and dynamic conditions of the final design.
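
A minimal illustrative sketch of the interval arithmetic behind time-lapse shooting described above (the shot interval, event duration, and playback rate below are assumed example values, not figures from the paper):

```python
# Hypothetical time-lapse planning helper: given how long the real event lasts,
# how often a still is captured, and the playback frame rate, compute how many
# stills are taken and how long the resulting clip plays back.
def plan_timelapse(event_seconds: float, interval_seconds: float, playback_fps: float):
    frames = int(event_seconds // interval_seconds)   # one still per interval
    clip_seconds = frames / playback_fps              # length of the final clip
    return frames, clip_seconds

if __name__ == "__main__":
    # Example: a 2-hour scene shot every 5 seconds, played back at 24 fps.
    frames, clip = plan_timelapse(2 * 3600, 5, 24)
    print(f"{frames} frames -> {clip:.1f} s clip")    # 1440 frames -> 60.0 s clip
```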

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions: Part B / v.16B no.5 / pp.347-358 / 2009
  • In this thesis, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. With this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, so that the resolving power for linear gesture patterns is enhanced. We tested the proposed approach in terms of the accuracy of the trace angle and the dimensions of the working volume.
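
A minimal sketch of the motion-plane idea in this abstract: fit a plane to a 3D fingertip trajectory and project the trajectory onto it. The least-squares SVD fit shown here is a generic approach assumed for illustration, not necessarily the authors' estimator; the trajectory is assumed to be an N x 3 array of 3D positions.

```python
import numpy as np

def fit_motion_plane(points: np.ndarray):
    """Least-squares plane through a 3D trajectory: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_to_plane(points: np.ndarray, centroid: np.ndarray, normal: np.ndarray):
    """Project 3D points onto the fitted plane, flattening the gesture trajectory."""
    offsets = (points - centroid) @ normal
    return points - np.outer(offsets, normal)

# Example with a synthetic, slightly noisy planar trajectory.
t = np.linspace(0, 1, 50)
traj = np.c_[t, np.sin(2 * np.pi * t), 0.01 * np.random.randn(50)]
c, n = fit_motion_plane(traj)
flat = project_to_plane(traj, c, n)
```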

Development of a Forest Fire Tracking and GIS Mapping Base on Live Streaming (실시간 영상 기반 산불 추적 및 매핑기법 개발)

  • Cho, In-Je;Kim, Gyou-Beom;Park, Beom-Sun
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.123-127 / 2020
  • To obtain overall fire-line information for medium and large forest fires at night, a ground control system was developed that determines whether a forest fire has occurred from real-time video and calculates the location of the detected fire on the map using the drone's position, the video camera's angle information, and the altitude, thereby reducing the time otherwise required to match imagery acquired after the mission is completed. To verify the reliability of the developed function, the error distance of the video camera's aiming position information was measured at each flight altitude, and location information within a reliable range was displayed on the map. Because the function developed in this paper allows multiple forest-fire locations to be identified in real time, it is expected that overall fire-line information for establishing forest-fire suppression measures can be obtained more quickly.
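
A simplified flat-terrain sketch of how a ground target position might be computed from the drone's altitude and the camera's pan/tilt angles, as the abstract describes. The function names and the flat-ground assumption are illustrative, not the paper's actual geolocation model.

```python
import math

def ground_target_offset(altitude_m: float, pan_deg: float, tilt_down_deg: float):
    """Estimate the (east, north) offset of the point the camera aims at,
    assuming flat terrain and a camera tilted tilt_down_deg below the horizon."""
    if tilt_down_deg <= 0:
        raise ValueError("camera must look below the horizon to intersect the ground")
    ground_range = altitude_m / math.tan(math.radians(tilt_down_deg))  # horizontal distance
    east = ground_range * math.sin(math.radians(pan_deg))   # pan measured clockwise from north
    north = ground_range * math.cos(math.radians(pan_deg))
    return east, north

# Example: drone at 300 m altitude, camera 25 degrees below the horizon, panned 40 degrees east of north.
print(ground_target_offset(300.0, 40.0, 25.0))
```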

Disparity-based Depth Scaling of Multiview Images (변이 기반 다시점 영상의 인식 깊이감 조절)

  • Jo, Cheol-Yong;Kim, Man-Bae;Um, Gi-Mun;Hur, Nam-Ho;Kim, Jin-Woong
    • Journal of Broadcast Engineering / v.13 no.6 / pp.796-803 / 2008
  • In this paper, we present a depth scaling method for multiview images that can provide the 3D depth a user prefers. Unlike previous works that change the camera configuration, the proposed method utilizes depth data to scale the depth range requested by users. From multiview images and their corresponding depth data, the depth data are transformed into disparity, and the disparity is adjusted to control the perceived depth. In particular, our method can deal with multiview images captured by multiple cameras and can be extended from stereoscopic to multiview images. Based on a DSCQS subjective evaluation test on an automultiscopic 3D display, our experimental results show that the perceived depth is appropriately scaled according to the user's preferred depth.
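
The depth-to-disparity conversion and disparity adjustment mentioned above can be sketched as follows. The pinhole relation disparity = focal length x baseline / depth is the standard one; the simple linear scale factor is an assumption for illustration, not the paper's exact mapping.

```python
import numpy as np

def depth_to_disparity(depth: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Standard pinhole relation: disparity (pixels) = focal length * baseline / depth."""
    return focal_px * baseline_m / np.clip(depth, 1e-6, None)

def scale_perceived_depth(disparity: np.ndarray, scale: float) -> np.ndarray:
    """Scale the disparity range to expand (scale > 1) or compress (scale < 1) perceived depth."""
    return disparity * scale

depth = np.array([[2.0, 4.0], [8.0, 16.0]])                   # depth map in metres
disp = depth_to_disparity(depth, focal_px=1000.0, baseline_m=0.065)
flatter = scale_perceived_depth(disp, 0.5)                    # user prefers a shallower 3D effect
```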

A Synchronized Multiplexing Scheme for Multi-view HD Video Transport System over IP Networks (실시간 다시점 고화질 비디오 전송 시스템을 위한 동기화된 다중화 기법)

  • Kim, Jong-Ryool;Kim, Jong-Won
    • Journal of Broadcast Engineering / v.13 no.6 / pp.930-940 / 2008
  • This paper presents a prototype multi-view HD video transport system with synchronized multiplexing over IP networks. The proposed synchronized multiplexing addresses synchronization during video acquisition and multiplexing for interactive view selection during transport. For synchronized acquisition from multiple HDV camcorders over the IEEE 1394 interface, we estimate the timeline differences among the MPEG-2 compressed video streams using the global network time shared between the cameras and a server, and correct the streams' timelines by changing the time stamps of the MPEG-2 system stream. We also multiplex a selected number of the acquired HD views at the MPEG-2 TS (transport stream) level for interactive view selection during transport. With the proposed synchronized multiplexing scheme, we can thus display synchronized HD views.
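
A toy sketch of the timestamp-correction step described above: each stream's 90 kHz MPEG-2 timestamps are shifted by the offset estimated against a shared reference clock. The function names and data layout are assumptions for illustration, not the paper's actual TS handling.

```python
MPEG2_CLOCK_HZ = 90_000  # MPEG-2 PTS/DTS values run on a 90 kHz clock

def estimate_offset_ticks(camera_capture_time: float, server_reference_time: float) -> int:
    """Offset (in 90 kHz ticks) between one camera's timeline and the shared reference clock."""
    return round((server_reference_time - camera_capture_time) * MPEG2_CLOCK_HZ)

def correct_timestamps(pts_list: list[int], offset_ticks: int) -> list[int]:
    """Shift every presentation timestamp so all streams share one timeline (33-bit wrap)."""
    return [(pts + offset_ticks) % (1 << 33) for pts in pts_list]

# Example: one camcorder's clock is 120 ms behind the server's global time.
offset = estimate_offset_ticks(camera_capture_time=10.000, server_reference_time=10.120)
print(correct_timestamps([900_000, 903_003, 906_006], offset))
```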

A Study on Robot Arm Control System using Detection of Foot Movement (발 움직임 검출을 통한 로봇 팔 제어에 관한 연구)

  • Ji, H.;Lee, D.H.
    • Journal of rehabilitation welfare engineering & assistive technology / v.9 no.1 / pp.67-72 / 2015
  • A system for controlling a robotic arm through foot-motion detection was implemented for disabled users who cannot freely use their arms. To capture images of foot movement, two cameras were set up in front of the feet. After defining multiple regions of interest in the acquired images with the LabView-based Vision Assistant, foot movement was detected from left/right and up/down edge detection within the left and right image areas. Control data obtained from the edge-detection counts in the two foot images were transferred over serial communication, and the control system moved a 6-joint robotic arm up/down and left/right by foot. In experiments, the system achieved a reaction time within 0.5 seconds and an operation recognition rate of more than 88%.
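
A minimal sketch of how per-ROI edge counts could be mapped to arm direction commands, as the abstract outlines. The ROI names, threshold, and command set are hypothetical; in the paper the edge counts come from the LabView Vision Assistant pipeline.

```python
# Hypothetical mapping from per-ROI edge counts to robot-arm direction commands.
EDGE_THRESHOLD = 20  # assumed minimum edge count to accept a foot movement

def foot_command(left_roi_edges: dict[str, int], right_roi_edges: dict[str, int]) -> str:
    """Return one of 'left', 'right', 'up', 'down', or 'hold' from ROI edge counts."""
    if left_roi_edges.get("left", 0) > EDGE_THRESHOLD:
        return "left"
    if left_roi_edges.get("right", 0) > EDGE_THRESHOLD:
        return "right"
    if right_roi_edges.get("up", 0) > EDGE_THRESHOLD:
        return "up"
    if right_roi_edges.get("down", 0) > EDGE_THRESHOLD:
        return "down"
    return "hold"

print(foot_command({"left": 35, "right": 2}, {"up": 0, "down": 1}))  # -> "left"
```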


Fusing Algorithm for Dense Point Cloud in Multi-view Stereo (Multi-view Stereo에서 Dense Point Cloud를 위한 Fusing 알고리즘)

  • Han, Hyeon-Deok;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.25 no.5 / pp.798-807 / 2020
  • As digital camera technologies have developed, 3D images can be constructed from pictures captured by multiple cameras. The 3D image data are represented as a point cloud consisting of the 3D coordinates of the data and their associated attributes. Various techniques have been proposed to construct point cloud data; Structure-from-Motion (SfM) and Multi-view Stereo (MVS) are examples of the image-based technologies in this field. According to previous research, the point cloud generated from SfM and MVS may be sparse because depth information may be incorrect and some data are removed. In this paper, we propose an efficient algorithm that enhances the point cloud so that the density of the generated cloud increases. Simulation results show that the proposed algorithm outperforms conventional algorithms both objectively and subjectively.
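
A simplified sketch of the kind of depth-map fusion that underlies MVS point cloud generation: back-project per-view depth maps into world space with the pinhole model and merge them into one denser cloud. This is a generic illustration under those assumptions, not the fusing algorithm proposed in the paper.

```python
import numpy as np

def backproject_depth(depth: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Back-project a per-view depth map into world-space 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                                          # skip missing depth samples
    rays = np.linalg.inv(K) @ np.stack([u[valid], v[valid], np.ones(valid.sum())])
    pts_cam = rays * depth[valid]                              # scale camera rays by depth
    pts_world = cam_to_world[:3, :3] @ pts_cam + cam_to_world[:3, 3:4]
    return pts_world.T                                         # (N, 3) array of world points

def fuse_views(depths, Ks, poses) -> np.ndarray:
    """Concatenate back-projected points from every view into one denser cloud."""
    return np.vstack([backproject_depth(d, K, T) for d, K, T in zip(depths, Ks, poses)])
```

In practice a fusion step would also filter points that are inconsistent across views; that check is omitted here for brevity.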

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik;Ahn, Seong-Je;Park, Gwang-Yeong;Cha, Jae-Sang;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11A / pp.1066-1072 / 2010
  • In this paper we propose a multi-event surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an existing one, that camera cannot handle both objects simultaneously; likewise, if the object moves out of the scene during tracking, the camera loses it. In the proposed method, a nearby camera takes over in each case, tracing the new object or re-detecting the lost one. The nearby camera receives the object's location from the original camera and establishes a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
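
A minimal sketch of the camera-to-camera handoff idea: when a tracking camera loses its object or a second object appears, the nearest idle PTZ camera is given the object's last known position. All class and field names here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class PTZCamera:
    name: str
    position: tuple[float, float]   # camera location on the site plan
    busy: bool = False

def hand_off(object_xy: tuple[float, float], cameras: list[PTZCamera]) -> PTZCamera | None:
    """Pick the nearest idle camera and assign it the object's last known position."""
    idle = [c for c in cameras if not c.busy]
    if not idle:
        return None
    nearest = min(idle, key=lambda c: (c.position[0] - object_xy[0]) ** 2
                                      + (c.position[1] - object_xy[1]) ** 2)
    nearest.busy = True             # the neighbour now tracks (or re-detects) the object
    return nearest

cams = [PTZCamera("cam1", (0, 0), busy=True), PTZCamera("cam2", (10, 0)), PTZCamera("cam3", (30, 5))]
print(hand_off((12.0, 1.0), cams).name)   # -> cam2
```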

A Study on Efficient Self-Calibration of a Non-Metric Camera for Close-range Photogrammetry (근접 사진측량을 위한 효율적인 비측정카메라 캘리브레이션)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_1 / pp.511-518 / 2012
  • It is well known that non-metric digital cameras must be calibrated for close-range photogrammetry. Self-calibration is still not an easy task, however, because it requires a rather large calibration site with accurate control points, multiple image acquisitions from different positions, and accurate image-point measurements that are labor-intensive and time-consuming. On this premise, this study analyzed check-point accuracy for self-calibrations with different control-point designs and photo combinations. The tests showed that calibration with three photos covering three-dimensional control points produced high accuracy, while control points on a plane could attain comparable accuracy with four photos including one rotated by 90 degrees. We then compared the target accuracy of on-site self-calibration using flat control points with that of laboratory self-calibration and observed comparable results.
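
As a rough, self-contained illustration of calibrating a camera from several photos of a planar control-point field, the OpenCV sketch below synthesizes the "measured" image points with cv2.projectPoints from an assumed camera; in the paper's setting they would come from actual target measurements, and the bundle-adjustment-style self-calibration is more involved than this.

```python
import cv2
import numpy as np

# 7x5 planar control-point grid with 10 cm spacing (assumed layout, not the paper's site).
grid = np.array([[x, y, 0.0] for y in range(5) for x in range(7)], np.float32) * 0.1
K_true = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])   # assumed "true" camera
dist_true = np.zeros(5)

object_points, image_points = [], []
for i in range(4):                                            # four photos from different poses
    rvec = np.array([0.1 * i, -0.2 * i, 0.05], np.float64)
    tvec = np.array([0.0, 0.0, 1.5 + 0.2 * i], np.float64)
    img_pts, _ = cv2.projectPoints(grid, rvec, tvec, K_true, dist_true)
    object_points.append(grid)
    image_points.append(img_pts.astype(np.float32))

rms, K_est, dist_est, _, _ = cv2.calibrateCamera(object_points, image_points, (1280, 960), None, None)
print("reprojection RMS (px):", rms)
print("estimated focal lengths (px):", K_est[0, 0], K_est[1, 1])
```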

Fixed Homography-Based Real-Time SW/HW Image Stitching Engine for Motor Vehicles

  • Suk, Jung-Hee;Lyuh, Chun-Gi;Yoon, Sanghoon;Roh, Tae Moon
    • ETRI Journal / v.37 no.6 / pp.1143-1153 / 2015
  • In this paper, we propose an efficient architecture for a real-time image stitching engine for vision SoCs in motor vehicles. To enlarge the obstacle-detection distance and area for safety, we adopt panoramic images from multiple telegraphic cameras. We propose a stitching method based on a fixed homography that is derived from the initial frame of a video sequence and is used to warp all subsequent input images without regeneration. Because the fixed homography is generated only once, at the initial state, it can be calculated in SW to reduce HW costs. The proposed warping HW engine is based on a linear transform of the pixel positions of the warped images and reduces computational complexity by 90% or more compared with a conventional method. A dual-core SW/HW image stitching engine stitches input frames in parallel, improving performance by 70% or more over single-core operation. In addition, the dual-core structure detects failures in the state machines using lock-step logic to satisfy the ISO 26262 standard. The dual-core SW/HW image stitching engine is fabricated in an SoC with 254,968 gate counts using GlobalFoundries' 65 nm CMOS process. The single-core engine can produce panoramic images from three YCbCr 4:2:0 VGA images at 44 frames per second at a frequency of 200 MHz, without an LCD display.
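
A minimal sketch of the fixed-homography idea: estimate the homography once from the initial frames and reuse it to warp every later frame without re-estimation. The ORB + RANSAC estimation below is a generic stand-in for the paper's SW step, and the function names are assumptions; the frames would be supplied by the vehicle cameras.

```python
import cv2
import numpy as np

def fixed_homography(ref_frame: np.ndarray, src_frame: np.ndarray) -> np.ndarray:
    """Estimate the homography once, from the initial frames (ORB features + RANSAC)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref_frame, None)
    k2, d2 = orb.detectAndCompute(src_frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # maps src_frame into ref coordinates
    return H

def warp_with_fixed_H(frame: np.ndarray, H: np.ndarray, canvas_size: tuple) -> np.ndarray:
    """Reuse the same H for every subsequent frame; no per-frame re-estimation."""
    return cv2.warpPerspective(frame, H, canvas_size)
```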