• Title/Summary/Keyword: $360^{\circ}$ Camera


Coordinates Transformation and Correction Techniques of the Distorted Omni-directional Image (왜곡된 전 방향 영상에서의 좌표 변환 및 보정)

  • Cha, Sun-Hee;Park, Young-Min;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.816-819 / 2005
  • This paper proposes a coordinate correction technique that combines a 3D parabolic coordinate transformation function with a BP (Back Propagation) neural network in order to solve the spatial distortion caused by a catadioptric camera. Although a catadioptric camera can capture an omni-directional image covering all 360 degrees, the external shape of its mirror distorts the image. Accordingly, to recover ideal distance coordinates in 3D space from the distorted image, we use a transformation function based on the coordinates of the focus of the parabolic mirror and on the parabolic projection of the input image. The residual error of this procedure is corrected by the BP neural network algorithm.
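
A hedged illustration of the coordinate-transformation idea: the sketch below unwarps a donut-shaped omni-directional image by simple polar resampling. The function name, the linear radius mapping, and all parameters are assumptions for illustration; the paper's 3D parabolic coordinate function and BP-network refinement are not reproduced here.

```python
import numpy as np

def unwarp_omni(img, cx, cy, r_min, r_max, out_w=720, out_h=180):
    """Unwarp a donut-shaped omni-directional image into a panorama.

    (cx, cy): pixel coordinates of the mirror center;
    r_min, r_max: inner/outer radii of the usable mirror annulus.
    Simple polar resampling; the paper's parabolic mapping and
    BP-network correction would refine the radius term used here.
    """
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(r_min, r_max, out_h)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # Nearest-neighbour lookup back into the source image.
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```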


Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of Microsoft $Kinect^{(R)}$, an RGB-depth camera, to building a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficient. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted to 3D images, resulting in spatial information based on the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
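
The 2D-to-3D conversion step can be sketched with the standard pinhole back-projection below, assuming an already undistorted depth map; function and parameter names are illustrative, not the authors' code, and the extrinsic transform between the two cameras is noted in a comment.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D camera coordinates
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    fx, fy, cx, cy come from a checkerboard calibration; lens
    distortion is assumed to be corrected beforehand."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# With extrinsics (R, t) between the two Kinects, a point p1 in the
# first camera's frame maps to the second as: p2 = R @ p1 + t
```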

Image Transmission System Development for DARS Robot (DARS 로봇에서의 영상 전송 시스템 개발)

  • Lee Dong-Hoon;Kim Dae-Wook;Sim Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.04a / pp.229-232 / 2005
  • In this paper, to overcome the limitation that the microcontroller of the DARS robot, which is designed for cooperative and distributed control of multiple robots, has little on-board memory and is therefore constrained in data-intensive tasks such as image processing, we reduce the amount of data to be transmitted by applying image compression, and implement the transmission of the compressed image data. In addition, so that the DARS robot can perform specific missions while moving, a battery supplies a regulated voltage, and an infrared sensor unit was designed to detect objects over all $360^{\circ}$ directions without blind spots. The motor driver, designed for easy locomotion, precisely accelerates and decelerates the robot according to the distance of objects detected by the sensors, while the microcontroller unit compresses the video signal from the camera with a compression algorithm and transmits the compressed data to a computer. On the computer, the received images are displayed and the DARS robot can be controlled through a program implemented in Visual C++.
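
A rough sketch of the compress-then-transmit idea, not the robot's actual microcontroller code: the snippet below JPEG-encodes a frame with OpenCV and sends it over a TCP socket with a 4-byte length prefix. The framing format and the quality setting are assumptions.

```python
import socket
import struct

import cv2  # used here only for JPEG encoding

def send_frame(sock, frame, quality=60):
    """Compress one camera frame to JPEG and send it with a length
    prefix so the receiver knows how many bytes to read; a generic
    stand-in for the paper's compressed image transmission."""
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    data = buf.tobytes()
    sock.sendall(struct.pack(">I", len(data)) + data)
```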


A Study on effective directive technique of 3D animation in Virtual Reality -Focus on Interactive short using 3D Animation making of Unreal Engine- (가상현실에서 효과적인 3차원 영상 연출을 위한 연구 -언리얼 엔진의 영상 제작을 이용한 인터렉티브 쇼트 중심으로-)

  • Lee, Jun-soo
    • Cartoon and Animation Studies / s.47 / pp.1-29 / 2017
  • 360-degree virtual reality is a long-available technology that has recently been actively promoted worldwide, driven by devices such as the HMD (Head Mounted Display) and by hardware for controlling and rendering virtual reality images. Producing 360-degree VR requires a mode of production different from traditional video, and new considerations for the user have emerged. Since virtual reality video targets a platform built on immersion, presence, and interaction, it needs a suitable cinematography. In VR, users can freely explore the world created by the director and concentrate on whatever interests them while the image plays. The director, however, must develop and install devices that keep the viewer focused on the narrative and on the images to be delivered. Among the various means of conveying images, the director can use shot composition. This paper studies how the directing technique of shot composition can be applied effectively to 360-degree virtual reality. There is still no dominant killer content at home or abroad; in this situation the potential of virtual reality is being recognized and a variety of videos are being produced, but production follows the traditional method, and so does the shot composition. In 360-degree virtual reality, however, the long take and the blocking techniques of the conventional third-person viewpoint dominate, and the limits of such shot composition are being felt. Moreover, although the viewer can interactively look around the 360-degree scene using HMD tracking, the composition and connection of shots remain entirely dependent on the director, as in existing cinematography. This study examines whether the viewer can freely change the cinematography, such as the composition of a shot, at a desired time by exploiting the interactivity of VR video. To this end, a 3D animation was created with the Unreal Engine game tool to build an interactive video. Using Blueprint, Unreal Engine's visual scripting, we built a device that branches on the true or false state of a condition with a trigger node, producing a variety of shots (a simplified sketch of this branching logic follows below). Through this, we expect various directing techniques and related research to develop, contributing to the advancement of 360-degree VR video.
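
The trigger branching can be illustrated outside Unreal with the plain-Python sketch below; the class and field names are hypothetical stand-ins for the Blueprint trigger node and are not Unreal Engine API.

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    name: str
    duration: float  # seconds

@dataclass
class TriggerNode:
    """Plain-Python stand-in for the Blueprint trigger described in
    the paper: a runtime condition selects one of two shot sequences."""
    condition: callable                         # e.g. a gaze test
    if_true: list = field(default_factory=list)
    if_false: list = field(default_factory=list)

    def next_shots(self, viewer_state):
        return self.if_true if self.condition(viewer_state) else self.if_false

# Cut to a close-up only while the viewer's gaze is near the actor.
node = TriggerNode(
    condition=lambda s: abs(s["gaze_yaw_deg"]) < 30,
    if_true=[Shot("close_up", 3.0)],
    if_false=[Shot("wide_master", 5.0)],
)
print([s.name for s in node.next_shots({"gaze_yaw_deg": 12})])  # ['close_up']
```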

Removal of Urinary Calculi by Laparoscopic-Assisted Cystoscopy in Five Dogs (다섯 마리의 개에서 복강경 보조 방광경을 이용한 요로결석 제거)

  • Lee, Seung-Yong;Park, Se-Jin;Jin, So-Young;Kim, Min-Hyang;Seok, Seong-Hoon;Kim, Young-Ki;Lee, Hee-Chun;Yeon, Seong-Chan
    • Journal of Veterinary Clinics / v.31 no.5 / pp.371-375 / 2014
  • This article describes the use of laparoscopic-assisted cystoscopy for the removal of urinary calculi in five dogs. All dogs had micturition disorders due to urinary calculi. The surgical technique was the same in all cases. A urethral catheter was passed into the urinary bladder through the urethra preoperatively. A 5-mm diameter cannula was placed in the ventral midline, 1 to 2 cm cranial to the umbilicus, and a 5-mm laparoscope was introduced via the cannula. A 10-mm diameter cannula was placed adjacent to the apex of the bladder under laparoscopic visual guidance. The bladder was then partially exteriorized through the 10-mm portal site, and a stab incision was made in the bladder wall. The incisional margin of the bladder was sutured to the skin of the second portal site in a $360^{\circ}$ simple continuous suture. A 2.7-mm diameter cystoscope with a sheath was introduced into the bladder lumen. The cystic and urethral calculi were removed under cystoscopic visual guidance with continuous fluid flushing. No major postoperative complications were identified. During the follow-up period (7 to 21 months), no episodes of urinary dysfunction or recurrence of clinical signs were observed.

Remote Navigation and Monitoring System for Mobile Robot Using Smart Phone (스마트 폰을 이용한 모바일로봇의 리모트 주행제어 시스템)

  • Park, Jong-Jin;Choi, Gyoo-Seok;Chun, Chang-Hee;Park, In-Ku;Kang, Jeong-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.6 / pp.207-214 / 2011
  • In this paper, a remote monitoring and navigation system for a mobile robot has been developed using Zigbee-based wireless sensor networks and a Lego MindStorms NXT robot. A mobile robot can estimate its position from the encoder values of its motors, but friction, insufficient motor power, and similar factors introduce error. To fix this problem and obtain a more accurate position, an ultrasound module on the wireless sensor network is used. To overcome the drawbacks of ultrasound, namely its straight-line propagation and narrow detection coverage, the mobile node attached to the robot is rotated by $360^{\circ}$ to measure the distance from each of four fixed nodes. The location of the robot is then estimated by triangulation from the measured distances (a least-squares sketch is given below). In addition, images from a USB web camera are sent over the network to a smart phone, on which the location of the robot and images of the places it navigates can be viewed. Remote monitoring and navigation are possible simply by clicking points on the map on the smart phone.
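
A minimal least-squares version of that triangulation step, assuming four fixed nodes with known positions and ultrasound-measured ranges; the coordinates and range values are illustrative.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position from ranges to fixed nodes.
    anchors: (n, 2) known node positions; dists: (n,) measured ranges.
    Linearized by subtracting the first range equation from the rest."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y)

# Four fixed nodes at the room corners, robot actually near (2, 2):
print(trilaterate([(0, 0), (5, 0), (5, 5), (0, 5)], [2.83, 3.61, 4.24, 3.61]))
```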

Image Mosaic from a Video Sequence using Block Matching Method (블록매칭을 이용한 비디오 시퀀스의 이미지 모자익)

  • 이지근;정성태
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.8 / pp.1792-1801 / 2003
  • Nowadays, image mosaicking is attracting interest in advertisement, tourism, games, medical imaging, and other fields, along with the development of Internet technology and the performance of personal computers. The main problem in image mosaicking is finding corresponding points correctly in the overlapped area between images. Previous methods require a lot of CPU time and data processing to find corresponding points, and they need repeated recording while revolving 360 degrees around objects or the background. This paper presents a new image mosaicking method that generates a panoramic image from a video sequence recorded by a general video camera. Our method finds the corresponding points between two successive images using a new direction-oriented 3-step block matching method (a standard three-step search is sketched below). Experimental results show that the suggested method is more efficient than methods based on existing block matching algorithms, such as the full search and K-step search algorithms.
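
For reference, a standard three-step search, not the paper's direction-oriented variant, can be sketched as follows with the sum of absolute differences as the matching cost.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def three_step_search(ref, cur, bx, by, bs=16):
    """Classic three-step block matching: the search step shrinks
    4 -> 2 -> 1 pixels around the best candidate at each stage.
    Returns the motion vector of the block at (bx, by) in `cur`."""
    block = cur[by:by + bs, bx:bx + bs]
    mx = my = 0
    for step in (4, 2, 1):
        best = sad(ref[by + my:by + my + bs, bx + mx:bx + mx + bs], block)
        cand = (mx, my)
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                x, y = bx + mx + dx, by + my + dy
                if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                    c = sad(ref[y:y + bs, x:x + bs], block)
                    if c < best:
                        best, cand = c, (mx + dx, my + dy)
        mx, my = cand
    return mx, my
```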

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows. First, we use an omni-directional camera that captures instantaneous $360^{\circ}$ panoramic images around the robot. Second, nodes around the robot are identified by the correlation coefficients of the Circular Horizontal Line between the landmark image and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are pre-selected using color information, and the correlation values are calculated with Fast Fourier Transforms (see the sketch below). Experiments show that the proposed method is effective for the global localization of mobile robots and robust to lighting variations.
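
The FFT-accelerated matching can be illustrated with the circular cross-correlation below; the 1-degree binning of the horizontal line is an assumption for the example.

```python
import numpy as np

def circular_correlation(a, b):
    """Normalized circular cross-correlation of two 1D signals via the
    FFT; the argmax gives the rotational (heading) offset between them."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)

line_now = np.random.rand(360)       # circular horizontal line, 1 deg/bin
line_map = np.roll(line_now, 42)     # landmark image rotated by 42 degrees
print(np.argmax(circular_correlation(line_map, line_now)))  # -> 42
```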

Space Environment Test of the Flight Model of MIRIS, the Main Payload of STSAT-3 (과학기술위성 3호 주탑재체 MIRIS의 비행모델 우주환경시험)

  • Mun, Bong-Gon;Park, Yeong-Sik;Park, Gwi-Jong;Lee, Deok-Haeng;Lee, Dae-Hui;Jeong, Ung-Seop;Nam, Uk-Won;Park, Won-Gi;Kim, Il-Jung;Cha, Won-Ho;Sin, Gu-Hwan;Lee, Sang-Hyeon;Seo, Jeong-Gi;Park, Jong-O;Lee, Seung-U;Han, Won-Yong
    • The Bulletin of The Korean Astronomical Society / v.37 no.2 / pp.205.1-205.1 / 2012
  • MIRIS (Multipurpose InfraRed Imaging System), the main payload of STSAT-3 (Science and Technology Satellite 3) to be launched by the Russian Dnepr launch vehicle, was developed under the leadership of the Korea Astronomy and Space Science Institute. Its component camera EOC (Earth Observation Camera) performs disaster monitoring of the Korean peninsula, while SOC (Space Observation Camera) will map Paschen-${\alpha}$ emission over a $360^{\circ}{\times}6^{\circ}$ strip of the Galactic plane through a near-infrared survey and will observe the cosmic infrared background toward the north and south ecliptic poles with I and H band filters. The MIRIS flight model has been fabricated, and the final space environment tests of its components, SOC, EOC, and the electronics box, have been performed. The flight-model space environment tests of STSAT-3 consist of vibration and thermal-vacuum tests, carried out to the acceptance level specified in the documentation; the shock test was verified with the engineering qualification model. The thermal-vacuum test was performed at the Korea Astronomy and Space Science Institute, and the vibration test at the Satellite Technology Research Center of KAIST. After the whole satellite was assembled, the thermal-vacuum test of STSAT-3 was performed at the Korea Aerospace Research Institute. This presentation reports the environmental test procedures and results for the MIRIS flight model and also discusses the MIRIS vibration and thermal-vacuum test results after full satellite integration.


Automatic identification of ARPA radar tracking vessels by CCTV camera system (CCTV 카메라 시스템에 의한 ARPA 레이더 추적선박의 자동식별)

  • Lee, Dae-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.45 no.3 / pp.177-187 / 2009
  • This paper describes an automatic video surveillance system (AVSS) with long range and $360^{\circ}$ coverage that is automatically rotated in an elevation-over-azimuth mode in response to the TTM (tracked target message) signal of vessels tracked by ARPA (automatic radar plotting aids) radar. This AVSS, a video security and tracking system supported by ARPA radar, a CCTV (closed-circuit television) camera system, and other sensors, automatically identifies, tracks, and detects potentially dangerous situations such as collisions at sea and berthing/deberthing accidents in harbor. It can be used to monitor illegal fishing vessels in inshore and offshore fishing grounds and to further improve the security and safety of domestic fishing vessels in the EEZ (exclusive economic zone). The movement of a target vessel chosen by the ARPA radar operator can be automatically tracked by a CCTV camera system interfaced to the ECDIS (electronic chart display and information system), with special functions such as graphic presentation of the CCTV image, camera position, camera azimuth, and angle of view on the ENC; automatic and manual control of the pan and tilt angles of the CCTV system; and the capability to replay and continuously record all information on a selected target. The test results showed that the AVSS developed experimentally in this study can be used as an extra navigation aid for the operator on the bridge in confusing traffic situations, improving the detection of small targets in sea clutter, greatly enhancing the operator's ability to visually identify vessels tracked by ARPA radar, and providing a recorded history for reference or evidentiary purposes in the EEZ.
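
As a hedged sketch of the pointing computation such a system must perform, not the paper's interface, the snippet below converts a TTM target's range and true bearing into pan/tilt commands for an elevation-over-azimuth mount under a flat-sea approximation; all names and the camera height are assumptions.

```python
import math

def ttm_to_pan_tilt(rng_m, bearing_deg, heading_deg, cam_height_m=20.0):
    """Convert an ARPA TTM target (range in meters, true bearing) into
    pan/tilt commands for an elevation-over-azimuth CCTV mount.
    Flat-sea approximation; parameter names are illustrative."""
    pan = (bearing_deg - heading_deg) % 360.0              # relative bearing
    tilt = -math.degrees(math.atan2(cam_height_m, rng_m))  # look down to sea level
    return pan, tilt

print(ttm_to_pan_tilt(rng_m=1852.0, bearing_deg=75.0, heading_deg=10.0))
# -> (65.0, about -0.62)
```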