• Title/Summary/Keyword: Wide angle camera


Image Stitching focused on Priority Object using Deep Learning based Object Detection (딥러닝 기반 사물 검출을 활용한 우선순위 사물 중심의 영상 스티칭)

  • Rhee, Seongbae;Kang, Jeonho;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.882-897
    • /
    • 2020
  • Recently, the use of immersive media content such as panorama and 360° video is increasing. Because the viewing angle of a general camera is too limited to generate such content, image stitching is commonly used to combine images taken with multiple cameras into a single image with a wide field of view. However, if the parallax between the cameras is large, parallax distortion may appear in the stitched image, which disturbs the user's immersion in the content; thus, an image stitching method that overcomes parallax distortion is required. Existing seam-optimization-based stitching methods overcome parallax distortion by using an energy function or object segment information to reflect the locations of objects, but the initial seam location, background information, the performance of the object detector, and the placement of objects may limit their applicability. Therefore, in this paper we propose an image stitching method that overcomes these limitations by adding a weight, set differently according to the type of object, to the energy value using deep-learning-based object detection.
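The abstract describes adding class-dependent weights to the seam-optimization energy so the seam is pushed away from priority objects. A minimal sketch of the idea (the class names, weight values, and the simple vertical-seam dynamic program below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical per-class weights: higher-priority objects get a larger
# energy penalty so the seam avoids crossing them (values are assumptions).
CLASS_WEIGHTS = {"person": 10.0, "car": 5.0, "background": 0.0}

def weighted_energy(energy, detections):
    """Add a class-dependent weight inside each detected bounding box.

    energy:     2-D array of base energy values in the overlap region.
    detections: list of (class_name, x0, y0, x1, y1) boxes from a
                deep-learning object detector.
    """
    out = energy.astype(float).copy()
    for cls, x0, y0, x1, y1 in detections:
        out[y0:y1, x0:x1] += CLASS_WEIGHTS.get(cls, 0.0)
    return out

def min_cost_vertical_seam(energy):
    """Dynamic-programming seam (one column index per row) minimizing
    accumulated energy; weighted object regions repel the seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```

With a uniform base energy, a single weighted box forces the seam entirely outside the box columns.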

Development of a Low-cost Monocular PSD Motion Capture System with Two Active Markers at Fixed Distance (일정간격의 두 능동마커를 이용한 저가형 단안 PSD 모션캡쳐 시스템 개발)

  • Seo, Pyeong-Won;Kim, Yu-Geon;Han, Chang-Ho;Ryu, Young-Kee;Oh, Choon-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.2
    • /
    • pp.61-71
    • /
    • 2009
  • In this paper, we propose a low-cost, compact motion capture system that enables motion games to be played on the PS2 (PlayStation 2). Motion capture systems currently used in film production and game development are expensive and bulky, while motion games using a common USB camera are slow and limited to two-dimensional recognition. A PSD (Position Sensitive Detector) sensor, by contrast, is fast and inexpensive. In recent years, 3D motion capture systems using 2D PSD optic sensors have been developed: a multi-PSD system applying stereo vision, and a single-PSD system applying an optical relationship. However, both have problems when applied to motion games: the multi-PSD system is costly and complicated because it uses two or more PSD cameras, and for the single-PSD system it is difficult to make markers with omnidirectionally equal intensity. In this research, we propose a new method that solves these problems: 3D coordinates can be measured if the intensities of the two separated markers are equal. We built a system based on this method and tested its performance. As a result, we developed a motion capture system that is single-camera, low-cost, fast, compact, wide-angle, and adaptable to motion games. The developed system is expected to be useful in animation, movies, and games.
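The core geometric idea of two active markers at a known fixed distance seen by a single sensor can be sketched with pinhole similar triangles. This is the general principle only; the function names and the fronto-parallel assumption are illustrative, not the paper's exact derivation:

```python
def marker_depth(f_mm, marker_gap_mm, image_gap_mm):
    """Pinhole similar triangles: Z = f * D / d.

    With the two markers a fixed, known distance D apart and (assumed)
    roughly fronto-parallel to the sensor, the gap d between their image
    spots shrinks in proportion to depth Z, so depth is recoverable from
    a single view.
    """
    return f_mm * marker_gap_mm / image_gap_mm

def marker_3d(f_mm, marker_gap_mm, x_img, y_img, image_gap_mm):
    """Back-project the midpoint of the two image spots to 3-D camera
    coordinates using the depth recovered above."""
    z = marker_depth(f_mm, marker_gap_mm, image_gap_mm)
    return (x_img * z / f_mm, y_img * z / f_mm, z)
```

For example, an 8 mm focal length, a 100 mm marker gap, and a 2 mm image gap give a depth of 400 mm.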

Development of Distortion Correction Technique in Tilted Image for River Surface Velocity Measurement (하천 표면영상유속 측정을 위한 경사영상 왜곡 보정 기술 개발)

  • Kim, Hee Joung;Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.2
    • /
    • pp.88-96
    • /
    • 2021
  • In surface image velocimetry, a wide area of a river is photographed at an angle to measure its velocity, inevitably causing image distortion. Although a distorted image can be corrected into an orthogonal image by 2D projective coordinate transformation using reference points on the same plane as the water surface, this method is limited by the uncertainty of water-level changes in the event of a flood. Therefore, in this study we developed a tilted-image correction technique that corrects distortions in oblique images without resetting the reference points while coping with changes in the water level, using the geometric relationship among the coordinates of reference points set at a high position, the camera, and the vertical distance between the water surface and the camera. We then conducted a full-scale river experiment to verify the reference-point transformation equation and to measure the surface velocity. Based on the verification results, the proposed tilted-image correction method was found to be over 97% accurate, while the measured surface velocity differed by approximately 4% from the results calculated using the proposed method, indicating high accuracy. Applying the proposed method to an image-based fixed automatic discharge measurement system can improve the accuracy of discharge measurement during a flood, when the water level changes rapidly.
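The 2D projective coordinate transformation mentioned above can be sketched as a standard direct linear transform (DLT) homography fit; this is the textbook method, not necessarily the authors' implementation:

```python
import numpy as np

def solve_homography(src, dst):
    """Fit the 2-D projective transformation (homography) mapping src
    points to dst points by the direct linear transform.

    src/dst: four or more (x, y) pairs on the same plane (e.g. reference
    points on the water-surface plane and their orthophoto coordinates).
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # null vector = homography coefficients
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map one point through H with the projective division."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)
```

Four corner correspondences of a known quadrilateral suffice to rectify every pixel in the oblique image.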

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.355-361
    • /
    • 2006
  • In large-scale environments such as airports, museums, warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will survey such environments and communicate the resulting information to a human operator, for example whether an object is present or a window is open. Both for visualizing this information and as a human-machine interface for remote control, a 3D model can convey much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel present at the robot's location so that interaction in a remote circumstance is more natural, and shows structures such as windows and doors that cannot be represented in a 2D model. In this paper we present a simple, easy-to-use method to obtain a 3D textured model. To express reality, the 3D model must be integrated with real scenes. Most other 3D modeling methods use two data acquisition devices: one for obtaining the 3D model, typically a 2D laser range-finder, and another for obtaining realistic textures, typically a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by a laser range-finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. It is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls, whose geometry is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with a wide field-of-view angle; image stitching and image cutting are used to generate textured images corresponding to the 3D model.
The algorithm was applied to two cases: a corridor, and a four-walled room-like space in a building. The generated 3D map of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
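The step of turning the 2D metric map into 3D wall planes amounts to extruding each wall segment vertically. A minimal illustration (the data layout and function name are assumptions, not the paper's code):

```python
def extrude_walls(segments, height):
    """Extrude each 2-D wall segment ((x0, y0), (x1, y1)) from the
    metric map into a vertical quad of four 3-D corners, floor to
    ceiling, onto which the stitched textures can then be mapped."""
    quads = []
    for (x0, y0), (x1, y1) in segments:
        quads.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return quads
```

Each quad can be written directly as a textured `IndexedFaceSet` in the VRML output.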


Design of FPGA Camera Module with AVB based Multi-viewer for Bus-safety (AVB 기반의 버스안전용 멀티뷰어의 FPGA 카메라모듈 설계)

  • Kim, Dong-jin;Shin, Wan-soo;Park, Jong-bae;Kang, Min-goo
    • Journal of Internet Computing and Services
    • /
    • v.17 no.4
    • /
    • pp.11-17
    • /
    • 2016
  • In this paper, we propose a multi-viewer system for bus safety with multiple HD cameras connected over AVB (Audio Video Bridging) Ethernet using IP networking, implemented on an FPGA (Xilinx Zynq 702). This AVB (IEEE 802.1BA) system is designed for low latency on the FPGA and transmits HD video and audio signals in real time over an in-vehicle network. The proposed multi-viewer platform can multiplex H.264 video signals from four wide-angle HD cameras over existing 1 Gbps Ethernet and 2-wire 100 Mbps cables. A low-latency H.264 AVC codec design based on the Zynq 702 is also proposed to minimize time delay in HD video transmission over the car area network. The PSNR (peak signal-to-noise ratio) of the encoding and decoding results of the H.264 AVC codec was analyzed against the JM reference model, and the measured values were confirmed against the theoretical and hardware results from the Zynq 702-based multi-viewer with multiple cameras. As a result, the proposed AVB multi-viewer platform with multiple cameras can be used for audio and video surveillance around a bus, owing to the low latency of the H.264 AVC codec design.
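The PSNR figure used above to compare the codec against the JM reference model is the standard measure; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference frame and a
    codec-reconstructed frame: 10 * log10(peak^2 / MSE)."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit video, a uniform error of 16 levels yields a PSNR of about 24.05 dB.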

Hybrid (refractive/diffractive) lens design for the ultra-compact camera module (초소형 영상 전송 모듈용 DOE(Diffractive optical element)렌즈의 설계 및 평가)

  • Lee, Hwan-Seon;Rim, Cheon-Seog;Jo, jae-Heung;Chang, Soo;Lim, Hyun-Kyu
    • Korean Journal of Optics and Photonics
    • /
    • v.12 no.3
    • /
    • pp.240-249
    • /
    • 2001
  • A high-speed, ultra-compact lens with a diffractive optical element (DOE) is designed, which can be applied to mobile communication devices such as IMT-2000 terminals, PDAs, and notebook computers. The designed hybrid lens achieves high performance for a single lens: faster than f/2.2, a compact size of 3.3 mm (first surface to image), and a wide field angle of more than 30 degrees. By a proper choice of the aspheric surface and the DOE surface, which has a very large negative dispersion, chromatic and higher-order aberrations can be corrected through optimization. From Seidel third-order aberration theory and Sweatt modeling, the initial data and surface configurations, that is, the combination of the DOE and the aspheric surface, are obtained. However, considering the diffraction efficiency of a DOE, only four cases can be chosen as optimization inputs; we evaluate and compare these four cases and present the best solution. We also report a dramatic improvement in optical performance obtained by inserting another refractive lens (a so-called field flattener) that keeps the refractive power of the original DOE lens while making the Petzval sum of the system zero.
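The field-flattener condition of zeroing the Petzval sum can be sketched with the textbook thin-lens Petzval formula (element powers and indices below are illustrative values, not the paper's design data):

```python
def petzval_sum(elements):
    """Thin-element Petzval sum P = sum(phi_i / n_i) over elements with
    power phi_i and refractive index n_i. In the Sweatt model a DOE is
    a thin lens with n -> infinity, so it contributes power but almost
    no Petzval curvature."""
    return sum(phi / n for phi, n in elements)

def flattener_power(elements, n_flat):
    """Power a thin field flattener of index n_flat needs so that the
    total Petzval sum becomes zero: phi_f = -n_flat * P."""
    return -n_flat * petzval_sum(elements)
```

Since a positive system has a positive Petzval sum, the flattener comes out with negative power, which is why it can be inserted without upsetting the corrected power balance much.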


Effect of All Sky Image Correction on Observations in Automatic Cloud Observation (자동 운량 관측에서 전천 영상 보정이 관측치에 미치는 효과)

  • Yun, Han-Kyung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.2
    • /
    • pp.103-108
    • /
    • 2022
  • Various studies on cloud observation using all-sky images acquired with wide-angle camera systems have been conducted since the early 21st century, but no automatic observation system that can completely replace eye observation has yet been obtained. In this study, to verify the quantification of cloud observation, which is the final step of the algorithm proposed to automate observation, the cloud distributions of the all-sky image and of the corrected image were compared and analyzed. The motivation is that clouds form at a certain height depending on their type, yet, as in a retinal image, the center of the lens image is enlarged and the edges are reduced, and the effect of human learning ability and spatial awareness on cloud observation is unknown. As a result of this study, the average difference in observed cloud amount between the all-sky image and the corrected image was 1.23%. Therefore, when compared with eye observation in deciles, the error due to correction is 1.23% of the observed amount, far less than the allowable error of eye observation; and since it does not include human error, accurately quantified data can be collected. Because the change in cloudiness due to the correction is insignificant, it was confirmed that accurate observations can be obtained even if the unnecessary correction step is omitted and cloudiness is observed in the uncorrected image.
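The quantity being compared before and after correction, cloud amount as a fraction of the visible sky, can be sketched from binary masks (an illustration of the standard definition, not the paper's detection algorithm):

```python
import numpy as np

def cloud_fraction(sky_mask, cloud_mask):
    """Cloud amount as the fraction of valid sky pixels classified as
    cloud; multiplied by ten this approximates the decile used in
    eye observation.

    sky_mask:   boolean array, True where the pixel shows sky (excludes
                the horizon ring and obstructions in the all-sky image).
    cloud_mask: boolean array, True where the pixel is classified cloud.
    """
    valid = sky_mask.astype(bool)
    return float(cloud_mask[valid].mean())
```

Comparing `cloud_fraction` on the raw and the geometry-corrected image gives directly the percentage difference reported above.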

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.144-159
    • /
    • 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. A panoramic image can be applied in fields like virtual reality and robot vision that require wide-angle shots, as a useful way to overcome limitations such as the picture angle, resolution, and internal information of an image taken by a single camera. It is also meaningful in that a panoramic image usually provides a better feeling of immersion than a plain image. Although there are many ways to build a panoramic image, most methods extract feature points and matching points from each image, then apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points and use a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale information and local spatial information. SURF is widely used because it is robust to changes in image size and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF can produce erroneous feature points, which slows the RANSAC algorithm and may increase CPU usage. Such wrong matching points can be a critical reason for degrading the panoramic image's accuracy and clarity. In this paper, to minimize matching-point errors, we use the RGB pixel values of the 3×3 region around each matching point's coordinates in an intermediate filtering step that removes wrong matching points.
We also present analysis and evaluation results on the improved speed of producing a panorama image, CPU usage, the reduction rate of extracted matching points, and accuracy.
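The 3×3 RGB intermediate filtering step applied before RANSAC can be sketched as follows (the function names and threshold value are assumptions for illustration, not the paper's exact parameters):

```python
import numpy as np

def patch_distance(img_a, pt_a, img_b, pt_b):
    """Mean absolute RGB difference between the 3x3 neighborhoods
    around a matched pair of (x, y) coordinates."""
    (xa, ya), (xb, yb) = pt_a, pt_b
    pa = img_a[ya - 1:ya + 2, xa - 1:xa + 2].astype(float)
    pb = img_b[yb - 1:yb + 2, xb - 1:xb + 2].astype(float)
    return float(np.abs(pa - pb).mean())

def filter_matches(img_a, img_b, matches, threshold=20.0):
    """Drop matches whose local appearance disagrees: an intermediate
    filter that removes wrong correspondences so RANSAC iterates over
    a cleaner set and converges faster."""
    return [(pa, pb) for pa, pb in matches
            if patch_distance(img_a, pa, img_b, pb) <= threshold]
```

A correct match over identical content has near-zero patch distance, while a spurious match between unrelated regions is rejected.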