• Title/Summary/Keyword: Stitching images

Generation of Spatial Adjacency Map and Contents File Format for Ultra Wide Viewing Service (울트라 와이드 뷰잉 서비스를 위한 공간 유사도 맵 생성 및 울트라 와이드 뷰잉 콘텐츠 저장 방법)

  • Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.4 / pp.473-483 / 2017
  • Since the advent of 3D and UHD content, demand for high-quality panoramic images has been increasing. The UWV (Ultra-Wide Viewing) service uses a wider viewing angle than conventional panoramas to provide a lively experience for users and enhance their understanding of an event. In this paper, we propose a spatial adjacency map generation method and a UWV file storage format to provide the UWV service. The spatial adjacency map measures the similarity between images and generates their position information based on that similarity; the stitching time can then be shortened by using the generated position information, so large-screen content can be produced quickly. The UWV file format, based on ISOBMFF, stores the spatial adjacency map together with the videos and supports random access. We also design a UWV player to verify the spatial adjacency map and the UWV file format, and present experimental results.
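
The abstract above describes building a spatial adjacency map from pairwise image similarity, but does not specify the similarity measure. The sketch below is a rough Python/NumPy illustration that stands in a normalized histogram correlation for that measure; the function names and the 0.5 threshold are assumptions for illustration, not from the paper.

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """Similarity of two grayscale images via normalized histogram correlation.
    A stand-in for the paper's (unspecified) similarity measure."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 255), density=True)
    ha = (ha - ha.mean()) / (ha.std() + 1e-12)
    hb = (hb - hb.mean()) / (hb.std() + 1e-12)
    return float(np.mean(ha * hb))

def spatial_adjacency_map(images, threshold=0.5):
    """Pairwise similarity matrix; entries above the threshold mark likely
    neighbours, so the stitcher only tries those image pairs."""
    n = len(images)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = histogram_similarity(images[i], images[j])
            adj[i, j] = adj[j, i] = s
    return adj, adj > threshold
```

Restricting stitching to pairs flagged in the boolean map is what shortens the overall stitching time: only likely neighbours are ever matched.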

Construction of 2D Image Mosaics Using Quasi-feature Point (유사 특징점을 이용한 모자이킹 영상의 구성)

  • Kim, Dae-Hyeon;Choe, Jong-Su
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.4 / pp.381-391 / 2001
  • This paper presents an efficient approach to building an image mosaic from image sequences. Unlike general panoramic stitching methods, which usually require geometric feature points or solve iterative nonlinear equations, our algorithm directly recovers the 8-parameter planar perspective transform. We use four quasi-feature points to compute the projective transform between two images. These features are based on the gray-level distribution and are defined in the overlap area between the two images, so the proposed algorithm reduces the total amount of computation. We also present an algorithm for efficiently matching the extracted features. The proposed algorithm is applied to various images to evaluate its performance, and the simulation results show that it finds correct correspondences and builds an image mosaic.
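
The 8-parameter planar perspective transform mentioned above can be recovered from exactly four point correspondences by solving a linear system (the classic direct linear solution, with the last matrix entry fixed to 1). This is a generic sketch of that step, not the paper's quasi-feature extraction itself.

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Direct linear solution for the 8-parameter planar perspective
    transform from four point correspondences (h33 fixed to 1).
    Each correspondence contributes two linear equations in the 8 unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply the homography to a 2D point (homogeneous divide)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With four non-collinear correspondences the 8x8 system has a unique solution, which is why no iterative nonlinear optimization is needed.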

Affine Model for Generating Stereo Mosaic Image from Video Frames (비디오 프레임 영상의 자유 입체 모자이크 영상 제작을 위한 부등각 모델 연구)

  • Noh, Myoung-Jong;Cho, Woo-Sug;Park, Jun-Ku;Koh, Jin-Woo
    • Journal of Korean Society for Geospatial Information Science / v.17 no.3 / pp.49-56 / 2009
  • Recently, the generation of high-quality mosaic images from video sequences has been attempted in a variety of investigations. This paper focuses on generating stereo mosaics from airborne video sequences. A stereo mosaic is made by creating left and right mosaics, fabricated from front and rear slices with different viewing angles in consecutive video frames. To build the stereo mosaic, motion parameters defining the geometric relationship between consecutive frames must be determined; this paper applies an affine model to describe these relative motion parameters. Mosaicking with relative motion parameters is called free mosaicking. The free mosaicking proposed here consists of four steps: image registration to the first frame using the affine model, front and rear slicing, stitching-line definition, and image mosaicking. In the experiments, left and right mosaic images and an anaglyph image for stereo viewing are shown, and the y-parallax is analyzed to check accuracy.

Moving Object Preserving Seamline Estimation (이동 객체를 보존하는 시접선 추정 기술)

  • Gwak, Moonsung;Lee, Chanhyuk;Lee, HeeKyung;Cheong, Won-Sik;Yang, Seungjoon
    • Journal of Broadcast Engineering / v.24 no.6 / pp.992-1001 / 2019
  • In many applications, images acquired from multiple cameras are stitched to form an image with a wide viewing angle. We propose a method of estimating a seamline using motion information so that multiple images can be stitched without distorting moving objects. Existing seam estimation techniques usually use an energy function based on image gradient information and parallax. In this paper, we propose a seam estimation technique that prevents distortion of moving objects by adding temporal motion information, computed from the gradient information of each frame. We also propose a measure that quantifies the distortion level of stitched images and use it to verify the performance difference between the existing and proposed methods.
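
The idea of adding a temporal motion term to the seam energy can be sketched as below. The `alpha` weight and the simple frame-difference motion term are assumptions for illustration, and the seam search is a standard dynamic-programming pass rather than the authors' exact formulation.

```python
import numpy as np

def seam_energy(frame, prev_frame, alpha=1.0):
    """Per-pixel energy: spatial gradient magnitude plus a temporal term
    (frame difference) that penalises cutting through moving regions.
    alpha is an assumed, tunable weight for the motion term."""
    gy, gx = np.gradient(frame.astype(float))
    spatial = np.abs(gx) + np.abs(gy)
    temporal = np.abs(frame.astype(float) - prev_frame.astype(float))
    return spatial + alpha * temporal

def vertical_seam(energy):
    """Dynamic programming: minimal-cost top-to-bottom seam, one column
    index per row, moving at most one column per step."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(cost[r - 1], np.minimum(left, right))
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```

Because moving regions get a large temporal energy, the minimal-cost seam is steered around them, which is the distortion-avoidance effect the abstract describes.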

Grouping Images Based on Camera Sensor for Efficient Image Stitching (효율적인 영상 스티칭을 위한 카메라 센서 정보 기반 영상 그룹화)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.256-259 / 2017
  • Panoramic images overcome the limited field of view of a camera and are therefore actively studied in fields such as computer vision and stereo cameras. Generating a panoramic image requires image stitching: descriptors are computed for feature points extracted from several images, the similarities between feature points are compared, and the images are joined into one large image. Each feature point carries tens to hundreds of dimensions of information, and the data processing time grows as the number of images to be stitched increases. To address this, this paper proposes a preprocessing step that groups images expected to share large overlapping regions. By grouping images in advance based on camera sensor information, the number of images stitched at once is reduced, which shortens the processing time; the groups are then stitched hierarchically into one large panorama. Experimental results verify that the proposed method is faster than conventional stitching.
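
The grouping step described above can be sketched as follows, using camera yaw (compass heading) as a stand-in for the sensor metadata; the greedy strategy and the 45-degree gap are assumptions for illustration, not the paper's exact criterion.

```python
def group_by_heading(cameras, max_gap_deg=45.0):
    """Greedy grouping of images by camera yaw in degrees. Images whose
    headings lie within max_gap_deg of a group's first member are grouped,
    so only likely-overlapping images are stitched together first.
    cameras: iterable of (name, yaw_degrees) pairs."""
    groups = []
    for name, yaw in sorted(cameras, key=lambda c: c[1]):
        for g in groups:
            ref_yaw = g[0][1]
            # Shortest angular distance, handling wrap-around at 360.
            diff = abs((yaw - ref_yaw + 180) % 360 - 180)
            if diff <= max_gap_deg:
                g.append((name, yaw))
                break
        else:
            groups.append([(name, yaw)])
    return groups
```

Each group is stitched independently, and the resulting partial panoramas are then stitched hierarchically, so feature matching is never run across the full image set at once.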

Arbitrary View Images Generation Using Panoramic-Based Image Morphing For Large-Scale Scenes (대규모 환경에서 파노라믹 기반 영상 모핑을 이용한 임의 시점의 영상 생성)

  • Jeong, Jang-Hyun;Joo, Myung-Ho;Kang, Hang-Bong
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.185-188 / 2005
  • In image-based rendering, various modeling techniques have been studied that reconstruct 3D views from projected 2D images alone. Light Field Rendering and the Lumigraph, which use a 4D plenoptic function, generate novel-viewpoint images from multiple input images; such methods let a user navigate a virtual world built from 2D information only. Concentric Mosaics, Plenoptic Stitching, and Sea of Images are light-field techniques that support navigation in various environments. In particular, Takahashi presented a study on navigation in large-scale environments such as city streets: panoramic images are acquired along a single path, and novel-viewpoint images are generated with a light-field method. However, in a large-scale environment the range of user movement is very wide and panoramas must be captured densely along the path, so the amount of data becomes large and acquisition is difficult; this can also increase the network load when reference images are transmitted. Based on Takahashi's method, this paper proposes rendering arbitrary-view images using panoramic image morphing: panoramas are acquired at comparatively large intervals, intermediate images are generated by panoramic image morphing, and arbitrary views are then generated with Takahashi's method. Relatively good arbitrary-view images could be reconstructed from a small number of panoramas.
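
The intermediate-image generation step above can be sketched in a very reduced form. Real panoramic morphing uses dense correspondences between panoramas; in this sketch a single global column shift (an assumption) stands in for them, so the example only illustrates the warp-then-blend structure of morphing.

```python
import numpy as np

def intermediate_view(pano_a, pano_b, shift, t):
    """Morph between two panoramas taken a known horizontal offset apart:
    warp each toward the intermediate viewpoint by a fraction of the
    column shift, then cross-dissolve. t in [0, 1] selects the viewpoint."""
    a = np.roll(pano_a, int(round(t * shift)), axis=1)
    b = np.roll(pano_b, -int(round((1 - t) * shift)), axis=1)
    return (1 - t) * a.astype(float) + t * b.astype(float)
```

With panoramas captured at larger intervals, such morphed intermediates fill the gaps before the light-field rendering step, which is how the data volume is reduced.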

Photorealistic Ray-traced Visualization Approach for the Interactive Biomimetic Design of Insect Compound Eyes

  • Nguyen, Tung Lam;Trung, Hieu Tran Doan;Lee, Wooseok;Lee, Hocheol
    • Current Optics and Photonics / v.5 no.6 / pp.699-710 / 2021
  • In this study, we propose a biomimetic optical structure design methodology for investigating the micro-optical mechanisms of insect compound eyes. With compound eyes, insects can respond quickly while maintaining a wide field of view, and considerable research attention has focused on exploiting these benefits. However, their nano- and micro-structures are complex and challenging to realize in real applications, so an integrated design methodology that accounts for manufacturing difficulty is required. We show that photorealistic ray-traced visualization is an effective method for designing a biomimetic insect micro-compound eye. We analyze the image formation mechanism and create a three-dimensional computer-aided design model; a ray-traced visualization is then applied to observe the optical image formation. Finally, the segmented images are stitched together to generate a wide-angle image, which is assessed for quality. The high structural similarity index (SSIM) values (approximately 0.84 to 0.89) of the stitched image show that the proposed MATLAB-based image stitching algorithm performs effectively and comparably to commercial software. The results may be employed in understanding, researching, and designing advanced optical systems based on biological eyes, and in other industrial applications.
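
The SSIM score used above to assess the stitched image is a standard metric. A minimal single-window (global) version can be computed as below; the paper's implementation is MATLAB-based, and production SSIM is usually computed over local sliding windows, so this is a simplified NumPy sketch.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) structural similarity index between two
    grayscale images, using the standard constants K1=0.01, K2=0.03.
    Returns 1.0 for identical images, lower values otherwise."""
    x = x.astype(float)
    y = y.astype(float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Comparing the stitched mosaic against a reference rendering with this metric is how a value such as 0.84-0.89 would be obtained.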

Microscopic Image-based Cancer Cell Viability-related Phenotype Extraction (현미경 영상 기반 암세포 생존력 관련 표현형 추출)

  • Misun Kang
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.176-181 / 2023
  • During cancer treatment, a patient's response to drugs appears differently at the cellular level. In this paper, an image-based cell phenotypic feature quantification and key-feature selection method is presented to predict the response of patient-derived cancer cells to a specific drug. To analyze the viability characteristics of cancer cells, high-resolution microscope images in which cell nuclei are fluorescently stained are used, and cell analysis is performed at the individual level. To this end, image stitching is first performed so that each well plate can be analyzed under the same conditions, and uneven brightness caused by illumination is adjusted based on the histogram. To automatically segment only the cell nucleus region, the region of interest, from the improved image, a superpixel-based segmentation technique is applied using the fluorescence expression level and morphological information. After extracting 242 types of features from the segmented cell regions, only the features related to cell viability are selected through the ReliefF algorithm. The proposed method can be applied to cell image-based phenotypic screening to determine a patient's response to a drug.
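
The histogram-based brightness adjustment mentioned above is not specified in detail; one common realisation is histogram matching, which maps each tile's pixel values through the quantile function of a reference tile so all tiles share one brightness distribution. A sketch under that assumption:

```python
import numpy as np

def match_histogram(image, reference):
    """Map each pixel of `image` through the quantile function of
    `reference`: the i-th brightest source pixel gets the i-th reference
    quantile value, equalising illumination across tiles."""
    src = image.ravel()
    order = np.argsort(src)
    ref_sorted = np.sort(reference.ravel())
    # Index the reference quantiles uniformly over the source pixel count.
    idx = np.linspace(0, len(ref_sorted) - 1, len(src)).astype(int)
    out = np.empty_like(src, dtype=float)
    out[order] = ref_sorted[idx]
    return out.reshape(image.shape)
```

After this correction, fluorescence intensity features extracted from different wells become comparable, which matters for the downstream ReliefF feature selection.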

An Image Mosaic Technique for Images Transmitted by Wireless Sensor Networks (무선 센서 네트워크 영상을 위한 모자이크 기법)

  • Jun, Sang-Eun;Eo, Jin-Woo
    • Journal of IKEEE / v.11 no.4 / pp.187-192 / 2007
  • Wireless sensor networks (WSN) have relatively narrow bandwidth and limited memory space. A mosaic built by inlaying images transmitted by adjacent sensors can provide a wider field of view while requiring less storage. Since most WSN are used for surveillance, the image acquisition period should be sufficiently short, so the mosaic algorithm has to run in real time. The proposed algorithm exploits the fact that the positions of the sensor nodes are fixed and known: the transformation matrix can be calculated from the distances between sensor nodes and from the distances between the nodes and a predefined object. Simulation results show that the proposed algorithm achieves very short processing times while preserving image quality.
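
Because the node positions are fixed and known, image placement needs no feature matching, which is what makes the real-time requirement achievable. A reduced sketch of that idea, assuming a pure-translation model and a known pixels-per-metre calibration (the paper's actual transformation matrix also uses distances to a predefined object):

```python
import numpy as np

def mosaic_from_positions(images, positions, pixels_per_meter):
    """Place each sensor image on a shared canvas using only the known,
    fixed node positions (in metres) -- no feature matching needed.
    positions: list of (x, y) world coordinates, one per image."""
    positions = np.asarray(positions, float)
    offsets = ((positions - positions.min(axis=0)) * pixels_per_meter).astype(int)
    h = max(off[1] + im.shape[0] for im, off in zip(images, offsets))
    w = max(off[0] + im.shape[1] for im, off in zip(images, offsets))
    canvas = np.zeros((h, w), dtype=images[0].dtype)
    for im, (ox, oy) in zip(images, offsets):
        canvas[oy:oy + im.shape[0], ox:ox + im.shape[1]] = im
    return canvas
```

Since the offsets are precomputed from geometry, each new frame is pasted in constant time per pixel, with no per-frame registration cost.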

A study on lighting angle for improvement of 360 degree video quality in metaverse (메타버스에서 360° 영상 품질향상을 위한 조명기 투사각연구)

  • Kim, Joon Ho;An, Kyong Sok;Choi, Seong Jhin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.499-505 / 2022
  • Recently, the metaverse has been receiving a lot of attention. The metaverse is a virtual space in which various events can be held, and 360-degree video, a format optimized for this space, is attracting particular attention. A 360-degree video is created by stitching images taken with multiple cameras or lenses covering all directions. When shooting 360-degree video, everything around the camera other than the subject, including the shooting crew and equipment, appears in the frame and must therefore be hidden. This shooting method raises several problems, of which lighting is the biggest: it is very difficult to install a fixture that focuses on the subject from behind the camera, as in conventional shooting. This study experimentally searches for the optimal angle for 360-degree images by adjusting the angle of the indoor lighting, and proposes a method of recording 360-degree video without installing additional lighting. Based on these results, experiments with more varied projection angles are expected in the future, which should help when 360-degree images are used in metaverse spaces.