• Title/Summary/Keyword: camera image


Analysis of sideward footprint of Multi-view imagery by sidelap changing (횡중복도 변화에 따른 다각사진 Sideward Footprint 분석)

  • Seo, Sang-Il;Park, Seon-Dong;Kim, Jong-In;Yoon, Jong-Seong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2010.04a
    • /
    • pp.53-56
    • /
    • 2010
  • An aerial multi-looking camera system is equipped with five separate cameras, enabling it to acquire one vertical image and four oblique images at the same time. This provides more diverse information about a site than conventional vertical aerial photographs. However, because multi-looking aerial cameras for building 3D spatial information use medium-size rather than large-size CCD sensors, the overlap and sidelap of the aerial photographing plan must be considered carefully when acquiring forward, backward, left, and right imagery of a given object. In particular, the sidelap setting determines whether a sideward-looking camera can capture a particular object. In this study, we analyzed the sideward footprint and aerial photographing efficiency of multi-view imagery as the sidelap was varied.
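As a rough sketch of the relationship the abstract analyzes, the across-track footprint of one image and the resulting flight-line spacing for a given sidelap can be computed with a simple pinhole model. All parameter names and figures below are illustrative assumptions, not values from the paper:

```python
def ground_footprint_width(sensor_width_mm, focal_length_mm, altitude_m):
    """Across-track ground coverage of one image (pinhole model)."""
    return sensor_width_mm / focal_length_mm * altitude_m

def flight_line_spacing(footprint_width_m, sidelap):
    """Spacing between adjacent flight lines for a sidelap ratio in [0, 1);
    a larger sidelap means narrower spacing and more flight lines."""
    return footprint_width_m * (1.0 - sidelap)

w = ground_footprint_width(36.0, 50.0, 1000.0)   # ~720 m swath
print(flight_line_spacing(w, 0.3))               # 30 % sidelap -> ~504 m
```

Raising the sidelap narrows the spacing, which is the efficiency trade-off the abstract refers to: better sideward coverage of objects at the cost of more flight lines.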


Performance evaluation of noise reduction algorithm with median filter using improved thresholding method in pixelated semiconductor gamma camera system: A numerical simulation study

  • Lee, Youngjin
    • Nuclear Engineering and Technology
    • /
    • v.51 no.2
    • /
    • pp.439-443
    • /
    • 2019
  • To improve noise characteristics, software-based noise reduction algorithms are widely used in cadmium zinc telluride (CZT) pixelated semiconductor gamma camera systems. The purpose of this study was to develop an improved median filtering algorithm using a thresholding method for noise reduction in a CZT pixelated semiconductor gamma camera system. The simulated gamma camera system is a CZT pixelated semiconductor detector with a pixel-matched parallel-hole collimator, and the spatial resolution phantom was designed with the Geant4 Application for Tomographic Emission (GATE). In addition, a noise reduction algorithm with a median filter using an improved thresholding method was developed, and we applied the proposed algorithm to an acquired spatial resolution phantom image. According to the results, the proposed median filter improved the noise characteristics compared with a conventional median filter. In particular, the average normalized noise power spectrum, contrast-to-noise ratio, and coefficient of variation results using the proposed median filter were 10, 1.11, and 1.19 times better, respectively, than those using the conventional median filter. In conclusion, our results show that the proposed median filter using the improved thresholding method yields high imaging performance when applied to a CZT semiconductor gamma camera system.
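The abstract does not give the paper's exact thresholding rule, but a minimal, plausible form of a thresholded median filter, which replaces a pixel only when it deviates strongly from its neighbourhood median so that uncorrupted pixels are left untouched, can be sketched in pure Python:

```python
import statistics

def thresholded_median_filter(img, threshold):
    """Replace a pixel with its 3x3 neighbourhood median only when it
    deviates from that median by more than `threshold` (impulse noise)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # border pixels pass through
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            med = statistics.median(window)
            if abs(img[y][x] - med) > threshold:
                out[y][x] = med
    return out

img = [[10, 10, 10, 10],
       [10, 200, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
print(thresholded_median_filter(img, 50)[1][1])  # impulse at (1,1) -> 10
```

The threshold is what distinguishes this from a plain median filter: pixels close to their local median are left unchanged, preserving resolution, which is consistent with the improvement the abstract reports.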

Virtual portraits from rotating selfies

  • Yongsik Lee;Jinhyuk Jang;Seungjoon Yang
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.291-303
    • /
    • 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies taken at the same location while the user spins around. The scene is analyzed using multiple selfies to determine the locations of the camera, subject, and background; then a view from a virtual camera is synthesized. We present two use cases. First, after rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length, simulating the changes in perspective and lens characteristics caused by the new composition and focal length. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device whose lens has only one focal length, and the rendered portraits included compositions that would otherwise require actual lenses. The proposed algorithms enable new use cases in which selfie compositions are not limited by the camera's focal length or its distance from the subject.
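The focal-length effect described above can be illustrated with a thin-lens/pinhole approximation: moving the camera back while increasing the focal length keeps the subject the same size but magnifies the background. This is only an illustration of the geometry, not the paper's rendering algorithm, and all numbers are assumed:

```python
def background_scale(f1, subj_dist1, subj_to_bg, f2):
    """How much larger the background appears after switching from focal
    length f1 to f2 while moving the camera back so the subject keeps the
    same size in the frame (pinhole magnification m = f / distance)."""
    subj_dist2 = subj_dist1 * f2 / f1        # keeps subject magnification fixed
    m_bg1 = f1 / (subj_dist1 + subj_to_bg)   # background magnification before
    m_bg2 = f2 / (subj_dist2 + subj_to_bg)   # background magnification after
    return m_bg2 / m_bg1

# Doubling the focal length (25 mm -> 50 mm) with the subject 1 m away and
# the background 4 m behind it enlarges the background relative to the subject:
print(background_scale(25.0, 1.0, 4.0, 50.0))   # ~1.67x "lens compression"
```

This is the compression effect the virtual longer-focal-length portrait reproduces from selfies taken with a single short lens.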

Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera (회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱)

  • Oh, So-Jung;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.2
    • /
    • pp.211-221
    • /
    • 2012
  • To utilize omni-directional images acquired by a rotating line camera for indoor spatial information services, the images must be precisely registered with respect to an indoor coordinate system. In this study, we therefore develop a georeferencing method to estimate the exterior orientation parameters of an omni-directional image, that is, the position and attitude of the camera at the acquisition time. First, we derive the collinearity equations for the omni-directional image by geometrically modeling the rotating line camera. We then estimate the exterior orientation parameters using the collinearity equations with indoor control points. Experimental results on real data indicate that the exterior orientation parameters are estimated with precisions of 1.4 mm and $0.05^{\circ}$ for the position and attitude, respectively. The residuals are within 3 pixels horizontally and 10 pixels vertically. The vertical residuals retain systematic errors mainly due to lens distortion, which should be eliminated through a camera calibration process. Using omni-directional images georeferenced precisely with the proposed method, we can generate high-resolution indoor 3D models and build sophisticated augmented reality services on top of them.
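The paper's full sensor model is not reproduced in the abstract. As a hedged illustration of the geometry behind such collinearity equations, a simple cylindrical model of a rotating line camera maps a pixel (column = rotation step, row = position on the line sensor) to a viewing ray in camera-centred coordinates; parameter names are assumptions:

```python
import math

def panorama_ray(col, row, n_cols, principal_row, f_pix):
    """Unit viewing ray for pixel (col, row) of a cylindrical panorama
    captured by a rotating line camera: the column gives the rotation
    azimuth, the row gives the elevation on the line sensor."""
    azimuth = 2.0 * math.pi * col / n_cols          # sensor rotation angle
    elevation = math.atan2(principal_row - row, f_pix)
    x = math.cos(elevation) * math.sin(azimuth)
    y = math.cos(elevation) * math.cos(azimuth)
    z = math.sin(elevation)
    return (x, y, z)

# Principal row at column 0 looks straight along the reference direction:
print(panorama_ray(0, 1000, 4000, 1000, 1500.0))
```

Georeferencing then amounts to finding the rotation and translation that best align such rays with indoor control points, which is what the collinearity-equation adjustment in the paper solves.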

Object Tracking And Elimination Using Lod Edge Maps Generated from Modified Canny Edge Maps (수정된 캐니 에지 맵으로부터 만들어진 LOD 에지 맵을 이용한 물체 추적 및 소거)

  • Park, Ji-Hun;Jang, Yung-Dae;Lee, Dong-Hun;Lee, Jong-Kwan;Ham, Mi-Ok
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.171-182
    • /
    • 2007
  • We propose a simple method for tracking a non-parameterized subject contour in a single video stream with a moving camera and changing background, together with a method to eliminate the tracked object by replacing it with background scenes obtained from other frames. First we track the object using LOD (level-of-detail) Canny edge maps; then we generate a background for each image frame and replace the tracked object in a scene with a background image from another frame that is not occluded by the tracked object. Our tracking method is based on LOD modified Canny edge maps and graph-based routing operations on those maps. Moving down the LOD hierarchy yields progressively more edge pixels. Accurate tracking is achieved by selecting the stronger edge pixels, which reduces the effect of irrelevant edges and lets us rely on the current frame's edge pixels as much as possible. The background scene for the first frame is approximated from the camera motion between two image frames, and the background scenes for subsequent frames are computed from the first frame background or previous frame images; the computed background scenes are then used to eliminate the tracked object from the scene. Our experimental results show that the method works well under moderate camera movement with small changes in object shape.
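As an illustration of the LOD idea — lower thresholds admit progressively more, weaker edge pixels, so the edge maps are nested — a minimal sketch using plain thresholding of gradient magnitudes can be written as follows. The actual method uses modified Canny maps with hysteresis, so this is a simplified stand-in:

```python
def lod_edge_maps(grad_mag, thresholds):
    """Build level-of-detail edge maps from a gradient-magnitude image:
    one binary map per threshold, highest threshold first, so each level
    adds weaker edge pixels to the previous one (the maps are nested)."""
    maps = []
    for t in sorted(thresholds, reverse=True):
        maps.append([[1 if g >= t else 0 for g in row] for row in grad_mag])
    return maps

grad = [[10, 80],
        [120, 30]]
levels = lod_edge_maps(grad, [100, 50, 20])
print([sum(map(sum, level)) for level in levels])  # -> [1, 2, 3]
```

The tracker can then start from the sparsest, most reliable level and descend the hierarchy only where the contour needs more detail, which is the rationale the abstract gives for preferring stronger edge pixels.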

Analysis of Observation Environment with Sky Line and Skyview Factor using Digital Elevation Model (DEM), 3-Dimensional Camera Image and Radiative Transfer Model at Radiation Site, Gangneung-Wonju National University (수치표고모델, 3차원 카메라이미지자료 및 복사모델을 이용한 Sky Line과 Skyview Factor에 따른 강릉원주대학교 복사관측소 관측환경 분석)

  • Jee, Joon-Bum;Zo, Il-Sung;Kim, Bu-Yo;Lee, Kyu-Tae;Jang, Jeong-Pil
    • Atmosphere
    • /
    • v.29 no.1
    • /
    • pp.61-74
    • /
    • 2019
  • To investigate the observational environment at the Gangneung-Wonju National University (GWNU) radiation site, the sky line and skyview factor (SVF) are calculated using a digital elevation model (DEM; 10 m spatial resolution) and a three-dimensional (3D) sky camera image. Solar radiation is then calculated with the GWNU solar radiation model, with and without the sky line and SVF retrieved from the 3D sky image and the DEM. Compared with the maximum sky line elevation from Skyview, the result from the 3D camera is higher by $3^{\circ}$ and that from the DEM is lower by $7^{\circ}$. The SVF calculated from the 3D camera, the DEM, and Skyview is 0.991, 0.998, and 0.993, respectively. When the solar path is analyzed over time using an astronomical solar map, the sky line from the 3D camera shields the direct solar radiation up to a solar altitude of $14^{\circ}$ at the winter solstice. Solar radiation is calculated at one-minute intervals and accumulated monthly and annually using the GWNU model. At the summer and winter solstices, the GWNU radiation site is shielded from direct solar radiation by the western mountains 40 and 60 minutes before sunset, respectively. The largest monthly difference between the plane and real surface is $29.18\;MJ\;m^{-2}$ with the 3D camera in November, while that with the DEM is $4.87\;MJ\;m^{-2}$ in January. The differences in the annually accumulated solar radiation are $208.50\;MJ\;m^{-2}$ (2.65%) and $47.96\;MJ\;m^{-2}$ (0.63%) for direct solar radiation, and $30.93\;MJ\;m^{-2}$ (0.58%) and $3.84\;MJ\;m^{-2}$ (0.07%) for global solar radiation, respectively.
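The SVF can be estimated from sky line (horizon) elevation angles sampled at equal azimuth steps. A common isotropic-sky approximation, which is not necessarily the exact formula used in the paper, is SVF ≈ mean(cos²θ) over the horizon samples:

```python
import math

def skyview_factor(horizon_elev_deg):
    """Skyview factor from horizon (sky line) elevation angles sampled at
    equal azimuth steps, assuming an isotropic sky: SVF = mean(cos^2 theta).
    A flat horizon (all zeros) gives SVF = 1 (fully open sky)."""
    n = len(horizon_elev_deg)
    return sum(math.cos(math.radians(t)) ** 2 for t in horizon_elev_deg) / n

# A uniform 5-degree horizon, roughly like the site in the abstract,
# yields an SVF close to the 0.991-0.998 range reported there:
print(round(skyview_factor([5.0] * 360), 3))
```

With horizon profiles extracted from the DEM or the 3D camera image, the same function reproduces the kind of site-specific SVF values the abstract compares.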

An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS (단일 카메라와 GPS를 이용한 영상 내 객체 위치 좌표 추정 기법)

  • Seung, Teak-Young;Kwon, Gi-Chang;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.112-121
    • /
    • 2016
  • ADAS (Advanced Driver Assistance Systems) and street-furniture information collection vehicles such as MMS (Mobile Mapping System) require an object location estimation method to recognize the spatial information of objects in road images. Conventional methods, however, require additional hardware modules to gather the spatial information of an object and have high computational complexity. In this paper, a position estimation scheme for objects in road images, such as the coordinates of a road sign in a single camera image, is proposed using the relationship between pixel size and object size in the real world. In this scheme, after estimating the equation relating the pixel size and the real size of a road sign, the coordinate value and direction are used to obtain the coordinates of the road sign in the image. Experiments with a test video set confirm that the proposed method maps the estimated object coordinates onto a commercial map with high accuracy. Therefore, the proposed method can be used for MMS in commercial regions.
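The pixel-size-to-distance relationship can be sketched with the standard pinhole relation plus a flat-earth GPS offset. The paper's actual fitted equation and parameters are not given in the abstract, so all names and numbers below are assumed illustrations:

```python
import math

def object_distance(real_height_m, pixel_height, focal_length_px):
    """Pinhole relation: distance = f_px * real_height / pixel_height."""
    return focal_length_px * real_height_m / pixel_height

def object_position(cam_lat, cam_lon, distance_m, bearing_deg):
    """Offset the camera's GPS fix by distance_m along the viewing bearing
    (flat-earth approximation; adequate over tens of metres)."""
    dlat = distance_m * math.cos(math.radians(bearing_deg)) / 111320.0
    dlon = distance_m * math.sin(math.radians(bearing_deg)) / (
        111320.0 * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon

d = object_distance(0.9, 45, 1200.0)   # 0.9 m sign, 45 px tall -> ~24 m away
print(object_position(35.1796, 129.0756, d, 90.0))  # due east of the camera
```

Combining the camera's GPS fix and heading with this distance estimate is what lets a single camera place a road sign on a map without extra ranging hardware.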

A Visual Calibration Scheme for Off-Line Programming of SCARA Robots (스카라 로봇의 오프라인 프로그래밍을 위한 시각정보 보정기법)

  • Park, Chang-Kyoo;Son, Kwon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.21 no.1
    • /
    • pp.62-72
    • /
    • 1997
  • High flexibility and productivity with industrial robots are being achieved in manufacturing lines through off-line robot programming. A good off-line programming system should provide robot modelling, trajectory planning, graphical teach-in, and kinematic and dynamic simulation. Simulated results, however, can hardly be applied to on-line tasks unless a calibration procedure accompanies them. This paper proposes a visual calibration scheme that provides a calibration tool for our off-line programming system for SCARA robots. The suggested scheme is based on position-based visual servoing and perspective projection, and requires only one camera because it uses saved kinematic data for three-dimensional visual calibration. Predicted images are generated and then compared with camera images to update the positions and orientations of objects. The scheme is simple and effective enough to be used in real-time robot programming.

Model-based Curved Lane Detection using Geometric Relation between Camera and Road Plane (카메라와 도로평면의 기하관계를 이용한 모델 기반 곡선 차선 검출)

  • Jang, Ho-Jin;Baek, Seung-Hae;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.130-136
    • /
    • 2015
  • In this paper, we propose a robust curved lane marking detection method. Several lane detection methods have been proposed, but most of them consider only straight lanes; far fewer studies have investigated curved-lane detection. This paper proposes a new curved lane detection and tracking method that is robust to various illumination conditions. First, the proposed method detects straight lanes using a robust road feature image. Using the geometric relation between the vehicle camera and the road plane, several circle models are generated and projected as curved lane models onto the camera images. The curved lane models are superimposed on top of the detected straight lanes to match the road feature image, and each curve model receives votes based on the distribution of road features. Finally, the curve model with the most votes is selected as the true curve model. The performance and efficiency of the proposed algorithm are shown in experimental results.
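The projection of a ground-plane circle model into the image can be sketched with a pinhole camera looking straight ahead. This simplified model assumes no camera tilt or roll and made-up intrinsics; the paper's camera-road geometry is more general:

```python
import math

def project_ground_point(x_m, z_m, f_px, cam_height_m, cx, cy):
    """Project a road-plane point (x_m lateral, z_m ahead) into the image of
    a forward-looking pinhole camera mounted cam_height_m above the road
    (no tilt or roll assumed)."""
    u = cx + f_px * x_m / z_m
    v = cy + f_px * cam_height_m / z_m   # the road plane lies below the camera
    return u, v

def curved_lane_model(radius_m, n=5, z_near=5.0, z_far=30.0):
    """Sample a circular-arc lane of the given radius (signed: + curves right,
    - curves left), tangent to the driving direction, and project it into the
    image with assumed intrinsics (f = 800 px, camera 1.4 m above the road)."""
    points = []
    for i in range(n):
        z = z_near + (z_far - z_near) * i / (n - 1)
        x = radius_m - math.copysign(math.sqrt(radius_m**2 - z**2), radius_m)
        points.append(project_ground_point(x, z, 800.0, 1.4, 640.0, 360.0))
    return points
```

Generating one such point set per candidate radius and scoring each against the road feature image is the voting step the abstract describes.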

Stitching Method of Videos Recorded by Multiple Handheld Cameras (다중 사용자 촬영 영상의 영상 스티칭)

  • Billah, Meer Sadeq;Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.3
    • /
    • pp.27-38
    • /
    • 2017
  • This paper presents a method for stitching videos recorded by a large number of individual users with cellular phone cameras at a venue. In contrast to 360-degree camera solutions that use fixed rigs, these conditions pose new challenges such as time synchronization, repeated transformation matrix calculation, and camera sensor mismatch correction. In this paper, we address these problems by updating the transformation matrix with audio-based time synchronization, removing sensor mismatch with a color transfer method, and applying a global operation stabilization algorithm. Experimental results show that the proposed algorithm performs better in terms of computation speed and subjective image quality than screen stitching.
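Audio-based time synchronization of two clips is typically done by finding the lag that maximizes the cross-correlation of their audio envelopes; a minimal pure-Python sketch (not the paper's exact implementation, and over short toy signals) is:

```python
def best_offset(a, b, max_lag):
    """Lag (in samples) that maximises the cross-correlation of two audio
    envelopes: how far clip b must be shifted forward to line up with clip a."""
    best, best_score = 0, float('-inf')
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i - lag]
                    for i in range(max(0, lag), min(len(a), len(b) + lag)))
        if score > best_score:
            best, best_score = lag, score
    return best

a = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]   # the same burst appears in both clips,
b = [0, 5, 9, 5, 0, 0, 0, 0, 0, 0]   # two samples earlier in clip b
print(best_offset(a, b, 5))          # -> 2
```

Once each clip's offset relative to a reference clip is known, matching frames can be selected before the homography estimation and color transfer steps.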