• Title/Summary/Keyword: Multi-view camera

Search results: 160

Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.27 no.2_2 / pp.319-324 / 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfacing is presented, which can support human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. We present a method for representing, tracking, and following objects (human, robot, chair) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering; particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects (human, robot, chair) by generating hypotheses not in the image plane but on the top-view reconstruction of the scene.
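
To make the color-plus-particle-filter idea above concrete, the following is a minimal, self-contained sketch (not the authors' implementation): particles hypothesize object positions on a top-view plane, are weighted by the Bhattacharyya similarity between a reference color histogram and the histogram observed at each hypothesis, and are then resampled. The synthetic top-view image, the histogram helper, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bhattacharyya(p, q):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def observe_histogram(pos, image, half=5, bins=8):
    """Toy color histogram of a small patch around a top-view position."""
    x, y = int(pos[0]), int(pos[1])
    patch = image[max(y - half, 0):y + half, max(x - half, 0):x + half]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Synthetic top-view "color" image with a distinctly colored target at (60, 40).
top_view = np.zeros((100, 100))
top_view[35:45, 55:65] = 0.9
ref_hist = observe_histogram((60, 40), top_view)      # reference color model of the target

# Particle filter: each particle is a hypothesized (x, y) position on the ground plane.
N = 500
particles = rng.uniform(0, 100, size=(N, 2))

for step in range(20):
    particles = np.clip(particles + rng.normal(0, 2.0, particles.shape), 0, 99)   # motion model
    weights = np.array([bhattacharyya(ref_hist, observe_histogram(p, top_view))
                        for p in particles]) + 1e-12                              # color likelihood
    weights /= weights.sum()
    estimate = weights @ particles                                                 # weighted mean state
    # Systematic resampling concentrates particles on likely positions.
    cdf = np.cumsum(weights)
    idx = np.minimum(np.searchsorted(cdf, (rng.random() + np.arange(N)) / N), N - 1)
    particles = particles[idx]

print("estimated target position:", estimate)   # should be close to (60, 40)
```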

Optical Design of the DOTIFS Spectrograph

  • Chung, Haeun;Ramaprakash, A.N.
    • The Bulletin of The Korean Astronomical Society / v.39 no.2 / pp.100.2-100.2 / 2014
  • The DOTIFS is a new multi-object Integral Field Spectrograph (IFS) planned to be designed and built by the Inter-University Centre for Astronomy and Astrophysics, Pune, India (IUCAA) for the Cassegrain side port of the 3.6 m Devasthal Optical Telescope (DOT) being constructed by the Aryabhatta Research Institute of Observational Sciences (ARIES), Nainital. It is a multi-integral-field-unit (IFU) spectrograph with 370-740 nm wavelength coverage and spectral resolution R~1200-2400. Sixteen IFUs with microlens arrays and fibers can be deployed over an 8 arcmin field. Each IFU has an $8.7'' \times 7.4''$ field of view with 144 spaxel elements. The 2304 fibers coming from the IFUs are dispersed by eight identical spectrographs with all-refractive, all-spherical optics. In this work, we show the optical design of the DOTIFS spectrograph, together with the expected performance and the results of tolerance and thermal analysis. The optics comprise an f=520mm collimator, a broadband filter, a dispersion element, and an f=195mm camera. The pupil size is set to 130mm by the spectral resolution and budget requirements. To maintain good transmission down to 370nm, calcium fluoride elements and high-transmission optical glasses have been used. A Volume Phase Holographic grating is selected as the dispersion element to maximize the grating efficiency and to minimize the size of the optics. A detailed optical design report has been documented; the design was finalized through an optical design review, and the optics are now ready to be ordered.
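
Two first-order relations, given below as a hedged consistency check rather than anything quoted from the paper, connect the figures in this abstract. The collimated-beam (pupil) diameter follows from the collimator focal length and the focal ratio at which the fibers feed the spectrograph, so the stated 520 mm collimator and 130 mm pupil imply roughly an f/4 fiber input (an inference, not a quoted value); the collimator-to-camera focal length ratio likewise sets the demagnification of a fiber core image onto the detector.

$$D_{\mathrm{pupil}} = \frac{f_{\mathrm{coll}}}{F_{\mathrm{in}}} \;\Rightarrow\; F_{\mathrm{in}} \approx \frac{520\ \mathrm{mm}}{130\ \mathrm{mm}} = 4, \qquad m = \frac{f_{\mathrm{cam}}}{f_{\mathrm{coll}}} = \frac{195}{520} \approx 0.375 .$$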

Moving Object Tracking Using MHI and M-bin Histogram (MHI와 M-bin Histogram을 이용한 이동물체 추적)

  • Oh, Youn-Seok;Lee, Soon-Tak;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.9 no.1 / pp.48-55 / 2005
  • In this paper, we propose an efficient moving object tracking technique for a multi-camera surveillance system. The color CCD cameras used in this system are network cameras with their own IP addresses. Input images are transmitted to the media server over a wireless connection among the server, a bridge, and an Access Point (AP). The tracking system sends the received images through the network to the tracking module, which tracks moving objects in real time using a color matching method. We compose two sets of cameras, and when the object leaves the field of view (FOV), we perform a hand-over so that the object can continue to be tracked. During hand-over, we use the MHI (Motion History Information) based on color information together with an M-bin histogram for exact tracking. From the MHI we can calculate the direction and velocity of the object, and this information helps to predict its next location. As a result, we obtain better speed and stability than template matching based only on the M-bin histogram, and we verify this result experimentally.
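
As an illustration of how a motion history representation yields direction and velocity cues, the sketch below (a simplified stand-in, not the authors' color-based MHI plus M-bin histogram tracker) updates an MHI from frame differences on a synthetic sequence and reads the dominant motion direction from its gradient. The toy sequence, thresholds, and duration are assumptions.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, timestamp, duration=10.0, diff_thresh=0.1):
    """Motion History Image update: moving pixels take the current timestamp,
    pixels older than `duration` are cleared."""
    motion = np.abs(frame - prev_frame) > diff_thresh
    mhi[motion] = timestamp
    mhi[~motion & (mhi < timestamp - duration)] = 0.0
    return mhi

# Toy sequence: a bright square drifting to the right by 3 pixels per frame.
H = W = 64
frames = []
for t in range(8):
    f = np.zeros((H, W))
    f[20:30, 10 + 3 * t:20 + 3 * t] = 1.0
    frames.append(f)

mhi = np.zeros((H, W))
for t in range(1, len(frames)):
    mhi = update_mhi(mhi, frames[t], frames[t - 1], timestamp=float(t))

# Newer (larger) timestamps lie ahead of older ones, so the mean MHI gradient
# over the motion trail points along the direction of motion.
gy, gx = np.gradient(mhi)
trail = mhi > 0
angle = np.degrees(np.arctan2(gy[trail].mean(), gx[trail].mean()))
print(f"estimated motion direction: {angle:.1f} deg (0 deg = rightward)")
```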

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim, Sehwan;Woo, Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.39-52 / 2005
  • In this paper, a registration method is presented for partial 3D point clouds acquired from a multi-view camera, for 3D reconstruction of an indoor environment. Conventional registration methods generally require high computational complexity and long registration times, and they are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, final colors are computed from the colors of corresponding points, and the indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
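
Once correspondences are available, the registration step itself is commonly solved in closed form. The sketch below is a minimal illustration under that assumption: it omits the paper's depth refinement, two-step integer mapping, and modified KLT correspondence search, and simply estimates the rigid transform between two corresponding 3D point sets with the standard SVD (Kabsch/Umeyama) solution, plus a pinhole projection helper of the kind a projection-based method would search on. The camera intrinsics and synthetic data are assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid alignment (Kabsch/Umeyama) of
    corresponding 3D points: returns R, t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def project(points, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection onto a common image plane, where 2D correspondence
    search (e.g. a KLT-style tracker) would operate."""
    x, y, z = points.T
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

rng = np.random.default_rng(1)
cloud_a = rng.uniform([-1, -1, 2], [1, 1, 4], size=(200, 3))       # view-A points

# Simulate view B: the same points after a small camera motion, plus depth noise.
angle = np.radians(10)
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
t_true = np.array([0.2, -0.1, 0.05])
cloud_b = cloud_a @ R_true.T + t_true + rng.normal(0, 0.005, cloud_a.shape)

R_est, t_est = rigid_transform(cloud_a, cloud_b)
registered = cloud_a @ R_est.T + t_est
print("mean residual after registration:",
      np.linalg.norm(registered - cloud_b, axis=1).mean())
print("projected pixels of first 3 registered points:\n", project(registered)[:3])
```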

Online Multi-view Range Image Registration using Geometric and Photometric Feature Tracking (3차원 기하정보 및 특징점 추적을 이용한 다시점 거리영상의 온라인 정합)

  • Baek, Jae-Won;Moon, Jae-Kyoung;Park, Soon-Yong
    • The KIPS Transactions: Part B / v.14B no.7 / pp.493-502 / 2007
  • An on-line registration technique is presented to register multi-view range images for the 3D reconstruction of real objects. Using a range camera, we first acquire range images and photometric images continuously. In the range images, object and background regions are separated using a predefined threshold value. For coarse registration of the range images, the centroids of the images are used. After refining the registration of the range images using a projection-based technique, we use a modified KLT (Kanade-Lucas-Tomasi) tracker to match photometric features in the object images. With the modified KLT tracker, image features can be tracked quickly and accurately. If a range image fails to register, we acquire new range images and keep trying to register them until the registration process resumes. After enough range images are registered, they are integrated into a 3D model in an offline step. Experimental results and error analysis show that the proposed method can reconstruct 3D models quickly and accurately.
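
To illustrate the KLT-style photometric feature matching mentioned above, the following sketch (using OpenCV's generic pyramidal Lucas-Kanade tracker, not the authors' modified tracker) tracks corner features between a synthetic frame and a horizontally shifted copy of it. The texture, shift, and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2

rng = np.random.default_rng(2)
base = (rng.random((240, 320)) * 255).astype(np.uint8)
base = cv2.GaussianBlur(base, (7, 7), 0)           # smooth noise -> trackable texture

shift = 5                                           # simulate a small camera motion
moved = np.roll(base, shift, axis=1)

prev_pts = cv2.goodFeaturesToTrack(base, maxCorners=100, qualityLevel=0.01,
                                   minDistance=8)
next_pts, status, err = cv2.calcOpticalFlowPyrLK(base, moved, prev_pts, None,
                                                 winSize=(21, 21), maxLevel=3)

good_prev = prev_pts[status.ravel() == 1].reshape(-1, 2)
good_next = next_pts[status.ravel() == 1].reshape(-1, 2)
flow = (good_next - good_prev).mean(axis=0)
print(f"tracked {len(good_prev)} features, mean flow = {flow}")  # expect about (5, 0)
```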

A Study on the Digital Holographic Image Acquisition Method using Chroma Key Composition (크로마키 합성을 이용한 디지털 홀로그래피 이미지 획득 방법 연구)

  • Kim, Ho-sik;Kwon, Soon-chul;Lee, Seung-hyun
    • The Journal of the Convergence on Culture Technology / v.8 no.3 / pp.313-321 / 2022
  • As 5G develops, interest in immersive content is growing, and some predict that immersive content such as holograms, once possible only in movies, may be realized in everyday life. Holograms, which have been studied for a long time since Dennis Gabor published the basic theory in 1948, are constantly developing in new directions with digital technology: from the traditional optical hologram, produced by recording the interference pattern of light, to computer-generated holograms (CGH) and digital hologram printers. To produce a hologram with a digital hologram printer, holographic element (hogel) images must first be created from multi-view images. These multi-view images can be obtained either by directly photographing a real object or by modeling an object in a 3D graphics tool and rendering the motion of a virtual camera. In this paper, we propose a new image acquisition method that produces multi-view images using chroma key composition, one of the visual effects (VFX) techniques. We shoot the actual object against a green screen, present the overall workflow for compositing it with 3D computer graphics (CG), and explain the role of each step. We expect that applying all or part of the proposed workflow will be helpful for future research on new image acquisition methods.
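
As a toy illustration of the chroma key step in such a workflow, the sketch below keys out a pure green screen with a simple green-dominance test and composites the foreground onto a background. Production keyers (and the paper's VFX pipeline) are far more sophisticated; the threshold and images here are assumptions.

```python
import numpy as np

def chroma_key_composite(fg, bg, green_margin=40):
    """Replace green-screen pixels of `fg` with `bg`.
    A pixel counts as 'screen' when its green channel exceeds both red and
    blue by more than `green_margin` (a crude stand-in for production keyers)."""
    fg16, bg16 = fg.astype(np.int16), bg.astype(np.int16)
    r, g, b = fg16[..., 0], fg16[..., 1], fg16[..., 2]
    screen = (g - np.maximum(r, b)) > green_margin
    out = np.where(screen[..., None], bg16, fg16)
    return out.astype(np.uint8)

# Toy frames: a red subject shot against a pure green screen, composited onto a
# grey background (in practice these would be the multi-view captures).
H, W = 120, 160
fg = np.zeros((H, W, 3), np.uint8)
fg[...] = (0, 255, 0)                 # green screen
fg[40:80, 60:100] = (200, 30, 30)     # the subject
bg = np.full((H, W, 3), 128, np.uint8)

composite = chroma_key_composite(fg, bg)
print("screen pixels replaced:", int((composite == 128).all(axis=-1).sum()))
```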

Consider the directional hole filling method for virtual view point synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in 3D image applications. A virtual view image is created from a known view and its associated depth map to synthesize a viewpoint that was not captured by the camera. However, disocclusion areas occur because the virtual view is created by depth-image-based 3D warping. Many hole filling methods have been proposed to remove such disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting. These methods, however, produce various annoying artifacts when filling holes in textured regions. In this paper, to solve this problem, a multi-directional extrapolation method is proposed to improve hole filling performance. The proposed method is effective when filling holes over complex textured background regions: by considering directionality, it estimates each hole pixel from the neighboring texture pixel values. Experiments confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
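
A minimal sketch of multi-directional hole filling is given below; it is a simplification, not the paper's method. Each disoccluded pixel is estimated by probing outward along several directions and averaging the nearest valid pixels, whereas the paper additionally weights candidates by the directionality of the neighboring texture. The toy image and hole are assumptions.

```python
import numpy as np

DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def fill_holes_directional(image, hole_mask, max_steps=50):
    """Fill disocclusion holes by probing along several directions and
    averaging the nearest valid pixels found in each direction."""
    out = image.copy()
    H, W = image.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        samples = []
        for dy, dx in DIRS:
            ny, nx = y, x
            for _ in range(max_steps):
                ny, nx = ny + dy, nx + dx
                if not (0 <= ny < H and 0 <= nx < W):
                    break
                if not hole_mask[ny, nx]:
                    samples.append(image[ny, nx])
                    break
        if samples:
            out[y, x] = np.mean(samples)
    return out

# Toy virtual view: a smooth gradient with a rectangular disocclusion hole.
H, W = 60, 80
view = np.tile(np.linspace(0, 1, W), (H, 1))
hole = np.zeros((H, W), bool)
hole[20:35, 30:45] = True
view_with_hole = view.copy()
view_with_hole[hole] = 0.0

filled = fill_holes_directional(view_with_hole, hole)
print("mean abs error inside the hole:", np.abs(filled[hole] - view[hole]).mean())
```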

A study on TV homeshopping brand dinnerware sales space styling effects with camera angle -Focused on consumer preference- (TV홈쇼핑 카메라 앵글에 따른 브랜드 식기 판매 공간의 연출 효과에 관한 연구 -소비자 선호도를 중심으로-)

  • Rhie, Jin-Min;Jang, Young-Soon;Lee, Mi-Yeon
    • Science of Emotion and Sensibility / v.14 no.3 / pp.347-360 / 2011
  • To identify the characteristics of spatial-depth styling in TV home shopping, this study analyzed the characteristics of spatial depth shown on the flat TV screen in six actually aired dinnerware sales cases at $C^*$ home shopping (March 2005 to November 2010), surveyed consumers' emotional verbal images for the space styling characteristics that convey spatial depth on the flat TV screen through camera angle, and examined their relationship with consumer preference. Consumers' typical emotional verbal images for each space styling image for brand dinnerware sales were extracted using reliability analysis, factor analysis, and multidimensional scaling (MDS) in SPSS. The styling characteristics of spatial depth were contrast of size, layering, vertical arrangement, and perspective arrangement, and the camera angles used were bird's-eye view, high angle, and eye level. The results show that highly rated, consumer-preferred styling material was strongly correlated with the material's main factors and the perceived emotional verbal images. This research therefore suggests new consumer-preferred styling characteristics according to camera angle, and preferred styling material can be studied further through quantitative evaluation in this area.

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a map-based localization procedure for mobile robots using a sensor fusion technique in structured environments. A combination of various sensors with different characteristics and limited sensing capability has advantages in terms of complementarity and cooperation in obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on the probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines from input camera images, which serve as natural landmark points in the self-localization process. When using the laser structured light sensor, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the mobile robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environment conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
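
The core fusion step can be illustrated in a few lines: assuming conditionally independent sensors, the posterior over candidate robot poses is the prior times the product of the per-sensor likelihoods, with each sensor's reliability expressed through its noise model. The sketch below does this on a 1D grid with Gaussian likelihoods; the measurement values and sigmas are illustrative assumptions, not the paper's experimentally derived reliability functions.

```python
import numpy as np

def gaussian_likelihood(grid, measurement, sigma):
    """Likelihood of each candidate robot pose given one sensor's measurement."""
    like = np.exp(-0.5 * ((grid - measurement) / sigma) ** 2)
    return like / like.sum()

# Candidate robot positions along a 1D corridor (a 2D grid works the same way).
grid = np.linspace(0.0, 10.0, 1001)
prior = np.full_like(grid, 1.0 / grid.size)

# Per-sensor reliability expressed as measurement noise: the vision landmark
# estimate is assumed less certain than the laser structured-light estimate.
vision_like = gaussian_likelihood(grid, measurement=4.3, sigma=0.8)
laser_like = gaussian_likelihood(grid, measurement=4.0, sigma=0.2)

# Bayesian fusion: posterior proportional to prior x product of sensor likelihoods.
posterior = prior * vision_like * laser_like
posterior /= posterior.sum()

mean = np.sum(posterior * grid)
std = np.sqrt(np.sum(posterior * (grid - mean) ** 2))
print(f"fused position estimate: {grid[np.argmax(posterior)]:.2f} (std {std:.2f})")
```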

Characteristics of the Electro-Optical Camera(EOC)

  • Lee, Seung-Hoon;Shim, Hyung-Sik;Paik, Hong-Yul
    • Proceedings of the KSRS Conference / 1998.09a / pp.313-318 / 1998
  • The Electro-Optical Camera (EOC) is the main payload of the Korea Multi-Purpose SATellite (KOMPSAT), with a cartography mission of building a digital map of Korean territory including a Digital Terrain Elevation Map (DTEM). The instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in the visible wavelength range of 510~730 nm. The high-resolution panchromatic image is collected for 2 minutes of the 98-minute orbit cycle, covering about 800 km along the ground track, over a mission lifetime of 3 years, with programmable gain/offset and on-board image data storage. The 8-bit digitized image, collected by a fully reflective F8.3 triplet without obscuration, is transmitted to the Ground Station at a rate of less than 25 Mbps. EOC was built to a performance that meets or surpasses its design-phase requirements. The spectral response, the modulation transfer function, and the uniformity of all 2592 pixels of the EOC CCD are presented as measured, for the convenience of end-users. The spectral response was measured for each gain setting of EOC, which should allow EOC data users to generate more accurate panchromatic imagery. The modulation transfer function of EOC was measured to be greater than 16% at the Nyquist frequency over the entire field of view, exceeding its requirement of larger than 10%. The uniformity, which shows the relative response of each pixel of the CCD, was measured at every pixel of the EOC Focal Plane Array and is presented for data processing.
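
The quoted figures are mutually consistent, as the following back-of-envelope check shows (all inputs are taken from the abstract; the push-broom line rate and raw data rate are derived here, not quoted).

```python
# Back-of-envelope consistency check of the EOC figures quoted above.
pixels_across = 2592          # CCD pixels across the swath
gsd_m = 6.6                   # ground sample distance in meters
bits_per_pixel = 8
track_km_per_image = 800.0    # along-track coverage per imaging pass
imaging_s = 2 * 60            # imaging duration per orbit, in seconds

swath_km = pixels_across * gsd_m / 1000.0
ground_speed_m_s = track_km_per_image * 1000.0 / imaging_s
line_rate_hz = ground_speed_m_s / gsd_m                  # push-broom line rate
raw_rate_mbps = pixels_across * bits_per_pixel * line_rate_hz / 1e6

print(f"swath ~ {swath_km:.1f} km (abstract: wider than 17 km)")
print(f"raw data rate ~ {raw_rate_mbps:.1f} Mbps (abstract: downlink < 25 Mbps)")
```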
