• Title/Summary/Keyword: Camera Model (카메라 모델)


Pan/Tilt Camera System using Real-Time ELSAC and Stop/Go Procedure (실시간 ELSAC을 이용한 Stop/Go 방식의 Pan/Tilt 카메라 시스템)

  • Lee, Suk-Ho
    • Journal of Broadcast Engineering / v.17 no.6 / pp.1106-1109 / 2012
  • Object tracking in a non-stationary camera environment, such as an intelligent surveillance system using a pan/tilt camera, is less stable than in a stationary camera environment. This is because it is difficult to model the background image in a non-stationary environment. In this letter, we propose a non-stationary pan/tilt camera surveillance system which uses a stop/go procedure together with a real-time active contour. The proposed system can track the object stably even in an environment where only a few difference frames can be obtained.
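
The ELSAC contour model itself is specific to the paper, but the stop/go idea can be illustrated with a short sketch: while the pan/tilt head is stopped, simple frame differencing locates the moving object, after which the head is commanded to re-center it. The grab_frame and command_pan_tilt hooks below are hypothetical device interfaces, and frame differencing stands in for the real-time active contour.

```python
# A minimal sketch (not the paper's ELSAC) of a stop/go pan/tilt loop.
import cv2
import numpy as np

def track_stop_go(grab_frame, command_pan_tilt, n_iters=100):
    prev = cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY)
    for _ in range(n_iters):
        # --- STOP phase: camera is stationary, so frame differencing is valid
        curr = cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(curr, prev)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        ys, xs = np.nonzero(mask)
        if len(xs) > 0:
            cx, cy = xs.mean(), ys.mean()          # crude object centroid
            h, w = curr.shape
            # --- GO phase: re-center the object, then stop and difference again
            command_pan_tilt(dx=cx - w / 2, dy=cy - h / 2)
        prev = curr
```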

An Energy-Efficient Operating Scheme of Surveillance System by Predicting the Location of Targets (감시 대상의 위치 추정을 통한 감시 시스템의 에너지 효율적 운영 방법)

  • Lee, Kangwook;Lee, Soobin;Lee, Howon;Cho, Dong-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.2 / pp.172-180 / 2013
  • In this paper, we propose an energy-efficient camera operating scheme to save energy in mass surveillance camera deployments. The technique determines how many cameras should be turned on by considering the velocity vectors of the monitored targets, acquired by DSRC object tracking, a model of the installed cameras' specifications, and a road model of the installation sites. We also address other techniques for saving energy in the surveillance system. Through performance evaluation, we demonstrate that the proposed scheme outperforms previous approaches.
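
As a rough illustration of the selection rule, the sketch below assumes circular coverage models for the cameras and predicts the target's position from its DSRC-derived velocity over a short horizon; only cameras whose coverage contains the predicted position are activated. The camera layout and parameters are illustrative, not from the paper.

```python
# A minimal sketch of location-prediction-based camera activation.
import numpy as np

def cameras_to_activate(target_pos, target_vel, cameras, horizon_s=2.0):
    """cameras: list of dicts with 'pos' (x, y) and 'radius' (coverage, meters)."""
    predicted = np.asarray(target_pos) + horizon_s * np.asarray(target_vel)
    active = []
    for i, cam in enumerate(cameras):
        if np.linalg.norm(predicted - np.asarray(cam["pos"])) <= cam["radius"]:
            active.append(i)          # only these cameras need to be powered on
    return active

cams = [{"pos": (0, 0), "radius": 30}, {"pos": (50, 0), "radius": 30}]
print(cameras_to_activate(target_pos=(20, 0), target_vel=(10, 0), cameras=cams))
```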

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society / v.21 no.5 / pp.41-48 / 2021
  • This paper introduces a mobile augmented visualization technology that augments a three-dimensional virtual human body on a mannequin model using two pose (position and rotation) tracking sensors. Conventional camera tracking technology used for augmented visualization relies on the camera image, so it fails to calculate the camera pose when the camera shakes or moves quickly; a pose tracking sensor overcomes this disadvantage. Even if the mannequin is moved or rotated, augmented visualization remains possible using the data from the pose tracking sensor attached to the mannequin, and above all, there is no computational load for camera tracking.
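
A minimal sketch of the pose-composition idea, assuming each tracker reports a position and a unit quaternion: the model-view transform of the virtual body is obtained purely from the two sensor poses, with no image-based tracking.

```python
# Composing camera-tracker and mannequin-tracker poses into a model-view matrix.
import numpy as np

def pose_to_matrix(position, quaternion):
    """quaternion is (w, x, y, z), assumed unit length."""
    w, x, y, z = quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

def model_view(camera_pose, mannequin_pose):
    # world->camera is the inverse of the camera tracker pose;
    # the virtual body follows the mannequin tracker pose.
    T_wc = np.linalg.inv(pose_to_matrix(*camera_pose))
    T_wm = pose_to_matrix(*mannequin_pose)
    return T_wc @ T_wm
```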

Development of Real-Time Objects Segmentation for Dual-Camera Synthesis in iOS (iOS 기반 실시간 객체 분리 및 듀얼 카메라 합성 개발)

  • Jang, Yoo-jin;Kim, Ji-yeong;Lee, Ju-hyun;Hwang, Jun
    • Journal of Internet Computing and Services / v.22 no.3 / pp.37-43 / 2021
  • In this paper, we study how objects from the front and back cameras can be recognized in real time in a mobile environment to segment object pixel regions and synthesize them through image processing. To this end, we applied the DeepLabV3 machine learning model to the dual cameras provided by Apple's iOS. We also propose methods using Apple's Core Image and Core Graphics libraries for image synthesis and postprocessing. Furthermore, we reduced CPU usage compared with previous works and compared the throughput rates and results of the Depth and DeepLabV3 approaches. Finally, we developed a camera application using these two methods.
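
The paper works with Apple's iOS Core Image/Core Graphics stack; the platform-neutral Python sketch below only illustrates the same segmentation-and-composite idea, using torchvision's DeepLabV3 to mask the person in the front-camera frame and paste it over the back-camera frame.

```python
# Person segmentation with DeepLabV3 and a simple mask-based composite.
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def composite(front_rgb, back_rgb):
    """front_rgb, back_rgb: HxWx3 uint8 arrays of the same size."""
    with torch.no_grad():
        out = model(to_tensor(front_rgb).unsqueeze(0))["out"][0]
    mask = (out.argmax(0) == 15).numpy()            # 15 = 'person' in Pascal VOC
    return np.where(mask[..., None], front_rgb, back_rgb)
```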

Illumination estimation based on valid pixel selection from CCD camera response (CCD카메라 응답으로부터 유효 화소 선택에 기반한 광원 추정)

  • 권오설;조양호;김윤태;송근호;하영호
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.251-258 / 2004
  • This paper proposes a method for estimating the illuminant chromaticity using the distributions of the camera responses obtained by a CCD camera in a real-world scene. Illuminant estimation using a highlight method is based on the geometric relation between a body and its surface reflection. In general, the pixels in a highlight region are affected by illuminant geometric differences, camera quantization errors, and the non-uniformity of the CCD sensor. As such, estimating an illuminant from the pixels of a CCD camera without any preprocessing leads to inaccurate results. Accordingly, to solve this problem, the proposed method analyzes the distribution of the CCD camera responses and selects pixels in the highlight regions using the Mahalanobis distance. The use of the Mahalanobis distance based on the camera responses enables the adaptive selection of valid pixels among the pixels distributed in the highlight regions. Lines are then determined from the selected pixels in r-g chromaticity coordinates using principal component analysis (PCA). Thereafter, the illuminant chromaticity is estimated from the intersection points of the lines. Experimental results using the proposed method demonstrated a reduced estimation error compared with the conventional method.
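
A minimal sketch of the described pipeline, with illustrative thresholds: highlight pixels are filtered by Mahalanobis distance in r-g chromaticity, a line is fitted to each highlight region by PCA, and the illuminant chromaticity is taken as the intersection of two such lines.

```python
# Valid-pixel selection, PCA line fitting, and line intersection in r-g space.
import numpy as np

def rg_chromaticity(rgb):
    s = rgb.sum(axis=1, keepdims=True) + 1e-9
    return (rgb / s)[:, :2]                        # (r, g) coordinates

def select_valid(rg, max_dist=2.0):
    mean = rg.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(rg.T))
    d = np.sqrt(np.einsum("ij,jk,ik->i", rg - mean, cov_inv, rg - mean))
    return rg[d < max_dist]                        # Mahalanobis-filtered pixels

def fit_line(rg):
    mean = rg.mean(axis=0)
    _, _, vt = np.linalg.svd(rg - mean)            # PCA: first axis = line direction
    return mean, vt[0]

def intersect(p1, d1, p2, d2):
    # Solve p1 + t1*d1 = p2 + t2*d2 for the illuminant chromaticity.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1
```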

Fundamental Education on Film Style I : Focusing on Basic Viewing Education Utilizing Sound and Camera (영화의 양식에 관한 교육 사례 I : 사운드와 카메라를 활용한 감상 및 실습교육을 중심으로)

  • Kim, Gye-Joong
    • The Journal of the Korea Contents Association / v.11 no.2 / pp.195-203 / 2011
  • This case study is based on a fundamental class actually taught in the Film and Video Department at Sungkyul University. It aims to suggest a supportive role for typical film production classes at universities in Korea. The list of film styles mentioned in this text is selected from those actually used in the class and focuses on the use of sound and camera. It is ultimately designed to guide students toward actually making films. For example, with a modest camcorder, students are encouraged to record both the image and their own narration directly through the built-in microphone. A directional microphone can also be used to experience various positions of the 'point of hearing'. Regarding camera movements, only the distinctive ones among typical uses are selected for treatment. Movements created by a moving vehicle such as a dolly or crane, beyond the limits of human ability, can stimulate students' imagination about movement, and this can also be easily transferred to the hand-held technique when no vehicle is available. The attitude acquired through the course is important for getting over the resistance students may have before actually experiencing the use of machinery in production.

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.35-44 / 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, the corresponding contours among image sequences are useful for computing the camera transformation, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from the corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors from each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
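
The coarse first step can be sketched as follows, assuming unit back-projected rays for each correspondence: rotation is parametrized by Euler angles and the translation direction by spherical angles, and the summed squared angular error between each second-view ray and its epipolar plane is minimized. This is only an outline of the stated error function, not the paper's implementation.

```python
# Coarse motion estimation by minimizing ray-to-epipolar-plane angles.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def angular_error(params, rays1, rays2):
    """rays1, rays2: Nx3 unit back-projected vectors from corresponding points."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    theta, phi = params[3:]
    t = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                  # unit translation direction
    normals = np.cross(t, (R @ rays1.T).T)         # epipolar plane normals
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    sines = np.abs(np.sum(normals * rays2, axis=1))
    return np.sum(np.arcsin(np.clip(sines, 0, 1)) ** 2)

def estimate_motion(rays1, rays2):
    res = minimize(angular_error, x0=np.zeros(5), args=(rays1, rays2))
    return res.x                                   # coarse rotation/translation
```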

Object Region Detection using Multi-Sensor Fusion and Background Estimation (다중센서 융합과 배경 추정을 이용한 물체 영역 검출)

  • 조주현;최해철;이진성;신호철;김성대
    • Proceedings of the IEEK Conference / 2001.09a / pp.443-446 / 2001
  • This paper proposes a technique for detecting object regions in consecutive images using sensor fusion and background estimation. After aligning and fusing the input images obtained from the IR and CCD cameras, a per-pixel background model is estimated and updated over time, so that object regions are detected effectively. Experiments targeted vehicles, and good results were obtained even when the camera was moving and in relatively complex environments.
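
A minimal sketch of the per-pixel background estimation step on the fused sequence, with an illustrative running-average model and threshold; the paper's alignment and fusion of the IR/CCD inputs are assumed to have already been applied.

```python
# Per-pixel running-average background model with temporal updating.
import numpy as np

class BackgroundModel:
    def __init__(self, first_frame, alpha=0.02, threshold=30):
        self.bg = first_frame.astype(np.float32)   # per-pixel background estimate
        self.alpha = alpha                         # update (forgetting) rate
        self.threshold = threshold

    def detect(self, frame):
        diff = np.abs(frame.astype(np.float32) - self.bg)
        mask = diff > self.threshold               # candidate object pixels
        # update the background only where no object is detected
        self.bg = np.where(mask, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return mask
```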

Texture replacement technique using 3D information (3 차원 정보를 활용한 물체의 텍스처 교체 기법)

  • Kim, Joohyeon;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.229-232 / 2017
  • As AR technology and equipment advance, content applying them is emerging in various fields. Virtual advertising is rising as a new paradigm in the advertising field because it makes advertising possible even in areas where conventional advertising methods cannot be applied. This paper aims to contribute to the commercialization of related technologies by proposing a technique applicable to virtual advertising in AR environments. The proposed technique reconstructs the 3D information of an object using an RGB-D camera and shows that the object's texture can be replaced in real time by combining a texture replacement technique for a selected region with model-based camera tracking.
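
As a rough sketch of the region-selection step, assuming known camera intrinsics and an axis-aligned 3D box for the selected region: each pixel is back-projected with its RGB-D depth, and pixels whose 3D points fall inside the box have their color replaced from a new texture. The box test and planar texture mapping are simplifications of the paper's model-based approach.

```python
# Depth-based selection of a 3D region and simple texture replacement.
import numpy as np

def replace_texture(rgb, depth, K, box_min, box_max, texture):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]                # back-projection to camera space
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1)
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=-1)
    out = rgb.copy()
    # simple planar mapping of the replacement texture over the selected region
    tex = texture[v % texture.shape[0], u % texture.shape[1]]
    out[inside] = tex[inside]
    return out
```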

Calibration-free 3D Structure Recovery (카메라 파라미터 보정이 필요없는 물체의 3차원 구조 복원)

  • 추창우;표순형;박태준;최병태;정순기
    • Proceedings of the Korea Multimedia Society Conference / 2001.11a / pp.579-584 / 2001
  • As applications of computer graphics technology increase, the demand for realistic models of objects is growing. Besides conventional 3D modeling tools, which require a great deal of modeling time, sketch-based modeling methods, image-based modeling, and 3D scanners have recently been introduced, but they suffer from low accuracy or require expensive equipment. This paper proposes a system for recovering the 3D structure of an object using a cube frame, a light-plane projector, and a camera, and analyzes the modeling accuracy through experiments.
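
The core geometric step can be sketched as a ray-plane intersection, assuming the light plane has been calibrated (e.g., from the cube frame) and expressed as n·X = d in the camera frame.

```python
# Reconstructing 3D points on the light stripe by ray-plane intersection.
import numpy as np

def reconstruct_points(pixels, K, plane_n, plane_d):
    """pixels: Nx2 image points on the light stripe; returns Nx3 camera-frame points."""
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = (np.linalg.inv(K) @ uv1.T).T            # viewing rays through each pixel
    t = plane_d / (rays @ plane_n)                 # ray-plane intersection parameter
    return rays * t[:, None]
```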
