• Title/Summary/Keyword: camera image


Multiple SL-AVS(Small size & Low power Around View System) Synchronization Maintenance Method (다중 SL-AVS 동기화 유지기법)

  • Park, Hyun-Moon;Park, Soo-Huyn;Seo, Hae-Moon;Park, Woo-Chool
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.73-82 / 2009
  • Due to its many advantages, including low price, low power consumption, and small size, the CMOS camera has been adopted in many applications, including mobile phones, the automotive industry, medical sensing, robotic control, and security research. In multi-camera applications such as the 360-degree omni-directional camera, however, software issues have arisen in interface and communication management, delays, and complicated image display control, along with hardware issues in energy management and miniaturization. Traditional CMOS camera systems are built as multi-layer embedded systems in which each camera has its own high-performance MCU (Micro Controller Unit) to send and receive images. We proposed the SL-AVS (Small Size & Low Power Around-View System), which controls the cameras and collects image data with a high-speed synchronization technique on a single-layer, low-performance MCU. It is an initial model of an omni-directional camera that produces a 360-degree view from several CMOS cameras, each with a 110-degree field of view. We connected a single MCU to four low-power CMOS cameras and implemented the synchronization, control, and transmit/receive functions of the individual cameras provided by the traditional system. The synchronization of the respective cameras was controlled and stored by handling each interrupt through the MCU, which improved transmission efficiency by minimizing re-synchronization among the target, the CMOS cameras, and the MCU. Depending on the user's choice, individual images or groups of images divided into four domains were then provided to the target. Finally, we analyzed and compared the performance of the developed camera system, including synchronization, data-transfer time, and image data loss.
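The drift-tolerant synchronization idea above can be illustrated with a toy simulation. This is not the authors' firmware: a single low-performance controller services four free-running cameras and re-synchronizes a camera only when its line counter drifts past a tolerance, mimicking the paper's goal of minimizing re-synchronization events. All rates and tolerances here are made-up values.

```python
class Camera:
    def __init__(self, rate):
        self.rate = rate   # lines advanced per controller tick (drifting clock)
        self.line = 0.0

    def tick(self):
        self.line += self.rate

def run_controller(rates, ticks, tol=2.0):
    """Count how often any camera must be hard re-synced to the reference."""
    cams = [Camera(r) for r in rates]
    resyncs = 0
    for _ in range(ticks):
        for c in cams:
            c.tick()
        ref = cams[0].line                  # camera 0 acts as the reference
        for c in cams[1:]:
            if abs(c.line - ref) > tol:     # interrupt-style drift check
                c.line = ref                # hard re-sync to the reference
                resyncs += 1
    return resyncs
```

With a loose tolerance, small clock drift never triggers a re-sync, which is the efficiency gain the abstract claims over per-frame re-synchronization.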

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환;김기영;우운택
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.137-148 / 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D virtual environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy this, a geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and another geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated from the extrinsic parameters of every lens. Inter-camera calibration arranges the multiple cameras in a geometric relationship; it exploits the Iterative Closest Point (ICP) algorithm on back-projected 3D point clouds. Finally, by repeatedly applying intra-/inter-camera calibration to all lenses of the rotating multi-view cameras, we obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and it may be adopted in various 3D AR applications.
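A heavily simplified sketch of the intra-camera idea: the distance implied by an extrinsic translation is iteratively pulled toward the measured camera-to-pattern distance. The damping factor and tolerance are assumptions for illustration; the paper compensates the full extrinsic parameters, not just this norm.

```python
import numpy as np

def compensate_translation(t, d_true, eps=1e-6, max_iter=100):
    """Iteratively reduce the discrepancy between the distance estimated
    from the extrinsic translation and the measured distance (toy version)."""
    t = np.asarray(t, dtype=float)
    for _ in range(max_iter):
        d_est = np.linalg.norm(t)           # distance implied by extrinsics
        if abs(d_est - d_true) < eps:
            break
        t = t * (0.5 * (1.0 + d_true / d_est))  # damped correction step
    return t
```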

Measurement of Fiber Board Poisson's Ratio using High-Speed Digital Camera

  • Choi, Seung-Ryul;Choi, Dong-Soo;Oh, Sung-Sik;Park, Suk-Ho;Kim, Jin-Se;Chun, Ho-Hyun
    • Journal of Biosystems Engineering / v.39 no.4 / pp.324-329 / 2014
  • Purpose: The finite element method (FEM) is advantageous because it can save time and cost by reducing the number of samples and experiments needed to identify design factors. In computational problem-solving, exact material properties must be supplied to achieve a reliable analysis. However, in the case of fiber boards, it is difficult to measure cross-directional material properties because of their small thickness. In previous research, the Poisson's ratio was measured by analyzing ultrasonic wave velocities; more recently, it has been measured using a high-speed digital camera. In this study, we measured the transverse strain of a fiber board and calculated its Poisson's ratio using a high-speed digital camera, in order to apply these estimates to FEM analyses of a fiber board, a corrugated board, and a corrugated box. Methods: Three different fiber board samples were used in a uniaxial tensile test. The longitudinal strain was measured using the universal testing machine, and the transverse strain was measured using an image processing method. To calculate the transverse strain, we acquired images of the fiber board before the test onset and before fracture occurred. The acquired images were processed in MATLAB: after conversion from color to binary, the width of the fiber board was calculated. Results: The calculated Poisson's ratio ranged between 0.2968-0.4425 in the machine direction (MD) and 0.1619-0.1751 in the cross machine direction (CD). Conclusions: This study demonstrates that measuring the transverse properties of a fiber board is possible using image processing methods. Such methods could be used to measure material properties that are difficult to obtain with conventional methodologies that employ strain gauge extensometers.
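The width-from-binary-image step can be sketched as follows. Assumed details, not from the paper: the board is the foreground and the images are already thresholded to 0/1; the Poisson's ratio is the usual negative ratio of transverse to longitudinal strain.

```python
import numpy as np

def board_width(binary_img):
    # mean number of foreground pixels per row = average board width in pixels
    return float(np.mean(binary_img.sum(axis=1)))

def poisson_ratio(width_before, width_after, strain_longitudinal):
    strain_transverse = (width_after - width_before) / width_before
    return -strain_transverse / strain_longitudinal
```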

A Real-Time Spatial DSS for Security Camera Image Monitoring

  • Park, Young-Hwan;Lee, Ook
    • Proceedings of the Korean Operations and Management Science Society Conference / 1998.10a / pp.413-414 / 1998
  • This paper presents a real-time Spatial Decision Support System (SDSS) for security camera image monitoring. Other SDSSs are not real-time systems; they show images that have already been transformed into a data format such as virtual reality. In our system, the image is broadcast in real time, since the purpose of a security camera requires it. With these real-time images, other systems add nothing more; the screen simply shows the images from the camera. In our system, however, we created a motion detection system so that the supervisor (judge) of a security monitoring system does not have to pay attention to it constantly. In other words, we created a judge-advising system for the supervisor of the security monitoring system. Most small objects do not need the supervisor's attention, since anything that small appearing in the screen image could be a bird, cat, or dog. The new system reports only unusual changes to the supervisor by calculating the motion and size of objects on the screen. The supervisor is thus liberated from 24-hour concentration duty and is alerted only when a real security threat, such as a big moving object like a human intruder, appears. Thus this system can be called a real-time Spatial DSS. The utility of the system is proved mathematically using the concept of entropy: big objects such as human intruders increase the entropy of the screen images significantly, so the supervisor must be alerted. By proving the utility of the system theoretically, we claim that our new real-time SDSS is superior to others that do not use our technique.
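The alerting idea can be sketched as below (the thresholds are assumptions, not from the paper): frame differencing finds the moving region, and only a large region, e.g. a human intruder rather than a bird or cat, alerts the supervisor. The entropy function shows the quantity the paper uses to argue utility: large objects change the gray-level histogram, and hence its entropy, significantly.

```python
import numpy as np

def moving_area(prev, curr, diff_thresh=30):
    """Pixel count of the region that changed between two gray frames."""
    return int((np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh).sum())

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def should_alert(prev, curr, min_area=50):
    # small moving objects (birds, cats, dogs) stay below min_area
    return moving_area(prev, curr) >= min_area
```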


The Walkers Tracking Algorithm using Color Informations on Multi-Video Camera (다중 비디오카메라에서 색 정보를 이용한 보행자 추적)

  • 신창훈;이주신
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.5 / pp.1080-1088 / 2004
  • In this paper, we propose an algorithm that tracks moving objects of interest across multiple video cameras using color information, robust to variations in intensity, shape, and background. After the RGB color coordinates of the images input from the multi-video cameras are converted into HSI color coordinates, moving objects are detected in the hue channel by applying the difference-image method and the integral projection method to the background and object images. The hue values of the detected moving areas are quantized into 24 levels spanning 0° to 360°. The three quantized hue levels with the highest distribution, together with the differences among those three levels, are used as the feature parameters of the moving objects. To examine the validity of the proposed method, human images with variations in intensity and shape, and human images with variations in intensity, shape, and background, were used as the target moving objects. Surveillance results for the person of interest show that the variation of the detected person's hue distribution level at each camera is under 2 levels, confirming that the person of interest is tracked and surveilled across cameras automatically using the feature parameters.
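The feature extraction described above can be sketched as follows: quantize hue into 24 levels over 0-360 degrees and keep the three most populated levels. The pixel format (a list of 8-bit RGB tuples) is an assumption for illustration.

```python
import colorsys
import numpy as np

def hue_feature(rgb_pixels, levels=24):
    """Return the three most populated hue levels of a detected region."""
    counts = np.zeros(levels, dtype=int)
    step = 360.0 / levels                       # 15 degrees per level
    for r, g, b in rgb_pixels:
        hue = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[0] * 360.0
        counts[int(hue // step) % levels] += 1
    top3 = np.argsort(counts)[::-1][:3]         # three most populated levels
    return sorted(int(i) for i in top3)
```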

Character Shape Distortion Correction of Camera Acquired Document Images (카메라 획득 문서영상에서의 글자모양 왜곡보정)

  • Jang Dae-Geun;Kim Eui-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.680-686 / 2006
  • Document images captured by scanners suffer only from skew. Camera-captured document images, however, exhibit not only skew but also a vignetting effect and geometric distortion. The vignetting effect, which makes the border areas darker than the center of the image, makes it difficult to separate characters from the document image, although this effect has been diminishing as lens manufacturing technology improves. Geometric distortion, caused by the mismatch of angle and center position between the document and the camera, distorts the shape of the characters, so character recognition is more difficult than when using a scanner. In this paper, we propose a method that improves character recognition performance by correcting the geometric distortion of document images with a linear approximation that maps a quadrilateral region to a rectangular one. The proposed method also determines the quadrilateral transform region automatically, using the alignment of the character lines and the skew angles of the characters located at the edges of each character line. The proposed method can therefore correct the geometric distortion without positional information from the camera.
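The quadrilateral-to-rectangle correction can be sketched with a plane homography: solve for the 3x3 matrix mapping the four detected document corners (TL, TR, BR, BL) onto an axis-aligned w x h rectangle, then warp points through it. The paper uses a linear approximation; the full homography shown here is a standard stand-in, not the authors' exact transform.

```python
import numpy as np

def quad_to_rect_homography(quad, w, h):
    """Solve the 8 linear equations of the corner correspondences."""
    dst = [(0.0, 0.0), (float(w), 0.0), (float(w), float(h)), (0.0, float(h))]
    A, b = [], []
    for (x, y), (u, v) in zip(quad, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    hvec = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(hvec, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```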

3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.10B no.2 / pp.135-142 / 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The object's 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and the sequential SVD (Singular Value Decomposition) factorization method. An inherent error in depth recovery due to the paraperspective camera model is removed by using stereo image analysis. A 3D synthetic object with 21 features at various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
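The core of the SVD factorization method the paper builds on (Tomasi-Kanade style) can be sketched as below: a registered 2F x P measurement matrix of tracked feature coordinates factors into motion (2F x 3) and shape (3 x P). The metric upgrade and the paper's stereo depth correction are omitted from this sketch.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix."""
    W_reg = W - W.mean(axis=1, keepdims=True)   # register rows to the centroid
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    root = np.sqrt(s[:3])
    M = U[:, :3] * root                         # motion (2F x 3)
    S = root[:, None] * Vt[:3]                  # shape (3 x P)
    return M, S, W_reg
```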

An Camera Information Detection Method for Dynamic Scene (Dynamic scene에 대한 카메라 정보 추출 기법)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.275-280 / 2013
  • In this paper, a new stereo object extraction algorithm using a block-based MSE (mean square error) algorithm and the configuration parameters of a stereo camera is proposed. By applying the SSD algorithm between the initial reference image and the next stereo input image, the location coordinates of a target object in the right and left images are acquired, and the pan/tilt system is controlled with these values. Using the moving angle of the pan/tilt system and the configuration parameters of the stereo camera system, the mask window size of the target object is determined adaptively. The newly segmented target image is used as the reference image in the next stage and is updated automatically in the course of target tracking, based on the same procedure. Meanwhile, the target object is tracked by continuously controlling the convergence and FOV using the sequentially extracted location coordinates of the moving target.
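The block matching step can be sketched as follows: exhaustively score every position in the input image against the reference block with the sum of squared differences (SSD) and return the best location. Real trackers restrict the search window around the previous position; the exhaustive scan here is for clarity only.

```python
import numpy as np

def ssd_match(ref_block, image):
    """Return the (x, y) of the best SSD match of ref_block in image."""
    bh, bw = ref_block.shape
    H, W = image.shape
    best_score, best_xy = None, None
    for y in range(H - bh + 1):
        for x in range(W - bw + 1):
            d = image[y:y + bh, x:x + bw].astype(float) - ref_block
            score = float((d * d).sum())
            if best_score is None or score < best_score:
                best_score, best_xy = score, (x, y)
    return best_xy
```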

Efficient Depth Map Generation for Various Stereo Camera Arrangements (다양한 스테레오 카메라 배열을 위한 효율적인 깊이 지도 생성 방법)

  • Jang, Woo-Seok;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6A / pp.458-463 / 2012
  • In this paper, we propose a direct depth map acquisition method for the convergence camera array as well as the parallel camera array. Conventional methods perform image rectification to reduce complexity and improve accuracy; however, image rectification may lead to unwanted consequences for the convergence camera array. Thus, the proposed method excludes image rectification and directly extracts depth values using the epipolar constraint. In order to acquire a more accurate depth map, occlusion detection and handling processes are added, and reasonable depth values are assigned to the detected occlusion regions based on the distance and color differences from neighboring pixels. Experimental results show that the proposed method has fewer limitations than conventional methods and stably generates more accurate depth maps.
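The idea of extracting depth without rectification can be sketched with linear (DLT) triangulation: given the projection matrices of an arbitrarily converging stereo pair and one correspondence, the 3D point is recovered directly, so no rectifying warp is needed. This illustrates the epipolar-geometry constraint the paper exploits; it is not the authors' implementation.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X through a 3x4 projection matrix P."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one correspondence (x1, x2)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous 3D point
    return X[:3] / X[3]
```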

Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • 노진우;홍정화;고한석
    • Journal of KIISE: Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera, covering two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model with the pitch angle previously estimated from the stereo geometry. Moreover, since two images acquired at the same time are available, overall recognition performance can be increased by decision-level fusion. In representative experiments, the yaw pose transform raised the recognition rate from 61.43% to 94.76%, and the proposed method performs as well as a more complicated 3D face model. Using the stereo camera system, the recognition rate increased by a further 5.24% for upward face poses, and by 3.34% through decision-level fusion.
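Decision-level fusion of the two stereo views can be sketched as below: per-identity matching scores from the left and right images are combined with a sum rule before the final decision. The equal weighting is an assumption for illustration; the abstract does not specify the paper's fusion rule.

```python
import numpy as np

def fuse_decisions(scores_left, scores_right, w_left=0.5):
    """Sum-rule fusion of per-identity scores; returns the chosen identity."""
    left = np.asarray(scores_left, dtype=float)
    right = np.asarray(scores_right, dtype=float)
    fused = w_left * left + (1.0 - w_left) * right
    return int(np.argmax(fused))
```

With the toy scores below, the right view alone would misidentify the subject, while the fused decision follows the stronger left-view evidence, which is the benefit decision-level fusion provides.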