• Title/Summary/Keyword: camera image

Implementation on Surveillance Camera Optimum Angle Extraction using Polarizing Filter

  • Kim, Jaeseung;Park, Seungseo;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • v.10 no.2
    • /
    • pp.45-52
    • /
    • 2021
  • The surveillance camera market has grown and now plays an important role in the field of video surveillance. In recent years, however, the identification of areas requiring surveillance has been limited by reflected light. Cameras using polarization filters are being developed to reduce reflected light and facilitate identification, and programs are needed to adjust the polarization filter automatically. In this paper, we propose a method for extracting the optimal polarization-filter angle for a surveillance camera through histogram analysis. First, the frames of the images captured at multiple polarization angles are converted to grayscale to reduce computational load. We then generate and analyze a histogram of each frame to extract the angle at which the highlights are fewest. Experiments at 0˚ and 90˚ showed high performance in extracting the optimal angle. We expect this technology to be used for surveillance cameras in places with a lot of reflected light, such as beaches.
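
A minimal sketch of the highlight-counting idea described in the abstract above, not the authors' implementation: each candidate angle's frame is converted to grayscale, a 256-bin histogram is computed, and the angle whose frame has the fewest highlight pixels is selected. The 240-level highlight threshold and the dictionary input format are illustrative assumptions.

```python
# Not the authors' code: choose the polarization angle whose frame contains
# the fewest highlight pixels. The highlight threshold (240) is assumed.
import cv2
import numpy as np

def highlight_count(gray, threshold=240):
    """Count pixels at or above the highlight level using a 256-bin histogram."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    return float(hist[threshold:].sum())

def best_polarization_angle(frames_by_angle):
    """frames_by_angle: dict mapping angle in degrees -> BGR frame (assumed format)."""
    scores = {}
    for angle, frame in frames_by_angle.items():
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale to cut computation
        scores[angle] = highlight_count(gray)
    return min(scores, key=scores.get)  # angle with the fewest highlights
```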

USING WEB CAMERA TECHNOLOGY TO MONITOR STEEL CONSTRUCTION

  • Kerry T. Slattery;Amit Kharbanda
    • International conference on construction engineering and project management
    • /
    • 2005.10a
    • /
    • pp.841-844
    • /
    • 2005
  • Computer vision technology can be used to interpret the images captured by web cameras installed on construction sites and automatically quantify the results. This information can be used for quality control, productivity measurement, and to direct construction. Steel frame construction is particularly well suited for automatic monitoring because all structural members can be viewed from a small number of camera locations, and three-dimensional computer models of steel structures are frequently available in a standard electronic format. A system is being developed that interprets the 3-D model and directs a camera to look for individual members at regular intervals, determining when each is in place and reporting the results. Results from a simple lab-scale system are presented along with preliminary full-scale development.
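
A hypothetical sketch of one step implied by the abstract above, namely using the 3-D model to decide where in the image a structural member should appear; the camera intrinsics K and pose (R, t) are assumed known from calibration, and the paper's actual member-detection logic is not reproduced.

```python
# Hypothetical helper: project a member's 3-D end points (from the steel model)
# into the camera image so the system knows where to look for that member.
import numpy as np

def project_member(points_3d, K, R, t):
    """points_3d: (N, 3) end points in world coordinates; returns (N, 2) pixels."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                              # pinhole perspective projection
    return (uv[:2] / uv[2]).T                 # normalize homogeneous coordinates
```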

Recognition of Model Cars Using Low-Cost Camera in Smart Toy Games (저가 카메라를 이용한 스마트 장난감 게임을 위한 모형 자동차 인식)

  • Minhye Kang;Won-Kee Hong;Jaepil Ko
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.1
    • /
    • pp.27-32
    • /
    • 2024
  • Recently, there has been growing interest in integrating physical toys into video gaming within the game content business. This paper introduces a method that leverages a low-cost camera, as an alternative to sensor attachments, to meet this rising demand. We overcome the inherent limitations of low-cost cameras by proposing an optical design specifically tailored to the environment of model car recognition. The approach primarily focuses on recognizing the underside of the car and addresses the challenges associated with this particular perspective. Our method employs a transfer-learning model trained specifically for this task. We achieved a 100% recognition rate, highlighting the importance of collecting data under various camera exposures. This paper serves as a case study for incorporating low-cost cameras into vision systems.
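
A brief transfer-learning sketch in the spirit of the abstract above; the backbone (MobileNetV2), the frozen-feature strategy, and the class count are assumptions, since the paper's network is not specified here.

```python
# Sketch only: backbone, frozen features, and class count are assumptions.
import torch.nn as nn
from torchvision import models

def build_car_classifier(num_classes):
    """Reuse an ImageNet-pretrained backbone and train only a new classification head."""
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    for p in model.parameters():              # freeze the pretrained features
        p.requires_grad = False
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)  # new head
    return model
```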

Fish Injured Rate Measurement Using Color Image Segmentation Method Based on K-Means Clustering Algorithm and Otsu's Threshold Algorithm

  • Sheng, Dong-Bo;Kim, Sang-Bong;Nguyen, Trong-Hai;Kim, Dae-Hwan;Gao, Tian-Shui;Kim, Hak-Kyeong
    • Journal of Power System Engineering
    • /
    • v.20 no.4
    • /
    • pp.32-37
    • /
    • 2016
  • This paper proposes two methods for measuring the injury rate of a fish surface using a color image segmentation method based on the K-means clustering algorithm and Otsu's threshold algorithm. The task proceeds in the following steps. First, an RGB color image of the fish is obtained by a CCD color camera and converted from RGB to HSI. Second, the S channel is extracted from the HSI color space. Third, binary images are obtained by applying the K-means clustering algorithm to the HSI color space and Otsu's threshold algorithm to the S channel. Fourth, morphological operations such as dilation and erosion are applied to the binary image. Fifth, connected-component labeling is adopted to count pixels, and the defined injury rate is obtained from the pixel counts of the labeled images. Finally, to compare the performance of the two measurement methods, the edges of the final binary images after morphological processing are detected and matched against the gray image of the original RGB image obtained by the CCD camera. The results show that the injured-part edge detected by the K-means clustering algorithm is closer to the real injured edge than that detected by Otsu's threshold algorithm.
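
A rough sketch of the Otsu branch of the pipeline above. HSV is used as an OpenCV-available stand-in for the paper's HSI space, and the injury-rate definition at the end is an illustrative assumption rather than the paper's exact formula.

```python
# Otsu threshold on the saturation channel, morphological clean-up,
# connected-component labeling, and an assumed injury-rate definition.
import cv2
import numpy as np

def injured_rate_otsu(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1]                                        # saturation channel
    _, binary = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # erosion then dilation
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilation then erosion
    _, labels = cv2.connectedComponents(binary)             # label injured regions
    injured_pixels = np.count_nonzero(labels > 0)
    return injured_pixels / binary.size                     # assumed rate definition
```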

Efficient generation of concentric mosaics using image-strip mosaicking (스트립 영상 배치를 이용한 동심원 모자익의 효율적인 생성)

  • Jang, Kyung Ho;Jung, Soon Ki
    • Journal of the Korea Computer Graphics Society
    • /
    • v.7 no.2
    • /
    • pp.29-35
    • /
    • 2001
  • In general, an image-based virtual environment is represented by panoramic images created by an image mosaicking algorithm. A cylindrical panoramic image supports only fixed-viewpoint navigation due to the constraints of its construction. Shum proposed concentric mosaics to allow users to navigate freely within a circular area [10]; they are constructed from a sequence of images acquired by a regularly rotating camera, and the technique can be regarded as a 3D plenoptic function defined by three parameters: distance, height, and angle. In this paper, we suggest an efficient method for creating concentric mosaics, in which we first align a set of strip images on the cylindrical plane and then stitch the aligned strips to build a panoramic image. The proposed method has no constraints such as a regular panning motion of the camera. Furthermore, it minimizes the use of interpolated images when creating novel-view images from the concentric mosaics, which allows the resulting novel-view image to have better quality for a given number of input images.
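
A toy sketch of the strip-mosaicking idea above; the placement and stitching of strips on the cylindrical plane are reduced here to simple concatenation of center strips, which ignores the alignment step the paper actually performs.

```python
# Toy version: real strip placement aligns strips on the cylinder before stitching.
import numpy as np

def strip_panorama(frames, strip_width=8):
    """Take a vertical strip from the center of each frame and lay them side by side."""
    strips = []
    for frame in frames:
        c = frame.shape[1] // 2
        strips.append(frame[:, c - strip_width // 2 : c + strip_width // 2])
    return np.concatenate(strips, axis=1)     # crude panoramic image on the cylinder
```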

A System for Measuring 3D Human Bodies Using the Multiple 2D Images (다중 2D 영상을 이용한 3D 인체 계측 시스템)

  • 김창우;최창석;김효숙;강인애;전준현
    • Journal of the Korean Society of Costume
    • /
    • v.53 no.5
    • /
    • pp.1-12
    • /
    • 2003
  • This paper proposes a system for measuring 3D human bodies using multiple 2D images. The system establishes a multiple-image input environment using a digital camera for image measurement. An algorithm based on perspective projection estimates the 3D human body from multiple 2D images such as frontal, side, and rear views. The results of the image measurement are compared with those of direct measurement and a 3D scanner for a total of 40 items (12 heights, 15 widths, and 13 depths). Three persons measured the 40 items using the three measurement methods. Comparing the results obtained across the measurement methods and the persons, the results of the image measurement and the 3D scanner are very similar, whereas the errors of the direct measurement are relatively larger. For example, the maximum errors between the image measurement and the 3D scanner are 0.41 cm in height, 0.39 cm in width, and 0.95 cm in depth, which is acceptable for body measurement. The performance of the image measurement is superior to that of the direct measurement because the algorithm estimates the 3D positions using perspective projection. Based on this comparison, image measurement is expected to be a new method for measuring the 3D body, since it combines the advantages of direct measurement and the 3D scanner in measurement performance as well as in devices, cost, portability, and manpower.
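
An illustrative sketch, not the paper's algorithm: one way to recover a 3-D landmark from its pixel positions in two calibrated views via perspective triangulation; the projection matrices are assumed to come from camera calibration.

```python
# Illustrative only: the paper's measurement algorithm is not reproduced here.
import cv2
import numpy as np

def triangulate_landmark(P_front, P_side, uv_front, uv_side):
    """P_*: 3x4 projection matrices; uv_*: (x, y) pixel coordinates of one landmark."""
    pt_h = cv2.triangulatePoints(P_front, P_side,
                                 np.float32(uv_front).reshape(2, 1),
                                 np.float32(uv_side).reshape(2, 1))
    return (pt_h[:3] / pt_h[3]).ravel()       # homogeneous -> 3-D point
```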

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition by a robot because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera with a reflective mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow on the preprocessed images. Third, we estimate the robot's position and angle using an ego-motion method based on the direction of the vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the experimental results (position and angle) obtained with the proposed algorithm against those measured by a global vision localization system.
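
A minimal sketch of the motion-vector step above: features are tracked between two preprocessed panoramic frames with pyramidal Lucas-Kanade optical flow; the ego-motion and RANSAC vanishing-point estimation are omitted, and the feature-tracking parameters are assumptions.

```python
# Only the optical-flow step of the pipeline; parameter values are assumptions.
import cv2
import numpy as np

def motion_vectors(prev_gray, curr_gray, max_corners=200):
    """Track corners between two preprocessed (warped) frames; return start/end points."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1                # keep successfully tracked points
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
```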

Implementation of Sharpness-Enhancement Algorithm based on Adaptive-Filter for Mobile-Display Apparatuses (Mobile Display 장치를 위한 Adaptive-Filter 기반형 선명도 향상 알고리즘의 하드웨어 구현)

  • Im, Jeong-Uk;Song, Jin-Gun;Lee, Sung-Jin;Min, Kyoung-Joong;Kang, Bong-Soon
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.109-112
    • /
    • 2007
  • Sharpness enhancement of digitized images has been researched continuously owing to the use of cameras in mobile devices and the advent of the digital camera. In particular, the image input from a sensor goes through ISP (Image Signal Processing) before being output as a visual image. In this process, high-frequency components are attenuated by the LPF (Low-Pass Filter) that removes high-spatial-frequency noise. In this paper, we propose an algorithm that outputs a more vivid image by using an adaptive HPF (High-Pass Filter) whose coefficients are suited to diverse image-edge conditions, without employing any edge-detection algorithm to enhance the blurred image.
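
A generic adaptive-sharpening sketch along the lines described above (unsharp masking with a gain that scales with local high-frequency energy); the paper's filter coefficients and hardware pipeline are not reproduced.

```python
# Generic adaptive unsharp masking; not the paper's hardware filter design.
import cv2
import numpy as np

def adaptive_sharpen(gray, base_gain=1.5):
    blur = cv2.GaussianBlur(gray, (5, 5), 1.0)
    high = gray.astype(np.float32) - blur.astype(np.float32)   # high-pass response
    strength = cv2.GaussianBlur(np.abs(high), (5, 5), 1.0)     # local edge strength
    gain = base_gain * strength / (strength.max() + 1e-6)      # adapt gain to edges
    out = gray.astype(np.float32) + gain * high
    return np.clip(out, 0, 255).astype(np.uint8)
```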

Analysis of DIC Platform and Image Quality with FHD for Displacement Measurement (FHD급 DIC 플랫폼의 변위계측용 영상품질 분석)

  • Park, Jongbae;Kang, Mingoo
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.105-111
    • /
    • 2018
  • This paper presents an analysis of image quality for a DIC (Digital Image Correlation) platform equipped with an FHD (Full HD) resolution camera, used to measure the relative displacement of architectural structures. The DIC platform was designed around Freescale's i.MX6. In displacement measurement based on the DIC method, the error is affected by image quality factors such as pixel count, brightness, contrast, and SNR (Signal-to-Noise Ratio) [dB]; these effects were analyzed. The displacement of the ROI (Region Of Interest) within the image was measured in sub-pixel units based on the DIC method. Owing to the non-contact telemetry property of the DIC method, it can be used for long-distance non-contact measurement. Various displacement results were measured and analyzed while adjusting the image quality factors according to the distance (25 m, 35 m, 50 m).
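
A simplified DIC-style displacement sketch: a reference ROI is matched in a later frame with normalized cross-correlation, and the correlation peak is refined to sub-pixel accuracy with a parabolic fit. Production DIC uses subset shape functions and iterative optimization, which are omitted here.

```python
# Simplified stand-in for DIC subset matching.
import cv2
import numpy as np

def roi_displacement(ref_roi, deformed_gray):
    """Return the matched (x, y) position of ref_roi in the deformed image, sub-pixel."""
    res = cv2.matchTemplate(deformed_gray, ref_roi, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)           # integer-pixel correlation peak
    dx = dy = 0.0
    if 0 < x < res.shape[1] - 1:                   # parabolic refinement in x
        l, c, r = res[y, x - 1], res[y, x], res[y, x + 1]
        denom = l - 2 * c + r
        if abs(denom) > 1e-12:
            dx = 0.5 * (l - r) / denom
    if 0 < y < res.shape[0] - 1:                   # parabolic refinement in y
        t, c, b = res[y - 1, x], res[y, x], res[y + 1, x]
        denom = t - 2 * c + b
        if abs(denom) > 1e-12:
            dy = 0.5 * (t - b) / denom
    return x + dx, y + dy
```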

Slow Sync Image Synthesis from Short Exposure Flash Smartphone Images (단노출 플래시 스마트폰 영상에서 저속 동조 영상 생성)

  • Lee, Jonghyeop;Cho, Sunghyun;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.3
    • /
    • pp.1-11
    • /
    • 2021
  • Slow sync is a photography technique in which the user takes an image with a long exposure and a camera flash to brighten both the foreground and the background. Unlike short exposure with flash and long exposure without flash, slow sync guarantees a bright foreground and background in a dim environment. However, taking a slow sync image with a smartphone is difficult because the smartphone camera has a continuous, weak flash and cannot turn the flash on when the exposure time is long. This paper proposes a deep learning method whose input is a short-exposure flash image and whose output is a slow sync image. We present a deep learning network with a weight map for spatially varying brightening. We also propose a dataset consisting of smartphone short-exposure flash images and slow sync images for supervised learning. We utilize the linearity of RAW images to synthesize a slow sync image from a short-exposure flash image and a long-exposure no-flash image. Experimental results show that our method, trained on our dataset, synthesizes slow sync images effectively.
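
A sketch of the synthesis idea under the stated linear-RAW assumption: the flash-only contribution (short-exposure flash minus its scaled ambient part) is added to the long-exposure ambient image. The exposure-ratio scaling of the ambient term is an assumption, and the paper's exact synthesis procedure may differ.

```python
# Linear-RAW synthesis sketch: inputs are black-level-subtracted linear RAW arrays.
import numpy as np

def synthesize_slow_sync(raw_long_no_flash, raw_short_flash, t_long, t_short):
    ambient_in_short = raw_long_no_flash * (t_short / t_long)   # ambient at short exposure
    flash_only = np.clip(raw_short_flash - ambient_in_short, 0, None)
    return raw_long_no_flash + flash_only                       # ambient + flash, still linear
```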