• Title/Summary/Keyword: Captured Image

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.11, pp.2175-2190, 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied the pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that affords acceptable image quality for generating DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which the multiview images captured directly from the scene are compared to the multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed that quantizing the depth map from 16-bit down to 7-bit (and more specifically, to 96 levels) had no significant effect on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
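
As a rough illustration of the quantization step discussed in this abstract, the sketch below reduces a 16-bit depth map to a chosen bit depth and reports a simple PSNR against the original. The function names, the synthetic depth map, and the use of PSNR as the quality figure are assumptions made here for illustration; the paper's actual DIBR view synthesis and evaluation are not implemented.

```python
import numpy as np

def quantize_depth(depth16, bits):
    """Quantize a 16-bit depth map down to the given number of bits,
    then expand it back to the original value range for comparison."""
    levels = 2 ** bits
    # Map 0..65535 depth values onto 0..levels-1 integer codes.
    codes = np.floor(depth16.astype(np.float64) / 65536.0 * levels)
    codes = np.clip(codes, 0, levels - 1)
    # Reconstruct a 16-bit-range depth map from the quantized codes.
    return (codes / (levels - 1) * 65535.0).astype(np.uint16)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    # Synthetic depth map standing in for a computer-generated scene's depth.
    rng = np.random.default_rng(0)
    depth16 = rng.integers(0, 65536, size=(480, 640), dtype=np.uint16)
    for bits in (16, 10, 8, 7, 6):
        depth_q = quantize_depth(depth16, bits)
        print(bits, "bits -> depth PSNR vs. 16-bit:",
              round(psnr(depth16, depth_q, peak=65535.0), 2), "dB")
```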

TELE-OPERATIVE SYSTEM FOR BIOPRODUCTION - REMOTE LOCAL IMAGE PROCESSING FOR OBJECT IDENTIFICATION -

  • Kim, S. C.;H. Hwang;J. E. Son;Park, D. Y.
    • Proceedings of the Korean Society for Agricultural Machinery Conference, 2000.11b, pp.300-306, 2000
  • This paper introduces a new concept of automation for bio-production using a tele-operative system. The proposed system showed a practical and feasible way to automate the volatile bio-production process. Based on this proposition, recognition of the job environment with object identification was performed using a computer vision system. A man-machine interactive hybrid decision-making scheme, which utilized the concept of tele-operation, was proposed to overcome the limitations of the computer's capability in image processing and feature extraction from complex environment images. Identifying watermelons in an outdoor scene of the cultivation field was selected to realize the proposed concept. Identifying a watermelon from a camera image of the outdoor cultivation field is very difficult because of the ambiguity among stems, leaves, and shadows, and especially because fruits are partly covered by leaves or stems. The analog signal of the outdoor image was captured and transmitted wirelessly to the host computer by an RF module. A localized window was formed from the outdoor image by pointing on the touch screen, and then a sequence of algorithms to identify the location and size of the watermelon was performed on the local window image. The effect of the light reflectance of fruits, stems, ground, and leaves was also investigated.
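
The localized-window idea in the abstract above can be sketched loosely as follows: crop a window around an operator-selected touch point and apply a crude color rule to flag candidate fruit pixels. The window size, the green-over-red rule, and all names here are illustrative placeholders, not the paper's actual algorithms.

```python
import numpy as np

def local_window(image, point, size=200):
    """Crop a square window centered on an operator-selected point (x, y)."""
    h, w = image.shape[:2]
    x, y = point
    half = size // 2
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return image[y0:y1, x0:x1]

def fruit_mask(window_rgb, g_over_r=1.05):
    """Very rough color rule: flag pixels where green clearly dominates red,
    as a stand-in for whatever feature rules the paper applied."""
    r = window_rgb[..., 0].astype(np.float64) + 1e-6
    g = window_rgb[..., 1].astype(np.float64)
    return g / r > g_over_r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    win = local_window(frame, point=(320, 240), size=200)
    mask = fruit_mask(win)
    ys, xs = np.nonzero(mask)
    if ys.size:
        print("candidate region bbox:", xs.min(), ys.min(), xs.max(), ys.max())
```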

A Method of Forensic Authentication via File Structure and Media Log Analysis of Digital Images Captured by iPhone (아이폰으로 촬영된 디지털 이미지의 파일 구조 및 미디어 로그 분석을 통한 법과학적 진본 확인 방법)

  • Park, Nam In;Lee, Ji Woo;Jeon, Oc-Yeub;Kim, Yong Jin;Lee, Jung Hwan
    • Journal of Korea Multimedia Society, v.24 no.4, pp.558-568, 2021
  • For a digital image to be accepted as legal evidence, it is important to verify its authenticity. This study proposes a method of authenticating digital images in three steps: comparing the file structure of digital images taken with an iPhone, analyzing the encoding information, and analyzing the media logs of the iPhone storing the digital images. For the experiment, digital image samples were acquired from nine iPhones through the camera application built into the iPhone. The characteristics of the file structure and media logs were then compared between digital images generated on the iPhone and digital images edited with a variety of image editing tools. Examining the records registered during the digital image creation process confirmed that when digital images are manipulated on the iPhone, their file structure and media logs differ from the original characteristics of images taken directly with the iPhone. In this way, the forensic authenticity of digital images captured with an iPhone can be established.
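
For the file-structure comparison step, a minimal sketch (assuming standard JPEG files and nothing specific to the paper's procedure) is to list the marker segments of each file in order and compare the sequences between a camera-original and an edited image. The paths and the segment-listing helper below are hypothetical.

```python
import struct

def jpeg_segments(path):
    """List the JPEG marker segments (e.g., APP0, APP1/Exif, DQT, SOF0) in file order.
    The order and presence of these segments is one file-structure characteristic
    that can differ between camera-original and edited images."""
    segments = []
    with open(path, "rb") as f:
        data = f.read()
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:          # SOS: compressed scan data follows
            segments.append("SOS")
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segments.append(f"FF{marker:02X} (len={length})")
        i += 2 + length
    return segments

if __name__ == "__main__":
    # Compare the marker sequences of two files (paths are placeholders).
    for p in ("original.jpg", "edited.jpg"):
        try:
            print(p, "->", jpeg_segments(p))
        except FileNotFoundError:
            print(p, "not found (example path)")
```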

Commercially Available High-Speed Cameras Connected with a Laryngoscope for Capturing the Laryngeal Images (상용화 된 고속카메라와 후두내시경을 이용한 성대촬영 방법의 소개)

  • Nam, Do-Hyun;Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics, v.21 no.2, pp.133-138, 2010
  • Background and Objectives : High-speed imaging can be useful in studies of linguistic and artistic singing styles and in the laryngeal examination of patients with voice disorders, particularly those with irregular vocal fold vibrations. In this study, we introduce new laryngeal imaging systems consisting of commercially available high-speed cameras connected to a laryngoscope. Materials and Method : The laryngeal images were captured with three different cameras. First, an adapter was made to connect the laryngoscope to a Casio EX-F1 to capture images using a $2{\times}150$ Watt halogen light source (EndoSTROB) at 1,200 fps (frames per second) ($336{\times}96$). Second, a Phantom Miro ex4 was used to capture digital laryngeal images using a 175 Watt Xenon Nova light source (STORZ) at 1,920 fps ($512{\times}384$). Finally, laryngeal images were captured using a MotionXtra N-4 with a 250 Watt halogen lamp (Olympus CLH-250) light source at 2,000 fps ($384{\times}400$), connected to the laryngoscope. All images were transformed into kymograms using KIPS (Kay's Image Processing Software) from Kay Pentax Inc. Results : The Casio EX-F1 was too small to adjust the focus, and the screen size was diminished once the images were captured, despite the high resolution of the images. High-quality color images could be obtained with the Phantom Miro ex4, whereas good black-and-white images were obtained from the MotionXtra N-4. Despite some limitations, namely illumination problems, limited recording time, and time-consuming procedures with the Phantom Miro ex4 and MotionXtra N-4, these portable devices provided high-resolution images. Conclusion : All of these high-speed cameras could capture laryngeal images when connected to a laryngoscope. High-resolution images could be captured at a fixed position under good lighting. Accordingly, these techniques could be applied to observe vocal fold vibration properties in clinical practice.
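
The kymographic transformation mentioned above can be approximated very simply: pick one image row that crosses the glottis and stack it over all frames. The sketch below uses synthetic frames and an arbitrary row index; it is not the KIPS implementation.

```python
import numpy as np

def kymograph(frames, row):
    """Build a kymogram by stacking one fixed image row from every frame.
    `frames` has shape (num_frames, height, width); the chosen row should
    cut across the glottis so vocal-fold edges trace lines over time."""
    frames = np.asarray(frames)
    return frames[:, row, :]          # shape: (num_frames, width)

if __name__ == "__main__":
    # Synthetic stand-in for a high-speed recording (e.g., 1,920 fps, 512x384 frames).
    rng = np.random.default_rng(2)
    frames = rng.integers(0, 256, size=(200, 384, 512), dtype=np.uint8)
    kymo = kymograph(frames, row=192)
    print("kymogram shape (time x width):", kymo.shape)
```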

Machine Vision Based Detection of Disease Damaged Leave of Tomato Plants in a Greenhouse (기계시각장치에 의한 토마토 작물의 병해엽 검출)

  • Lee, Jong-Whan
    • Journal of Biosystems Engineering, v.33 no.6, pp.446-452, 2008
  • A machine vision system was used for analyzing leaf color disorders of tomato plants in a greenhouse. From the day when a few leaves of the tomato plants had started to wither, a series of images was captured four times over 14 days. Among several color image spaces, the Saturation frame of the HSI color space was adequate for eliminating the background, and the Hue frame was good for detecting the infected disease area and tomato fruits. The image produced by an OR operation between the G frame of the RGB color space and the $b^*$ frame of the $La^*b^*$ color space (the $G{\cup}b^*$ image) was useful for segmenting the plant canopy area. This study calculated the ratio of the infected area to the plant canopy and manually analyzed leaf color disorders through image segmentation on the Hue frame of a tomato plant image. For automatically analyzing plant leaf disease, this study selected twenty-seven color patches on the calibration bars corresponding to leaf color disorders. These selected color patches could represent 97% of the infected area analyzed by the manual method, and using only ten of the twenty-seven color patches could still represent over 85% of the infected area. This paper showed that the proposed machine vision system may be effective for evaluating various leaf color disorders of plants growing in a greenhouse.
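
A loose sketch of the color-space thresholding described above follows, using OpenCV's HSV space as a stand-in for HSI; the saturation threshold, the hue band treated as discolored, and the function names are guesses for illustration only.

```python
import cv2
import numpy as np

def canopy_and_disease(image_bgr, sat_thresh=60, hue_range=(10, 35)):
    """Rough sketch: remove the background with a saturation threshold, then
    flag a hue band as 'discolored' foliage. OpenCV works in HSV rather than
    HSI, and the thresholds here are arbitrary placeholders, not the paper's."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)
    canopy = s > sat_thresh                      # plant pixels vs. background
    disease = canopy & (h >= hue_range[0]) & (h <= hue_range[1])
    ratio = disease.sum() / max(canopy.sum(), 1)
    return canopy, disease, ratio

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    _, _, ratio = canopy_and_disease(img)
    print(f"infected-to-canopy ratio: {ratio:.3f}")
```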

Lab Color Space based Rice Yield Prediction using Low Altitude UAV Field Image

  • Reza, Md Nasim;Na, Inseop;Baek, Sunwook;Lee, In;Lee, Kyeonghwan
    • Proceedings of the Korean Society for Agricultural Machinery Conference, 2017.04a, pp.42-42, 2017
  • Prediction of rice yield during the growing season would be very helpful for increasing rice yield, as it allows better farm practices that maximize yield with greater profit and lower costs. UAV imagery based automatic detection of rice can be a relevant solution for early prediction of yield. Therefore, we propose an image processing technique to predict rice yield using low altitude UAV images, based on an $L^*a^*b^*$ color space image segmentation algorithm. All images were captured using a UAV-mounted RGB camera. The proposed algorithm was developed to separate the rice grain area from the image background. We took an RGB image, applied a filter to remove noise, and converted the image to the $L^*a^*b^*$ color space. All color information is contained in the $a^*$ and $b^*$ layers, and these colors were classified using k-means clustering. The variation between two colors can be measured, pixels were labelled by cluster index, and the image was finally segmented by color. The proposed method showed that rice grain could be segmented and that rice grains can be recognized from the UAV images. By analyzing grain areas and estimating area and volume, rice yield could be predicted.
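
The abstract's pipeline (noise filtering, conversion to $L^*a^*b^*$, k-means on the $a^*$/$b^*$ channels) can be sketched as below with OpenCV; the number of clusters, the blur kernel, and the random test image are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def segment_lab_kmeans(image_bgr, k=3):
    """Cluster pixels on the a* and b* channels with k-means, following the
    general recipe in the abstract; k and the smoothing kernel are guesses."""
    blurred = cv2.GaussianBlur(image_bgr, (5, 5), 0)           # noise removal
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB)
    ab = lab[..., 1:3].reshape(-1, 2).astype(np.float32)       # a*, b* only
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(ab, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.reshape(image_bgr.shape[:2]), centers

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
    label_map, centers = segment_lab_kmeans(img, k=3)
    for i, c in enumerate(centers):
        area = int((label_map == i).sum())
        print(f"cluster {i}: center(a*,b*)={c.round(1)}, area={area} px")
```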

WALK-THROUGH VIEW FOR FTV WITH CIRCULAR CAMERA SETUP

  • Uemori, Takeshi;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.727-731, 2009
  • In this paper, we propose a method to generate a free viewpoint image using multi-viewpoint images taken by cameras arranged in a circle. Previously, we proposed a method to generate a free viewpoint image based on the Ray-Space method. However, with that method we cannot generate a walk-through view seen from a virtual viewpoint placed among the objects. The method we propose in this paper realizes the generation of such a view. Our method first obtains the positions of the objects using the shape-from-silhouette method, and then selects the appropriate cameras that acquired the rays needed for generating a virtual image. A free viewpoint image can be generated by collecting the rays that pass through the focal point of a virtual camera. However, when a requested ray is not available, it is necessary to interpolate it from neighboring rays. Therefore, we estimate the depth of the objects from the virtual camera and interpolate the ray information to generate the image. In experiments with virtual sequences captured every 6 degrees, we set the virtual camera at a position of the user's choice and successfully generated the image from that viewpoint.
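
As a minimal, assumption-laden sketch of the shape-from-silhouette step, the following voxel-carving routine keeps only voxels that project inside every camera's silhouette mask; the grid resolution, toy camera matrix, and all-true mask in the example are placeholders, and the ray selection and interpolation stages of the paper are not shown.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, res=64):
    """Keep a voxel only if it projects inside every camera's silhouette mask.
    `projections` are 3x4 camera matrices; masks are boolean images."""
    xs = np.linspace(grid_min[0], grid_max[0], res)
    ys = np.linspace(grid_min[1], grid_max[1], res)
    zs = np.linspace(grid_min[2], grid_max[2], res)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)
    keep = np.ones(len(pts), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts @ P.T                            # project voxel centers
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit
    return pts[keep, :3]

if __name__ == "__main__":
    # Toy example: one camera looking down +Z with a full (all-True) silhouette.
    P = np.array([[500.0, 0.0, 160.0, 0.0],
                  [0.0, 500.0, 120.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    mask = np.ones((240, 320), dtype=bool)
    hull = carve_visual_hull([mask], [P], (-1, -1, 2), (1, 1, 4), res=16)
    print("voxels kept:", len(hull))
```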

Quantitative analysis of gene expression by fluorescence images using green fluorescence protein

  • Park, Yong-Doo;Kim, Jong-Won;Suh, You-Hun;Min, Byoung-Goo
    • Proceedings of the KOSOMBE Conference, v.1997 no.11, pp.475-477, 1997
  • We have analyzed fluorescence images obtained from green fluorescent protein (GFP). In order to monitor the fluorescence of a specific gene, we used the amyloid precursor protein promoter, which is known to play a major role in the development of Alzheimer's disease. The promoter region from -3.0 kb to +100 base pairs was inserted into the gene expression monitoring GFP vector purchased from Clontech. This construct was transfected into PC12 and fibroblast cells, and the fluorescence image was captured by two kinds of methods: one using a cheaper conventional CCD camera and the other using a SIT-CCD camera. For higher sensitivity of the fluorescence image, we developed a multiple image grabbing program. As a result, the fluorescence image from the conventional CCD camera has sensitivity similar to that of the SIT camera when the multiple image grabbing program is applied. With this system, it will be possible to construct a fluorescence monitoring system at lower cost, and real-time observation of gene expression from fluorescence images will be possible without altering the fluorescence images.
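
The multiple image grabbing idea, as far as the abstract describes it, amounts to averaging repeated grabs of the same field of view to suppress sensor noise; the sketch below demonstrates this on synthetic frames with assumed noise levels and is not the authors' acquisition program.

```python
import numpy as np

def accumulate_frames(frames):
    """Average several repeated grabs of the same field of view to suppress
    read noise, which is the basic idea behind multiple image grabbing."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_signal = np.full((256, 256), 20.0)          # faint fluorescence level
    grabs = [true_signal + rng.normal(0, 10, true_signal.shape) for _ in range(16)]
    single, averaged = grabs[0], accumulate_frames(grabs)
    print("noise std, single grab :", round(float(single.std()), 2))
    print("noise std, 16-grab mean:", round(float(averaged.std()), 2))
```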

Implementation of an improved real-time object tracking algorithm using brightness feature information and color information of object

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information, v.22 no.5, pp.21-28, 2017
  • As technology related to digital imaging equipment develops and becomes widespread, digital imaging systems are used for various purposes in many fields of society. Real-time object tracking from digital image data is one of the core technologies required in various fields such as security systems and robot systems. Among existing object tracking technologies, CamShift tracks an object using its color information. Recently, digital image data captured with infrared camera functions has been widely used owing to the various demands on digital imaging equipment. However, the existing CamShift method cannot track objects in image data without color information. Our proposed tracking algorithm tracks the object by analyzing its color if valid color information exists in the digital image data; otherwise, it generates brightness feature information and tracks the object with it. The brightness feature information is generated from the width-to-height ratios of the regions segmented by brightness. Experimental results show that our tracking algorithm can track objects in real time not only in general image data containing color information but also in image data captured by an infrared camera.
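
A hedged sketch of the color/brightness switching idea is given below using OpenCV's CamShift: the target histogram is built on the hue channel when usable color is present and on the value (brightness) channel otherwise. The saturation test, histogram sizes, and the simplistic fallback are assumptions made here and only loosely mirror the width-to-height ratio features described in the abstract.

```python
import cv2
import numpy as np

def make_target_hist(frame_bgr, window, use_color):
    """Histogram of the target region: hue if color is usable, value (brightness)
    otherwise. This mirrors the fallback idea in the abstract only loosely."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    channel, upper = (0, 180) if use_color else (2, 256)
    hist = cv2.calcHist([hsv], [channel], None, [32], [0, upper])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist, channel, upper

def track(frames, window, use_color=True):
    """Run CamShift over a sequence of BGR frames, returning the window per frame."""
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    hist, channel, upper = make_target_hist(frames[0], window, use_color)
    results = []
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [channel], hist, [0, upper], 1)
        _, window = cv2.CamShift(prob, window, criteria)
        results.append(window)
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    frames = [rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8) for _ in range(5)]
    print(track(frames, window=(100, 80, 60, 60), use_color=True))
```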

Ambulatory Aid Device for the Visually Handicapped Person Using Image Recognition (화상인식을 이용한 시각장애인용 보행보조장치)

  • Park, Sang-Jun;Shin, Dong-Won
    • Journal of Institute of Control, Robotics and Systems, v.12 no.6, pp.568-572, 2006
  • This paper presents a device that recognizes images of studded paving blocks and transmits the information to a visually handicapped person by vibration. Usually, a blind person uses a walking stick to recognize the studded paving blocks. This research uses a PCA (Principal Component Analysis) based image processing approach to recognize the paving blocks. We classify the studded paving blocks into 5 classes: vertical line block, right-declined line block, left-declined line block, dotted block, and flat block. Eight images of 112*120 pixels are captured for each of the 5 classes, and the eigenvectors are obtained in order of eigenvalue magnitude using principal component analysis. The principal components of an image can be calculated by projection onto the transformation matrix composed of the eigenvectors. Classification is executed using the Euclidean distance, so the block class having the minimum distance to the image is chosen as the match. The result of the classification is transmitted to the blind person by electric vibration signals with different magnitudes and frequencies.
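
The PCA-plus-Euclidean-distance classifier can be sketched along eigenface lines as below; the number of principal components, the synthetic training images, and the use of per-class mean projections are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def fit_pca(train_images, n_components=10):
    """Learn a PCA basis from flattened training images (one row per image)."""
    X = np.asarray(train_images, dtype=np.float64).reshape(len(train_images), -1)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the eigenvectors of the covariance matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(images, mean, basis):
    X = np.asarray(images, dtype=np.float64).reshape(len(images), -1)
    return (X - mean) @ basis.T

def classify(test_image, class_means, mean, basis):
    """Nearest class in PCA space by Euclidean distance, as in the abstract."""
    z = project([test_image], mean, basis)[0]
    dists = {name: np.linalg.norm(z - c) for name, c in class_means.items()}
    return min(dists, key=dists.get)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    classes = ["vertical", "right-declined", "left-declined", "dotted", "flat"]
    # 8 synthetic 112x120 training images per class (placeholders for real captures).
    train = {c: rng.random((8, 112, 120)) + i for i, c in enumerate(classes)}
    all_imgs = np.concatenate([train[c] for c in classes])
    mean, basis = fit_pca(all_imgs, n_components=10)
    class_means = {c: project(train[c], mean, basis).mean(axis=0) for c in classes}
    sample = train["dotted"][0]
    print("predicted class:", classify(sample, class_means, mean, basis))
```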