• Title/Summary/Keyword: single image

Search Results: 2,244

An Enhanced Fuzzy Single Layer Perceptron for Image Recognition (이미지 인식을 위한 개선된 퍼지 단층 퍼셉트론)

  • Lee, Jong-Hee
    • Journal of Korea Multimedia Society / v.2 no.4 / pp.490-495 / 1999
  • In this paper, a method of improving the learning time and convergence rate is proposed by applying the advantages of artificial neural networks and fuzzy theory to the neuron structure. The method is applied to the XOR problem and the n-bit parity problem, which are used as benchmarks for neural network structures, and to the recognition of digit images in vehicle plate images as a practical image application. The experiments show that convergence is not always guaranteed; however, the network showed an improved learning time and a high convergence rate. Although a single-layer structure is considered here, the proposed network can be extended to an arbitrary number of layers, and the method is capable of high-speed learning even on large images. (A minimal benchmark sketch follows this entry.)

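The abstract above uses the XOR and n-bit parity problems as benchmarks. The sketch below is a hedged illustration rather than the paper's method (the fuzzy neuron structure is not described in the abstract): it sets up those benchmarks and runs a plain single-layer perceptron on them, which fails to converge on XOR and thereby shows why an enhanced neuron model is needed.

```python
# Illustrative benchmark setup only: XOR and n-bit parity data with a plain
# single-layer perceptron.  The paper's fuzzy neuron modification is not
# specified in the abstract, so this sketch merely shows why these problems
# are used as benchmarks (a standard single-layer perceptron cannot separate them).
import itertools
import numpy as np

def parity_dataset(n_bits):
    """All n-bit binary inputs with their parity label (XOR is the 2-bit case)."""
    X = np.array(list(itertools.product([0, 1], repeat=n_bits)), dtype=float)
    y = X.sum(axis=1) % 2          # 1 if the number of ones is odd
    return X, y

def train_perceptron(X, y, lr=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(X, y):
            out = 1.0 if xi @ w + b > 0 else 0.0
            if out != ti:
                w += lr * (ti - out) * xi
                b += lr * (ti - out)
                errors += 1
        if errors == 0:            # converged (never happens for parity/XOR)
            return w, b, True
    return w, b, False

X, y = parity_dataset(2)           # the XOR problem
_, _, converged = train_perceptron(X, y)
print("converged on XOR:", converged)   # False: motivates the fuzzy enhancement
```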

3D Reconstruction Using the Planar Homography (평면 호모그래피를 이용한 3차원 재구성)

  • Yoon Yong-In;Ohk Hyung-Soo;Choi Jong-Soo;Oh Jeong-Su
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.4C / pp.381-390 / 2006
  • This paper proposes a new camera calibration technique that computes the homographies among three planar patterns captured in a single uncalibrated image. Calibrating the camera is essential for 3D reconstruction from uncalibrated images. Since the proposed method computes the homographies among the three planar patterns from a single image, it recovers the 3D structure of an object more easily and simply than conventional methods. Experimental results show that the proposed method performs better than conventional approaches, and we demonstrate examples of 3D reconstruction from uncalibrated images using the proposed algorithm.
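
The calibration described above builds on homographies between planar patterns and the image. The following sketch is a minimal illustration rather than the paper's three-plane calibration pipeline: it shows the basic homography estimation and plane-to-image mapping step with OpenCV, and the point correspondences are hypothetical.

```python
# Minimal homography sketch with OpenCV, assuming 2D-2D correspondences between
# a known planar pattern (world plane, Z = 0) and its projection in one image.
# This is not the paper's full three-plane calibration; it only shows the
# homography estimation step that such methods build on.
import numpy as np
import cv2

# Hypothetical correspondences: pattern corners in plane coordinates (mm)
# and their detected pixel positions in the single input image.
plane_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)
image_pts = np.array([[212, 305], [419, 290], [447, 486], [203, 505]], dtype=np.float32)

H, _ = cv2.findHomography(plane_pts, image_pts)

# Map an arbitrary point on the planar pattern into the image.
pt = cv2.perspectiveTransform(np.array([[[50.0, 50.0]]], dtype=np.float32), H)
print("plane point (50, 50) projects to pixel", pt.ravel())
```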

Real-time Fluorescence Lifetime Imaging Microscopy Implementation by Analog Mean-Delay Method through Parallel Data Processing

  • Kim, Jayul;Ryu, Jiheun;Gweon, Daegab
    • Applied Microscopy / v.46 no.1 / pp.6-13 / 2016
  • Fluorescence lifetime imaging microscopy (FLIM) has been considered an effective technique for investigating the chemical properties of specimens, especially biological samples. Despite this advantage, researchers have had difficulty applying FLIM to their systems because acquiring a FLIM image takes too much time. Although the analog mean-delay (AMD) method was introduced to enhance the imaging speed of commonly used FLIM based on time-correlated single photon counting (TCSPC), real-time image reconstruction with the AMD method had not been implemented because of its data-processing demands. In this paper, we introduce real-time image restoration for AMD-FLIM through fast parallel data processing using Threading Building Blocks (TBB; Intel) and an octa-core processor (i7-5960x; Intel). A frame rate of 3.8 frames per second was achieved at 1,024×1,024 resolution, with over 4 million lifetime determinations per second and a measurement error within 10%. This image acquisition speed is 184 times faster than that of single-channel TCSPC and 9.2 times faster than that of 8-channel TCSPC (a state-of-the-art photon counting rate of 80 million counts per second) with the same 10% lifetime accuracy and the same pixel resolution.
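
The speed-up above comes from parallelizing the per-pixel lifetime computation. The sketch below is a toy illustration of that pattern in Python, not the paper's TBB/C++ implementation; the mean-delay estimator used here (intensity-weighted mean arrival time of each pixel's analog decay trace), the simulated data, and the 256×256 frame size are simplifying assumptions.

```python
# Toy illustration of the parallel per-pixel processing pattern described above,
# not the paper's TBB/C++ implementation.  The "mean-delay" estimator here is a
# simplified assumption: lifetime ~ intensity-weighted mean arrival time of the
# analog fluorescence decay trace in each pixel.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mean_delay_lifetimes(decays, t):
    """decays: (n_pixels, n_time) analog decay traces; t: time axis (ns)."""
    intensity = decays.sum(axis=1)
    return (decays @ t) / np.maximum(intensity, 1e-12)

def process_in_chunks(decays, t, n_workers=8):
    # Split the pixel rows across workers, mirroring the chunked parallelism idea.
    chunks = np.array_split(decays, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda c: mean_delay_lifetimes(c, t), chunks)
    return np.concatenate(list(parts))

# Simulated 256x256 frame (smaller than the paper's 1,024x1,024), 64 time bins over 12.5 ns.
t = np.linspace(0, 12.5, 64)
decays = np.random.poisson(50 * np.exp(-t / 2.5), size=(256 * 256, 64)).astype(float)
lifetimes = process_in_chunks(decays, t).reshape(256, 256)
print("mean estimated delay (ns):", lifetimes.mean())
```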

Precision Analysis of the Depth Measurement System Using a Single Camera with a Rotating Mirror (회전 평면경과 단일 카메라를 이용한 거리측정 시스템의 정밀도 분석)

  • ;;;Chun Shin Lin
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.11 / pp.626-633 / 2003
  • A theoretical analysis of a depth measurement system that uses a single camera and a rotating mirror is presented. A camera in front of the rotating mirror acquires a sequence of reflected images, from which depth information is extracted. For an object point at a longer distance, the corresponding pixel in the image sequence moves at a higher speed, and depth measurement based on such pixel movement is investigated. Since the mirror rotates about an axis parallel to the vertical axis of the image plane, the image of an object moves only horizontally, which eases the task of finding corresponding image points. In this paper, the principle of depth measurement based on the relation between pixel movement speed and object depth is investigated, and the mathematics needed to implement the technique is derived and presented. The factors affecting measurement precision are studied: the analysis shows that the measurement error increases with depth, and that the rotational angle of the mirror between two image captures also affects precision. Experimental results using a real camera-mirror setup are reported.
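
The abstract states that pixel displacement between mirror positions grows with object depth. The sketch below illustrates only the generic calibrate-then-invert idea such a relation enables: fit a monotonic displacement-to-depth model from reference targets at known depths, then evaluate it for a new measurement. The calibration numbers and the polynomial model are hypothetical and are not taken from the paper.

```python
# Illustration of the calibration-then-inversion idea only: the abstract states
# that pixel displacement between two mirror positions grows with object depth,
# so a monotonic mapping from displacement to depth can be fitted from reference
# targets at known depths.  All numbers below are hypothetical, not the paper's.
import numpy as np

# Hypothetical calibration targets: horizontal pixel displacement (px) measured
# between two mirror angles, and the known depth of each target (cm).
displacement_px = np.array([12.0, 18.5, 26.0, 34.2, 43.0])
depth_cm        = np.array([50.0, 80.0, 120.0, 170.0, 230.0])

# Fit a simple polynomial depth model d = f(displacement).
coeffs = np.polyfit(displacement_px, depth_cm, deg=2)

def estimate_depth(disp_px):
    return np.polyval(coeffs, disp_px)

print("estimated depth for 30 px displacement: %.1f cm" % estimate_depth(30.0))
```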

3D Shape Recovery Using Image Focus through Nonlinear Total Variation (비선형 전변동을 이용한 초점거리 변화 기반의 3 차원 깊이 측정 방법)

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.12 no.2 / pp.27-32 / 2013
  • Shape from focus (SFF) is a passive optical technique for recovering the 3D structure of an object from focus information in 2D images of the object taken at different focus levels. Most SFF methods use a single focus measure to compute the focus quality of each pixel in the image sequence. However, it is difficult to recover an accurate 3D shape with a single focus measure, as different focus measures perform differently under diverse conditions. In this paper, a nonlinear total variation (TV) based approach is proposed for 3D shape recovery. To improve the surface reconstruction, several initial depth maps are obtained using different focus measures, and the resulting 3D shape is obtained by diffusing them through TV. The proposed method is tested and evaluated on image sequences of synthetic and real objects, and the results and comparative analysis demonstrate its effectiveness.
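
As a rough illustration of the focus-stack pipeline, the sketch below computes one common focus measure (the modified Laplacian, an assumed choice; the paper combines several measures), takes the per-pixel argmax over the stack as an initial depth map, and regularizes it with total-variation denoising from scikit-image. It simplifies the paper's nonlinear TV diffusion of multiple depth maps to a single-measure baseline.

```python
# Simplified shape-from-focus sketch: one focus measure (modified Laplacian,
# a common SFF choice) plus TV regularization of the raw depth map.  The paper
# fuses several focus measures through nonlinear TV; this only shows the basic
# focus-stack -> depth-map -> TV-smoothing pipeline under that assumption.
import numpy as np
from scipy.ndimage import convolve
from skimage.restoration import denoise_tv_chambolle

def modified_laplacian(img):
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=float)
    ky = kx.T
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, ky))

def shape_from_focus(stack, tv_weight=0.1):
    """stack: (n_frames, H, W) images taken at increasing focus positions."""
    focus = np.stack([modified_laplacian(f) for f in stack])
    depth = focus.argmax(axis=0).astype(float)      # frame index of best focus
    return denoise_tv_chambolle(depth, weight=tv_weight)

# Synthetic 10-frame stack of 64x64 images, just to show the call pattern.
stack = np.random.rand(10, 64, 64)
depth_map = shape_from_focus(stack)
print(depth_map.shape)
```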

Demosaicking of Hexagonally-Structured Bayer Color Filter Array (육각형 구조의 베이어 컬러 필터 배열에 대한 디모자익킹)

  • Lee, Kyungme;Yoo, Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.10 / pp.1434-1440 / 2014
  • This paper describes a demosaicking method for a hexagonally structured color filter array. Demosaicking is essential for acquiring color images with a color filter array (CFA) in single-sensor imaging, and CFA patterns have been studied to improve image quality ever since the Bayer pattern was introduced. Advances in imaging sensor technology have recently produced a hexagonal CFA pattern. The hexagonal CFA can be regarded as a 45-degree rotated version of the Bayer pattern, so demosaicking can be implemented by an existing method combined with backward and forward 45-degree rotations. However, this approach requires considerable computing power and memory in image sensing devices because of the image rotations. To overcome this problem, we propose a demosaicking method for the hexagonal Bayer CFA that requires no rotations. In addition, we introduce a weighting parameter into our demosaicking method to improve image quality and to unify the existing method with ours. Experimental results indicate that the proposed method is superior to conventional methods in terms of PSNR, and some experimentally optimized values for the weighting parameter are provided.
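
The conventional baseline mentioned above wraps an existing square-grid demosaicking step between backward and forward 45-degree resamplings. The sketch below shows only that reused inner step, using OpenCV's built-in Bayer conversion on a hypothetical mosaic; the rotations and the paper's rotation-free method are not reproduced.

```python
# Sketch of the conventional inner step described above: demosaic with an
# existing square-grid Bayer method (here OpenCV's built-in conversion).  In the
# rotation-based pipeline, the hexagonal mosaic would be resampled by +/-45
# degrees around this step; the paper's rotation-free method is not shown here.
import numpy as np
import cv2

# Hypothetical 8-bit single-sensor mosaic in a standard Bayer (BGGR-style) layout.
mosaic = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Existing demosaicking step that the rotation-based pipeline reuses.
rgb = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR)
print(rgb.shape)   # (256, 256, 3)
```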

Detection of Pupil using Template Matching Based on Genetic Algorithm in Facial Images (얼굴 영상에서 유전자 알고리즘 기반 형판정합을 이용한 눈동자 검출)

  • Lee, Chan-Hee;Jang, Kyung-Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.7 / pp.1429-1436 / 2009
  • In this paper, we propose a robust eye detection method that uses template matching based on a genetic algorithm in a single facial image. Previous works on pupil detection with genetic algorithms had the problem that detection accuracy is strongly influenced by the randomly generated initial population, so their detection results are not consistent. To overcome this, we extract local minima in the facial image and generate the initial population from those that have high fitness with a template. Each chromosome consists of geometric information for the template image, and the eye position is detected by template matching. Experimental results verify that the proposed method improves the precision rate and achieves high accuracy in a single facial image.
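
A minimal sketch of genetic-algorithm-driven template matching follows: chromosomes are candidate (x, y) template positions and fitness is normalized cross-correlation against an eye template. The paper's chromosomes encode geometric information and its initial population is seeded from local minima of the facial image; this sketch uses plain random initialization, so it only illustrates the search loop, not the proposed initialization.

```python
# Minimal GA-style template-matching sketch: chromosomes are (x, y) template
# positions and fitness is normalized cross-correlation with the eye template.
# The paper seeds its initial population from local minima of the facial image;
# here the seeding is plain random sampling, so this illustrates only the search.
import numpy as np
import cv2

def fitness(image, template, x, y):
    h, w = template.shape
    patch = image[y:y + h, x:x + w]
    if patch.shape != template.shape:
        return -1.0
    return cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0]

def ga_match(image, template, pop_size=30, generations=40, rng=np.random.default_rng(0)):
    H, W = image.shape
    h, w = template.shape
    pop = np.column_stack([rng.integers(0, W - w, pop_size),
                           rng.integers(0, H - h, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(image, template, x, y) for x, y in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        children += rng.integers(-5, 6, children.shape)             # mutation
        children[:, 0] = np.clip(children[:, 0], 0, W - w)
        children[:, 1] = np.clip(children[:, 1], 0, H - h)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(image, template, x, y) for x, y in pop])
    return tuple(pop[scores.argmax()])                              # best (x, y)

face = (np.random.rand(120, 160) * 255).astype(np.float32)
template = face[40:52, 60:78].copy()                                # hypothetical eye patch
print("best match position:", ga_match(face, template))
```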

Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing / v.22 no.1 / pp.75-85 / 2006
  • This paper deals with data fusion for land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it makes it possible to consider compound classes consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. Here, the evidential approach to image classification, which uses a mass function obtained under the assumption of a class-independent beta distribution, is discussed for multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Nuengpyung area of the Korean peninsula. The experiment shows that the method is highly effective for applications in which it is hard to find homogeneous regions represented by a single land-cover type during the training process.
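
A minimal implementation of Dempster's rule of combination is sketched below for a toy land-cover frame that includes a compound class. The mass values are hand-picked for illustration; the paper derives its masses from a class-independent beta distribution, which is not reproduced here.

```python
# Minimal Dempster's rule of combination over a toy frame of discernment.
# Mass functions here are hand-picked to show how a compound class (e.g. the
# set {forest, grass}) is handled; the paper's beta-distribution-based masses
# are not reproduced in this sketch.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass values."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalize by the non-conflicting mass.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

forest, grass, urban = "forest", "grass", "urban"
m_sensor1 = {frozenset([forest]): 0.6, frozenset([forest, grass]): 0.3,
             frozenset([forest, grass, urban]): 0.1}
m_sensor2 = {frozenset([grass]): 0.5, frozenset([forest]): 0.3,
             frozenset([forest, grass, urban]): 0.2}
print(dempster_combine(m_sensor1, m_sensor2))
```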

An Image Analysis System Design Using an Arduino Sensor and a Feature Point Extraction Algorithm to Prevent Intrusion

  • LIM, Myung-Jae;JUNG, Dong-Kun;KWON, Young-Man
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.23-28 / 2021
  • In this paper, we study a system that efficiently provides security management for single-person households using an Arduino, an ESP32-CAM, and a PIR sensor, and we propose an internet-connected Android app. The ESP32-CAM is an Arduino-compatible board based on an ESP32 processor that supports Wi-Fi, Bluetooth, and a camera. Its on-board PCB antenna can be used on its own, and sensitivity can be improved by connecting an external antenna. The system implements an Arduino-based unauthorized-intrusion alarm that can significantly help prevent crimes against single-person households by combining PIR sensors, Arduino devices, and smartphones, connecting the Arduino Uno and the ESP32-CAM with a smartphone application. With daily quarantine measures in place and a need to verify the identity of visitors, applying this system for facial recognition and access restriction is expected to help maintain a safety net. Such technology is widely used to verify that the people in two images entered into a system are the same person, or to determine which person stored in an internal database the people in an image most resemble. The system has the advantage that it can be implemented in a low-power, low-cost environment through image recognition, comparison, and feature point extraction and comparison.
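
The abstract mentions feature point extraction and comparison for checking whether two images show the same person. The sketch below illustrates that step on the receiving side only, using ORB features and brute-force matching as a stand-in for whatever recognition method the system actually uses; the ESP32-CAM firmware, PIR trigger, and Android app are not shown, and the test images are synthetic so the sketch runs anywhere.

```python
# App/server-side sketch only: ORB feature-point extraction and matching to
# compare a visitor snapshot from the camera with a stored reference image.
# ORB matching is a stand-in illustration, not the paper's exact recognition
# method; the Arduino/ESP32-CAM firmware and the PIR trigger are not shown.
import numpy as np
import cv2

def match_score(img_a, img_b, max_distance=40):
    orb = cv2.ORB_create(nfeatures=500)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return sum(1 for m in matches if m.distance < max_distance)

# In the real system these would be the stored reference photo and a new
# ESP32-CAM snapshot; random grayscale images stand in here.
reference = (np.random.rand(240, 320) * 255).astype(np.uint8)
snapshot  = (np.random.rand(240, 320) * 255).astype(np.uint8)
print("matching feature points:", match_score(reference, snapshot))
```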

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected in the image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to real-world locations. Removing distortion from the image using camera calibration improves accuracy in most cases. The deep learning framework Darknet is used for detection, with the modifications necessary to integrate perspective transformation, camera calibration, un-distortion, and related steps. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any type of camera that has a fixed pose or is in motion, and using more points significantly enhances the accuracy of the real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by unifying the sizes of all objects.
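
A condensed sketch of the perspective-mapping and distance step follows: given the four detected corner pixels of a region whose real dimensions are known, a perspective transform maps detected object pixels to real-world coordinates, and distances are taken in that plane. The deep-learning corner/object detection and camera un-distortion from the paper are omitted, and all coordinates below are hypothetical.

```python
# Sketch of the perspective-mapping and distance step only: the deep-learning
# corner/object detection and camera un-distortion from the paper are omitted.
# Corner pixels and object pixels below are hypothetical values.
import numpy as np
import cv2

# Four detected corner pixels of the region of interest and the region's
# known real-world size in centimetres.
corners_px = np.float32([[102, 221], [818, 203], [905, 640], [41, 655]])
real_w, real_h = 400.0, 300.0                         # cm
corners_cm = np.float32([[0, 0], [real_w, 0], [real_w, real_h], [0, real_h]])

M = cv2.getPerspectiveTransform(corners_px, corners_cm)

def to_real(pt_px):
    pt = cv2.perspectiveTransform(np.float32([[pt_px]]), M)
    return pt[0, 0]

# Two detected objects (pixel coordinates) mapped to the floor plane.
a, b = to_real((350, 420)), to_real((620, 500))
print("distance between objects: %.1f cm" % np.linalg.norm(a - b))
```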