• Title/Summary/Keyword: Image detector data

Search Results: 206

Object Tracking Method using Deep Learning and Kalman Filter (딥 러닝 및 칼만 필터를 이용한 객체 추적 방법)

  • Kim, Gicheol;Son, Sohee;Kim, Minseop;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering / v.24 no.3 / pp.495-505 / 2019
  • Typical deep learning algorithms include CNNs (Convolutional Neural Networks), which are used mainly for image recognition, and RNNs (Recurrent Neural Networks), which are used mainly for speech recognition and natural language processing. Of these, CNNs learn the filters that generate feature maps automatically from data, which has made them the mainstream approach for image recognition with excellent performance. Building on CNNs, detectors such as R-CNN have appeared to improve object detection performance, and algorithms such as YOLO (You Only Look Once) and SSD (Single Shot Multi-box Detector) have been proposed more recently. However, these deep learning-based detectors operate on still images, so stable object tracking and detection in video requires a separate tracking capability. This paper therefore proposes combining a Kalman filter with a deep learning-based detection network to improve object tracking and detection performance in video. The detection network is YOLO v2, which is capable of real-time processing; the proposed method yields a 7.7% IoU improvement over the baseline YOLO v2 network and a processing speed of 20 fps on FHD images.
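
The abstract does not spell out how the Kalman filter is coupled to the detector, so the following is a minimal, generic sketch: a constant-velocity Kalman filter smoothing the bounding-box center reported by a per-frame detector such as YOLO v2. The class name, noise parameters, and dummy detections are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): a constant-velocity Kalman
# filter that smooths per-frame bounding-box centers coming from a detector.
# State = [cx, cy, vx, vy]; measurement = [cx, cy].
import numpy as np

class BoxCenterKalman:
    def __init__(self, cx, cy, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0], dtype=float)   # state estimate
        self.P = np.eye(4)                                    # state covariance
        self.F = np.array([[1, 0, dt, 0],                     # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)        # we observe position only
        self.Q = q * np.eye(4)                                # process noise
        self.R = r * np.eye(2)                                # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, cx, cy):
        z = np.array([cx, cy], dtype=float)
        y = z - self.H @ self.x                               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: predict every frame; update only when the detector fires.
kf = BoxCenterKalman(cx=100, cy=80)
for detection in [(102, 82), None, (108, 86)]:
    predicted = kf.predict()
    if detection is not None:               # detector found the object
        estimate = kf.update(*detection)
    else:                                   # missed detection: keep the prediction
        estimate = predicted
```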

Digital Holographic Security Identification System (디지털 홀로그래픽 보안 인증 시스템)

  • Kim, Jung-Hoi;Kim, Nam;Jeon, Seok-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.89-98 / 2004
  • In this paper, we implement a digital holographic security card system that combines digital holographic memory using random-phase-encoded reference beams with electronic biometrics. Digitally encoded data, including a document, a facial photograph, and a fingerprint, are recorded by multiplexing in the holographic memory. A random phase mask encoding the reference beams is used as the decoding key to protect against counterfeiting. As a result, we achieve a raw BER of 3.6 × 10⁻⁴ and a shift selectivity of 4 μm using the 2D random phase mask. We also develop a recording pattern and image processing suitable for a low-cost reader without a position-sensing photodetector for real-time data extraction, and we remove the danger of fraud by unauthorized persons by comparing the reconstructed holographic data with live fingerprint data.
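
For readers unfamiliar with phase-mask keys, here is a toy numerical sketch of the classic double-random-phase encoding idea on which such systems are based. It is a generic illustration, not the authors' optical setup; the array sizes and random seeds are arbitrary assumptions.

```python
# Toy double-random-phase encoding: a data page is recoverable only with the
# correct Fourier-plane phase key. Not the paper's holographic implementation.
import numpy as np

rng = np.random.default_rng(0)
N = 64
data = rng.integers(0, 2, size=(N, N)).astype(float)       # binary data page

mask_in = np.exp(1j * 2 * np.pi * rng.random((N, N)))       # input-plane phase mask
mask_fourier = np.exp(1j * 2 * np.pi * rng.random((N, N)))  # Fourier-plane phase key

# Encrypt: phase-modulate the data, transform, phase-modulate again, transform back.
encrypted = np.fft.ifft2(np.fft.fft2(data * mask_in) * mask_fourier)

# Decrypt with the correct Fourier-plane key.
recovered = np.abs(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(mask_fourier)))

# Decrypt with a wrong key: the data page stays noise-like.
wrong_key = np.exp(1j * 2 * np.pi * rng.random((N, N)))
garbled = np.abs(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(wrong_key)))

print(np.allclose(recovered, data, atol=1e-6))   # True
print(np.allclose(garbled, data, atol=1e-2))     # False
```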

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (the VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a region-based convolutional neural network and state-of-the-art object detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In the simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in the complex environment, accuracy reached 81%. Our method reduces data-labeling time on the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
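
As a rough illustration of the kind of transfer learning the abstract describes, the sketch below loads a pre-trained VGG16 from torchvision, freezes its convolutional backbone, and trains a small classification head for a handful of interaction classes. The class count, optimizer settings, and dummy batch are assumptions; this is not the authors' EHL framework.

```python
# Minimal transfer-learning sketch: pre-trained VGG16 as a frozen backbone with a
# re-sized classification head for interaction classes (illustrative setup only).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5   # e.g. hugging, fighting, linking arms, talking, kidnapping

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # downloads weights
for p in vgg.features.parameters():
    p.requires_grad = False        # freeze the convolutional backbone

# Replace the final classifier layer with one sized for the interaction classes.
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(vgg.classifier.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
logits = vgg(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```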

Evaluation of Angle Optimization on Edge Test Device Setting in Modulation Transfer Function (변조전달함수 방법에서 엣지 장치 설정에 대한 각도 최적화 평가)

  • Min, Jung-Whan;Jeong, Hoi-Woun
    • Journal of radiological science and technology / v.43 no.1 / pp.15-21 / 2020
  • The purpose of this study was to evaluate the Modulation Transfer Function (MTF) measured with the edge method at several edge-device angles, following the International Electrotechnical Commission standard (IEC 62220-1). An Aero (Konica, Japan) image receptor, an indirect flat-panel detector (FPD), was used; its matrix size is 1994 × 2430 (14" × 17"), it performs 12-bit processing, and its pixel pitch is 175 μm. The results are reported as MTF measurements under the IEC standard. The amount of data was reasonable: at an MTF value of 0.1 the spatial frequency was 2.56 cycles/mm at an edge angle of 2.4°, and at an MTF value of 0.5 it was 1.32 cycles/mm at the same angle. By evaluating the MTF at edge angles from 2.0° to 2.8°, this study identifies the most effective edge angle and suggests a quantitative measurement method based on the IEC standard.
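
To make the edge method concrete, here is a minimal sketch of computing an MTF from an edge profile (ESF → LSF → FFT) at the 175 μm pixel pitch quoted above. The edge is synthetic and the angle-based oversampling prescribed by IEC 62220-1 is omitted, so the numbers it prints are illustrative only.

```python
# Minimal edge-method sketch (illustrative, not the IEC 62220-1 procedure):
# edge spread function (ESF) -> line spread function (LSF) -> MTF via FFT.
import numpy as np

pixel_pitch_mm = 0.175                       # 175 um pixel pitch, as in the abstract

# Synthetic edge profile: a slightly blurred step along x (one row suffices here).
x = np.arange(256) - 128.0
esf = 1.0 / (1.0 + np.exp(-x / 1.5))         # edge spread function

lsf = np.gradient(esf)                       # LSF is the derivative of the ESF
lsf = lsf / lsf.sum()                        # normalize so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))               # MTF is |FFT| of the LSF
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm

# Report the spatial frequency where the MTF drops to 0.5 and to 0.1.
for target in (0.5, 0.1):
    idx = np.argmax(mtf < target)
    print(f"MTF = {target}: ~{freqs[idx]:.2f} cycles/mm")
```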

A Design of Real-time Automatic Focusing System for Digital Still Camera Using the Passive Sensor Error Minimization (수동 센서의 오차 최소화를 이용한 실시간 DSC 자동초점 시스템 설계)

  • Kim, Geun-Seop;Kim, Deok-Yeong;Kim, Seong-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.5 / pp.203-211 / 2002
  • In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. Unlike typical methods, the proposed system operates in real time, adjusting focus after measuring the distance to an object with a passive sensor. In addition, measurement errors were minimized using empirically acquired data, and the optimal measuring time was obtained from the EV (Exposure Value) calculated from the CCD luminance signal. The system also adopts an auxiliary light source for focusing in completely dark conditions, which are very difficult for CCD image processing. Since this is an open-loop system that adjusts focus immediately after the distance measurement, it guarantees real-time operation. The performance of the new AF system was verified by comparing the focus-value curve obtained in AF experiments with the curve measured under MF (Manual Focusing); in both cases an edge detector was used on various objects and backgrounds.
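
The abstract does not define the focus value, so the sketch below uses a common gradient-energy (edge-based) metric as a stand-in for the quantity plotted in a focus-value curve; the metric and the synthetic images are assumptions, not the authors' method.

```python
# Assumed focus metric: gradient (edge) energy; sharper images score higher.
import numpy as np

def focus_value(gray_image: np.ndarray) -> float:
    """Sum of squared gradients; larger means sharper (better focused)."""
    gy, gx = np.gradient(gray_image.astype(float))
    return float(np.sum(gx * gx + gy * gy))

# Illustrative use: a sharp synthetic image scores higher than a blurred copy.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
print(focus_value(sharp) > focus_value(blurred))   # True
```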

Remote monitoring of urban and infrastructural areas

  • Bortoluzzi, Daniele;Casciati, Fabio;Elia, Lorenzo;Faravelli, Lucia
    • Earthquakes and Structures / v.7 no.4 / pp.449-462 / 2014
  • Seismically induced structural damage, like any damage caused by a natural catastrophic event, covers a wide area, which suggests surveying the consequences of the event with vision tools. This paper reports the evolution of results obtained in the project RADATT (RApid Damage Assessment Telematics Tool), funded by the European Commission within FP4, whose aim was to supply a rapid and reliable damage detector/estimator for an area struck by a catastrophic event. Here, a general open-source methodology for detecting and estimating the damage caused by natural catastrophes is developed. Available hazard and vulnerability data and satellite pictures covering the area of interest provide the information required by up-to-date telematics tools able to manage it, and the global damage is then detected using only open-source software. A case study of a densely built-up agglomeration of buildings is discussed to present the main details of the proposed methodology.

Implementation of Adaptive Movement Control for Waiter Robot using Visual Information

  • Nakazawa, Minoru;Guo, Qinglian;Nagase, Hiroshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.808-811 / 2009
  • Robovie-R2 [1], developed by ATR, is a 110 cm tall, 60 kg, two-wheel-drive, human-like robot. It has two arms with dynamic fingers, as well as a position-sensitive detector and two cameras as eyes on its head for recognizing its surroundings. In recent years, we have carried out a project to integrate new functions into Robovie-R2 so that it can be used in the dining room of a healthcare center to help serve meals to the elderly. As a new function, we have developed a software system for adaptive movement control of Robovie-R2, which is of primary importance, since a robot that cannot autonomously control its movement would be dangerous to people in the dining room. We use the cameras on Robovie-R2's head to capture images of the environment and apply our original algorithm to recognize obstacles such as furniture or people, so as to control Robovie-R2's movement. This paper focuses on the algorithm and its results.

Sequential detection simulation of red-tide evolution for geostationary ocean color instrument with realistic optical characteristics

  • Jeong, Soo-Min;Jeong, Yu-Kyeong;Ryu, Dong-Ok;Kim, Seong-Hui;Cho, Seong-Ick;Hong, Jin-Suk;Kim, Sug-Whan
    • Bulletin of the Korean Space Science Society / 2009.10a / pp.49.3-49.3 / 2009
  • The Geostationary Ocean Colour Imager (GOCI) is the first ocean color instrument to operate in geostationary orbit, starting from 2010. GOCI will provide crucial information on the ocean environment around the Korean peninsula at high spatial and temporal resolution in eight visible bands. We report the ongoing development of an imaging and radiometric performance prediction model for GOCI that uses realistic data for the reflectance, transmittance, absorption, wave-front error, and scattering properties of its optical elements. For the performance simulation, a Monte Carlo-based ray tracing technique was used along the optical path from the Sun to the final detector plane for a fixed solar zenith angle. This was followed by simulation of the sequential detection of red-tide evolution and estimation of its radiance, following the in-orbit operational sequence. The simulation results prove that the GOCI flight model is capable of detecting both the image and the radiance originating from key ocean phenomena, including red tide. The model details and computational process are discussed, with implications for other Earth observation instruments.
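
As a toy illustration of the Monte Carlo flavor of such a simulation (not the authors' GOCI model), the sketch below estimates the fraction of sampled rays that survive a chain of imperfect optical surfaces on the way to the detector plane; the surface count and coefficients are assumed.

```python
# Toy Monte Carlo throughput estimate: rays are lost at each surface by
# imperfect transmittance or by being scattered out of the optical path.
import numpy as np

rng = np.random.default_rng(42)

N_RAYS = 100_000
SURFACE_TRANSMITTANCE = [0.98, 0.97, 0.99, 0.96]   # per-element transmittance (assumed)
SCATTER_LOSS_PROB = 0.01                            # chance a ray is scattered away at a surface

alive = np.ones(N_RAYS, dtype=bool)
for t in SURFACE_TRANSMITTANCE:
    # A ray survives a surface if it is transmitted and not scattered away.
    survive = rng.random(N_RAYS) < t * (1.0 - SCATTER_LOSS_PROB)
    alive &= survive

throughput = alive.mean()
print(f"Estimated optical throughput: {throughput:.3f}")
# A detector-plane radiance estimate would scale the at-aperture radiance by this
# throughput plus geometric factors such as the solar zenith angle.
```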

Fast Shape Matching Algorithm Based on the Improved Douglas-Peucker Algorithm (개량 Douglas-Peucker 알고리즘 기반 고속 Shape Matching 알고리즘)

  • Sim, Myoung-Sup;Kwak, Ju-Hyun;Lee, Chang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.5 no.10 / pp.497-502 / 2016
  • Shape Context Recognition (SCR) is a technology for recognizing shapes such as figures and objects, and it strongly supports technologies such as character recognition, motion recognition, facial recognition, and situational recognition. In general, however, SCR builds histograms for all contour points and matches the extracted contours one to one when comparing shapes A and B, which makes processing slow. This paper therefore presents a simpler yet more effective algorithm based on an optimized contour: the outline of a shape is extracted and then reduced using an improved Douglas-Peucker algorithm together with a Harris corner detector. With this improved method, processing speed is noticeably faster.
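
For reference, the sketch below implements the standard (unimproved) Douglas-Peucker simplification that the paper builds on; the tolerance value and the sample contour are illustrative.

```python
# Standard Douglas-Peucker polyline simplification: keep only points that deviate
# from the chord between the endpoints by more than a tolerance epsilon.
import numpy as np

def douglas_peucker(points: np.ndarray, epsilon: float) -> np.ndarray:
    """points: (N, 2) array of contour points; returns the simplified subset."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    if chord_len == 0:                       # degenerate chord: distance to the start point
        dists = np.linalg.norm(points - start, axis=1)
    else:                                    # perpendicular distance to the chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / chord_len
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:                 # keep the farthest point, recurse on both halves
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])           # all points lie within epsilon of the chord

# Illustrative use: a noisy, nearly straight contour collapses to its endpoints.
contour = np.array([[0, 0], [1, 0.05], [2, -0.04], [3, 0.03], [4, 0]], dtype=float)
print(douglas_peucker(contour, epsilon=0.1))   # [[0. 0.] [4. 0.]]
```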

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae;Jung, Bo Hee;Kong, Hyun-Bae;Ok, Chang-Min;Lee, Seung-Tae
    • Current Optics and Photonics / v.3 no.1 / pp.8-15 / 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analyzing the detection range of the LADAR sensor, and the result was applied to the design of a common optical system using LADAR and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) of the designed common optical system relative to the diffraction limit was analyzed: the MTF of the LADAR optical system was 98.8% at a spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at a spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
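
For context on the quoted MTF percentages, the sketch below evaluates the textbook diffraction-limited MTF of a circular aperture, the reference against which a designed system's MTF is compared; the wavelength and f-number are illustrative assumptions, not the paper's design parameters.

```python
# Textbook diffraction-limited MTF of a circular aperture (incoherent imaging).
import numpy as np

def diffraction_mtf(freq_cyc_per_mm: np.ndarray, wavelength_mm: float, f_number: float) -> np.ndarray:
    cutoff = 1.0 / (wavelength_mm * f_number)          # cutoff spatial frequency (cycles/mm)
    nu = np.clip(freq_cyc_per_mm / cutoff, 0.0, 1.0)   # normalized frequency
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu * nu))

# Example: LWIR at 10 um (0.010 mm) with an assumed f/2 system, evaluated at the
# spatial frequencies quoted in the abstract.
freqs = np.array([5.0, 29.0])                          # cycles/mm
print(diffraction_mtf(freqs, wavelength_mm=0.010, f_number=2.0))
```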