• Title/Summary/Keyword: Optical feature

Human Action Recognition Based on An Improved Combined Feature Representation

  • Zhang, Ning;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.12
    • /
    • pp.1473-1480
    • /
    • 2018
  • Extracting and recognizing human motion characteristics requires combining biometrics to judge behavior during movement and to distinguish individual identities. Biometric technology authenticates identity from the body's inherent biological characteristics, whose most notable properties are invariance and uniqueness. Because earlier behavior recognition methods based on a single characteristic were too restrictive, this paper proposes a mixed feature that combines a global silhouette feature with a local optical flow feature and uses this combined representation for human action recognition. The KTH database is used to train and test the recognition system, and the experiments yield very satisfactory results.
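For orientation only, the sketch below (not the authors' code) shows one way such a mixed representation could be assembled with OpenCV: Hu moments of a thresholded silhouette stand in for the global part, and a magnitude-weighted orientation histogram of dense Farnebäck optical flow stands in for the local part. The function name `combined_feature` and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def combined_feature(prev_gray, gray, n_bins=8):
    """Toy combined descriptor: global silhouette (Hu moments) + local optical-flow
    histogram. Illustrative stand-in for the paper's mixed feature, not the authors' code."""
    # Global part: silhouette from an Otsu threshold, summarized by Hu moments.
    _, silhouette = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(silhouette)).flatten()

    # Local part: dense Farneback optical flow, summarized by a
    # magnitude-weighted orientation histogram.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)

    # Mixed feature = concatenation of the global and local parts.
    return np.concatenate([hu, hist])
```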

A New Ocular Torsion Measurement Method Using Iterative Optical Flow

  • Lee InBum;Choi ByungHun;Kim SangSik;Park Kwang Suk
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.3
    • /
    • pp.133-138
    • /
    • 2005
  • This paper presents a new method for measuring ocular torsion using optical flow. Iris images were cropped and transformed into orientation-invariant rectangular images. Feature points in the iris region were selected from a reference image and a target image according to corner strength, and the shift of each feature was calculated with the iterative Lucas-Kanade method. The accuracy of the algorithm was tested using printed eye images, in which torsion was measured with 0.15° precision. The proposed method remains robust under gaze direction changes and the pupillary reflex in a real-time processing environment.
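A minimal sketch of the general approach, assuming OpenCV and grayscale eye images: `cv2.warpPolar` unwraps the iris annulus so that torsion becomes a vertical shift in the polar image, corner-strength features come from `cv2.goodFeaturesToTrack`, and the shifts are estimated with the pyramidal (iterative) Lucas-Kanade tracker. The function `ocular_torsion_deg`, the pupil `center`/`radius` inputs, and the use of the median shift are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def ocular_torsion_deg(ref_eye, tgt_eye, center, radius):
    """Estimate torsion as the median vertical shift of iris features in polar
    coordinates (1 row per degree). Illustrative sketch only; `center` and `radius`
    are assumed to come from a separate pupil detection step."""
    size = (int(radius), 360)  # width = radius samples, height = 360 rows (degrees)
    # Unwrap the iris annulus: rotation about the pupil center becomes a row shift.
    ref_pol = cv2.warpPolar(ref_eye, size, center, radius, cv2.WARP_POLAR_LINEAR)
    tgt_pol = cv2.warpPolar(tgt_eye, size, center, radius, cv2.WARP_POLAR_LINEAR)

    # Select feature points by corner strength on the reference polar image.
    pts = cv2.goodFeaturesToTrack(ref_pol, maxCorners=100, qualityLevel=0.01, minDistance=5)
    if pts is None:
        return 0.0

    # Track them with the pyramidal (iterative) Lucas-Kanade method.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref_pol, tgt_pol, pts, None)
    good = status.ravel() == 1
    dy = nxt[good, 0, 1] - pts[good, 0, 1]  # row shift in the polar image

    return float(np.median(dy))  # degrees, given one row per degree
```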

Review on the Optical Link Technologies for the Gigabit-per-second Wavelength-Division-Multiplexing Passive Optical Networks (WDM-PONs) (기가급 WDM-PON을 위한 광기술 분석)

  • Park Tae-Sang;Park Kun-Youl;Kim Jin-Hee;Yoon Ho-Sung
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2006.08a
    • /
    • pp.11-14
    • /
    • 2006
  • This paper reviews the optical link technologies developed for gigabit-per-second wavelength-division-multiplexing passive optical networks (WDM-PONs). As with the 100 Mb/s WDM-PON systems deployed for trial services by KT, the most important requirement for a 1 Gb/s WDM-PON is wavelength independence (colorlessness) of its ONU/ONTs, which enables convenient operation and cost-effective maintenance with minimal inventory cost. Among the various methods for implementing such a colorless feature, four promising candidates for gigabit WDM-PON are analyzed together with their development issues, and their expected performances are compared.

Feature Extraction of Off-line Handwritten Characters Based on Optical Neural Field (시각 신경계 반응 모델에 근거한 필기체 off-line 문자에서의 특징 추출)

  • Hong, Keong-Ho;Jeong, Eun-Hwa;Ahn, Byung-Chul
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.12
    • /
    • pp.3530-3538
    • /
    • 1999
  • In this paper, we propose a novel feature extraction method for off-line handwritten character recognition based on a model of the human optical neural field. The proposed feature extraction system consists of three parts: 1) smoothing, 2) removing boundary lines, and 3) extracting feature information. The system first removes the rough pixels that commonly occur in handwritten characters, then extracts and removes boundary information that has no influence on character recognition, and finally extracts the feature information used for off-line handwritten character recognition. Feature extraction experiments were performed with the PE2 Hangul database. The results show that the proposed optical-neural-field system can extract feature information from off-line handwritten characters, including curved lines, circles, quadrangles, and so on.
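As a rough, assumed stand-in for the three stages described above (the paper's optical neural field model is not reproduced), the sketch below uses OpenCV: median filtering for smoothing, a single erosion to discard the boundary outline, and zoning densities as the extracted features. All names and parameters are illustrative.

```python
import cv2
import numpy as np

def extract_features(char_img, grid=4, size=32):
    """Rough three-stage stand-in for the pipeline described above (smoothing,
    boundary removal, feature extraction). `char_img` is assumed to be a grayscale
    character image; the optical neural field model itself is not reproduced."""
    # 1) Smoothing: suppress the rough, isolated pixels common in handwriting.
    smooth = cv2.medianBlur(char_img, 3)
    _, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Boundary removal: erode once so the one-pixel outline, which carries
    #    little information for recognition, is discarded and the stroke body remains.
    interior = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=1)

    # 3) Feature extraction: zoning (pixel density on a coarse grid), a simple
    #    structural descriptor that responds to curves, circles, and quadrangles.
    interior = cv2.resize(interior, (size, size), interpolation=cv2.INTER_NEAREST)
    cells = interior.reshape(grid, size // grid, grid, size // grid)
    return cells.mean(axis=(1, 3)).flatten() / 255.0
```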

Antiblurry Dejitter Image Stabilization Method of Fuzzy Video for Driving Recorders

  • Xiong, Jing-Ying;Dai, Ming;Zhao, Chun-Lei;Wang, Ruo-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.6
    • /
    • pp.3086-3103
    • /
    • 2017
  • Video captured by vehicle cameras often contains blurry or jittery frames caused by unintentional motion over bumps in the road or by insufficient illumination in the morning or evening, which greatly reduces how well objects can be perceived and recognized from the recordings. A real-time electronic stabilization method for correcting fuzzy video from driving recorders is therefore proposed. In the feature detection stage, a coarse-to-fine inspection policy and a nonlinear scale diffusion filter are proposed to provide more accurate keypoints. Second, a new anti-blur binary descriptor and a feature point selection strategy for unintentional-motion estimation are proposed, which bring more discriminative power. In addition, a new evaluation criterion for affine region detectors based on the percentage interval of repeatability is presented. Experiments show that the proposed method improves the detection of blurry corner points while also improving overall performance and maintaining high processing speed.
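The abstract does not spell out the new percentage-interval criterion, so the sketch below only shows the conventional repeatability score for keypoints related by a known homography, which such a criterion would presumably build on. The function `repeatability`, the tolerance `tol`, and the optional KAZE detector (a nonlinear diffusion scale-space detector, used here only as a loose stand-in for the paper's detector) are assumptions.

```python
import cv2
import numpy as np

def repeatability(kps_a, kps_b, H, tol=3.0):
    """Conventional repeatability score: fraction of keypoints from image A that,
    after warping by the known homography H into image B, land within `tol` pixels
    of some keypoint detected in B. Illustrative only; the paper's interval-based
    criterion is not reproduced here."""
    pts_a = np.float32([kp.pt for kp in kps_a]).reshape(-1, 1, 2)
    pts_b = np.float32([kp.pt for kp in kps_b])
    warped = cv2.perspectiveTransform(pts_a, H).reshape(-1, 2)

    # A keypoint counts as "repeated" if its warped position has a neighbor in B.
    dists = np.linalg.norm(warped[:, None, :] - pts_b[None, :, :], axis=2)
    repeated = (dists.min(axis=1) < tol).sum()
    return repeated / max(min(len(kps_a), len(kps_b)), 1)

# Assumed usage with a nonlinear-diffusion detector (KAZE) as a rough stand-in:
# kps_a = cv2.KAZE_create().detect(img_a, None)
# kps_b = cv2.KAZE_create().detect(img_b, None)
```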

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot with thrusters and a reaction wheel as actuators, floating on the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs used as ceiling landmarks. A fusion filter typically uses the positions of feature points in the image as measurements; however, if the bias of the MEMS IMU is not properly estimated by the filter, this approach can cause position errors whenever the camera image is unavailable. Therefore, a fusion method is proposed that uses both the positions of feature points and the camera velocity determined from the optical flow of those feature points. Experiments verify that the proposed method is more robust to IMU bias than the method that uses feature point positions alone.
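To illustrate why an optical-flow velocity measurement helps when landmarks drop out, here is a toy one-axis Kalman filter (not the paper's EKF) with state [position, velocity, accelerometer bias]. All matrices and noise values are made-up assumptions.

```python
import numpy as np

# Toy 1-axis filter: state x = [position, velocity, accel_bias]^T.
# IMU acceleration drives the prediction; vision provides position (landmarks)
# and, in the proposed scheme, velocity derived from the optical flow of features.
dt = 0.01
F = np.array([[1, dt, -0.5 * dt**2],
              [0, 1, -dt],
              [0, 0, 1]])              # bias is subtracted from the measured acceleration
B = np.array([[0.5 * dt**2], [dt], [0]])
Q = np.diag([1e-6, 1e-4, 1e-8])

H_pos = np.array([[1.0, 0.0, 0.0]])    # landmark position measurement
H_vel = np.array([[0.0, 1.0, 0.0]])    # optical-flow velocity measurement
R_pos, R_vel = np.array([[1e-2]]), np.array([[1e-2]])

def predict(x, P, acc_meas):
    x = F @ x + B * acc_meas
    return x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(3) - K @ H) @ P

# When landmarks are not visible, the velocity update alone still constrains
# bias-driven drift: x, P = update(x, P, z_flow_velocity, H_vel, R_vel)
```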

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.1
    • /
    • pp.65-71
    • /
    • 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as keypoints and line edges. SLAM performance can degrade under various environmental changes, and the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, that estimates dynamic objects through multiple channels. An optical flow algorithm and a deep learning-based object detector each estimate a different type of dynamic object information. The proposed method combines the two sources of information into multi-channel dynamic masks, capturing both objects that are actually moving and potentially dynamic objects. Finally, dynamic objects covered by the masks are removed in the feature extraction stage, allowing the proposed method to obtain more precise camera poses. Its superiority over conventional ORB-SLAM was verified in experiments on the KITTI odometry dataset.
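A minimal sketch of the idea, assuming OpenCV and ORB-SLAM-style keypoints: one channel marks regions that are actually moving (dense optical flow magnitude), another marks potentially dynamic objects (detector bounding boxes), and keypoints falling inside either channel are discarded before pose estimation. The function names, box format, and flow threshold are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def dynamic_masks(prev_gray, gray, det_boxes, flow_thresh=2.0):
    """Two-channel dynamic mask: channel 0 = actually moving regions (optical flow),
    channel 1 = potentially dynamic objects (detector boxes). Sketch only;
    det_boxes are assumed to be [(x1, y1, x2, y2), ...] from any detector."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    moving = (mag > flow_thresh).astype(np.uint8)

    potential = np.zeros_like(moving)
    for x1, y1, x2, y2 in det_boxes:
        potential[y1:y2, x1:x2] = 1

    return np.stack([moving, potential], axis=0)

def filter_keypoints(kps, descs, masks):
    """Drop ORB keypoints that fall inside any dynamic channel before pose estimation."""
    keep = [i for i, kp in enumerate(kps)
            if not masks[:, int(kp.pt[1]), int(kp.pt[0])].any()]
    return [kps[i] for i in keep], descs[keep]
```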

OptiNeural System for Optical Pattern Classification

  • Kim, Myung-Soo
    • Journal of Electrical Engineering and information Science
    • /
    • v.3 no.3
    • /
    • pp.342-347
    • /
    • 1998
  • An OptiNeural system is developed for optical pattern classification. It is a novel hybrid system consisting of an optical processor and a multilayer neural network, taking advantage of the two-dimensional processing capability of the optical processor and the nonlinear mapping capability of the neural network. The optical processor, with a binary phase-only filter, is used as a preprocessor for feature extraction, and the neural network serves as the decision stage through its learned mapping. The OptiNeural system is trained for optical pattern classification using a simulated annealing algorithm. Its classification performance on grey-tone texture patterns is excellent, whereas a conventional optical system shows poor classification performance.
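The optical front end (the binary phase-only filter) cannot be reproduced in a few lines, but the sketch below illustrates a simulated-annealing weight search on precomputed feature vectors, the training strategy named in the abstract. The network size, cooling schedule, and perturbation scale are all assumed, and the code is a toy stand-in rather than the OptiNeural implementation.

```python
import numpy as np

def anneal_train(features, labels, hidden=16, T0=1.0, cooling=0.995, steps=5000, rng=None):
    """Toy simulated-annealing search over the weights of a one-hidden-layer classifier
    operating on precomputed (optical) feature vectors. Illustrative only."""
    rng = rng or np.random.default_rng(0)
    n_in, n_out = features.shape[1], labels.max() + 1
    W1 = rng.normal(0, 0.1, (n_in, hidden))
    W2 = rng.normal(0, 0.1, (hidden, n_out))

    def loss(W1, W2):
        logits = np.tanh(features @ W1) @ W2
        return np.mean(np.argmax(logits, axis=1) != labels)  # classification error

    best = cur = loss(W1, W2)
    T = T0
    for _ in range(steps):
        # Propose a small random perturbation of the weights.
        dW1 = rng.normal(0, 0.01, W1.shape)
        dW2 = rng.normal(0, 0.01, W2.shape)
        cand = loss(W1 + dW1, W2 + dW2)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if cand < cur or rng.random() < np.exp((cur - cand) / T):
            W1, W2, cur = W1 + dW1, W2 + dW2, cand
            best = min(best, cand)
        T *= cooling
    return W1, W2, best
```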

Quantum electrodynamical formulation of photochemical acid generation and its implications on optical lithography

  • Seungjin Lee
    • ETRI Journal
    • /
    • v.46 no.5
    • /
    • pp.774-782
    • /
    • 2024
  • Photochemical acid generation is reformulated from the first principles of quantum electrodynamics. First, we briefly review the quantum theory of light within the quantum electrodynamics framework to establish the probability of acid generation at a given spacetime point. This quantum mechanical acid generation is then combined with the deprotection mechanism to obtain a probabilistic description of the deprotection density, which is directly related to feature formation in a photoresist. A statistical analysis of the random deprotection density is presented to reveal the leading characteristics of stochastic feature formation.
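The abstract gives no explicit formula; purely for orientation, a conventional chemically amplified resist deprotection model (an assumption here, not taken from the paper) relates an acid concentration field $A$ to the remaining protecting-group density $M$ as

$$\frac{\partial M(\mathbf{r},t)}{\partial t} = -k_{\mathrm{amp}}\,M(\mathbf{r},t)\,A(\mathbf{r},t) \;\;\Longrightarrow\;\; M(\mathbf{r}) = M_0\exp\!\Big(-k_{\mathrm{amp}}\!\int_0^{t_{\mathrm{PEB}}}\! A(\mathbf{r},t')\,dt'\Big),$$

so that if $A$ is a random field whose statistics follow from the quantum-electrodynamical photon-absorption probability, the deprotection density $M_0 - M$ inherits that randomness.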

Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.345-354
    • /
    • 2010
  • Visual tracking techniques for augmented reality are classified as either marker tracking or natural feature tracking. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. Natural feature tracking methods, on the other hand, require many computationally expensive procedures: most previous methods perform heavy feature extraction and pattern matching for every input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices, and the computational cost also grows in proportion to the number of patterns to be matched. To speed up natural feature tracking, we propose a fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and is suitable for mobile augmented reality applications. During tracking, we also maintain the total number of feature points by inserting new feature points in proportion to the number of vanished ones. Experimental results show that the proposed method reduces computational cost and stabilizes the camera pose estimation results.
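A minimal sketch of this tracking-with-replenishment loop, assuming OpenCV and grayscale frames; the point budget `MAX_POINTS`, the window size, and the masking radius are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

MAX_POINTS = 200

def track_step(prev_gray, gray, prev_pts):
    """One tracking step: propagate points with pyramidal Lucas-Kanade optical flow,
    then top the set back up to MAX_POINTS as points vanish. Sketch of the general
    idea only, not the authors' implementation."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None,
                                              winSize=(21, 21), maxLevel=3)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)

    # Replenish: detect new corners away from surviving points so the total
    # number of tracked features stays roughly constant.
    missing = MAX_POINTS - len(pts)
    if missing > 0:
        mask = np.full(gray.shape, 255, np.uint8)
        for x, y in pts.reshape(-1, 2):
            cv2.circle(mask, (int(x), int(y)), 10, 0, -1)
        fresh = cv2.goodFeaturesToTrack(gray, maxCorners=missing,
                                        qualityLevel=0.01, minDistance=10, mask=mask)
        if fresh is not None:
            pts = np.vstack([pts, fresh])
    return pts
```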