• Title/Summary/Keyword: Auto recognition


Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Using the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically locate images of the different modalities present in the facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) yielded Rank-1 recognition rates of 99.17% and 97.14%, respectively. This accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
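The score-level fusion over available modalities described above can be sketched as follows; the min-max normalisation, equal-weight averaging, and all names are illustrative assumptions, not the paper's exact fusion rule:

```python
import numpy as np

def fuse_scores(modality_scores):
    """Score-level fusion over the modalities that are present.

    modality_scores: dict mapping modality name -> 1-D array of
    per-identity match scores, or None if the modality is missing.
    Returns the index of the best-matching identity and the fused scores.
    """
    fused, n_available = None, 0
    for name, scores in modality_scores.items():
        if scores is None:                      # modality missing at test time
            continue
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()                 # min-max normalise so the
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)  # modalities are comparable
        fused = s if fused is None else fused + s
        n_available += 1
    fused /= n_available                        # equal-weight average
    return int(np.argmax(fused)), fused

best, fused = fuse_scores({
    "frontal_face": [0.2, 0.9, 0.1],
    "left_ear":     [0.3, 0.8, 0.2],
    "right_ear":    None,                       # missing modality is skipped
})
```

A modality mapped to `None` is simply left out of the fused average, which is how the sketch mimics testing with missing modalities.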

A New Application of Human Visual Simulated Images in Optometry Services

  • Chang, Lin-Song;Wu, Bo-Wen
    • Journal of the Optical Society of Korea
    • /
    • v.17 no.4
    • /
    • pp.328-335
    • /
    • 2013
  • Due to the rapid advancement of auto-refractor technology, most optometry shops provide refraction services. Despite their speed and convenience, the measurements provided by auto-refractors include a significant degree of error due to psychological and physical factors, so repetitive testing is needed to obtain a smaller mean error. However, even repetitive testing might not be sufficient to ensure accurate measurements, and a complementary method that can confirm refraction results is therefore needed. The customized optometry model described herein satisfies these requirements. With existing technologies, using human eye measurement devices to obtain the relevant optical feature parameters of an individual is no longer difficult, and these parameters allow us to construct an optometry model for individual eyeballs. They also allow us to compute the visual images produced by the optometry model using the CODE V macro programming language, and then to recognize the diffraction-affected visual images with a neural network algorithm to obtain an accurate refractive diopter. This study combines the optometry model with a back-propagation neural network to achieve a double-check recognition effect that complements the auto-refractor. Results show that the accuracy achieved was above 98% and that this application could significantly enhance the quality of refraction services.
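As a rough illustration of the back-propagation step the abstract relies on, the toy sketch below trains a one-hidden-layer network on synthetic data standing in for simulated-image features and diopter values; the data, the linear target relation, and all hyperparameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 4 "image features" -> a scalar "diopter" value.
# The linear relation below is assumed purely for this sketch.
X = rng.normal(size=(64, 4))
y = X @ np.array([1.0, -0.5, 0.25, 0.0])

# One hidden layer with tanh activation, trained by plain back-propagation.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    err = (h @ W2 + b2).ravel() - y
    g_out = (2.0 / len(y)) * err[:, None]     # gradient of mean-squared error
    gW2 = h.T @ g_out
    g_h = g_out @ W2.T * (1.0 - h ** 2)       # back-propagate through tanh
    gW1 = X.T @ g_h
    W2 -= lr * gW2; b2 -= lr * g_out.sum(0)
    W1 -= lr * gW1; b1 -= lr * g_h.sum(0)

mse = float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2))
```

After training, the mean-squared error on the toy data drops well below its initial value, which is all the sketch is meant to show.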

Automatic conversion of machining data by the recognition of press mold (프레스 금형의 특징형상 인식에 의한 가공데이터 자동변환)

  • 최홍태;반갑수;이석희
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1994.04a
    • /
    • pp.703-712
    • /
    • 1994
  • This paper presents an automatic conversion of machining data from the orthographic views of a press mold using feature recognition rules. The system includes the following six modules: separation of views, function support, dimension text recognition, feature recognition, dimension text check, and feature processing. The characteristic of this system is that, with minimum user intervention, it recognizes basic features such as holes, slots, pockets, and clamping parts, and thus automatically converts CAD drawing details of a press mold into machining data using a 2D CAD system instead of an expensive 3D modeler. The system was developed on an IBM PC in the environment of AutoCAD R12, AutoLISP, and MetaWare High C. The performance of the system was verified as a good interface between CAD and CAM when applied to a large number of sample drawings.
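The dimension-text recognition module can be illustrated with a small parser; the "n-Dd DPx" drawing notation assumed here (n holes of diameter d, depth x) is invented for the sketch and need not match the paper's drawing convention:

```python
import re

# Parse dimension text such as "4-D10 DP15" (four 10 mm holes, 15 mm deep)
# into machining parameters. Hypothetical notation, for illustration only.
PATTERN = re.compile(r"(?:(\d+)-)?D(\d+(?:\.\d+)?)(?:\s*DP(\d+(?:\.\d+)?))?")

def parse_dimension(text):
    m = PATTERN.fullmatch(text.strip())
    if not m:
        return None
    count, dia, depth = m.groups()
    return {"count": int(count or 1),
            "diameter": float(dia),
            "depth": float(depth) if depth else None}  # None = through hole

assert parse_dimension("4-D10 DP15") == {"count": 4, "diameter": 10.0, "depth": 15.0}
assert parse_dimension("D6.5") == {"count": 1, "diameter": 6.5, "depth": None}
```

A real module would also validate the parsed values against the geometry found in the views (the "dimension text check" step).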

CAD/CAPP System based on Manufacturing Feature Recognition (제조특징인식에 의한 CAD/CAPP 시스템)

  • Cho, Kyu-Kab;Kim, Suk-Jae
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.8 no.1
    • /
    • pp.105-115
    • /
    • 1991
  • This paper describes an integrated CAD and CAPP system for prismatic parts of injection molds, which generates a complete process plan automatically from the CAD data of a part without human intervention. The system employs AutoCAD as the CAD model and GS-CAPP as the automatic process planning system for injection molds. The proposed CAD/CAPP system consists of three modules: a CAD data conversion module, a manufacturing feature recognition module, and a CAD/CAPP interface module. The CAD data conversion module transforms AutoCAD design data into three-dimensional part data. The manufacturing feature recognition module extracts the specific manufacturing features of a part using a feature recognition rule base; each feature is recognized by combining the geometry, position, and size of the feature. The CAD/CAPP interface module links manufacturing feature codes and other header data to the automatic process planning system. The CAD/CAPP system can improve the efficiency of process planning activities and reduce the time required for process planning, and it provides a basis for the development of part-feature-based design by analyzing manufacturing features.
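The rule-base idea, where a feature is recognized by jointly matching its attributes, can be sketched minimally as below; the attribute names and rules are hypothetical, and a real rule base would also test position and size:

```python
# Each candidate feature carries attribute values; a rule fires only when
# all of its conditions match the candidate (illustrative rules only).
RULES = [
    {"name": "through_hole", "geometry": "cylinder", "through": True},
    {"name": "blind_hole",   "geometry": "cylinder", "through": False},
    {"name": "pocket",       "geometry": "prism",    "through": False},
]

def recognize(candidate):
    """Return the name of the first rule whose conditions all match."""
    for rule in RULES:
        if all(candidate.get(k) == v for k, v in rule.items() if k != "name"):
            return rule["name"]
    return "unknown"

assert recognize({"geometry": "cylinder", "through": True}) == "through_hole"
```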


Lidar Based Object Recognition and Classification (자율주행을 위한 라이다 기반 객체 인식 및 분류)

  • Byeon, Yerim;Park, Manbok
    • Journal of Auto-vehicle Safety Association
    • /
    • v.12 no.4
    • /
    • pp.23-30
    • /
    • 2020
  • Recently, self-driving research has been actively conducted at various institutions. Accurate recognition is important because information about surrounding objects is needed for safe autonomous driving. This study mainly deals with the signal processing of LiDAR, a sensor widely used for its high recognition accuracy, for object recognition. First, we clustered and tracked objects by predicting their relative positions and speeds. The characteristic points of each object were extracted from its point cloud data using the proposed algorithm, and the classification between vehicles and pedestrians is estimated using the number of characteristic points and the distances among them. The algorithm for classifying cars and pedestrians was implemented and verified using a test vehicle equipped with LiDAR sensors. The accuracy of the proposed object classification algorithm was about 97%, an improvement of about 13.5% over a deep-learning-based algorithm.
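The classification idea can be sketched minimally using bounding-box corners as stand-in characteristic points and an edge-length threshold; both are simplifications of the proposed algorithm, and the 1.5 m threshold is an assumed value:

```python
import numpy as np

def characteristic_points(points):
    """Reduce an object's 2-D point cloud to a few shape points
    (here: axis-aligned bounding-box corners; the paper's extraction
    rule is more elaborate)."""
    pts = np.asarray(points, float)
    lo, hi = pts.min(0), pts.max(0)
    return np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                     [hi[0], hi[1]], [lo[0], hi[1]]])

def classify(points, length_thresh=1.5):
    """Label an object by the spread of its characteristic points:
    vehicles produce long edges, pedestrians compact clusters."""
    cp = characteristic_points(points)
    edges = np.linalg.norm(np.diff(np.vstack([cp, cp[:1]]), axis=0), axis=1)
    return "vehicle" if edges.max() > length_thresh else "pedestrian"

# car-like cluster: roughly 4 m long
assert classify([[0, 0], [4.2, 0.3], [2.0, 1.5]]) == "vehicle"
# pedestrian-like cluster: roughly 0.5 m across
assert classify([[0, 0], [0.4, 0.2], [0.2, 0.5]]) == "pedestrian"
```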

A Study on Iris Image Restoration Based on Focus Value of Iris Image (홍채 영상 초점 값에 기반한 홍채 영상 복원 연구)

  • Kang Byung-Jun;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.30-39
    • /
    • 2006
  • Iris recognition identifies a user based on the unique texture patterns of the iris, the region that dilates or contracts the pupil. Iris recognition systems extract the iris pattern from an iris image captured by an iris recognition camera, so recognition performance is affected by the quality of the captured image: if the iris image is blurred, the iris pattern is distorted, which increases the FRR (False Rejection Rate). Optical defocusing is the main cause of blurred iris images. Conventional iris recognition cameras use one of two focusing methods: fixed focusing or auto-focusing. With fixed focusing, users must repeatedly align their eyes within the DOF (Depth of Field) until the system acquires a well-focused iris image, which is very inconvenient for the users. With auto-focusing, the camera moves a focus lens under an auto-focusing algorithm to capture the best-focused image, but this requires additional hardware such as a sensor to measure the distance between the user and the camera lens and a motor to move the focus lens; the size and cost of the camera therefore increase, and such a camera cannot be used in small mobile devices. To overcome these problems, we propose a method that increases the DOF through an iris image restoration algorithm based on the focus value of the iris image. When we tested the proposed algorithm with a BM-ET100 made by Panasonic, the operating range increased from 48-53 cm to 46-56 cm.
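A common way to compute a focus value of the kind the abstract relies on is to measure high-frequency energy in the image; the Laplacian-style kernel below is one standard choice and may differ from the kernel used in the paper:

```python
import numpy as np

def focus_value(img):
    """Focus score from high-frequency energy: convolve with a
    Laplacian-style kernel and average the squared responses."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    h, w = img.shape
    resp = np.zeros((h - 2, w - 2))
    for i in range(3):                 # valid-mode 2-D convolution by shifts
        for j in range(3):
            resp += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(np.mean(resp ** 2))

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))
# crude defocus: 3x3 box blur of the sharp image
blurred = sum(sharp[i:i + 30, j:j + 30] for i in range(3) for j in range(3)) / 9.0
```

Blurring removes high-frequency content, so the blurred image scores a lower focus value than the sharp one; a restoration algorithm can use this score to decide how strongly to deblur.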

Development of Automatic Feature Recognition System for CAD/CAPP Interface (CAD/CAPP 인터페이스를 위한 형상특징의 자동인식시스템 개발)

  • 오수철;조규갑
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.16 no.1
    • /
    • pp.31-40
    • /
    • 1992
  • This paper presents an automatic feature recognition system for recognizing and extracting the feature information needed as process planning input from a 3D CAD system. A given part is modeled using AutoCAD, and feature information is automatically extracted from the AutoCAD database. The parts considered in this study are prismatic parts composed of faces perpendicular to the X, Y, and Z axes, and the feature types recognized by the proposed system are through steps, blind steps, through slots, blind slots, and pockets. Features are recognized using the concept of convex and concave points. Case studies were carried out to evaluate the feasibility of the proposed system. The system was programmed in Turbo Pascal on an IBM PC/AT, on which AutoCAD and the proposed system were implemented.
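The convex/concave point concept can be sketched with a cross-product test on a counter-clockwise polygon boundary; this 2-D simplification stands in for the system's face-based analysis:

```python
def turn_types(polygon):
    """Classify each vertex of a counter-clockwise 2-D polygon as
    convex or concave via the cross product of its adjacent edges."""
    n = len(polygon)
    labels = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = (polygon[i - 1], polygon[i],
                                        polygon[(i + 1) % n])
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        labels.append("convex" if cross > 0 else "concave")
    return labels

# A CCW rectangle with a step notch cut into its top edge:
poly = [(0, 0), (4, 0), (4, 2), (3, 2), (3, 1), (1, 1), (1, 2), (0, 2)]
labels = turn_types(poly)
# the concave vertices (3, 1) and (1, 1) flag where the notch feature sits
```

Runs of concave points along the boundary are exactly what signals a step, slot, or pocket candidate.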

A Study on Real-time Vehicle Recognition and Tracking in Car Video (차량에 장착되어 있는 영상의 전방의 차량 인식 및 추적에 관한 연구)

  • Park, Daehyuck;Lee, Jung-hun;Seo, Jeong Goo;Kim, Jihyung;Jin, Seogsig;Yun, Tae-sup;Lee, Hye;Xu, Bin;Lim, Younghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.254-257
    • /
    • 2015
  • Vehicle recognition technology is attracting attention as a way to warn the driver of hazards such as an impending collision or to control the vehicle automatically. In this paper, we set a region of interest in the input image where vehicles may appear, detect vehicle candidate regions with Haar-like features and the AdaBoost algorithm using a pre-trained detector, apply a clustering technique to remove overlapping regions, track the vehicles across frames with a Kalman filter, and then apply the clustering technique to the overlapping regions once more.
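The Kalman-filter tracking stage mentioned above can be sketched with a constant-velocity model; the state layout, matrices, and noise values below are illustrative, not the paper's tuning:

```python
import numpy as np

# Constant-velocity Kalman filter tracking a detected vehicle's
# bounding-box centre. State x = [px, py, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0],  [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)    # observe position only
Q = np.eye(4) * 1e-2                                  # process noise
R = np.eye(2) * 1e-1                                  # measurement noise

x = np.zeros(4)
P = np.eye(4)

def step(z):
    """One predict/update cycle given a detection z = (px, py)."""
    global x, P
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    x = x_pred + K @ (np.asarray(z, float) - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x

for t in range(1, 20):
    est = step((1.0 * t, 0.5 * t))   # target moving at (1, 0.5) per frame
```

After a few frames the filter's velocity estimate converges to the target's motion, which is what lets the tracker bridge frames where the detector misses the vehicle.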


A Study on Real-time Pedestrian Recognition and Tracking in Car Video (차량에 장착되어 있는 영상의 주변의 보행자를 인식 및 추적을 위한 연구)

  • Park, Daehyuck;Lee, Jung-hun;Yun, Tae-sup;Seo, Jeong Goo;Kim, Jihyung;Lee, Hye;Xu, Bin;Jin, Seogsig;Lim, Younghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.258-261
    • /
    • 2015
  • In this paper, we use video captured from a vehicle to find nearby pedestrians while driving, and, in order to recognize pedestrians at risk of an accident, we study how to identify pedestrians and measure the distance to them. We investigated pedestrian recognition technology using a camera mounted on the vehicle. The proposed method applies Cascade HOG and Haar-like algorithms in the pedestrian recognition stage, and combines a Kalman filter with a clustering technique in the tracking stage to recognize and track pedestrians in real time.
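The HOG feature used in the recognition stage can be illustrated for a single cell; this simplified version uses unsigned gradient orientations and omits the block normalisation a full Cascade HOG detector would apply:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Histogram of oriented gradients for one cell: central-difference
    gradients, unsigned orientation bins over [0, pi), magnitude-weighted."""
    gx = np.zeros_like(patch); gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    return hist

# A horizontal intensity ramp has purely horizontal gradients,
# so all the energy should land in the first orientation bin.
patch = np.tile(np.arange(8.0), (8, 1))
h = hog_cell(patch)
```

Concatenating such histograms over a grid of cells gives the descriptor that the pedestrian/non-pedestrian classifier is trained on.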


Appearance-based Object Recognition Using Higher Order Local Auto Correlation Feature Information (고차 국소 자동 상관 특징 정보를 이용한 외관 기반 객체 인식)

  • Kang, Myung-A
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.7
    • /
    • pp.1439-1446
    • /
    • 2011
  • This paper describes the algorithm that lowers the dimension, maintains the object recognition and significantly reduces the eigenspace configuration time by combining the higher correlation feature information and Principle Component Analysis. Since the suggested method doesn't require a lot of computation than the method using existing geometric information or stereo image, the fact that it is very suitable for building the real-time system has been proved through the experiment. In addition, since the existing point to point method which is a simple distance calculation has many errors, in this paper to improve recognition rate the recognition error could be reduced by using several successive input images as a unit of recognition with K-Nearest Neighbor which is the improved Class to Class method.