• Title/Summary/Keyword: Parts Image Recognition


Real-Time Object Recognition Using Local Features (지역 특징을 사용한 실시간 객체인식)

  • Kim, Dae-Hoon;Hwang, Een-Jun
    • Journal of IKEEE / v.14 no.3 / pp.224-231 / 2010
  • Automatic detection of objects in images has been one of the core challenges in areas such as computer vision and pattern analysis. With the recent spread of personal mobile devices such as smartphones, this technology increasingly needs to run on them. Smartphones are typically equipped with a camera, GPS, and gyroscope and provide various services through a user-friendly interface, but their limited system resources prevent high recognition performance. In this paper, we propose a new scheme to improve object recognition performance based on pre-computation and simple local features. In the pre-processing stage, we first find several representative parts from objects of similar type and classify them. We then extract features from each classified part and train regression functions on them. For a given query image, we first find candidate representative parts and compare them with the trained information to recognize objects. Experiments show that the proposed scheme achieves reasonable performance.
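
A minimal sketch of the kind of lightweight local-feature matching the abstract describes, with ORB standing in for the paper's "simple local features" (the representative-part classification and regression training are not reproduced; paths and thresholds are illustrative):

```python
# Lightweight local-feature matching with ORB (a stand-in for the paper's simple
# local features, suited to resource-constrained devices such as smartphones).
import cv2

def match_local_features(query_path: str, part_path: str, min_matches: int = 20) -> bool:
    """Return True if the query image shares enough local features with a representative part."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    part = cv2.imread(part_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)               # cheap binary descriptor
    _, query_desc = orb.detectAndCompute(query, None)
    _, part_desc = orb.detectAndCompute(part, None)
    if query_desc is None or part_desc is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_desc, part_desc)
    good = [m for m in matches if m.distance < 64]    # heuristic distance cutoff
    return len(good) >= min_matches

# e.g. match_local_features("query.jpg", "representative_part.jpg")
```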

Development of an HTM Network Training System for Recognition of Molding Parts (부품 이미지 인식을 위한 HTM 네트워크 훈련 시스템 개발)

  • Lee, Dae-Han;Bae, Sun-Gap;Seo, Dae-Ho;Kang, Hyun-Syug;Bae, Jong-Min
    • Journal of Korea Multimedia Society / v.13 no.11 / pp.1643-1656 / 2010
  • Small factories that produce many kinds of goods in small quantities need a system for judging defective products in order to minimize losses. Such a system can be built on HTM (Hierarchical Temporal Memory) theory, a model that applies the operating principles of the neocortex in the human brain to machine learning. Using an HTM-based machine learning system requires a trained HTM network, which in turn requires knowledge of HTM theory. This paper presents the design and implementation of a training system that supports the development of HTM networks for recognizing molding parts and judging their defects. The training system allows field technicians to train an HTM network to high accuracy without knowledge of HTM theory, and it can be applied to any kind of HTM-based inspection system for molding parts.
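
HTM itself needs a dedicated implementation (e.g. Numenta's), so the sketch below only mirrors the train-then-judge workflow the abstract describes, with a generic classifier standing in for the HTM network; folder names and image sizes are assumptions:

```python
# Train-then-judge workflow with a generic classifier standing in for the HTM
# network; field technicians would only need to supply folders of example images.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

def load_parts(folder: str, label: int, size=(32, 32)):
    """Load grayscale part images and flatten them into feature vectors."""
    rows = []
    for path in Path(folder).glob("*.png"):
        img = Image.open(path).convert("L").resize(size)
        rows.append((np.asarray(img, dtype=np.float32).ravel() / 255.0, label))
    return rows

data = load_parts("parts/good", 0) + load_parts("parts/defective", 1)
X = np.array([vec for vec, _ in data])
y = np.array([lbl for _, lbl in data])
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)   # stand-in for the trained network

def judge(image_path: str) -> str:
    img = Image.open(image_path).convert("L").resize((32, 32))
    vec = np.asarray(img, dtype=np.float32).ravel().reshape(1, -1) / 255.0
    return "defective" if model.predict(vec)[0] == 1 else "good"
```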

Multimodal biometrics system using PDA under ubiquitous environments (유비쿼터스 환경에서 PDA를 이용한 다중생체인식 시스템 구현)

  • Kwon Man-Jun;Yang Dong-Hwa;Kim Yong-Sam;Lee Dae-Jong;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.4 / pp.430-435 / 2006
  • In this paper, we propose a multimodal biometrics system using face and signature under ubiquitous computing environments. First, face and signature images are captured on a PDA; these images, together with the user ID and name, are transmitted via WLAN (Wireless LAN) to the server, and the PDA finally receives the verification result from the server. The multimodal biometrics system consists of two parts. The client part, running on the PDA, provides the user interface for registration and verification. The server performs face recognition with PCA and LDA, which shows excellent performance, and signature recognition with Kernel PCA and LDA applied to signature images projected onto the vertical and horizontal axes by a grid partition method. The proposed algorithm is evaluated on several face and signature images and shows better recognition and verification results than previous unimodal biometric techniques.
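
The server-side pipelines named above (PCA then LDA for faces, Kernel PCA then LDA for signature projections) can be sketched as follows; the array shapes, component counts, and the two-modality acceptance rule are illustrative assumptions:

```python
# PCA->LDA for face vectors and KernelPCA->LDA for signature projection vectors,
# combined with a simple "both modalities must agree" verification rule.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

X_face = np.random.rand(100, 64 * 64)   # placeholder: flattened face images
X_sig = np.random.rand(100, 128)        # placeholder: grid-partition axis projections
y = np.repeat(np.arange(10), 10)        # 10 enrolled users, 10 samples each

face_model = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis()).fit(X_face, y)
sig_model = make_pipeline(KernelPCA(n_components=40, kernel="rbf"),
                          LinearDiscriminantAnalysis()).fit(X_sig, y)

def verify(face_vec, sig_vec, claimed_id) -> bool:
    """Accept only if both modalities predict the claimed identity."""
    return (face_model.predict(face_vec.reshape(1, -1))[0] == claimed_id and
            sig_model.predict(sig_vec.reshape(1, -1))[0] == claimed_id)
```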

Development of Hole Expansion Test for Sheet Materials Using Pattern-Recognition Technique (형태 인식 기술을 이용한 판재의 홀 확장성 평가 시스템 개발)

  • Jang, Seung Hyun;Kim, Chan Il;Yang, Seung Han;Kim, Young Suk
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.2 / pp.161-168 / 2013
  • One of the main concerns of the automobile industry today is producing vehicles that combine collision safety with low CO2 emissions. This dual goal is pursued by choosing materials such as dual-phase steel and ferrite-bainite steel. These steels are used in automotive chassis and body parts and are often formed by hole flanging to meet strength and design requirements. The formability of sheet material is evaluated by the hole expansion test, where the judgement has relied on the human eye and the inspector's experience; this manual judgement involves many errors and large deviations. This paper develops an automatic crack recognition system that finds cracks in CCD images to overcome the problems of the current method, which depends on human perception.
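
A minimal version of the automatic crack check could look like the following edge/contour test on the CCD image of the expanded hole; the blur kernel, Canny thresholds, and area cutoff are assumptions, not the paper's tuned values:

```python
# Edge/contour-based crack check on a CCD image of the expanded hole rim.
import cv2

def has_crack(image_path: str, min_crack_area: float = 50.0) -> bool:
    """Return True if an edge contour large enough to be a crack is found."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                # highlight candidate crack boundaries
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) > min_crack_area for c in contours)
```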

Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2005.06a / pp.1202-1205 / 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method does not require a constrained target pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and which of the registered persons in the database it is. To segment the moving target from the background scene, we employ a background subtraction technique and some spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify whether the detected target is a human. If so, the cardboard model is also used to segment body parts such as the head, torso, and legs to obtain salient features. The whole-body silhouette is also analyzed to obtain shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trouser color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand; when a person wears new clothes, the system fails to identify them, which shows that height alone is not enough to classify persons. We plan to extend the work by adding more cues such as skin color and face recognition, utilizing the zoom capability of the camera to obtain a high-resolution view of the face, and then evaluate the system with more subjects.
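
The segmentation step described above (background subtraction plus spatial filtering, then a bounding box from which the height and clothing-color cues are taken) can be sketched as below; the video path and filter parameters are assumptions, and the cardboard-model alignment is not reproduced:

```python
# Background subtraction and spatial filtering to isolate the moving person;
# the bounding-box height is the raw height cue used for identification.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
cap = cv2.VideoCapture("surveillance.avi")             # assumed input video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground mask
    mask = cv2.medianBlur(mask, 5)                     # spatial filtering
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        # h is the height cue in pixels; shirt and trouser colors would be sampled
        # from the torso and leg regions of frame[y:y+h, x:x+w].
cap.release()
```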


Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.93-101 / 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, in order to increase the accuracy of object recognition when traffic objects are detected with a video camera. This 3D bounding box generation algorithm is applied when vehicles captured by a traffic video camera are to be detected using artificial intelligence. The vertical vanishing point (VP1) and the horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and the moving objects in the video under analysis are localized on this basis. With this algorithm it is easy to obtain object information such as the location, type, and size of each detected object, and for moving objects such as cars, tracking yields each object's position, coordinates, speed, and direction of movement. When applied to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking in occluded regions (small parts of vehicles hidden behind large cars) improved by 100%, and the accuracy of traffic data analysis was also improved.
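
The core geometric idea, that parallel scene lines meet at a vanishing point in the image, reduces to a line intersection in homogeneous coordinates; the example points below are invented for illustration, and the full 3D bounding-box construction from VP1/VP2 is not reproduced:

```python
# Estimate a vanishing point as the intersection of two image lines
# (homogeneous coordinates), e.g. two lane markings converging at the horizon.
import numpy as np

def line_through(p1, p2):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def intersect(line_a, line_b):
    """Intersection of two homogeneous lines as (x, y) pixel coordinates."""
    x, y, w = np.cross(line_a, line_b)
    return (x / w, y / w)

left_lane = line_through((100, 700), (500, 300))
right_lane = line_through((1100, 700), (700, 300))
vp = intersect(left_lane, right_lane)    # -> (600.0, 200.0), a vanishing-point estimate
```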

Development of Learning Algorithm using Brain Modeling of Hippocampus for Face Recognition (얼굴인식을 위한 해마의 뇌모델링 학습 알고리즘 개발)

  • Oh, Sun-Moon;Kang, Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.55-62 / 2005
  • In this paper, we propose a face recognition system using HNMA (Hippocampal Neuron Modeling Algorithm), which models the cerebral cortex and hippocampal neurons on the working principles of the human brain; it can learn the feature vectors of face images very quickly and construct an optimized feature set for each image. The system is composed of two parts: feature extraction, and learning and recognition. The feature extraction part constructs well-separated features by applying PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) in order. In the learning part, the features of the input image data are mapped, following the order of the hippocampal neuron structure, to reaction patterns in the dentate gyrus region, noise is removed through associative memory in the CA3 region, and the CA1 region, receiving the information from CA3, forms the long-term memory learned by the neurons. Experiments measure the recognition rate under face changes, pose changes, and low-quality images. The results show that the proposed feature extraction and learning method outperforms existing methods.

The Moderating Effect of Self-Regulatory Efficacy in the Relationship between Self-image and Perception of the Nurse's Image of Nursing Students (간호대학생의 자아상과 간호사 이미지 지각과의 관계에서 자기조절 효능감의 조절효과)

  • Ha, Yun Ju;Min, Soon;Kim, Eun A
    • The Journal of Korean Academic Society of Nursing Education / v.19 no.3 / pp.405-412 / 2013
  • Purpose: The purpose of this study was to examine how the self-image of nursing students affects their perception of the nurse's image and to verify the moderating effect of self-regulatory efficacy on the relationship between self-image and perception of the nurse. Methods: The study was carried out with 768 nursing students from 10 universities across the country from June 18, 2012 to July 13, 2012. Data were analyzed with SPSS (frequency, ANOVA, and hierarchical multiple regression analysis), and the moderating effect of self-regulatory efficacy on the relationship between self-image and perception of the nurse was measured. Results: For male students, self-regulatory efficacy affected the perception of the nurse. For female students, both self-image and self-regulatory efficacy were statistically significant, and the moderating effect of self-regulatory efficacy was also statistically significant. Conclusions: Colleges of nursing need to provide students with opportunities, such as relevant classes, to learn how to build a positive self-image, since a positive self-image can affect a nurse's identity. As the moderating effect of self-regulatory efficacy proved significant, ways should also be sought for nursing students to gain recognition of nursing professionalism during their training.
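
Statistically, the moderating effect amounts to testing an interaction term in the regression; a minimal sketch with statsmodels is shown below, where the CSV file and column names are assumptions:

```python
# Moderation test: self-regulatory efficacy moderates the self-image ->
# nurse-image relationship if the interaction coefficient is significant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nursing_survey.csv")   # assumed columns: nurse_image, self_image, self_reg_efficacy

# 'a * b' in the formula expands to both main effects plus their interaction a:b.
model = smf.ols("nurse_image ~ self_image * self_reg_efficacy", data=df).fit()
print(model.summary())    # inspect the self_image:self_reg_efficacy coefficient
```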

A Pilot Study on Outpainting-powered Pet Pose Estimation (아웃페인팅 기반 반려동물 자세 추정에 관한 예비 연구)

  • Gyubin Lee;Youngchan Lee;Wonsang You
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.69-75 / 2023
  • In recent years, there has been growing interest in deep learning-based animal pose estimation, especially for animal behavior analysis and healthcare. However, existing animal pose estimation techniques do not perform well when body parts are occluded or fall outside the image. In particular, occlusion of a dog's tail or ears can significantly degrade pet behavior and emotion recognition. In this paper, to address this problem, we propose a simple yet novel framework for pet pose estimation in which the pose is predicted on an outpainted image: body parts hidden outside the input image are reconstructed by an image outpainting network that precedes the pose estimation network. We performed a preliminary study to test the feasibility of the proposed approach, assessing CE-GAN and BAT-Fill for image outpainting and SimpleBaseline for pet pose estimation. Our experimental results show that pet pose estimation on outpainted images generated with BAT-Fill outperforms pose estimation on the original input images without outpainting.
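
The proposed arrangement is simply "outpaint first, then estimate pose on the enlarged image"; the wrappers below are hypothetical placeholders (the paper uses BAT-Fill and SimpleBaseline, each with its own codebase):

```python
# Two-stage pipeline skeleton: outpainting reconstructs body parts cut off at the
# image border, and pose estimation then runs on the expanded image.
import numpy as np

def outpaint(image: np.ndarray, margin: int = 64) -> np.ndarray:
    """Hypothetical wrapper around an outpainting model such as BAT-Fill."""
    raise NotImplementedError

def estimate_pose(image: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper around a pose estimator such as SimpleBaseline."""
    raise NotImplementedError

def pose_with_outpainting(image: np.ndarray) -> np.ndarray:
    expanded = outpaint(image)             # cropped/occluded body parts are reconstructed here
    return estimate_pose(expanded)         # keypoints as (num_joints, 3): x, y, confidence
```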

Behavior Recognition Algorithm Using Skeleton Vector Information and RNN Learning (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. With two-dimensional data alone, the recognition rate for behavior in three-dimensional space was low, while the other approaches were impractical because of complicated equipment configurations and expensive additional hardware. In this paper, we propose a method of recognizing human behavior using only RGB CCTV images, without additional equipment or depth sensors. First, a skeleton extraction algorithm is applied to extract the joint and body-part points. These points are transformed into vectors, including displacement vectors and relational vectors, and the resulting sequences of vectors are learned with an RNN model. Applying the learned model to various data sets and measuring the recognition accuracy shows that 2D information alone achieves performance similar to existing algorithms that use 3D information.
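
The sequence-learning step (an RNN over per-frame skeleton vectors built from displacement and relational vectors) can be sketched as follows; the feature dimension, hidden size, and class count are illustrative assumptions:

```python
# LSTM over per-frame skeleton vectors; the final hidden state is classified
# into a behavior class.
import torch
import torch.nn as nn

class SkeletonRNN(nn.Module):
    def __init__(self, feature_dim: int = 50, hidden_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                   # x: (batch, frames, feature_dim)
        _, (h_n, _) = self.lstm(x)          # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n[-1])     # behavior-class logits

model = SkeletonRNN()
clips = torch.randn(4, 30, 50)              # 4 clips, 30 frames, 50-dim skeleton vectors
logits = model(clips)                       # shape (4, 10)
```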