• Title/Summary/Keyword: mobile vision system

Search results: 292

A Comprehensive Survey of Lightweight Neural Networks for Face Recognition (얼굴 인식을 위한 경량 인공 신경망 연구 조사)

  • Yongli Zhang;Jaekyung Yang
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.1 / pp.55-67 / 2023
  • Lightweight face recognition models, one of the most popular and long-standing topics in computer vision, have developed rapidly and are widely used in real-world applications thanks to their small parameter counts, low floating-point operation counts, and compact model sizes. However, few surveys have reviewed these lightweight models and re-implemented them with the same computing resources and training dataset. In this survey article, we present a comprehensive review of recent research on end-to-end efficient lightweight face recognition models and re-implement several of the most popular ones. We first give an overview of face recognition with lightweight models. Then, based on how the models are constructed, we categorize them into: (1) manually designed lightweight FR models, (2) pruned models for face recognition, (3) efficient architectures obtained automatically through neural architecture search, (4) knowledge distillation, and (5) low-rank decomposition. As examples, we also introduce SqueezeFaceNet and EfficientFaceNet, obtained by pruning SqueezeNet and EfficientNet. Additionally, we re-implement the models and present a detailed performance comparison on nine different test benchmarks. Finally, challenges and future work are discussed. Our survey makes three main contributions: first, the categorization makes lightweight models easy to identify, so that new lightweight models for face recognition can be explored; second, the comprehensive performance comparison helps practitioners choose a model when deploying a state-of-the-art end-to-end face recognition system on mobile devices; third, the stated challenges and future trends inspire our future work.
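
As a hedged illustration of the pruning family surveyed above, the sketch below applies magnitude-based weight pruning to a toy embedding network in PyTorch; the backbone, pruning ratio, and embedding size are assumptions for illustration, not the survey's SqueezeFaceNet/EfficientFaceNet recipes.

```python
# A minimal sketch of magnitude-based weight pruning, one of the model
# compression families the survey categorizes. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy embedding backbone standing in for a face-recognition model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 128),   # 128-D face embedding (assumed size)
)

# Prune 50% of the smallest-magnitude weights in each conv/linear layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"parameters: {total}, zeroed by pruning: {zeros}")
```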

Design and Implementation of an Emotion Recognition System using Physiological Signal (생체신호를 이용한 감정인지시스템의 설계 및 구현)

  • O, Ji-Soo;Kang, Jeong-Jin;Lim, Myung-Jae;Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.57-62 / 2010
  • Recently, the mobile market has developed communication technologies based on the senses of sight, sound, and touch. However, human beings use all five senses to communicate: sight, hearing, taste, smell, and touch. This paper therefore presents a technology that enables individuals to become aware of other people's emotions through a machine: the device perceives the tone of voice, body temperature, pulse, and other physiological signals to recognize the emotion the sending individual is experiencing, and once the emotion is recognized, a scent is emitted to the receiving individual. A system that coordinates the emission of scent according to emotional changes is proposed.
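
The sensing-to-actuation pipeline described above (physiological signals → recognized emotion → emitted scent) could be sketched roughly as follows; the signal features, thresholds, and scent mapping are invented for illustration and are not the authors' recognition model.

```python
# A hypothetical sketch of the pipeline: read physiological signals, map
# them to an emotion label, then trigger a scent. Thresholds are made up.
from dataclasses import dataclass

@dataclass
class BioSignals:
    pulse_bpm: float        # heart rate
    skin_temp_c: float      # body/skin temperature
    voice_pitch_hz: float   # tone of voice (fundamental frequency)

def classify_emotion(s: BioSignals) -> str:
    # Very rough illustrative rules, not the paper's recognition model.
    if s.pulse_bpm > 100 and s.voice_pitch_hz > 220:
        return "excited"
    if s.pulse_bpm > 100:
        return "stressed"
    if s.skin_temp_c < 35.5:
        return "calm"
    return "neutral"

def emit_scent(emotion: str) -> str:
    # Placeholder for driving a scent-emitting device on the receiver side.
    scent_map = {"excited": "citrus", "stressed": "lavender",
                 "calm": "pine", "neutral": "none"}
    return scent_map.get(emotion, "none")

signals = BioSignals(pulse_bpm=108, skin_temp_c=36.4, voice_pitch_hz=240)
emotion = classify_emotion(signals)
print(emotion, "->", emit_scent(emotion))
```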

A Study on the Characteristics of Methods for Experiencing Contents and Network Technologies in the Exhibition space applied with Location Based Service - Focus on T.um as the Public Exhibition Center for a Telecommunication Company - (위치기반서비스(LBS) 적용 전시관의 콘텐츠 체험방식과 기술특성에 관한 연구 - 이동통신 기업홍보관 티움(T.um)을 중심으로 -)

  • Yi, Joo-Hyoung
    • Korean Institute of Interior Design Journal / v.19 no.5 / pp.173-181 / 2010
  • Opened in November 2008 as the public exhibition center of a telecommunication company, T.um is dedicated to presenting the future ubiquitous technologies and business vision of the company leading the domestic mobile communication business to prospective global clients and business partners. Since the public opening, more than 18,000 visitors from 112 nations have visited T.um, and the media have continuously released news about the ubiquitous museum. For these reasons, T.um is regarded as a successful public exhibition center. The most distinctive quality of the museum is the Location Based Service (LBS) technology established in the initial construction stage. A visitor anywhere in T.um can be detected by digital devices equipped with GPS, and the LBS system allows visitors to obtain information about the relevant technologies, as well as instructions on how to operate each content item at their own spot, through smartphones connected over a wireless network. This study focuses on analyzing and defining the technical characteristics of T.um in order to provide basic data for subsequent LBS-based exhibition space projects. Special methods for experiencing contents can be designed in the planning stage by utilizing the network system applied to T.um.

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas including machine vision, surveillance, human-computer interaction, image-based control, and so on, and various algorithms have been developed over the years. In many cases, however, they show limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without considering illumination changes. Our new algorithm instead constructs a 3D color model by analyzing many images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
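
The 3D color model idea can be illustrated with a small sketch that builds a full three-channel histogram from skin samples gathered under several lighting conditions and back-projects it onto a new frame; the synthetic data and OpenCV calls below are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of a 3D (three-channel) color model: accumulate a full
# BGR histogram from skin patches, then back-project it to find face-like
# pixels in a new frame. Synthetic data stands in for real camera images.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Fake "skin patches" captured under different lighting conditions (BGR, uint8).
skin_samples = [
    np.clip(rng.normal(loc=(120, 140, 200), scale=15, size=(64, 64, 3)),
            0, 255).astype(np.uint8)
    for _ in range(5)
]
skin_stack = np.concatenate(skin_samples, axis=0)   # one tall sample image

# Full 3D histogram over all three color channels (8 bins per channel).
hist = cv2.calcHist([skin_stack], [0, 1, 2], None, [8, 8, 8],
                    [0, 256, 0, 256, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# A new frame containing a skin-colored square on a dark background.
frame = np.zeros((200, 200, 3), np.uint8)
frame[60:140, 60:140] = (120, 140, 200)

# Back-project the 3D model onto the frame; high values mark face-like pixels.
likelihood = cv2.calcBackProject([frame], [0, 1, 2], hist,
                                 [0, 256, 0, 256, 0, 256], 1)
ys, xs = np.where(likelihood > 128)
if len(ys):
    print("face-like region: rows", ys.min(), "-", ys.max(),
          "cols", xs.min(), "-", xs.max())
```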


Non-Marker Based Mobile Augmented Reality Technology Using Image Recognition (이미지 인식을 이용한 비마커 기반 모바일 증강현실 기법 연구)

  • Jo, Hui-Joon;Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.258-266 / 2011
  • Augmented reality (AR) technology is now easily found around us, as its application areas have spread into various forms and its usage has become generalized and many-sided. Existing camera-vision-based AR has relied on marker-based methods rather than using information from the real world; marker-based AR limits the applicable areas and the degree to which a user can become immersed in the application. In this paper, we propose an AR method in which objects are recognized from real-world data and the related 3D contents are displayed, using image processing and the camera embedded in a smart mobile device, without any markers. Objects are recognized by comparing the input against pre-registered reference images; in this process, we minimize the amount of similarity computation to improve processing speed, taking the characteristics of smart mobile devices into account. The proposed method also supports interaction through touch events after the 3D contents are displayed on the screen, after which a user can obtain object-related information through a web browser according to his or her choice. With the described system, we analyzed and compared the degree of object recognition, processing speed, and recognition error against existing AR technologies. The experimental results, obtained in smart mobile environments, show that the proposed technique can be considered an appropriate alternative AR technology.
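
A hedged sketch of the reference-image matching step is given below, using ORB features and brute-force Hamming matching as an illustrative, mobile-friendly choice; the synthetic images and the decision threshold are assumptions, not necessarily the authors' method.

```python
# A minimal sketch of marker-less recognition by matching a camera frame
# against a pre-registered reference image. ORB + Hamming matching is an
# illustrative choice for low-cost mobile matching.
import numpy as np
import cv2

rng = np.random.default_rng(1)

# Pre-registered reference image (stand-in for a real object photo).
reference = rng.integers(0, 256, (240, 320), dtype=np.uint8)

# "Camera frame": the reference pasted into a larger, darker scene.
scene = np.full((480, 640), 40, dtype=np.uint8)
scene[100:340, 150:470] = reference

orb = cv2.ORB_create(nfeatures=500)          # cheap binary features
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_scene, des_scene = orb.detectAndCompute(scene, None)

matches = []
if des_ref is not None and des_scene is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_scene), key=lambda m: m.distance)

# Simple decision rule: enough good matches -> object recognized, and the
# 3D content would then be anchored at the matched location on screen.
good = [m for m in matches if m.distance < 40]
print(f"{len(good)} good matches -> recognized: {len(good) > 20}")
```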

Indoor Location Positioning System for Image Recognition based LBS (영상인식 기반의 위치기반서비스를 위한 실내위치인식 시스템)

  • Kim, Jong-Bae
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.49-62 / 2008
  • This paper proposes an indoor location positioning system for image-recognition-based LBS. The proposed system is a vision-based positioning system that implements augmented reality by overlaying the location results on the user's view. It uses pattern matching and a location model to recognize the user's location from images taken by a wearable mobile PC with a camera. User location is estimated by image sequence matching and marker detection, and is recognized by means of a pre-defined location model. To detect markers in image sequences, the system applies an adaptive thresholding method, and by using the location model to recognize a location, more accurate and efficient results can be obtained. Experimental results show that the proposed system has both the quality and the performance to be used as an indoor location-based service (LBS) for visitors in various environments.
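
The marker-detection step can be sketched with adaptive thresholding followed by contour extraction, as below; the synthetic frame and parameter values are illustrative assumptions rather than the paper's settings.

```python
# A minimal sketch of marker detection: adaptive thresholding followed by
# contour extraction, on a synthetic frame containing one dark square
# "marker". Parameters are illustrative.
import numpy as np
import cv2

# Synthetic indoor frame: bright background with a dark square marker.
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[80:160, 120:200] = 30

# Adaptive thresholding copes with uneven indoor lighting better than a
# single global threshold (block size 31, offset 10).
binary = cv2.adaptiveThreshold(frame, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)

# Candidate markers = sufficiently large external contours.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 1000:
        x, y, w, h = cv2.boundingRect(c)
        print("marker candidate at", (x, y), "size", (w, h))
        # A pre-defined location model would map this marker's identity and
        # position to the user's indoor location.
```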


The Obstacle Avoidance Algorithm of Mobile Robot using Line Histogram Intensity (Line Histogram Intensity를 이용한 이동로봇의 장애물 회피 알고리즘)

  • 류한성;최중경;구본민;박무열;방만식
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1365-1373 / 2002
  • In this paper, we present two vision algorithms for obstacle avoidance by a mobile robot equipped with a CCD camera. The approach is a simple algorithm that compares grey levels in the input images. The mobile robot relies on image processing and movement commands from a host PC. We have built a self-controlled mobile robot system with a CCD camera, consisting of a digital signal processor, step motors, an RF module, and the camera; a wireless RF module transmits movement commands between the robot and the host PC. The robot moves straight ahead until it recognizes an obstacle in the input image, which is preprocessed by edge detection, conversion, and thresholding, and it avoids the obstacle once it is recognized by the line histogram intensity. The host PC measures the intensity waveform along vertical lines sampled every 20 pixels, where each histogram consists of the (x, y) pixel values; for example, the first line histogram intensity waveform runs from (0, 0) to (0, 197) and the last from (280, 0) to (280, 197). We then distinguish uniform-wave regions from non-uniform-wave regions: the region where the wave is uniform corresponds to the obstacle region. We expect this algorithm to be very useful for obstacle avoidance by mobile robots.
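
A minimal sketch of the line-histogram-intensity test is shown below: the grey level is sampled along vertical lines spaced 20 pixels apart, and lines with a nearly uniform profile are flagged as the obstacle region, following the abstract; the synthetic image and the uniformity threshold are assumptions.

```python
# A minimal sketch of the line-histogram-intensity idea: sample the grey
# level along vertical lines every 20 pixels and flag lines whose profile
# is nearly uniform, which the abstract associates with an obstacle region.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 198x281 grey image: textured floor with a flat (uniform) block
# standing in for an obstacle between columns 120 and 200.
img = rng.integers(80, 180, size=(198, 281)).astype(np.uint8)  # textured background
img[:, 120:200] = 60                                           # uniform obstacle

obstacle_columns = []
for x in range(0, img.shape[1], 20):          # one vertical line every 20 px
    profile = img[:, x].astype(np.float32)    # intensity wave along the line
    if profile.std() < 5.0:                   # nearly uniform wave
        obstacle_columns.append(x)

print("obstacle detected at columns:", obstacle_columns)
```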

A Micro-robotic Platform for Micro/nano Assembly: Development of a Compact Vision-based 3 DOF Absolute Position Sensor (마이크로/나노 핸들링을 위한 마이크로 로보틱 플랫폼: 비전 기반 3자유도 절대위치센서 개발)

  • Lee, Jae-Ha;Breguet, Jean Marc;Clavel, Reymond;Yang, Seung-Han
    • Journal of the Korean Society for Precision Engineering / v.27 no.1 / pp.125-133 / 2010
  • A versatile micro-robotic platform for micro/nano-scale assembly has been in demand in a variety of application areas such as micro-biology and nanotechnology. In the near future, a flexible and compact platform could be used effectively inside a scanning electron microscope chamber. We are developing a platform that consists of miniature mobile robots and a compact positioning stage with multiple degrees of freedom. This paper presents the design and implementation of a low-cost, compact multi-degree-of-freedom position sensor capable of measuring absolute translational and rotational displacement. The proposed sensor is implemented with a CMOS image sensor and a target with specific hole patterns. A statistical design of experiments was applied to find the optimal design of the target. Efficient algorithms for image processing and absolute position decoding are discussed, and a simple calibration that eliminates the influence of target fabrication inaccuracy on the measuring performance is also presented. The developed sensor was characterized with a laser interferometer; the sensor system has sub-micron resolution and an accuracy of ±4 µm over the full travel range. The proposed vision-based sensor is cost-effective and can be used as a compact feedback device in the implementation of a micro-robotic platform.
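
Once the target's hole centroids are matched between the reference pattern and the camera image, the planar translation and rotation can be recovered with a least-squares rigid fit; the sketch below uses a generic SVD-based fit with made-up coordinates and is not necessarily the authors' decoding algorithm.

```python
# A minimal sketch: recover planar translation and rotation from matched
# hole centroids with a least-squares rigid (Kabsch-style) fit.
import numpy as np

def fit_rigid_2d(ref, obs):
    """Least-squares 2D rotation + translation mapping ref points to obs."""
    ref_c, obs_c = ref.mean(axis=0), obs.mean(axis=0)
    H = (ref - ref_c).T @ (obs - obs_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = obs_c - R @ ref_c
    return R, t

# Reference hole pattern (units: micrometres, made up for illustration).
ref = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)

# Simulated observation: pattern rotated by 2 degrees and shifted.
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
obs = ref @ R_true.T + np.array([12.5, -7.0])

R, t = fit_rigid_2d(ref, obs)
print("estimated rotation (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])))
print("estimated translation:", t)
```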

Comparison of LoG and DoG for 3D reconstruction in haptic systems (햅틱스 시스템용 3D 재구성을 위한 LoG 방법과 DoG 방법의 성능 분석)

  • Sung, Mee-Young;Kim, Ki-Kwon
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.711-721 / 2012
  • The objective of this study is to propose an efficient 3D reconstruction method for developing a stereo-vision-based haptic system that can replace "robotic eyes" and "robotic touch." Haptic rendering of 3D images requires capturing the depth and edge information of stereo images. This paper proposes 3D reconstruction methods that use the LoG (Laplacian of Gaussian) and DoG (Difference of Gaussians) algorithms for edge detection, in addition to the basic 3D depth extraction method, for better haptic rendering. Experiments are performed to evaluate the CPU time and error rates of the two methods, and the results lead us to conclude that the DoG method is more efficient for haptic rendering. This paper may contribute to the investigation of effective methods for 3D image reconstruction, for example in improving the performance of mobile patrol robots.
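
The LoG/DoG comparison can be reproduced in spirit with the sketch below, which times both operators on a synthetic image; the kernel sizes and sigmas are illustrative choices, not the paper's settings.

```python
# A minimal sketch comparing LoG and DoG edge responses and their CPU time
# on a synthetic image.
import time
import numpy as np
import cv2

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (480, 640), dtype=np.uint8)
img[200:280, 260:380] = 255           # a bright rectangle to produce edges

def log_edges(gray):
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)
    return cv2.Laplacian(blurred, cv2.CV_32F)              # Laplacian of Gaussian

def dog_edges(gray):
    g1 = cv2.GaussianBlur(gray, (5, 5), 1.0)
    g2 = cv2.GaussianBlur(gray, (5, 5), 1.6)
    return g1.astype(np.float32) - g2.astype(np.float32)   # Difference of Gaussians

for name, fn in [("LoG", log_edges), ("DoG", dog_edges)]:
    start = time.perf_counter()
    for _ in range(100):
        edges = fn(img)
    elapsed = (time.perf_counter() - start) / 100
    print(f"{name}: {elapsed * 1e3:.2f} ms per frame, max response {edges.max():.1f}")
```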

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.6 / pp.65-72 / 2009
  • This paper presents a vision-based localization method for indoor mobile robots that uses the types of vertices extracted from a monocular image. In the images captured by the robot's camera, the vertex types are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices in the images, the vertex types and geometric constraints are derived from a geometric analysis. The vertices are matched with the corners by a heuristic method that uses the types and positions of the vertices and corners. From the matched pairs, nonlinear equations are derived from the perspective and rigid transformations, and the pose of the robot is computed by solving these equations with a least-squares optimization technique. Experimental results show that the proposed localization method is effective and applicable to localization in indoor environments.
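
The final pose-estimation step can be illustrated with a simplified, bearing-only planar formulation solved by least squares; the map corners, measurements, and initial guess below are assumptions for illustration, not the paper's full perspective model.

```python
# A minimal sketch: once image vertices are matched to known 2-D map
# corners, solve for the robot pose (x, y, heading) by least squares.
# Here each matched corner contributes a bearing measurement.
import numpy as np
from scipy.optimize import least_squares

# Known 2-D map corners (metres) and a simulated true pose.
corners = np.array([[2.0, 1.0], [4.0, 3.0], [1.0, 4.0], [5.0, 0.5]])
true_pose = np.array([2.5, 2.0, np.deg2rad(30)])          # x, y, heading

def predicted_bearings(pose, pts):
    x, y, theta = pose
    return np.arctan2(pts[:, 1] - y, pts[:, 0] - x) - theta

# Simulated bearing observations with a small measurement offset.
observed = predicted_bearings(true_pose, corners) + 0.01

def residuals(pose):
    d = predicted_bearings(pose, corners) - observed
    return np.arctan2(np.sin(d), np.cos(d))   # wrap angle differences

# Start from a rough initial pose estimate and refine by least squares.
result = least_squares(residuals, x0=np.array([2.0, 2.0, 0.0]))
x, y, theta = result.x
print(f"estimated pose: x={x:.2f} m, y={y:.2f} m, heading={np.degrees(theta):.1f} deg")
```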