• Title/Summary/Keyword: embedded vision


EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.89-93 / 2009
  • Embedded systems are becoming more popular as embedded platforms become more affordable, offering compact solutions to many problems, including computer vision applications. Texture classification can be used to solve a variety of problems, and implementing it on embedded platforms helps bring such applications to market. This paper proposes deploying texture classification algorithms onto an embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM in MATLAB achieves 50 ms, making it the fastest algorithm on the PC platform; the classification speed achieved in C on the PC and ECV platforms is 43 ms and 3708 ms, respectively. Raw GLCM reaches 90.86% accuracy, compared with 91.06% for the combined feature set (GLCM and Gabor filters). Evaluating all results in terms of classification speed and accuracy, raw GLCM is the more suitable choice for the ECV platform.
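
A rough sketch of the two feature types compared in this entry, using scikit-image; the distances, angles, quantization level, and Gabor frequencies below are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch of GLCM and Gabor feature extraction (parameters are illustrative,
# not those used in the paper above).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor

def glcm_features(patch, levels=32):
    """Quantize a grayscale patch and compute a few GLCM statistics."""
    q = (patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)  # reduce gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4)):
    """Mean magnitude of Gabor responses over a few frequencies and orientations."""
    img = patch.astype(np.float64) / 255.0
    feats = []
    for f in frequencies:
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(img, frequency=f, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a texture patch
combined = np.hstack([glcm_features(patch), gabor_features(patch)])  # "combination feature"
print(combined.shape)
```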


Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operations, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computations and parameters than CNN models. Thus, they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed various model compression methods or lightweight architectures for vision transformers; however, only a few studies have evaluated the effects of model compression techniques for vision transformers on performance. Regarding this problem, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behaviors of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of the quantization method applied to the models on latency enhancement and accuracy degradation by profiling the proportion of response time occupied by major operations. In addition, we evaluated the performance of each model on GPU- and EdgeTPU-based edge devices. In our experimental results, LeViT showed the best performance on CPU-based edge devices, and DeiT-small showed the highest performance improvement on GPU-based edge devices. In addition, only MobileViT models showed performance improvement on EdgeTPU. Summarizing the analysis results through profiling, the degree of performance improvement of each vision transformer model was highly dependent on the proportion of parts that could be optimized on the target edge device. In summary, to apply vision transformers to on-device AI solutions, both proper operation composition and optimizations specific to the target edge device must be considered.
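
A minimal sketch of the kind of CPU latency measurement described in this entry, assuming timm model names and PyTorch dynamic INT8 quantization of linear layers as stand-ins; the paper's actual models, compression pipeline, and edge hardware are not reproduced here.

```python
# Sketch of CPU latency before/after quantization (timm model names and dynamic INT8
# quantization of Linear layers are assumptions, not the paper's exact setup).
import time
import torch
import timm

def mean_latency_ms(model, runs=20):
    model.eval()
    x = torch.randn(1, 3, 224, 224)           # single ImageNet-sized input
    with torch.no_grad():
        for _ in range(5):                     # warm-up
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000.0

for name in ("deit_small_patch16_224", "levit_128", "mobilevit_s"):
    fp32 = timm.create_model(name, pretrained=False)
    int8 = torch.quantization.quantize_dynamic(fp32, {torch.nn.Linear}, dtype=torch.qint8)
    print(f"{name}: fp32 {mean_latency_ms(fp32):.1f} ms, int8 {mean_latency_ms(int8):.1f} ms")
```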

Design and Application of Vision Box Based on Embedded System (Embedded System 기반 Vision Box 설계와 적용)

  • Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.8 / pp.1601-1607 / 2009
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and automobile type recognition is one of them. Many algorithms for automobile type recognition have been proposed, but they involve complex calculations and therefore require long processing times. In this paper, we design a vision box based on an embedded system and propose an automobile type recognition system using the vision box. In preliminary tests, the system achieves a 100% recognition rate under optimal conditions; when the lighting or viewing angle changes, recognition is still possible but the pattern score is lowered. The proposed system is also observed to satisfy the processing-time and recognition-rate criteria required in industrial settings.
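
The abstract does not specify the matching algorithm, so the snippet below is only an illustrative sketch of template-based recognition with a "pattern score", using OpenCV normalized cross-correlation; the template files and threshold are hypothetical.

```python
# Illustrative sketch of template-based vehicle type recognition with a pattern score
# (OpenCV normalized cross-correlation; template files and threshold are hypothetical).
import cv2

TEMPLATES = {"sedan": "sedan.png", "suv": "suv.png", "truck": "truck.png"}  # hypothetical

def recognize(frame_gray, threshold=0.8):
    best_type, best_score = None, -1.0
    for vehicle_type, path in TEMPLATES.items():
        tpl = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if tpl is None:
            continue
        result = cv2.matchTemplate(frame_gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)   # best match score in [-1, 1]
        if score > best_score:
            best_type, best_score = vehicle_type, score
    # Low scores (e.g. under changed lighting or angle) correspond to a degraded pattern score.
    return (best_type, best_score) if best_score >= threshold else (None, best_score)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical captured frame
if frame is not None:
    print(recognize(frame))
```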

An embedded vision system based on an analog VLSI Optical Flow vision sensor

  • Becanovic, Vlatako;Matsuo, Takayuki;Stocker, Alan A.
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.285-288 / 2005
  • We propose a novel programmable miniature vision module based on a custom-designed analog VLSI (aVLSI) chip. The vision module consists of the optical flow vision sensor embedded with commercial off-the-shelf digital hardware, in our case an Intel XScale PXA270 processor augmented with a programmable gate array device. The aVLSI sensor provides gray-scale imager data as well as smooth optical flow estimates, so each pixel yields a triplet of information that can be continuously read out as three independent images. The particular computational architecture of the custom-designed sensor, which is fully parallel and analog, allows efficient real-time estimation of the smooth optical flow. The Intel XScale PXA270 controls the sensor read-out and, together with the programmable gate array, allows additional higher-level processing of the intensity image and optical flow data. It also provides the necessary standard interfaces so that the module can be easily programmed and integrated into different vision systems, or even form a complete stand-alone vision system itself. The low power consumption, small size, and flexible interface of the proposed vision module suggest that it is particularly well suited as a vision system for an autonomous robotics platform, and especially for educational projects in the robotic sciences.
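
The sensor described above delivers, per pixel, a gray-level value plus two smooth optical-flow components. The module computes the flow in analog hardware on-chip, so the sketch below is only a software analog that produces the same kind of per-pixel triplet from two consecutive frames, using OpenCV's dense Farneback flow.

```python
# Software analog of the sensor's per-pixel triplet (intensity, u, v); the actual module
# computes smooth optical flow in analog hardware, so this is purely illustrative.
import cv2
import numpy as np

def triplet(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]
    # Three "images" read out together, analogous to the sensor's read-out scheme.
    return curr_gray, u, v

prev = (np.random.rand(120, 160) * 255).astype(np.uint8)   # stand-in frames
curr = np.roll(prev, 2, axis=1)                             # simulated horizontal motion
intensity, u, v = triplet(prev, curr)
print(intensity.shape, float(u.mean()), float(v.mean()))
```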


Design and Implementation of Vision Box Based on Embedded Platform (Embedded Platform 기반 Vision Box 설계 및 구현)

  • Kim, Pan-Kyu;Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.1 / pp.191-197 / 2007
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and vehicle recognition is one of them. Many vehicle recognition algorithms have been proposed, but they involve complex calculations, so they require long processing times and sometimes cause problems. In this research, we propose a vehicle type recognition system using a vision box based on an embedded platform. In testing, the system achieves a 100% recognition rate under optimal conditions; however, when the conditions change due to lighting, noise, or viewing angle, the recognition rate decreases as the pattern score is lowered and recognition speed is slowed.

Trends in Low-Power On-Device Vision SW Framework Technology (저전력 온디바이스 비전 SW 프레임워크 기술 동향)

  • Lee, M.S.;Bae, S.Y.;Kim, J.S.;Seok, J.S.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.56-64 / 2021
  • Many computer vision algorithms are computationally expensive and require substantial computing resources. Recently, owing to machine learning technology and high-performance embedded systems, vision processing applications such as object detection, face recognition, and visual inspection have come into wide use. However, on-device systems must handle demanding vision workloads with limited resources and low power consumption in heterogeneous environments. Consequently, global manufacturers are trying to lock developers into their ecosystems by providing integrated low-power chips and dedicated vision libraries. The Khronos Group, an international standards organization, has released the OpenVX standard for high-performance, low-power vision processing in heterogeneous on-device systems. This paper describes vision libraries for embedded systems and presents the OpenVX standard along with related trends in on-device vision systems.

Design Vision Box base on Embedded Platform (Embedded Platform을 기반으로 하는 Vision Box 설계)

  • Kim, Pan-Kyu;Hoang, Tae-Moon;Park, Sang-Su;Lee, Jong-Hyeok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.1103-1106 / 2005
  • The purpose of this research is to design a Vision Box that captures images input through a camera and interprets the movement of objects in the captured images. The design was driven by user requirements: the Vision Box can analyze object movement in camera images without additional instruments, and it can communicate with a PLC and be operated by remote control. We verified the Vision Box's capability by applying it to automobile engine pattern analysis, and we expect it to be used in various industrial fields.
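
The abstract does not detail how movement is analyzed; the minimal frame-differencing sketch below illustrates one common way such a box could flag object movement between consecutive camera frames. The threshold and minimum changed-pixel count are assumed values, not taken from the paper.

```python
# Minimal frame-differencing sketch for detecting object movement between frames
# (threshold and minimum changed-pixel count are assumed values).
import cv2

def movement_detected(prev_gray, curr_gray, diff_threshold=25, min_changed_pixels=500):
    diff = cv2.absdiff(prev_gray, curr_gray)                       # per-pixel change
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) >= min_changed_pixels

cap = cv2.VideoCapture(0)                                          # camera input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if movement_detected(prev_gray, gray):
        print("movement")                                          # e.g. notify a PLC here
    prev_gray = gray
cap.release()
```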


Designing Specific Object Tracking Robots with Enhanced Functionality (향상된 기능을 가진 특정 개체 추적 로봇 설계)

  • Kim, Ki-Sik;Lee, Jeong-Hun;Jeong, Young-Bin;Lee, Seung-Hyeon;Dong, Hong-Suk;Hwang, Kwang-il
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.80-83 / 2019
  • Intelligent robot technology is a culmination of modern technologies for better living, and it is a highly extensible field with applications across industry, daily life, and precision engineering. Tracking research in this field is actively moving toward the use of LIDAR. LIDAR is a useful sensor that can accurately measure distances in all directions, but LIDAR alone cannot maximize a robot's performance. This paper extends LIDAR-based tracking and describes its fusion with vision technology, proposing a design method for a tracking robot with enhanced functionality through this fusion.

Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.3 / pp.143-150 / 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for moving objects. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching stage, depth maps are obtained from real road images captured by an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection stage, various obstacles are detected using only the depth map, with both the v-disparity and column detection methods, under real road conditions. Finally, in the estimation stage, asymmetric parabola fitting combined with the NCC method improves the distance estimation for detected obstacles. This stereo vision system can be applied to many applications such as unmanned vehicles and robots.
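
As an illustration of the parabola-fitting refinement mentioned above, the sketch below refines an integer disparity to sub-pixel precision from three neighboring NCC match scores. It uses the standard symmetric fit, whereas the paper applies an asymmetric variant, so treat it only as a sketch of the idea.

```python
# Sub-pixel disparity refinement by parabola fitting over three neighboring match scores.
# Standard symmetric fit; the paper above uses an asymmetric variant.
import numpy as np

def subpixel_disparity(d, score_prev, score_best, score_next):
    """Fit a parabola through the scores at d-1, d, d+1 and return its peak position."""
    denom = score_prev - 2.0 * score_best + score_next
    if abs(denom) < 1e-12:
        return float(d)                       # flat neighborhood: keep integer disparity
    offset = 0.5 * (score_prev - score_next) / denom
    return d + float(np.clip(offset, -0.5, 0.5))

# Example: NCC scores peak at disparity 12 but lean slightly toward 13.
print(subpixel_disparity(12, 0.80, 0.95, 0.90))   # -> 12.25
```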


Comparative Analysis of Cost Aggregation Algorithms in Stereo Vision (스테레오 비전에서 비용 축적 알고리즘의 비교 분석)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.15 no.1 / pp.47-51 / 2016
  • The human visual system infers 3D structure from the stereo disparity between stereoscopic images, and stereo vision is increasingly used in consumer electronics, which has generated much research in the application field. A stereo vision system basically consists of four stages: cost computation, cost aggregation, disparity calculation, and disparity refinement. In this paper, we present and evaluate existing methods, focusing on cost aggregation, to comparatively analyze their performance for a given set of resources. Experiments show that Normalized Cross Correlation and Zero-Mean Normalized Cross Correlation provide higher accuracy but are computationally heavy for real-time embedded systems. Sum of Absolute Differences and Sum of Squared Differences are more suitable choices for embedded systems, but they require further improvement before being applied to real-world systems.
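
For reference, the sketch below writes out the four window-based matching measures compared in this entry as plain NumPy functions over two equal-size patches; window traversal and any aggregation scheme are omitted.

```python
# The four window-based matching measures compared above, over two equal-size patches.
# Aggregation over windows and images is omitted; only the per-window measures are shown.
import numpy as np

def sad(a, b):   # Sum of Absolute Differences (lower is better)
    return np.abs(a - b).sum()

def ssd(a, b):   # Sum of Squared Differences (lower is better)
    return ((a - b) ** 2).sum()

def ncc(a, b):   # Normalized Cross Correlation (higher is better)
    return (a * b).sum() / (np.sqrt((a ** 2).sum()) * np.sqrt((b ** 2).sum()) + 1e-12)

def zncc(a, b):  # Zero-mean NCC: also invariant to additive intensity offsets
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / (np.sqrt((a0 ** 2).sum()) * np.sqrt((b0 ** 2).sum()) + 1e-12)

left = np.random.rand(9, 9)
right = left * 1.2 + 0.1          # gain/offset change, as under different exposures
print(sad(left, right), ssd(left, right), ncc(left, right), zncc(left, right))
```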