• Title/Summary/Keyword: Neural network image recognition model

Search results: 176

Object Recognition Method for Industrial Intelligent Robot (산업용 지능형 로봇의 물체 인식 방법)

  • Kim, Kye Kyung; Kang, Sang Seung; Kim, Joong Bae; Lee, Jae Yeon; Do, Hyun Min; Choi, Taeyong; Kyung, Jin Ho
    • Journal of the Korean Society for Precision Engineering / v.30 no.9 / pp.901-908 / 2013
  • The introduction of industrial intelligent robots using vision sensors has drawn interest in factory automation. 2D and 3D vision sensors have been used to recognize objects and estimate their pose for assembling parts into a complete whole, but this is not a trivial task because of illumination conditions and the variety of object types. Object images are distorted by illumination, which lowers recognition reliability. In this paper, a recognition method for objects with complex shapes is proposed. An accurate object region is detected from a combined binary image, obtained using a DoG filter and local adaptive binarization. The object is recognized using a neural network trained on sub-divided object classes according to object type and rotation angle. A predefined shape model of the object and the maximal slope are used to estimate the object pose. The performance was evaluated on the ETRI database, and a recognition rate of 96% was obtained.
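
The region-detection step described above combines a DoG filter with local adaptive binarization before the neural-network classifier. Below is a minimal OpenCV sketch of that preprocessing idea; the file name, Gaussian sigmas, and block size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: DoG filtering plus local adaptive binarization to isolate an
# object region, roughly following the preprocessing described in the abstract.
# The file name, Gaussian sigmas, and block size are illustrative assumptions.
import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Difference of Gaussians: a narrowly blurred image minus a widely blurred one.
blur_small = cv2.GaussianBlur(gray, (0, 0), 1.0)
blur_large = cv2.GaussianBlur(gray, (0, 0), 3.0)
dog = cv2.subtract(blur_small, blur_large)
_, dog_bin = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Local adaptive binarization: threshold each pixel against its neighborhood mean.
local_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, 31, 5)

# Combine the two binary cues and keep the largest connected component as the object region.
combined = cv2.bitwise_and(dog_bin, local_bin)
num, labels, stats, _ = cv2.connectedComponentsWithStats(combined)
if num > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    object_mask = np.uint8(labels == largest) * 255
```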

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol; Lee, Ho-Young; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.200-207 / 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in various fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models keep growing, it is hard to deploy neural networks in IoT and mobile environments. Therefore, neural network compression algorithms that reduce the model size while preserving performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. The classification accuracy and inference time of the original and compressed CNN models, on images captured by a camera, are measured on an embedded board equipped with the QCS605, a dedicated AI chip. Three CNN models, MobileNetV2, ResNet50, and VGG-16, are compressed by applying pruning and matrix decomposition. The experimental results show that, compared to the original models, the compressed models achieve a model size reduction of 1.3~11.2 times with a classification accuracy loss of less than 2%, an inference time reduction of 1.2~2.21 times, and a memory reduction of 1.2~3.8 times on the embedded board.
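
One of the compression methods named above, pruning, can be sketched with PyTorch's built-in pruning utilities; the 50% sparsity target and the untrained torchvision MobileNetV2 are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch: magnitude-based weight pruning, one of the compression
# methods named in the abstract. The 50% sparsity target and the untrained
# torchvision MobileNetV2 are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(weights=None)  # pretrained weights would be loaded in practice

# Zero out 50% of the smallest-magnitude weights in every convolutional layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Report the resulting overall sparsity.
total = sum(p.numel() for p in model.parameters())
zeros = sum(int((p == 0).sum()) for p in model.parameters())
print(f"overall sparsity: {zeros / total:.1%}")
```

Note that unstructured sparsity alone does not reduce the stored model size; the paper additionally applies matrix decomposition and runs the compressed models on a QCS605 board, neither of which is reproduced here.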

Real-Time Neural Network for Information Propagation of Model Objects in Remote Position (원격지 모형 물체에 대한 정보 전송을 위한 실시간 신경망)

  • Seul, Nam-O
    • The Journal of the Korea Contents Association / v.7 no.6 / pp.44-51 / 2007
  • A new neural network algorithm is proposed for real-time recognition of model objects at a remote site. The proposed technique is a real-time computation method based on inter-node diffusion. In the network, a node corresponds to a state in the quantized input space, and each node consists of a processing unit and fixed weights from its neighboring nodes as well as its input terminal. The most reliable algorithm derived for real-time object recognition is a dynamic-programming algorithm based on sequence matching, which processes data as it arrives and can therefore provide continuously updated neighbor-information estimates. Real-time reconstruction of nonlinear image information is demonstrated through several simulation experiments, and 1-D LIPN hardware was built and tested with various static and dynamic signals.

Parameter Analysis for Super-Resolution Network Model Optimization of LiDAR Intensity Image (LiDAR 반사 강도 영상의 초해상화 신경망 모델 최적화를 위한 파라미터 분석)

  • Seungbo Shim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.5 / pp.137-147 / 2023
  • LiDAR is used in autonomous driving and various industrial fields to measure the size and distance of objects. The sensor also provides intensity images based on the amount of reflected light, which benefits sensor data processing by conveying information about object shape. LiDAR delivers higher performance as its resolution increases, but at a higher cost, and the same holds for LiDAR intensity images: expensive equipment is required to acquire them at high resolution. This study developed an artificial intelligence model that upscales low-resolution LiDAR intensity images to high resolution, and performed a parameter analysis to find the optimal super-resolution neural network model. The super-resolution algorithm was trained and verified using 2,500 LiDAR intensity images, and the resolution of the intensity images was improved as a result. These results can be applied to the autonomous driving field to improve driving-environment recognition and obstacle-detection performance.
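
The study trains a super-resolution network on LiDAR intensity images; as a generic baseline (not the architecture or hyperparameters optimized in the paper), a minimal SRCNN-style model in PyTorch might look like the sketch below, with the patch size and channel counts chosen purely for illustration.

```python
# Minimal sketch: an SRCNN-style network for single-image super-resolution,
# given only as a generic baseline; the paper's optimized architecture and
# parameters are not reproduced here.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x: bicubically upsampled low-resolution intensity image, shape (N, 1, H, W)
        return self.body(x)

model = TinySR()
lr_upsampled = torch.rand(4, 1, 128, 128)  # hypothetical batch of intensity patches
hr_target = torch.rand(4, 1, 128, 128)
loss = nn.functional.l1_loss(model(lr_upsampled), hr_target)
loss.backward()
```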

Adaptive Data Mining Model using Fuzzy Performance Measures (퍼지 성능 측정자를 이용한 적응 데이터 마이닝 모델)

  • Rhee, Hyun-Sook
    • The KIPS Transactions: Part B / v.13B no.5 s.108 / pp.541-546 / 2006
  • Data mining is the process of finding hidden patterns in a large data set. Cluster analysis is a popular data mining technique: it is a fundamental process of data analysis and has played an important role in solving many problems in pattern recognition and image processing. If fuzzy cluster analysis is to make a significant contribution to engineering applications, much more attention must be paid to the fundamental decision on the number of clusters in the data. This is related to the cluster validity problem, namely how well the clustering has identified the structure present in the data. In this paper, we design an adaptive data mining model using fuzzy performance measures. It discovers clusters through an unsupervised neural network model based on a fuzzy objective function and evaluates the clustering results with a fuzzy performance measure. We also present experimental results on newsgroup data, which show that the proposed model can be used as a document classifier.
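
The model clusters data with a fuzzy objective function and scores the result with a fuzzy performance measure; a minimal sketch of that general idea, using standard fuzzy c-means and Bezdek's partition coefficient as the validity measure (not the paper's own neural model or measures), is shown below.

```python
# Minimal sketch: fuzzy c-means clustering plus a simple fuzzy validity measure
# (Bezdek's partition coefficient). The paper's unsupervised neural model and
# its specific fuzzy performance measures are not reproduced here.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))        # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def partition_coefficient(U):
    # Closer to 1 means crisper, better-separated clusters.
    return float((U ** 2).sum() / U.shape[0])

# Toy data with three well-separated groups.
X = np.vstack([np.random.randn(100, 2) + off for off in ([0, 0], [5, 5], [0, 6])])
U, centers = fuzzy_c_means(X, c=3)
print("partition coefficient:", partition_coefficient(U))
```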

A comparison of deep-learning models to the forecast of the daily solar flare occurrence using various solar images

  • Shin, Seulki; Moon, Yong-Jae; Chu, Hyoungseok
    • The Bulletin of The Korean Astronomical Society / v.42 no.2 / pp.61.1-61.1 / 2017
  • As deep-learning methods have succeeded in various fields, they have high potential for application to space weather forecasting. The convolutional neural network, one of the deep-learning methods, is specialized for image recognition. In this study, we apply the AlexNet architecture, winner of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, to the forecast of daily solar flare occurrence, using the MatConvNet software for MATLAB. Our input images are SOHO/MDI, EIT 195 Å, and EIT 304 Å images from January 1996 to December 2010, and the outputs are yes or no for flare occurrence. We also consider input sets consisting of the last two images and their difference image. The training dataset is selected from Jan 1996 to Dec 2000 and from Jan 2003 to Dec 2008, and the testing dataset from Jan 2001 to Dec 2002 and from Jan 2009 to Dec 2010, in order to account for the solar-cycle effect. One fifth of the training data is randomly selected as a validation dataset to avoid over-fitting. Our model successfully forecasts flare occurrence with a probability of detection (POD) of about 0.90 for common flares (C-, M-, and X-class). While the POD for major flares (M- and X-class) is 0.96, the false alarm rate (FAR) is also relatively high (0.60). We also present several statistical parameters such as the critical success index (CSI) and true skill statistics (TSS). None of the statistical parameters depend strongly on the number of input data sets. Our model can be applied immediately to an automatic forecasting service whenever image data are available.
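
The skill scores quoted above (POD, FAR, CSI, TSS) come from a 2x2 contingency table of forecast versus observed flare occurrence; a minimal sketch of their computation is given below, with FAR taken here as the false-alarm ratio convention common in flare forecasting and with made-up example counts.

```python
# Minimal sketch: categorical forecast skill scores (POD, FAR, CSI, TSS)
# computed from a 2x2 contingency table of forecast (yes/no) vs. observation
# (yes/no). Example counts are invented for illustration only.
def skill_scores(hits, false_alarms, misses, correct_negatives):
    pod = hits / (hits + misses)                        # probability of detection
    far = false_alarms / (hits + false_alarms)          # false-alarm ratio
    csi = hits / (hits + false_alarms + misses)         # critical success index
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                                    # true skill statistic
    return {"POD": pod, "FAR": far, "CSI": csi, "TSS": tss}

print(skill_scores(hits=48, false_alarms=72, misses=2, correct_negatives=600))
```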

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min; Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.125-132 / 2021
  • As artificial neural networks grow deeper and the dimension of their input data increases, learning and recognition in a neural network (NN) require a large number of arithmetic operations at high speed. This study therefore proposes a dimensionality-reduction method for the NN's input data. The proposed Line-segment Feature Analysis (LFA) algorithm applies a gradient-based edge detection algorithm using median filters to analyze the line-segment features of the objects in an image. From the extracted edge image, eigenvalues corresponding to eight kinds of line segments are calculated using 3×3 or 5×5 detection filters whose coefficient values are taken from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional 256-element vectors are produced by accumulating identical response values of the eigenvalues computed with each detection filter, and the two vectors are added together. Two LFA256 vectors are then merged to produce 512-element LFA512 data. In a comparative experiment on handwritten-digit recognition using the PCA technique and an AlexNet model, LFA256 and LFA512 achieved recognition rates of 98.7% and 99%, respectively.
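
The core of LFA is encoding an edge map with small power-of-two filters and accumulating the responses into a 256-bin feature vector; the sketch below illustrates only that accumulation idea with a single 3x3 mask, while the paper's eight specific line-segment filters, the 5x5 variant, and the exact eigenvalue computation are not reproduced.

```python
# Minimal sketch of the response-histogram idea described in the abstract: a
# median-filtered, gradient-based edge map is encoded with a 3x3 mask whose
# coefficients are powers of two, and the resulting 0-255 responses are
# accumulated into a 256-bin feature vector.
import numpy as np
from scipy import ndimage

image = np.random.rand(28, 28)                     # hypothetical handwritten-digit image
smoothed = ndimage.median_filter(image, size=3)
grad = ndimage.sobel(smoothed, axis=0) ** 2 + ndimage.sobel(smoothed, axis=1) ** 2
edge_bin = (grad > grad.mean()).astype(np.int64)   # simple gradient-based edge map

# 3x3 mask with power-of-two coefficients (center weight 0): each response value
# in 0..255 encodes which neighbours of a pixel are edge pixels.
mask = np.array([[1, 2, 4],
                 [8, 0, 16],
                 [32, 64, 128]])
responses = ndimage.convolve(edge_bin, mask, mode="constant")

# Accumulate identical response values into a 256-dimensional feature vector.
feature_256 = np.bincount(responses.ravel(), minlength=256).astype(np.float32)
print(feature_256.shape)  # (256,); two such vectors are merged into LFA512 in the paper
```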

Smartphone-based structural crack detection using pruned fully convolutional networks and edge computing

  • Ye, X.W.; Li, Z.X.; Jin, T.
    • Smart Structures and Systems / v.29 no.1 / pp.141-151 / 2022
  • In recent years, industry and research communities have focused on developing autonomous crack inspection approaches, which mainly consist of image acquisition and crack detection. In these approaches, mobile devices such as cameras, drones, or smartphones serve as sensing platforms to acquire structural images, and deep learning (DL)-based methods are being developed as important crack detection approaches. However, the process of image acquisition and collection is time-consuming, which delays the inspection. Moreover, present mobile devices such as smartphones can be not only sensing platforms but also computing platforms that embed deep neural networks (DNNs) to conduct on-site crack detection. Due to the limited computing resources of mobile devices, the size of the DNNs should be reduced to improve computational efficiency. In this study, an architecture called the pruned crack recognition network (PCR-Net) was developed for the detection of structural cracks. A dataset containing 11,000 images was established from raw bridge-inspection images. A pruning method was introduced to reduce the size of the base architecture and optimize the model size. Comparative studies with image processing techniques (IPTs) and other DNNs were conducted to evaluate the performance of the proposed PCR-Net. Furthermore, a modularly designed framework integrating PCR-Net was developed to realize a DL-based crack detection application for smartphones. Finally, on-site experiments were carried out to validate the performance of the developed smartphone-based system for detecting structural cracks.
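
Because PCR-Net relies on pruning to fit smartphone resources, a minimal sketch of structured (channel) pruning, which removes whole filters and therefore actually shrinks the tensors a mobile runtime executes, is shown below; the layer shape, pruning ratio, and criterion are illustrative assumptions, not the paper's.

```python
# Minimal sketch: structured (channel) pruning of a single convolutional layer.
# The PCR-Net architecture and the paper's specific pruning criterion are not
# reproduced; the 25% ratio and layer shape are illustrative assumptions.
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(64, 128, kernel_size=3, padding=1)

# Remove 25% of the output channels with the smallest L2 norm (dim=0 = output channels).
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)
prune.remove(conv, "weight")

# Zeroed channels can then be physically dropped and the next layer's input
# channels re-indexed before exporting the model to a smartphone runtime.
kept = conv.weight.abs().sum(dim=(1, 2, 3)) > 0
print(f"channels kept: {int(kept.sum())} / {conv.out_channels}")
```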

A Review of Deep Learning Research

  • Mu, Ruihui; Zeng, Xiaoqin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.1738-1764 / 2019
  • With the advent of big data, deep learning has become an important research direction in the field of machine learning and has been widely applied in image processing, natural language processing, speech recognition, online advertising, and so on. This paper introduces deep learning techniques from various aspects, including common deep learning models and their optimization methods, commonly used open-source frameworks, existing problems, and future research directions. Firstly, we introduce the applications of deep learning; secondly, we introduce several common deep learning models and optimization methods; thirdly, we describe several common deep learning frameworks and platforms; finally, we introduce the latest deep learning acceleration technology and highlight future work.

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, with optical images serving as the main training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, as well as optical images, to train the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) frameworks. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features representing statistical texture information derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mixed strategy of early fusion and late fusion, improves the building detection rate by 32.65% compared to training with optical images only. The experiments demonstrate the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
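
The fusion strategies compared above can be illustrated schematically: early fusion stacks co-registered modalities into one multi-channel input, while late fusion merges per-modality predictions. The sketch below shows only the early-fusion stacking step; the array names, shapes, and channel counts are illustrative, and the paper's Detectron2 configuration is not reproduced.

```python
# Minimal sketch of the "early fusion" idea: co-registered optical, infrared,
# LiDAR-derived and texture layers are stacked into one multi-channel tensor
# that a multi-channel backbone can consume. All arrays here are random
# placeholders with illustrative shapes.
import numpy as np

H, W = 512, 512
optical  = np.random.rand(H, W, 3)   # RGB aerial image (hypothetical)
infrared = np.random.rand(H, W, 1)   # infrared aerial image
ndsm     = np.random.rand(H, W, 1)   # height layer derived from LiDAR
edges    = np.random.rand(H, W, 1)   # edge map extracted from the images
haralick = np.random.rand(H, W, 4)   # Haralick texture features from LiDAR

# Early fusion: concatenate all modalities along the channel axis.
fused = np.concatenate([optical, infrared, ndsm, edges, haralick], axis=-1)
print(fused.shape)  # (512, 512, 10); the detector's first conv layer must accept 10 channels

# Late fusion, by contrast, would run separate branches per modality and merge
# their predictions; the paper's hybrid fusion mixes both strategies.
```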