• Title/Summary/Keyword: Image deep learning


Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, must ignore characters of no interest and focus only on specific types. An automatic gasometer reading system, for example, needs only the device ID and gas usage amount from gasometer images in order to bill users; strings of no interest, such as device type, manufacturer, manufacturing date, and specification, carry no value for the application. The application therefore has to analyze only the regions of interest and the specific character types that hold valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the regions of interest. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the sequential features into character strings by time-series mapping from feature vectors to characters. In this research, the strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request onto a FIFO (First In, First Out) input queue. The slave process comprises the three deep neural networks, runs on the NVIDIA GPU, and continuously polls the input queue for recognition requests. When requests are present, the slave process converts the queued image into the device ID string, the gas usage amount string, and the positions of both strings, pushes this information onto the output queue, and returns to idle polling. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 for testing. For each training epoch, we randomly split the 22,985 images 8:2 into training and validation sets. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise signal), reflex (light reflection on the gasometer region), scale (small object size due to long-distance capture), and slant (images that are not horizontally level). The final recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
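The master-slave queueing flow described in the abstract can be sketched as follows. This is a minimal single-process simulation, not the authors' AWS implementation; the request format and the `recognize` stub (standing in for the three-network pipeline) are hypothetical.

```python
from collections import deque

def recognize(image):
    # Stand-in for the three-network pipeline (region detector, CNN feature
    # encoder, bi-directional LSTM decoder); returns hypothetical strings.
    return {"device_id": "012345678901", "usage": "01234"}

class Master:
    """Accepts reading requests and holds the FIFO input/output queues."""
    def __init__(self):
        self.input_queue = deque()   # requests from mobile devices (FIFO)
        self.output_queue = deque()  # recognition results for delivery

    def submit(self, request_id, image):
        self.input_queue.append((request_id, image))

    def collect(self):
        # Deliver the oldest finished result, or None if none are ready.
        return self.output_queue.popleft() if self.output_queue else None

class Slave:
    """Polls the input queue and runs recognition on queued images."""
    def __init__(self, master):
        self.master = master

    def poll_once(self):
        if not self.master.input_queue:
            return False          # idle: nothing to process
        request_id, image = self.master.input_queue.popleft()
        self.master.output_queue.append((request_id, recognize(image)))
        return True
```

In the real system several GPU-backed slave processes would poll the same queue concurrently; the FIFO discipline is what keeps requests served in arrival order.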

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human behavior from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. When two-dimensional data alone was used, the recognition rate for behavior in three-dimensional space was low, while the other methods were hampered by complicated equipment configurations and expensive additional hardware. In this paper, we propose a method of recognizing human behavior using only RGB CCTV images, without depth information or additional equipment. First, a skeleton extraction algorithm is applied to extract the points of joints and body parts. We then transform these points into vectors, including displacement vectors and relational vectors, and train an RNN model on the continuous vector data. Applying the learned model to various datasets and measuring recognition accuracy, we verify that performance similar to that of existing algorithms using 3D information can be achieved with 2D information alone.
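The vector transformation step can be illustrated with a small sketch. The joint layout, pair selection, and array shapes here are assumptions for illustration; the paper's exact equations are not reproduced.

```python
import numpy as np

def displacement_vectors(prev_joints, curr_joints):
    # Frame-to-frame displacement of each joint: a motion cue.
    return curr_joints - prev_joints

def relational_vectors(joints, pairs):
    # Vectors between chosen joint pairs within one frame: a pose cue
    # (e.g. a hypothetical shoulder-to-wrist pair).
    return np.array([joints[j] - joints[i] for i, j in pairs])

def frame_feature(prev_joints, curr_joints, pairs):
    # Concatenate both cues into one feature vector per frame; a sequence
    # of these vectors over time is what the RNN would consume.
    return np.concatenate([
        displacement_vectors(prev_joints, curr_joints).ravel(),
        relational_vectors(curr_joints, pairs).ravel(),
    ])
```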

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.288-296 / 2021
  • Automated face recognition in a runtime environment is gaining more and more importance in the fields of surveillance and urban security. This is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; recognition under such uncontrolled circumstances, however, is a major problem in real-life applications. This paper addresses the problem of face recognition when the full face is not visible (occlusion). Occlusion is a common occurrence, as any person can change his or her appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. Although these variations are very common in real-life environments, they have received comparatively little attention in the literature and are only now becoming a major research focus. Existing state-of-the-art techniques suffer from several limitations, most significantly low usability and poor response time in case of an incident. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build FRS-OCC, color and texture features are extracted, an incremental learning algorithm (Learn++) selects the more informative features, and a trained stacked autoencoder (SAE) deep learning model then recognizes the face. Overall, FRS-OCC introduces algorithms that improve response time so as to guarantee a benchmark quality of service in any situation. To test and evaluate the proposed system, the AR face dataset is used. On average, FRS-OCC outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The results indicate that FRS-OCC can be used in any surveillance application.
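The reported SE, SP, and AC figures are standard confusion-matrix quantities. A minimal sketch of how they are computed, with hypothetical counts:

```python
def classification_metrics(tp, tn, fp, fn):
    # SE: sensitivity (true positive rate), SP: specificity (true negative
    # rate), AC: overall accuracy, all from raw confusion-matrix counts.
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ac = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, ac
```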

Object Detection Algorithm for Explaining Products to the Visually Impaired (시각장애인에게 상품을 안내하기 위한 객체 식별 알고리즘)

  • Park, Dong-Yeon;Lim, Soon-Bum
    • The Journal of the Korea Contents Association / v.22 no.10 / pp.1-10 / 2022
  • Visually impaired people have great difficulty using retail stores due to the absence of braille information on products and of any other support system. In this paper, we propose a basic algorithm for a system that recognizes products in retail stores and describes them by voice. First, a deep learning model detects hand objects and product objects in the input image. Then, by comparing the coordinates of the detected objects, it finds the product object that most overlaps the hand object. We take this to be the product selected by the user, and the system reads out the product's nutritional information via Text-To-Speech. The evaluation confirmed the high performance of the learning model. The proposed algorithm can be put to practical use in building systems that support the use of retail stores by the visually impaired.
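The overlap-based product selection can be sketched as below. The box format (x1, y1, x2, y2) and the use of raw intersection area as the overlap measure are assumptions, since the abstract does not specify them.

```python
def intersection_area(a, b):
    # Boxes as (x1, y1, x2, y2); returns 0 when the boxes do not overlap.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def select_product(hand_box, product_boxes):
    # Index of the product box with the largest overlap with the hand,
    # or None when no product overlaps it at all.
    best, best_area = None, 0
    for i, box in enumerate(product_boxes):
        area = intersection_area(hand_box, box)
        if area > best_area:
            best, best_area = i, area
    return best
```

An IoU measure would work equally well here; raw intersection suffices because the comparison is between candidates against the same hand box.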

Pattern Analysis of Apartment Price Using Self-Organization Map (자기조직화지도를 통한 아파트 가격의 패턴 분석)

  • Lee, Jiyoung;Ryu, Jae Pil
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.27-33 / 2021
  • With increasing interest in key areas of the 4th industrial revolution, such as artificial intelligence, deep learning, and big data, scientific approaches have been developed to overcome the limitations of traditional decision-making methodologies; these techniques are mainly used to predict the direction of financial products. In this study, the factors driving apartment prices, a topic of high social interest, were analyzed with a self-organizing map (SOM). For this analysis, we extracted real apartment transaction prices and selected a total of 16 input variables that could affect them, over a data period from 1986 to 2021. Examining the characteristics of the variables during rising and faltering periods of apartment prices, we found that the statistical tendencies of the input variables in the two periods were clearly distinguishable. We hope this study will help in analyzing the state of the real estate market and in future prediction research through image-based learning.
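A minimal self-organizing map sketch, assuming a small rectangular grid, a fixed learning rate, and a Gaussian neighborhood; the study's actual SOM configuration and its 16 input variables are not reproduced here.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr=0.5, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    # Grid coordinates of each unit, used for the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for _ in range(epochs):
        for x in data:
            # Best matching unit: the unit whose weight is closest to x.
            bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
            # Gaussian neighborhood over grid distance to the BMU.
            dist = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist / (2 * sigma ** 2))
            # Pull the BMU and its grid neighbors toward the sample.
            weights += lr * h[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

After training, samples with distinct statistical tendencies (such as rising vs. faltering periods) map to different regions of the grid, which is what makes the pattern separation visible.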

Adaptive Face Mask Detection System based on Scene Complexity Analysis

  • Kang, Jaeyong;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.5 / pp.1-8 / 2021
  • Coronavirus disease 2019 (COVID-19) has affected the world seriously. Everyone is required to wear a mask properly in public areas to prevent the spread of the virus, yet many people do not. In this paper, we propose an efficient mask detection system. Our system first detects the faces in an input image using YOLOv5 and classifies the scene as one of three complexity classes (Simple, Moderate, or Complex) based on the number of detected faces. The image is then fed into a Faster R-CNN with one of three ResNet backbones (ResNet-18, -50, or -101), chosen according to the scene complexity, to detect each face region and determine whether the person is wearing a mask properly. We evaluated the proposed system on public mask detection datasets, and the results show that it outperforms other models.
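The complexity-based backbone routing can be sketched as below; the face-count thresholds are hypothetical, as the abstract does not state the actual cut-offs.

```python
def scene_complexity(num_faces):
    # Hypothetical cut-offs: few faces -> Simple, many -> Complex.
    if num_faces <= 2:
        return "Simple"
    if num_faces <= 5:
        return "Moderate"
    return "Complex"

# Deeper backbone for more complex (more crowded) scenes.
BACKBONES = {"Simple": "ResNet-18", "Moderate": "ResNet-50",
             "Complex": "ResNet-101"}

def select_backbone(num_faces):
    return BACKBONES[scene_complexity(num_faces)]
```

The design trade-off is straightforward: shallow backbones keep inference fast on easy scenes, while crowded scenes justify the extra cost of a deeper network.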

Prediction of Doodle Images Using Neural Networks

  • Hae-Chan Lee;Kyu-Cheol Cho
    • Journal of the Korea Society of Computer and Information / v.28 no.5 / pp.29-38 / 2023
  • Doodles often possess irregular shapes and patterns, making it challenging for artificial intelligence to mechanically recognize and predict patterns in random doodles. Unlike humans, who can effortlessly recognize doodles even when circles are imperfect or lines are not perfectly straight, artificial intelligence must learn from training data to recognize and predict them. In this paper, we leverage a diverse dataset of doodle images drawn by individuals of various nationalities, cultures, and handedness. After training two neural networks, we determine which offers higher accuracy and is better suited to doodle image prediction. The motivation for predicting doodle images with artificial intelligence lies in providing a unique perspective on human expression and intent through neural networks. For instance, by using the various images generated by artificial intelligence from human-drawn doodles, we expect to foster diversity in artistic expression and expand the creative domain.

A Study on Vehicle Number Recognition Technology in the Side Using Slope Correction Algorithm (기울기 보정 알고리즘을 이용한 측면에서의 차량 번호 인식 기술 연구)

  • Lee, Jaebeom;Jang, Jongwook;Jang, Sungjin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.465-468 / 2022
  • The incidence of traffic accidents is increasing every year, and Korea ranks among the worst OECD countries in this respect. To improve the situation, various road traffic laws are being enforced, and traffic control methods using equipment such as unmanned speed cameras and traffic enforcement cameras are being applied. However, since drivers can detect the locations of fixed enforcement cameras in advance through navigation systems and evade them, a mobile enforcement system is needed, along with research to increase the recognition rate of vehicle license plates viewed from the side of the road. This paper proposes a method to improve the license plate recognition rate from the roadside by applying a slope correction algorithm based on image processing. In addition, custom data training was conducted with a CNN-based YOLO algorithm to improve character recognition accuracy. The algorithm is expected to be usable for mobile traffic enforcement cameras without restrictions on installation location.
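A slope (skew) correction step of this kind can be sketched in coordinate form: estimate the angle of the detected plate's baseline and rotate by its negative. The two-point baseline input is an assumption for illustration; the paper's actual image-processing pipeline is not reproduced.

```python
import math

def skew_angle(p1, p2):
    # Angle (radians) of the plate's baseline relative to the horizontal,
    # from two hypothetical baseline points, e.g. the lower plate corners.
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def level_point(p, angle, center=(0.0, 0.0)):
    # Rotate p by -angle about center so the baseline becomes horizontal;
    # applying this to every pixel coordinate levels the plate.
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * x - s * y + center[0], s * x + c * y + center[1])
```

In an image pipeline the same rotation would typically be applied with a warp function (e.g. an affine transform) rather than point by point; the geometry is identical.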


A Study on the Surface Damage Detection Method of the Main Tower of a Special Bridge Using Drones and A.I. (드론과 A.I.를 이용한 특수교 주탑부 표면 손상 탐지 방법 연구)

  • Sungjin Lee;Bongchul Joo;Jungho Kim;Taehee Lee
    • Journal of Korean Society of Disaster and Security / v.16 no.4 / pp.129-136 / 2023
  • Special offshore bridges with high pylons have distinctive structural features, and they contain inspection blind spots that are difficult to examine visually. To solve this problem, safety inspection methods using drones are being studied. In this study, image data of the pylon of a special offshore bridge was acquired with a drone, and an artificial intelligence algorithm was developed to detect damage on the pylon surface. The algorithm utilizes deep learning networks with different structures and applies the stacking ensemble learning method to combine the networks and aggregate their results.
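Stacking ensemble learning combines base-model outputs through a meta-learner trained on held-out predictions. A minimal sketch, assuming a least-squares linear meta model over per-image damage probabilities (the abstract does not specify the meta model's form):

```python
import numpy as np

def fit_meta_learner(base_probs, labels):
    # base_probs: (n_models, n_samples) damage probabilities from the base
    # networks on held-out data; labels: 0/1 ground truth.
    # Least squares stands in for the unspecified meta model.
    w, *_ = np.linalg.lstsq(base_probs.T, labels, rcond=None)
    return w

def stacked_predict(base_probs, w, threshold=0.5):
    # Weighted combination of base outputs, thresholded to a 0/1
    # damage / no-damage decision.
    return (base_probs.T @ w >= threshold).astype(int)
```

The point of stacking over simple averaging is that the meta-learner can down-weight base networks that are systematically less reliable.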

Derivation of Green Coverage Ratio Based on Deep Learning Using MAV and UAV Aerial Images (유·무인 항공영상을 이용한 심층학습 기반 녹피율 산정)

  • Han, Seungyeon;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1757-1766 / 2021
  • The green coverage ratio is the ratio of green coverage area to land area, and it is used as a practical urban greening index. It is conventionally calculated from the land cover map, but the map's low spatial resolution and inconsistent production cycle make it difficult to compute the correct green coverage area and analyze green coverage precisely. This study therefore proposes a new method of calculating green coverage area using aerial images and deep neural networks: aerial images enable precise analysis through high resolution and relatively constant acquisition cycles, and deep learning can automatically detect green coverage areas in them. Local governments acquire manned aerial images for various purposes every year, and these can be used to calculate the green coverage ratio quickly; however, such images can be difficult to analyze accurately because details such as acquisition date, resolution, and sensors cannot be selected or modified. This limitation can be supplemented with unmanned aerial vehicles, which can mount various sensors and acquire high-resolution images through low-altitude flight. Accordingly, the green coverage ratio was calculated from both kinds of aerial images and, as a result, could be obtained with high accuracy for all green types. The ratio calculated from manned aerial images had limitations in complex environments, but the unmanned aerial images used to compensate achieved high accuracy even there, and additional band images enabled more precise detection of green areas. In the future, we expect the green coverage ratio to be calculated effectively by using newly acquired unmanned aerial imagery to supplement existing manned aerial imagery.
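Once the network has segmented an aerial image, the green coverage ratio itself reduces to a pixel ratio. A minimal sketch, assuming a binary segmentation mask over the land area of interest:

```python
import numpy as np

def green_coverage_ratio(mask):
    # mask: binary array where 1 marks pixels the segmentation network
    # classified as green coverage; the ratio is green pixels over all
    # pixels of the analyzed land area.
    return float(mask.sum()) / mask.size
```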