• Title/Summary/Keyword: Learned images

Search Results: 208

Mongolian Car Plate Recognition using Neural Network

  • Ragchaabazar, Bud;Kim, SooHyung;Na, In Seop
    • Smart Media Journal / v.2 no.4 / pp.20-26 / 2013
  • This paper presents an approach to Mongolian car plate recognition using an artificial neural network. Our proposed method consists of two steps: detection and recognition. In the detection step, we implement a flood-fill algorithm. In the recognition step, we segment the plate into individual Cyrillic characters and use an artificial neural network (ANN) machine-learning algorithm to recognize each character. We implemented the ANN from scratch, without using any library. A total of 150 vehicle images obtained from community entrance gates were tested, and the recognition algorithm shows an accuracy of 89.75%. (A minimal from-scratch ANN sketch follows this entry.)

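The abstract mentions an ANN implemented without any library; the snippet below is a minimal sketch of such a from-scratch character classifier in NumPy. The 16×16 input patches, hidden width, 35-class alphabet, and learning rate are illustrative assumptions, not values from the paper.

```python
# Minimal one-hidden-layer MLP trained with plain gradient descent (NumPy only).
# Shapes, learning rate, and the 35-class alphabet size are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class CharMLP:
    def __init__(self, n_in=16 * 16, n_hidden=64, n_classes=35, lr=0.1):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)      # hidden activations
        return softmax(self.h @ self.W2 + self.b2)   # class probabilities

    def train_step(self, X, y_onehot):
        p = self.forward(X)
        # Backpropagation of the cross-entropy loss.
        d_out = (p - y_onehot) / len(X)
        d_h = (d_out @ self.W2.T) * (1 - self.h ** 2)
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_h
        self.b1 -= self.lr * d_h.sum(axis=0)
        return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))

# Usage with random stand-in data (real inputs would be segmented plate characters):
X = rng.random((8, 256))
y = np.eye(35)[rng.integers(0, 35, 8)]
model = CharMLP()
for _ in range(10):
    loss = model.train_step(X, y)
print(f"final loss: {loss:.4f}")
```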

Neural-Network and Log-Polar Sampling Based Associative Pattern Recognizer for Aircraft Images (신경 회로망과 Log-Polar Sampling 기법을 사용한 항공기 영상의 연상 인식)

  • 김종오;김인철;진성일
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.12 / pp.59-67 / 1991
  • In this paper, we aimed to develop an associative pattern recognizer based on a neural network for aircraft identification. To obtain a feature-space description of an object that is invariant to scale change and rotation, the log-polar sampling technique, recently developed partly because of its similarity to the human visual system, was introduced together with Fourier-transform post-processing. In addition to the recognition results, image recall was performed associatively and also used to visualize the recognition reliability. The multilayer perceptron model was trained with the backpropagation algorithm. (An illustrative log-polar sampling sketch follows this entry.)

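The invariance argument above rests on the log-polar mapping turning scale changes and rotations into shifts, which a Fourier magnitude spectrum then discards. The sketch below illustrates that idea in NumPy; the grid size and nearest-neighbour sampling are my assumptions, not the paper's parameters.

```python
# Log-polar resampling of a grayscale image followed by an FFT-magnitude descriptor.
# Under the log-polar mapping, scaling and rotation of the input become (approximately)
# translations, which the Fourier magnitude spectrum is insensitive to.
import numpy as np

def log_polar_descriptor(img, n_rho=64, n_theta=64):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cx, cy)
    # Log-spaced radii and uniformly spaced angles.
    rho = np.exp(np.linspace(0, np.log(max_r), n_rho))
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r_grid, t_grid = np.meshgrid(rho, theta, indexing="ij")
    # Nearest-neighbour sampling back into Cartesian coordinates.
    ys = np.clip(np.round(cy + r_grid * np.sin(t_grid)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r_grid * np.cos(t_grid)).astype(int), 0, w - 1)
    lp = img[ys, xs]                      # (n_rho, n_theta) log-polar image
    spectrum = np.abs(np.fft.fft2(lp))    # translation-insensitive magnitude
    return (spectrum / spectrum.max()).ravel()  # feature vector for the MLP

# Usage with a synthetic image (a real pipeline would feed segmented aircraft images):
img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
feat = log_polar_descriptor(img)
print(feat.shape)  # (4096,)
```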

Identification of cranial nerve ganglia using sectioned images and three-dimensional models of a cadaver

  • Kim, Chung Yoh;Park, Jin Seo;Chung, Beom Sun
    • The Korean Journal of Pain / v.35 no.3 / pp.250-260 / 2022
  • Background: Cranial nerve ganglia, which are prone to viral infections and tumors, are located deep in the head, so their detailed anatomy is difficult to understand through conventional cadaver dissection. To locate these small ganglia in medical images, medical students and doctors need to learn their sectional anatomy. The purpose of this study is to elucidate cranial ganglia anatomy using sectioned images and three-dimensional (3D) models of a cadaver. Methods: One thousand two hundred and forty-six sectioned images of a male cadaver were examined to identify the cranial nerve ganglia. From the real-color sectioned images, a real-color volume model with a voxel size of 0.4 × 0.4 × 0.4 mm was produced. Results: The sectioned images and 3D models can be downloaded for free from a webpage, anatomy.dongguk.ac.kr/ganglia. On the images and models, all the cranial nerve ganglia and their whole courses were identified. In the case of the facial nerve, the geniculate, pterygopalatine, and submandibular ganglia were clearly identified. In the case of the glossopharyngeal nerve, the superior, inferior, and otic ganglia were found. Thanks to the high resolution and real color of the sectioned images and volume models, detailed observation of the ganglia was possible. Since the volume models can be cut in both orthogonal and oblique planes, advanced sectional anatomy of the ganglia can be demonstrated concretely. Conclusions: The sectioned images and 3D models will be helpful resources for understanding cranial nerve ganglia anatomy and for performing related surgical procedures.

Parallel Fuzzy Inference Method for Large Volumes of Satellite Images

  • Lee, Sang-Gu
    • International Journal of Fuzzy Logic and Intelligent Systems / v.1 no.1 / pp.119-124 / 2001
  • In pattern recognition on large volumes of remote-sensing satellite images, the inference time increases substantially. For the remote-sensing data [5] with 4 wavebands, 778 training patterns are learned, and each land-cover pattern is classified using 159,900 patterns, including the trained patterns. For the fuzzy classification, 778 fuzzy rules are generated, each with 4 fuzzy variables in its condition part. Therefore, a high-performance parallel fuzzy inference system is needed. In this paper, we propose a novel parallel fuzzy inference system on the T3E parallel computer, in which the fuzzy rules are distributed and executed simultaneously. The ONE_TO_ALL algorithm is used to broadcast the fuzzy input to all nodes, and the results of the MIN/MAX operations are transferred to the output processor by the ALL_TO_ONE algorithm. By processing the fuzzy rules in parallel, the algorithm extracts rule-matching parallelism and achieves a good speedup factor. The system can be used in large expert systems with many inference variables in the condition and consequent parts. (A sketch of the broadcast/reduce pattern follows this entry.)

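The broadcast-then-reduce structure described above (ONE_TO_ALL for the fuzzy input, local MIN matching, ALL_TO_ONE MAX aggregation) can be sketched with MPI collectives. The fragment below uses mpi4py as a stand-in for the T3E primitives; the rule shapes, triangular membership functions, and class count are assumptions, not the paper's configuration.

```python
# Sketch of the broadcast / local MIN / global MAX pattern with mpi4py.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_RULES, N_VARS, N_CLASSES = 778, 4, 8           # illustrative sizes

# Each rank owns a contiguous slice of the rule base (centers/widths + class id).
rng = np.random.default_rng(rank)
local_rules = N_RULES // size + int(rank < N_RULES % size)
centers = rng.random((local_rules, N_VARS))
widths = np.full((local_rules, N_VARS), 0.2)
rule_class = rng.integers(0, N_CLASSES, local_rules)

# ONE_TO_ALL: the root broadcasts the fuzzy input vector to every node.
x = np.empty(N_VARS)
if rank == 0:
    x[:] = rng.random(N_VARS)
comm.Bcast(x, root=0)

# Local inference: triangular memberships, MIN over the condition part.
mu = np.clip(1.0 - np.abs(x - centers) / widths, 0.0, 1.0)
firing = mu.min(axis=1)
local_max = np.zeros(N_CLASSES)
for f, c in zip(firing, rule_class):
    local_max[c] = max(local_max[c], f)          # MAX aggregation per class

# ALL_TO_ONE: elementwise MAX reduction of class activations onto the output node.
result = np.zeros(N_CLASSES) if rank == 0 else None
comm.Reduce(local_max, result, op=MPI.MAX, root=0)
if rank == 0:
    print("winning class:", int(result.argmax()))
```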

A Straight-Line Detecting Algorithm Using a Self-Organizing Map (자기조직화지도를 이용한 직선 추출 알고리즘)

  • Lee Moon-Kyu
    • Proceedings of the Korean Operations and Management Science Society Conference / 2002.05a / pp.886-893 / 2002
  • The standard Hough transform has been the dominant method for detecting straight lines in an image. However, its massive storage requirement and the low precision of the estimated line parameters, caused by the quantization of the parameter space, are its major drawbacks. To overcome these drawbacks, this paper presents an iterative algorithm based on a self-organizing map. The self-organizing map is adaptively trained so that image points are clustered around prominent lines. Through the procedure, a set of lines is detected sequentially, one at a time. Computational results for synthetically generated images are given, and the promise of the algorithm is also demonstrated on two natural images of inserts. (An illustrative SOM line-fitting sketch follows this entry.)

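One way to read the SOM-based line detection above is a map whose units are lines in normal form (θ, ρ), where the winner is the line nearest each sampled image point and the winner's neighbourhood is nudged toward that point. The sketch below follows that reading; the update rules and parameters are my assumptions, since the abstract does not give the paper's exact algorithm.

```python
# SOM-style clustering of points by lines: each unit is a line (theta, rho) in
# normal form x*cos(theta) + y*sin(theta) = rho; the winner and its neighbours
# are pulled toward each sampled point.
import numpy as np

rng = np.random.default_rng(1)

def som_lines(points, n_units=4, epochs=200, lr=0.05, sigma=1.0):
    theta = rng.uniform(0, np.pi, n_units)          # line orientations
    rho = rng.uniform(0, points.max(), n_units)     # signed distances to origin
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x, y in points[rng.permutation(len(points))]:
            d = x * np.cos(theta) + y * np.sin(theta) - rho     # signed distances
            win = np.argmin(np.abs(d))
            h = np.exp(-((idx - win) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
            rho += lr * h * d                                   # pull lines toward point
            theta -= lr * h * d * (-x * np.sin(theta) + y * np.cos(theta))
    return theta, rho

# Usage: points drawn from two noisy lines; the map should settle near both.
t = rng.uniform(0, 10, 100)
pts = np.vstack([np.c_[t, 0.5 * t + 1], np.c_[t, -t + 8]]) + rng.normal(0, 0.05, (200, 2))
print(som_lines(pts))
```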

Fall Situation Recognition by Body Centerline Detection using Deep Learning

  • Kim, Dong-hyeon;Lee, Dong-seok;Kwon, Soon-kak
    • Journal of Multimedia Information System / v.7 no.4 / pp.257-262 / 2020
  • In this paper, a method of detecting emergency situations such as a body fall is proposed using color images. We detect body areas and key body parts with a pre-trained Mask R-CNN in the images captured by a camera. We then find the centerline of the body from the joint points of both shoulders and both feet, calculate the angle of this centerline, and track how much the angle changes over time. If the angle change exceeds a threshold, the frame is flagged as a suspected fall, and if the suspected-fall state persists for more than a certain number of frames, it is determined to be a fall situation. Simulation results show that the proposed method detects body-fall situations accurately. (A sketch of the centerline-angle test follows this entry.)
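
The fall test above reduces to a centerline angle computed from shoulder and foot keypoints, a threshold on its change, and a persistence check over frames. The sketch below assumes the 2-D keypoints are already available (e.g. from a pre-trained Mask R-CNN keypoint model); the thresholds and frame counts are illustrative, not the paper's values.

```python
# Centerline angle from shoulder/foot keypoints plus a change-and-persistence test.
import numpy as np

ANGLE_CHANGE_DEG = 30.0   # sudden change regarded as a suspected fall (assumed)
PERSIST_FRAMES = 15       # frames the suspicion must persist (assumed)

def centerline_angle(l_sh, r_sh, l_ft, r_ft):
    """Angle (degrees from vertical) of the line from shoulder midpoint to foot midpoint."""
    top = (np.asarray(l_sh) + np.asarray(r_sh)) / 2.0
    bottom = (np.asarray(l_ft) + np.asarray(r_ft)) / 2.0
    dx, dy = bottom - top
    return np.degrees(np.arctan2(abs(dx), abs(dy)))   # 0 = upright, 90 = horizontal

def detect_fall(angles, tilt_deg=60.0):
    """Suspect a fall on a sudden angle change; confirm it if the tilt persists."""
    suspect_frames = 0
    for prev, cur in zip(angles, angles[1:]):
        if abs(cur - prev) > ANGLE_CHANGE_DEG:
            suspect_frames = 1                      # sudden change: suspected fall
        elif suspect_frames and cur > tilt_deg:
            suspect_frames += 1                     # body still tilted: keep counting
        else:
            suspect_frames = 0
        if suspect_frames >= PERSIST_FRAMES:
            return True
    return False

# Usage: a person upright (~5 deg) who abruptly ends up near horizontal (~80 deg).
angles = [5.0] * 10 + [80.0] * 30
print(detect_fall(angles))   # True
```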

Performance Comparison of Deep Learning Model Loss Function for Scaffold Defect Detection (인공지지체 불량 검출을 위한 딥러닝 모델 손실 함수의 성능 비교)

  • Song Yeon Lee;Yong Jeong Huh
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.40-44 / 2023
  • Deep-learning-based defect detection requires low loss and high accuracy to pinpoint product defects. In this paper, we examine the training loss of deep learning models on disc-shaped artificial scaffold images in order to compare the performance of the cross-entropy functions used in object-detection algorithms. Models were built from normal and defective artificial scaffold images using categorical cross-entropy and sparse categorical cross-entropy. The data were trained five times with each loss function, and the average loss, average accuracy, final loss, and final accuracy for each loss function were compared. (A short illustration of the two loss functions follows this entry.)

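The two losses compared above differ only in the label format they expect: categorical cross-entropy takes one-hot vectors, sparse categorical cross-entropy takes integer class ids, and both give the same value for equivalent labels. The Keras fragment below illustrates this; the toy model, image size, and class count are assumptions, not the paper's setup.

```python
# Categorical vs. sparse categorical cross-entropy: same loss, different label encoding.
import numpy as np
import tensorflow as tf

y_int = np.array([0, 1, 1, 0])                 # integer labels (e.g. normal / defective)
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=2)
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]])

cce = tf.keras.losses.CategoricalCrossentropy()
scce = tf.keras.losses.SparseCategoricalCrossentropy()
print(float(cce(y_onehot, probs)), float(scce(y_int, probs)))  # identical values

# Swapping the loss in a classifier only changes the label format fed to fit():
def build_model(loss):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
    return model

m_cce = build_model("categorical_crossentropy")          # train with y_onehot
m_scce = build_model("sparse_categorical_crossentropy")  # train with y_int
```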

Implementation of the Stone Classification with AI Algorithm Based on VGGNet Neural Networks (VGGNet을 활용한 석재분류 인공지능 알고리즘 구현)

  • Choi, Kyung Nam
    • Smart Media Journal / v.10 no.1 / pp.32-38 / 2021
  • Image classification of photographs through deep learning has been a very active research field for the past several years. In this paper, we propose a method for automatically discriminating stone images from domestic sources through deep learning. Python's hash library is used to scan 300×300 pixel photographs of granites such as Hwangdeungseok, Goheungseok, and Pocheonseok; during preprocessing, duplicate images of each stone are identified by their hash values and removed, producing the training images, and a deep network is then trained for each stone type. To utilize VGGNet, the images of each stone are resized to 224×224 pixels and trained with VGG16, with an 80% to 20% split between training and validation data. After training, the loss and accuracy graphs were generated, and the prediction results of the model were output for the three kinds of stone images. (A sketch of the hash-deduplication and VGG16 pipeline follows this entry.)
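
The preprocessing and training pipeline above combines hash-based duplicate removal with VGG16 training on 224×224 images and an 80/20 train/validation split. The sketch below shows one way to realize that with hashlib and Keras; the directory layout, MD5 hash, frozen-base transfer-learning setup, and training settings are my assumptions, not the paper's.

```python
# Drop duplicate photos by file hash, then fine-tune VGG16 with an 80/20 split.
import hashlib, pathlib
import tensorflow as tf

def remove_duplicates(image_dir):
    """Keep one file per unique hash value; delete the rest."""
    seen = set()
    for path in sorted(pathlib.Path(image_dir).glob("*/*.jpg")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()          # duplicate image: remove it
        else:
            seen.add(digest)

def build_stone_classifier(n_classes=3):
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False         # transfer learning: freeze the convolutional base
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (assumes images sorted into one sub-folder per stone type):
# remove_duplicates("stones/")
# train = tf.keras.utils.image_dataset_from_directory(
#     "stones/", validation_split=0.2, subset="training", seed=1, image_size=(224, 224))
# val = tf.keras.utils.image_dataset_from_directory(
#     "stones/", validation_split=0.2, subset="validation", seed=1, image_size=(224, 224))
# history = build_stone_classifier().fit(train, validation_data=val, epochs=10)
```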

Simulation and Colorization between Gray-scale Images and Satellite SAR Images Using GAN (GAN을 이용한 흑백영상과 위성 SAR 영상간의 모의 및 컬러화)

  • Jo, Su Min;Heo, Jun Hyuk;Eo, Yang Dam
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.1 / pp.125-132 / 2024
  • Optical satellite images are used for national security and intelligence gathering, and their utilization is increasing. However, weather conditions and time constraints often yield low-quality images that do not meet user requirements. In this paper, a deep-learning-based image conversion and colorization model that refers to high-resolution SAR images was created to simulate the cloud-occluded areas of optical satellite images. The model was tested with different algorithms and input data, and the simulated images were compared and analyzed. In particular, the amount of pixel-value information in the input black-and-white image was constructed to be similar to that of the SAR image, to overcome the relative lack of color information. In the experiments, the histogram distribution of the image simulated from the gray-scale image and the high-resolution SAR image was relatively similar to that of the original image. For quantitative analysis, the RMSE was about 6.9827 and the PSNR about 31.3960. (A sketch of the RMSE/PSNR computation follows this entry.)
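
A full GAN model is beyond a short sketch, but the quantitative comparison quoted above (RMSE ≈ 6.98, PSNR ≈ 31.40 for 8-bit imagery, plus a histogram comparison) follows standard definitions that are easy to reproduce. The fragment below computes them for any simulated/original pair; the random stand-in arrays and the correlation-based histogram measure are assumptions.

```python
# Standard RMSE, PSNR, and a simple histogram-shape comparison for 8-bit images.
import numpy as np

def rmse(original, simulated):
    diff = original.astype(np.float64) - simulated.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, simulated, max_value=255.0):
    e = rmse(original, simulated)
    return np.inf if e == 0 else 20 * np.log10(max_value / e)

def histogram_similarity(original, simulated, bins=256):
    h1, _ = np.histogram(original, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(simulated, bins=bins, range=(0, 255), density=True)
    return np.corrcoef(h1, h2)[0, 1]   # 1.0 = identical histogram shapes

# Usage with random stand-in arrays (real inputs would be the GAN output and the
# original optical image over the reference area):
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (256, 256), dtype=np.uint8)
simulated = np.clip(original + rng.normal(0, 7, (256, 256)), 0, 255).astype(np.uint8)
print(rmse(original, simulated), psnr(original, simulated),
      histogram_similarity(original, simulated))
```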

Hierarchical Neural Network for Real-time Medicine-bottle Classification (실시간 약통 분류를 위한 계층적 신경회로망)

  • Kim, Jung-Joon;Kim, Tae-Hun;Ryu, Gang-Soo;Lee, Dae-Sik;Lee, Jong-Hak;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.3 / pp.226-231 / 2013
  • In automatic drug packaging, a matching algorithm is essential to determine whether a canister is being refilled with exactly the right medicine. In this paper, we propose a hierarchical neural network with upper and lower layers that can classify many types of medicine bottles in real time, preventing accidental medication errors. A small number of low-dimensional feature vectors are extracted from the label images carrying the medicine-bottle information. The lower-layer MLP (multi-layer perceptron) networks are trained on these feature vectors, and the outputs of their learned hidden layers are then used as the inputs for training the upper-layer MLP. The proposed hierarchical neural network shows good classification performance and real-time operation in tests on images of 100 different medicine bottles rotated up to 30 degrees to the left and right. (A sketch of the two-stage MLP structure follows this entry.)
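
The hierarchical scheme above trains a lower MLP on the low-dimensional label features and an upper MLP on the lower network's hidden-layer outputs. The Keras sketch below follows that structure; the feature size, layer widths, class count, and training settings are assumptions, not the paper's configuration.

```python
# Two-stage (lower/upper) MLP: the upper network is trained on the lower network's
# hidden-layer activations rather than on the raw feature vectors.
import numpy as np
import tensorflow as tf

N_FEATURES, N_CLASSES = 32, 100
rng = np.random.default_rng(0)
X = rng.random((500, N_FEATURES)).astype("float32")      # stand-in label features
y = rng.integers(0, N_CLASSES, 500)

# Lower layer: an MLP trained directly on the feature vectors.
inp = tf.keras.Input(shape=(N_FEATURES,))
hidden = tf.keras.layers.Dense(64, activation="relu", name="lower_hidden")(inp)
out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(hidden)
lower = tf.keras.Model(inp, out)
lower.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
lower.fit(X, y, epochs=3, verbose=0)

# Upper layer: trained on the lower MLP's hidden-layer outputs.
encoder = tf.keras.Model(inp, hidden)                     # exposes the hidden layer
H = encoder.predict(X, verbose=0)
upper = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
upper.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
upper.fit(H, y, epochs=3, verbose=0)

# At run time a bottle's feature vector passes through both stages:
pred = upper.predict(encoder.predict(X[:1], verbose=0), verbose=0).argmax()
print("predicted bottle class:", int(pred))
```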