• Title/Summary/Keyword: Automated Feature Extraction


Medical Image Analysis Using Artificial Intelligence

  • Yoon, Hyun Jin;Jeong, Young Jin;Kang, Hyun;Jeong, Ji Eun;Kang, Do-Young
    • Progress in Medical Physics
    • /
    • v.30 no.2
    • /
    • pp.49-58
    • /
    • 2019
  • Purpose: Automated analytical systems have emerged that scan medical images by computer and build them into big-data databases. Deep-learning artificial intelligence (AI) architectures have been developed and applied to medical images, making high-precision diagnosis possible. Materials and Methods: For diagnosis, medical images must be labeled and standardized. After the data are pre-processed and entered into the deep-learning architecture, the final diagnosis can be obtained quickly and accurately. To address overfitting caused by an insufficient amount of labeled data, data augmentation is performed through rotations and left-right flips to artificially increase the amount of data. Because various deep-learning architectures have been developed and published over the past few years, diagnostic results can be obtained simply by entering a medical image. Results: Classification and regression are performed by supervised machine-learning methods, while clustering and generation are performed by unsupervised methods. When the convolutional neural network (CNN) method is applied in the deep-learning layers, feature extraction can be used to classify diseases very efficiently and thus to diagnose various diseases. Conclusions: AI using deep-learning architectures has reached expert-level performance in medical image analysis of the nerves, retina, lungs, digital pathology, breast, heart, abdomen, and musculoskeletal system.
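The rotation and left-right flip augmentation the abstract describes can be sketched in a few lines. This is a generic illustration in plain Python, with a 2-D list standing in for an image; it is not the authors' pipeline:

```python
def augment(image):
    """Return simple augmented variants of a 2-D image (list of rows):
    the original, a left-right flip, an up-down flip, and a 90-degree
    clockwise rotation - the kind of geometric augmentation used to
    artificially enlarge a labeled training set."""
    flip_lr = [row[::-1] for row in image]               # mirror each row
    flip_ud = [row[:] for row in image[::-1]]            # reverse row order
    rot_90 = [list(col) for col in zip(*image[::-1])]    # 90 deg clockwise
    return [image, flip_lr, flip_ud, rot_90]

variants = augment([[1, 2],
                    [3, 4]])
```

In practice a library routine (e.g., `numpy.rot90`) would be applied to real image arrays, but the geometry is the same.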

Technical Trend Analysis of Fingerprint Classification (지문분류 기술 동향 분석)

  • Jung, Hye-Wuk;Lee, Seung
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.9
    • /
    • pp.132-144
    • /
    • 2017
  • Fingerprint classification, which divides fingerprints into predefined classes, is needed to improve the processing speed and accuracy of fingerprint recognition systems that use large databases. Fingerprint classification methods extract features from the fingerprint ridges and classify the fingerprint using learning and reasoning techniques, based on classes defined according to the flow and shape of the ridges. Early research relied on the NIST database, acquired by pressing or rolling fingers against paper. However, as automated fingerprint recognition systems using live-scan scanners have become popular, research using fingerprint images obtained by live-scan scanners, such as the fingerprint data provided by FVC, is increasing. Recently, fingerprint classification methods using deep learning have also been proposed. In this paper, we survey trends in fingerprint classification technology and compare the classification performance of the techniques. By noting the need for research that considers live-scan fingerprint images, and by analyzing fingerprint classification using deep learning, we aim to assist fingerprint classification research in improving performance as fingerprint databases continue to grow.
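As a toy illustration of class-based fingerprint classification, the sketch below assigns a feature vector to the nearest of five class centroids (the common five-class scheme: arch, tented arch, left loop, right loop, whorl). The feature values and centroids are invented for the example and do not come from any surveyed method:

```python
# Hypothetical 2-D feature vectors (e.g., summary statistics of ridge
# orientation flow); real systems use much richer features.
CENTROIDS = {
    "arch":        (0.1, 0.9),
    "tented_arch": (0.3, 0.7),
    "left_loop":   (0.6, 0.4),
    "right_loop":  (0.8, 0.3),
    "whorl":       (0.9, 0.1),
}

def classify(features):
    """Nearest-centroid classification: pick the class whose centroid
    minimizes squared Euclidean distance to the feature vector."""
    def dist2(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(CENTROIDS, key=lambda name: dist2(CENTROIDS[name]))

label = classify((0.85, 0.25))
```

A classifier like this only illustrates the class-assignment step; the surveyed methods differ mainly in how the features are extracted and learned.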

Accuracy Estimation of Electro-optical Camera (EOC) on KOMPSAT-1

  • Park, Woon-Yong;Hong, Sun-Houn;Song, Youn-Kyung
    • Korean Journal of Geomatics
    • /
    • v.2 no.1
    • /
    • pp.47-55
    • /
    • 2002
  • Remote sensing is the science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the subject under investigation. The Electro-Optical Camera (EOC) sensor aboard KOMPSAT-1 (Korea Multi-Purpose Satellite-1) performs Earth remote sensing. The EOC acquires high-resolution images with a ground distance of 6.6 m, and tilt images can be obtained by tilting the satellite body up to 45 degrees; accordingly, the approach developed in this study obtains one pair of tilt images of the same point from two different planes. KOMPSAT-1 aims to produce a high-resolution Korean map at a scale of 1:25,000. An automated feature extraction system based on stereo satellite images was developed for KOMPSAT-1; it effectively overcomes the limitations of the sensor and the difficulties associated with preprocessing. Using 6, 7, and 9 ground control points evenly spread over the image, three-dimensional positioning at 95% reliability for horizontal and vertical position achieved accuracies of 6.0752 m and 9.8274 m, so the KOMPSAT-1 design accuracy of less than 10 m was met. The ground position error of the ortho-image at 95% reliability is 17.568 m, and the elevation error is 36.82 m; the poorer elevation accuracy, compared with the positioning accuracy obtained from stereo images, was attributed to the image matching system. The ortho-image approach is advantageous when an accurate altitude and the production of a digital elevation model are desired. The Korean map drawn at a scale of 1:25,000 using the KOMPSAT-1 EOC technique adopted in the present study produces accurate results compared with existing mapping techniques, which involve high costs and lower efficiency.
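The reported figures are position errors at 95% reliability. A minimal sketch of how such a bound can be computed from check-point residuals, under the common assumption of roughly normal, zero-mean errors (the paper's exact estimation procedure may differ):

```python
import math

def rmse(residuals):
    """Root-mean-square error of positioning residuals (metres)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def error_at_95(residuals):
    """A common convention for a one-dimensional 95%-reliability bound:
    1.96 x RMSE, assuming approximately normal, zero-mean residuals."""
    return 1.96 * rmse(residuals)

# Illustrative horizontal residuals (metres) at four check points.
horizontal = [3.0, -4.0, 3.0, -4.0]
h_rmse = rmse(horizontal)
h_95 = error_at_95(horizontal)
```

Other conventions (e.g., CE90/LE90) use different multipliers; the point is only that the "95% reliability" numbers are residual statistics, not raw measurement errors.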


A Study on Automation about Painting the Letters to Road Surface

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.1
    • /
    • pp.75-84
    • /
    • 2018
  • In this study, the researchers attempted to automate, using information and communication technology, the process of painting characters on road surfaces, which is currently done by manual labor. First, we reviewed the current regulations on painting letters and characters on roads, with reference to the Road Mark Installation Management Manual of the National Police Agency. For the graphemes, we adopted a new representation based on connected components of Gothic print characters, within the range permitted by the manual. The automated program recognizes graphemes by means of feature dots: isolated dots, end dots, dots where two lines meet, and dots where three or more lines meet. For the database, we built a grapheme database of plotting information, classified characters by the arrangement of their graphemes and the layers the graphemes form within each character, and from these data constructed a character-shape information database for character plotting. The layers and arrangement of the graphemes composing a character were measured using the position of each grapheme's center of gravity and the grapheme information acquired through vertical exploration from that center of gravity. Characters were recognized by identifying and comparing the database group to which each character belongs using this information. Input characters were analyzed with this method and database, converted into plotting information, and plotted after correction.
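The end-dot and line-meeting "feature dots" used to recognize graphemes can be illustrated by counting 8-connected neighbours on a binary stroke skeleton. This is a generic sketch of that idea, not the paper's exact grapheme rules:

```python
def classify_feature_dots(skeleton):
    """Label each 'on' pixel of a binary skeleton (list of 0/1 rows) by
    its number of 8-connected 'on' neighbours: 0 -> isolated dot,
    1 -> end dot, >= 3 -> junction dot (two or more lines meeting).
    Pixels with exactly 2 neighbours are ordinary line points."""
    rows, cols = len(skeleton), len(skeleton[0])
    labels = {}
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            n = sum(
                skeleton[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            )
            if n == 0:
                labels[(r, c)] = "isolated"
            elif n == 1:
                labels[(r, c)] = "end"
            elif n >= 3:
                labels[(r, c)] = "junction"
    return labels

# A tiny T-shaped stroke: the foot of the stem is an end dot,
# the crossing pixels are junction dots.
T = [[1, 1, 1],
     [0, 1, 0],
     [0, 1, 0]]
labels = classify_feature_dots(T)
```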

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.288-296
    • /
    • 2021
  • Automated face recognition in a runtime environment is gaining more and more importance in surveillance and urban security. This is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; however, recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition in which the full face is not visible (occlusion). This is a common occurrence, as any person can change their appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. Although these variations are very common in real-life environments, they have received comparatively little study in the literature, and only recently have researchers focused on them. Existing state-of-the-art techniques suffer from several limitations, most significantly low usability and poor response time. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are extracted, and an incremental learning algorithm (Learn++) selects the more informative features. A trained stacked autoencoder (SAE) deep learning algorithm is then used to recognize the face. Overall, the FRS-OCC system introduces algorithms that improve response time so as to guarantee a benchmark quality of service in any situation. The AR face dataset is used to test and evaluate the performance of the proposed system. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving a sensitivity (SE) of 98.82%, specificity (SP) of 98.49%, accuracy (AC) of 98.76%, and an AUC of 0.9995. The results indicate that the FRS-OCC system can be used in any surveillance application.
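A minimal illustration of the idea behind occlusion-invariant matching is to compare faces only over unoccluded regions. The toy masked distance below stands in for the actual FRS-OCC pipeline (colour/texture features, Learn++ feature selection, stacked autoencoder), which the abstract does not specify in code-level detail:

```python
def masked_distance(face_a, face_b, mask):
    """Mean absolute pixel difference between two images (lists of rows),
    computed only where mask == 1 (unoccluded). Occluded regions - a
    scarf, sunglasses - are simply excluded from the comparison."""
    pairs = [
        (a, b)
        for row_a, row_b, row_m in zip(face_a, face_b, mask)
        for a, b, m in zip(row_a, row_b, row_m)
        if m
    ]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

a_img = [[10, 20],
         [30, 40]]
b_img = [[10, 20],
         [99, 40]]            # same face, one corrupted (occluded) pixel
occluded = [[1, 1],
            [0, 1]]           # mask out the corrupted pixel
d = masked_distance(a_img, b_img, occluded)
```

With the occluded pixel masked out the two faces match exactly; comparing all pixels would report a large spurious difference.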

Structural health monitoring data anomaly detection by transformer enhanced densely connected neural networks

  • Jun, Li;Wupeng, Chen;Gao, Fan
    • Smart Structures and Systems
    • /
    • v.30 no.6
    • /
    • pp.613-626
    • /
    • 2022
  • Guaranteeing the quality and integrity of structural health monitoring (SHM) data is very important for an effective assessment of structural condition. However, sensing systems may malfunction due to sensor faults or harsh operational environments, so multiple types of data anomaly can exist in the measured data. Efficiently and automatically identifying anomalies in the vast amounts of measured data is significant for assessing structural conditions and for early warning of structural failure in SHM. A major challenge for current automated data anomaly detection methods is the imbalance of dataset categories. Considering the characteristics of actual anomalous data, this paper proposes a data anomaly detection method for the SHM of civil engineering structures based on a data-level technique and deep learning. The proposed method consists of a data balancing phase, which prepares a comprehensive training dataset using the data-level technique, and an anomaly detection phase based on a carefully designed network. The advanced densely connected convolutional network (DenseNet) and a Transformer encoder are embedded in this network to facilitate extraction of both detailed and global features of the response data, and to establish the mapping between the highest-level abstract features and the data anomaly class. Numerical studies on a steel frame model are conducted to evaluate the performance and noise immunity of the proposed network for data anomaly detection. The applicability of the proposed method to data anomaly classification is validated with measured data from a practical supertall structure. The proposed method performs remarkably well, reaching 95.7% overall accuracy on practical engineering structural monitoring data, which demonstrates the effectiveness of the data balancing and the robust classification capability of the proposed network.
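The data-level balancing phase can be sketched as random oversampling of minority classes until every class matches the largest one. This is a generic illustration, since the abstract does not detail the paper's exact balancing technique:

```python
import random

def balance_by_oversampling(samples_by_class, seed=0):
    """Data-level class balancing: for each class smaller than the
    largest, draw extra samples with replacement until all classes
    have equal size. Returns a new dict; input is not modified."""
    rng = random.Random(seed)
    target = max(len(s) for s in samples_by_class.values())
    balanced = {}
    for label, samples in samples_by_class.items():
        extra = [rng.choice(samples) for _ in range(target - len(samples))]
        balanced[label] = list(samples) + extra
    return balanced

# Anomaly categories in SHM data are typically heavily imbalanced:
data = {"normal": [1, 2, 3, 4, 5, 6], "drift": [7, 8], "spike": [9]}
balanced = balance_by_oversampling(data)
```

More elaborate data-level techniques (e.g., synthetic sample generation) follow the same goal: equalizing category frequencies before training.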

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.420-426
    • /
    • 2022
  • Breast cancer is among the cancers that can be cured if the disease is diagnosed early, before it has spread through the body. The Automatic Analysis of Diagnostic Tests (AAT) is automated assistance for physicians that can deliver reliable findings for analyzing critically dangerous diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search for and render diagnoses in fields from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to label each image as cancer or not cancer, allowing training with either strong clinical annotations or the cancer status alone, in which only a few images of cancers or non-cancers are annotated. Even with this technique, the images are annotated with the condition, and an optional portion of the annotated image then acts as the mark; the final stage of the suggested system does not need any such labels to be available during model training. Furthermore, the results of the review suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains three ways in which learning algorithms were applied: training the network from scratch, transplanting certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained networks, functions that help expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer incidence has risen sharply in Africa. A convolutional neural network (CNN) is a kind of deep learning that can assist with a variety of activities, such as speech recognition, image recognition, and classification. To accomplish this goal, in this article we use a CNN to categorize and identify breast cancer images from the databases available from the US Centers for Disease Control and Prevention.
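The feature-extraction core of a CNN is the convolution layer. Below is a minimal valid-mode 2-D convolution (cross-correlation, as deep-learning layers actually compute it) over plain Python lists; real frameworks add many stacked, learned kernels plus nonlinearities and pooling:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the
    image and sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh)
                for v in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel responds strongly where intensity jumps
# from left to right - exactly the kind of low-level feature a CNN
# learns in its first layers.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edges = conv2d(img, [[-1, 1],
                     [-1, 1]])
```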

Automated Algorithm for Super Resolution(SR) using Satellite Images (위성영상을 이용한 Super Resolution(SR)을 위한 자동화 알고리즘)

  • Lee, S-Ra-El;Ko, Kyung-Sik;Park, Jong-Won
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.2
    • /
    • pp.209-216
    • /
    • 2018
  • High-resolution satellite imagery is used in diverse fields such as meteorological observation, topographic observation, remote sensing (RS), military facility monitoring, and the protection of cultural heritage. Low-resolution imagery can occur depending on hardware conditions (e.g., the optical system, satellite operating altitude, or image sensor) even when the images are obtained from the same satellite imaging system, and once a satellite is launched, the imaging system cannot be adjusted to improve the resolution of degraded images. There must therefore be a way to improve resolution using the satellite imagery itself. In this study, a super resolution (SR) algorithm was adopted to improve resolution using such low-resolution satellite imagery. The SR algorithm enhances image resolution by registering and combining multiple low-resolution images. With satellite imagery, however, it is difficult to obtain several images of the same region. To address this problem, this study performed the SR algorithm after compensating for geometric changes between images through automatic feature point extraction and projective transformation. As a result, clear edges were obtained, just as in SR results in which the feature points were selected manually.
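The projective transform used to register the low-resolution frames can be applied to matched feature points as follows. This sketch only applies a given 3x3 homography; estimating the matrix from point correspondences (the part the paper automates via feature extraction) is omitted:

```python
def apply_homography(H, points):
    """Map 2-D points through a 3x3 projective transform H:
    lift (x, y) to homogeneous coordinates, multiply by H, then
    divide by the resulting w to return to the image plane."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))
    return out

# A pure translation by (2, 3) expressed as a homography - the
# simplest geometric change between two frames of the same scene.
T = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
shifted = apply_homography(T, [(0, 0), (1, 1)])
```

After every frame is mapped into a common coordinate system this way, the SR step can fuse the overlapping samples into a higher-resolution image.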

Development of a Web-based Presentation Attitude Correction Program Centered on Analyzing Facial Features of Videos through Coordinate Calculation (좌표계산을 통해 동영상의 안면 특징점 분석을 중심으로 한 웹 기반 발표 태도 교정 프로그램 개발)

  • Kwon, Kihyeon;An, Suho;Park, Chan Jung
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.2
    • /
    • pp.10-21
    • /
    • 2022
  • Other than observation by colleagues or professors, there are few automated methods for improving formal presentation attitudes, such as those needed for job interviews or project-result presentations at a company. Previous studies have reported that a speaker's stable speech and gaze handling affect delivery in a presentation, and that appropriate feedback on one's presentation increases the presenter's ability to present. In this paper, considering these positive aspects of correction, we developed a program that intelligently corrects the poor presentation habits and attitudes of college students through facial analysis of videos, and we analyzed the proposed program's performance. The program was developed as a web-based tool that checks the use of redundant words, performs facial recognition, and transcribes the presentation contents. To this end, an artificial intelligence classification model was developed; after extracting the video object, facial feature points are recognized based on their coordinates. Then, using 4,000 facial data samples, the performance of our algorithm was compared with facial recognition using a Teachable Machine. The program can help presenters by correcting their presentation attitude.
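Coordinate calculation on facial feature points can be illustrated with a centroid-drift measure across video frames. The metric below is hypothetical, chosen only to show the kind of per-frame coordinate arithmetic involved; it is not the paper's correction rule:

```python
def centroid(points):
    """Centre of gravity of a set of (x, y) facial feature points."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

def drift(frames):
    """Frame-to-frame movement of the landmark centroid (Manhattan
    distance) - a simple proxy for how steadily the presenter holds
    their head while speaking."""
    centroids = [centroid(f) for f in frames]
    return [
        abs(b[0] - a[0]) + abs(b[1] - a[1])
        for a, b in zip(centroids, centroids[1:])
    ]

# Two frames of the same two landmarks; the face shifts right by 1.
frames = [[(0, 0), (2, 0)],
          [(1, 0), (3, 0)]]
movement = drift(frames)
```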

Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun;Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.41-43
    • /
    • 2022
  • To prevent and block infectious diseases such as those caused by the recent COVID-19 pandemic, non-contact technology for acquiring and analyzing biometric information is attracting attention. Invasive and attached acquisition methods measure biometric information accurately but carry a risk of spreading contagious disease through close contact. To solve this problem, non-contact methods of extracting biometric information such as fingerprints, faces, irises, veins, voices, and signatures with automated devices are spreading across various industries as data processing speeds and recognition accuracy increase. However, even though the accuracy of non-contact acquisition has improved, non-contact measurement is strongly influenced by the surrounding environment of the measured subject, resulting in distorted measurements and poor accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric analysis. The technique presents a model that considers context and user information during biometric measurement in order to improve performance. The proposed model analyzes signal information based on the feature probability distribution, through context-based signal analysis that maximizes the probability of the predicted value.
