• Title/Summary/Keyword: Deep Learning System


A Study on Field Compost Detection by Using Unmanned Aerial Vehicle Image and Semantic Segmentation Technique based Deep Learning (무인항공기 영상과 딥러닝 기반의 의미론적 분할 기법을 활용한 야적퇴비 탐지 연구)

  • Kim, Na-Kyeong;Park, Mi-So;Jeong, Min-Ji;Hwang, Do-Hyun;Yoon, Hong-Joo
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.367-378 / 2021
  • Field compost is a representative non-point pollution source from livestock farming. If field compost flows into the water system due to rainfall, nutrients such as phosphorus and nitrogen contained in it can adversely affect river water quality. In this paper, we propose a method for detecting field compost using unmanned aerial vehicle images and deep learning-based semantic segmentation. Based on 39 ortho-images acquired in the study area, about 30,000 training samples were obtained through data augmentation. Accuracy was then evaluated by applying a semantic segmentation algorithm developed on the basis of U-Net together with an OpenCV filtering technique. In the accuracy evaluation, pixel accuracy was 99.97%, precision was 83.80%, recall was 60.95%, and the F1-score was 70.57%. The low recall relative to precision is due to the underestimation of compost pixels when compost occupies only a small proportion of pixels at the edges of the image. Accuracy could be further improved by combining additional data sets containing spectral bands beyond RGB.
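
A minimal sketch of the kind of pipeline this abstract describes: a small U-Net-style encoder-decoder for binary (compost vs. background) segmentation of RGB ortho-image tiles, followed by an OpenCV morphological filter on the predicted mask. The layer sizes, the 256x256 tile size, and the filtering kernel are illustrative assumptions, not the paper's actual settings.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    b  = conv_block(p2, 128)                                # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)       # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)       # skip connection
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # binary compost mask
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def clean_mask(prob_mask, threshold=0.5):
    """Binarize the predicted mask and remove small speckles with OpenCV."""
    binary = (prob_mask > threshold).astype(np.uint8)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```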

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / Fall / pp.290-293 / 2021
  • In this paper, we implement a learning efficiency verification system that inspires learning motivation and helps improve concentration by detecting the situation of a studying user. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body through a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After detecting the relevant features of the subject with the CNN, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages whenever one of the monitored behaviors is interrupted. In addition, each function can be executed from the main GUI screen, which provides statistical graphs computed from the collected data, a to-do list, and white noise playback. Through this learning efficiency checking system, various functions including data collection and analysis are provided to users.
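
A hedged sketch of only the motion-detection step of such a system: simple frame differencing on a live camera feed, flagging frames whose changed-pixel ratio exceeds a threshold. The CNN-based face/body detection, Jetson deployment, PyQt5 GUI, and push-message plumbing described in the paper are omitted; the threshold values are illustrative assumptions.

```python
import cv2

MOTION_THRESHOLD = 0.02  # assumed fraction of changed pixels that counts as motion

cap = cv2.VideoCapture(0)                      # default webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                        # change w.r.t. previous frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarize the difference
    motion_ratio = cv2.countNonZero(mask) / mask.size
    if motion_ratio > MOTION_THRESHOLD:
        print("movement detected - log event / send push message here")
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```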

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil;Hwang, Seung-Jun;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.19 no.6 / pp.484-491 / 2015
  • The availability of aerial imagery has greatly increased with the growth of digital optical imaging technology and the development of UAVs. Aerial images are used for extracting ground properties, classification, change detection, image fusion, and mapping. In particular, deep learning has introduced a new paradigm in image analysis that overcomes the limitations of traditional pattern recognition. This paper demonstrates the applicability of deep learning (ConvNet) to the segmentation and classification of aerial scenes across a wide range of fields. We build a four-class image database of 3,000 images consisting of Road, Building, Yard, and Forest. Each class exhibits a distinct pattern, so the resulting feature vector maps differ. Our system consists of feature extraction, classification, and training. Feature extraction is built from two ConvNet-based layers, and classification is then performed with a multilayer perceptron and logistic regression.
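
A rough sketch of the pipeline described here: two convolutional feature-extraction stages followed by a multilayer perceptron head whose softmax output plays the role of the (multinomial) logistic regression classifier over the four scene classes. Filter counts, the 64x64 patch size, and the optimizer are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # Road, Building, Yard, Forest

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    # feature extraction: two convolution + pooling stages
    layers.Conv2D(32, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 5, activation="relu"),
    layers.MaxPooling2D(),
    # classification: MLP head ending in a softmax (logistic regression) layer
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```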

Animal Face Classification using Dual Deep Convolutional Neural Network

  • Khan, Rafiul Hasan;Kang, Kyung-Won;Lim, Seon-Ja;Youn, Sung-Dae;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.23 no.4 / pp.525-538 / 2020
  • A practical animal face classification system that classifies animals in image and video data is considered a pivotal topic in machine learning. In this research, we propose a novel fully connected dual Deep Convolutional Neural Network (DCNN) that extracts and analyzes image features on a large scale. With the inclusion of state-of-the-art Batch Normalization and Exponential Linear Unit (ELU) layers, the proposed DCNN can analyze a large dataset and extract more features than before. For this research, we built a dataset containing ten thousand animal faces across ten animal classes and a dual DCNN. The significance of our network is that it has four sets of convolutional functions that work laterally with each other. We used a relatively small batch size and a large number of iterations to mitigate overfitting during training. We also used image augmentation to vary the shapes of the training images for a better learning process. The results demonstrate that, with an accuracy of 92.0%, the proposed DCNN outperforms its counterparts while incurring lower computing costs.
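
A minimal sketch of a dual-branch classifier in the spirit of this abstract: two parallel convolutional branches with Batch Normalization and ELU activations whose features are merged before the fully connected classifier. Branch depth, filter counts, and the 128x128 input size are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 10  # ten animal classes

def branch(x, filters):
    """One lateral convolutional branch with BatchNorm + ELU blocks."""
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ELU()(x)
        x = layers.MaxPooling2D()(x)
    return layers.Flatten()(x)

inputs = layers.Input(shape=(128, 128, 3))
a = branch(inputs, [32, 64])   # first lateral branch
b = branch(inputs, [32, 64])   # second lateral branch
merged = layers.concatenate([a, b])

x = layers.Dense(256)(merged)
x = layers.ELU()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```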

Design and Implementation of Hashtag Recommendation System Based on Image Label Extraction using Deep Learning (딥러닝을 이용한 이미지 레이블 추출 기반 해시태그 추천 시스템 설계 및 구현)

  • Kim, Seon-Min;Cho, Dae-Soo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.4 / pp.709-716 / 2020
  • On social media, posts are usually searched by tags, so the tag information of an image matters when publishing a post. Users want to expose posts to many people by attaching tags, but choosing suitable tags is cumbersome, and many posts end up untagged. In this paper, we propose a method that finds images similar to the input image, extracts the labels attached to those images, finds Instagram posts in which the labels appear as tags, and recommends the other tags found in those posts. In the proposed method, labels are extracted from the image with a convolutional neural network (CNN) deep learning model, and Instagram is crawled with the extracted labels to sort and recommend tags other than the labels themselves. The recommended tags make it easier to post an image, increase search exposure, and yield high accuracy due to fewer search errors.
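
A hedged sketch of the label-extraction step: a pretrained CNN (MobileNetV2 here, as an illustrative stand-in for the paper's model) predicts labels for the input image, and those labels would then drive the crawling of co-occurring hashtags. The collect_tags_for_label callable is a hypothetical placeholder for the crawling step, which is not shown.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")

def extract_labels(img_path, top=3):
    """Return the top-N ImageNet labels predicted for the image."""
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    return [label for (_, label, _) in decode_predictions(preds, top=top)[0]]

def recommend_tags(img_path, collect_tags_for_label):
    """Rank hashtags that co-occur with the extracted labels (crawler injected)."""
    counts = {}
    for label in extract_labels(img_path):
        for tag in collect_tags_for_label(label):  # crawling step, not shown
            counts[tag] = counts.get(tag, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)
```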

The Sentence Similarity Measure Using Deep-Learning and Char2Vec (딥러닝과 Char2Vec을 이용한 문장 유사도 판별)

  • Lim, Geun-Young;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1300-1306 / 2018
  • The purpose of this study is to examine whether Char2Vec can serve as an alternative to Word2Vec, the most widely used word embedding model, for deep learning-based sentence similarity measurement. In the experiments, we used the Siamese Ma-LSTM recurrent neural network architecture to measure the similarity of two arbitrary sentences. The Siamese Ma-LSTM model was implemented with TensorFlow. Each model was trained for 200 epochs in a GPU environment, which took about 20 hours. We then compared the training results of the Word2Vec-based and Char2Vec-based models. The Char2Vec-based model, initialized with random weights, recorded 75.1% validation accuracy, while the Word2Vec-based model, pretrained on 3 million words and phrases, recorded 71.6%. Char2Vec is therefore a suitable alternative to Word2Vec for mitigating the problem of high system memory requirements.
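
A compact sketch of a Siamese Manhattan-LSTM (Ma-LSTM) similarity model of the kind described here: a shared character-level embedding and LSTM encode both sentences, and similarity is exp(-L1 distance) between the two encodings. Vocabulary size, sequence length, and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, MAX_LEN, EMB_DIM, HIDDEN = 2000, 100, 64, 50  # assumed sizes

def manhattan_similarity(vectors):
    """exp(-L1 distance) between the two sentence encodings, in (0, 1]."""
    a, b = vectors
    return tf.exp(-tf.reduce_sum(tf.abs(a - b), axis=1, keepdims=True))

left = layers.Input(shape=(MAX_LEN,))
right = layers.Input(shape=(MAX_LEN,))

# shared character-level embedding and encoder (Char2Vec-style input)
embed = layers.Embedding(VOCAB_SIZE, EMB_DIM)
encoder = layers.LSTM(HIDDEN)

encoded_left = encoder(embed(left))
encoded_right = encoder(embed(right))
similarity = layers.Lambda(manhattan_similarity)([encoded_left, encoded_right])

model = Model([left, right], similarity)
model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])
```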

Development and testing of a composite system for bridge health monitoring utilising computer vision and deep learning

  • Lydon, Darragh;Taylor, S.E.;Lydon, Myra;Martinez del Rincon, Jesus;Hester, David
    • Smart Structures and Systems / v.24 no.6 / pp.723-732 / 2019
  • Globally, road transport networks are subjected to continuous stress from increasing loading and environmental effects. As the most popular means of transport in the UK, the condition of this civil infrastructure is a key indicator of economic growth and productivity. Structural Health Monitoring (SHM) systems can provide valuable insight into the true condition of our aging infrastructure. In particular, monitoring the displacement of a bridge structure under live loading can provide an accurate descriptor of bridge condition. In the past, B-WIM systems have been used to collect traffic data and hence provide an indicator of bridge condition; however, the use of such systems can be restricted by bridge type, access issues, and cost limitations. This research provides a non-contact, low-cost, AI-based solution for vehicle classification and associated bridge displacement using computer vision methods. Convolutional neural networks (CNNs) have been adapted to develop the QUBYOLO vehicle classification method from recorded traffic images. This vehicle classification was then accurately related to the corresponding bridge response obtained under live loading using non-contact methods. The successful identification of multiple vehicle types during field testing has shown that QUBYOLO is suitable for the fine-grained vehicle classification required to identify the load applied to a bridge structure. The process of displacement analysis and vehicle classification for load identification used in this research adds to the body of knowledge on the monitoring of existing bridge structures, particularly long-span bridges, and establishes the significant potential of computer vision and deep learning to provide dependable results on the real response of our infrastructure to existing and potentially increased loading.
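
A hedged sketch of the non-contact displacement idea only: track a high-contrast target region on the bridge across video frames with OpenCV template matching and record its vertical movement in pixels. The QUBYOLO vehicle classifier and the pixel-to-millimetre calibration used in the paper are not reproduced; the video file name and ROI coordinates are hypothetical.

```python
import cv2

cap = cv2.VideoCapture("bridge_test_video.mp4")   # hypothetical recording
ok, first = cap.read()
x, y, w, h = 400, 300, 60, 60                      # assumed target location (pixels)
template = cv2.cvtColor(first[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)

displacements = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)       # best-match top-left corner
    displacements.append(max_loc[1] - y)           # vertical shift in pixels

cap.release()
print("peak displacement (pixels):", max(displacements))
```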

Deep Learning based Image Recognition Models for Beef Sirloin Classification (딥러닝 이미지 인식 기술을 활용한 소고기 등심 세부 부위 분류)

  • Han, Jun-Hee;Jung, Sung-Hun;Park, Kyungsu;Yu, Tae-Sun
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.1-9 / 2021
  • This research examines deep learning-based image recognition models for beef sirloin classification. The sirloin of beef can be classified as the upper sirloin, the lower sirloin, and the ribeye, whereas during the distribution process they are often simply lumped together as the sirloin region. In this work, for detailed classification of beef sirloin regions, we develop a model that can learn image information in a reasonable computation time using the MobileNet algorithm. In addition, to increase the accuracy of the model, we introduce data augmentation methods that amplify the image data collected during the distribution process. This augmentation allows a larger training data set to be considered, which significantly improves the accuracy of the model. The data generated by augmentation were tested using the MobileNet algorithm, where the test data set was obtained from real-world distribution processes. Through computational experiments we confirm that the accuracy of the suggested model is up to 83%. We expect that the classification model of this study can contribute to a more accurate and detailed information exchange between suppliers and consumers during the distribution of beef sirloin.
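
A minimal transfer-learning sketch in the spirit of this abstract: a MobileNet backbone with a new three-class head (upper sirloin, lower sirloin, ribeye) trained on augmented images. The directory layout, image size, and augmentation settings are illustrative assumptions rather than the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                # train only the new head first

x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(3, activation="softmax")(x)    # three sirloin sub-cuts
model = Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# data augmentation to enlarge the training set
augmenter = ImageDataGenerator(rotation_range=20,
                               horizontal_flip=True,
                               zoom_range=0.2,
                               rescale=1.0 / 255)
train_gen = augmenter.flow_from_directory("data/train",   # assumed folder layout
                                          target_size=(224, 224),
                                          class_mode="categorical")
model.fit(train_gen, epochs=10)
```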

Rock Classification Prediction in Tunnel Excavation Using CNN (CNN 기법을 활용한 터널 암판정 예측기술 개발)

  • Kim, Hayoung;Cho, Laehun;Kim, Kyu-Sun
    • Journal of the Korean Geotechnical Society / v.35 no.9 / pp.37-45 / 2019
  • Quick identification of the condition of the tunnel face and optimized determination of support patterns during tunnel excavation in underground construction projects help engineers prevent tunnel collapse and excavate tunnels safely. This study investigates a CNN technique for quick determination of rock quality classification depending on the condition of the tunnel face, and presents a procedure for rock quality classification using a deep learning technique along with an improved method for accurate prediction. The VGG16 model, pretrained on tens of thousands of images, was used for deep learning, and 1,469 tunnel face images were used to classify five types of rock quality condition. In this study, the prediction accuracy of this technique was up to 83.9%. It is expected that this technique can be used for an error-minimizing rock quality classification system that does not depend on experienced professionals for rock quality rating.
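
A hedged sketch of VGG16-based transfer learning for five rock-quality classes, following the general approach this abstract describes: reuse the pretrained convolutional base, attach a new classifier head, and fine-tune only the last convolutional block. Image size, head layer sizes, and the learning rate are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")   # fine-tune last block only

x = layers.Flatten()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(5, activation="softmax")(x)       # five rock-quality classes

model = Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```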

A Development on Deep Learning-based Detecting Technology of Rebar Placement for Improving Building Supervision Efficiency (감리업무 효율성 향상을 위한 딥러닝 기반 철근배근 디텍팅 기술 개발)

  • Park, Jin-Hui;Kim, Tae-Hoon;Choo, Seung-Yeon
    • Journal of the Architectural Institute of Korea Planning & Design / v.36 no.5 / pp.93-103 / 2020
  • The purpose of this study is to suggest a way to improve the efficiency of building supervision using deep learning, especially object detection technology. Since the establishment of the building supervision system in Korea, it has been changed and improved many times systematically, but it is hard to find any improvement in terms of implementation methods. Supervision therefore remains an area that requires a great deal of money, time, and manpower, which leaves room for superficial, perfunctory, document-only supervision that could lead to faulty construction. This study suggests a more automatic and effective way of building supervision that can save time, effort, and money: detecting the hoop bars of a column and counting them automatically. For this study, we built a hoop-bar detection network by transfer learning of a YOLOv2 network in MATLAB. Among many training experiments, the most accurate network was selected; it detected rebar placement in building-site pictures with an accuracy of 92.85% for images similar to those used in training, and 90% or more for new images taken at a specific distance. It was also able to count the hoop bars. The results show the possibility of automated building supervision and its efficiency improvement.
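
The paper builds its detector with MATLAB's YOLOv2 transfer-learning workflow; the sketch below only illustrates the counting step that could sit on top of any YOLO-style detector output: filter detections by confidence, keep the hoop-bar class, and report the count. The detection record format, class name, and threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str      # predicted class name
    score: float    # confidence in [0, 1]
    box: tuple      # (x, y, width, height) in pixels

def count_hoop_bars(detections: List[Detection], min_score: float = 0.5) -> int:
    """Count hoop-bar detections above the confidence threshold."""
    return sum(1 for d in detections
               if d.label == "hoop_bar" and d.score >= min_score)

# example with hypothetical detector output
sample = [Detection("hoop_bar", 0.91, (120, 40, 30, 30)),
          Detection("hoop_bar", 0.48, (118, 160, 31, 29)),   # below threshold
          Detection("hoop_bar", 0.87, (121, 280, 29, 30))]
print(count_hoop_bars(sample))  # -> 2
```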