• Title/Summary/Keyword: deep machine learning

Forecasting the Precipitation of the Next Day Using Deep Learning (딥러닝 기법을 이용한 내일강수 예측)

  • Ha, Ji-Hun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.2 / pp.93-98 / 2016
  • For accurate precipitation forecasts, the choice of weather factors and prediction method is very important. Recently, machine learning has been widely used for forecasting precipitation, and artificial neural networks, one of the machine learning techniques, have shown good performance. In this paper, we suggest a new method for forecasting precipitation using a deep belief network (DBN), one of the deep learning techniques. A DBN has the advantage that its initial weights are set by unsupervised learning, which compensates for the defects of artificial neural networks. We used past precipitation, temperature, and parameters of the sun's and moon's motion as features for forecasting precipitation. The dataset consists of observation data measured over 40 years by an AWS in Seoul. Experiments were based on 8-fold cross validation. Since the model outputs probabilities for the test dataset, a threshold was used to decide whether precipitation occurs. CSI and Bias were used to indicate the precision of the precipitation forecasts. Our experimental results showed that the DBN performed better than an MLP.
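
As an editorial aside, the threshold decision and the CSI/Bias verification scores mentioned in this abstract can be sketched as follows; the 0.5 threshold and the dummy arrays are illustrative, not taken from the paper:

```python
import numpy as np

def csi_and_bias(y_true, y_prob, threshold=0.5):
    """Turn predicted rain probabilities into yes/no forecasts and score them.

    CSI  = hits / (hits + misses + false alarms)
    Bias = (hits + false alarms) / (hits + misses)
    """
    y_pred = (y_prob >= threshold).astype(int)          # threshold decision
    hits = np.sum((y_pred == 1) & (y_true == 1))
    misses = np.sum((y_pred == 0) & (y_true == 1))
    false_alarms = np.sum((y_pred == 1) & (y_true == 0))
    csi = hits / (hits + misses + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return csi, bias

# Dummy labels and probabilities for illustration only
y_true = np.array([1, 0, 1, 1, 0, 0])
y_prob = np.array([0.8, 0.2, 0.4, 0.9, 0.6, 0.1])
print(csi_and_bias(y_true, y_prob, threshold=0.5))
```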

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services / v.20 no.4 / pp.91-102 / 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, after extracting features of pedestrians with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform), people classified them using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We have experimented with both the fixed feature extractor and the fine-tuning methods, which are two representative transfer learning techniques. In particular, for the fine-tuning method, we have added a new scheme, called M-Fine (Modified Fine-tuning), which divides layers into transferred parts and non-transferred parts in three different sizes, and adjusts weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved similar performance to Xception with 80% fewer parameters to learn, was the best in terms of efficiency. Among the three transfer learning schemes tested above, the performance of the fine-tuning method was the best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
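
For readers unfamiliar with the two transfer learning schemes compared here, the following Keras sketch contrasts a fixed feature extractor with partial fine-tuning on a MobileNet backbone; the layer split (last 20 layers), input size, and hyperparameters are illustrative assumptions, not the paper's M-Fine configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained ImageNet backbone without the classification head
base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3), pooling="avg")

# (a) Fixed feature extractor: freeze every transferred layer
base.trainable = False

# (b) Fine-tuning variant: unfreeze only the last N layers
#     (N = 20 is an arbitrary illustration of splitting transferred
#      and non-transferred parts, not the paper's M-Fine split)
# base.trainable = True
# for layer in base.layers[:-20]:
#     layer.trainable = False

model = models.Sequential([
    base,
    layers.Dense(1, activation="sigmoid"),   # pedestrian vs. non-pedestrian
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```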

A Deep Learning Performance Comparison of R and Tensorflow (R과 텐서플로우 딥러닝 성능 비교)

  • Sung-Bong Jang
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.487-494 / 2023
  • In this study, a performance comparison was performed between R and TensorFlow, which are free deep learning tools. In the experiment, six types of deep neural networks were built using each tool, and the neural networks were trained using a 10-year Korean temperature dataset. The number of nodes in the input layer of the constructed neural network was set to 10, the number of nodes in the output layer was set to 5, and the hidden layer was set to 5, 10, and 20 nodes to conduct the experiments. The dataset includes 3600 temperature records collected from Gangnam-gu, Seoul from March 1, 2013 to March 29, 2023. For the performance comparison, the future temperature was predicted for 5 days using the trained neural networks, and the root mean square error (RMSE) was measured between the predicted and actual values. The experimental results show that, with one hidden layer, the learning error of R was 0.04731176 while TensorFlow was measured at 0.06677193, and with two hidden layers, R was measured at 0.04782134 and TensorFlow at 0.05799060. Overall, R was measured to have better performance. By providing quantitative performance information on the two tools, this study aims to ease the difficulty of tool selection for users who are new to machine learning.
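
A minimal TensorFlow/Keras sketch of a network with the shape described above (10 input nodes, one hidden layer, 5 output nodes) together with an RMSE evaluation; the random arrays stand in for the temperature dataset, and the hidden-layer size of 10 is just one of the sizes mentioned:

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays standing in for the temperature dataset
# (10 past values in, 5 future values out)
X_train = np.random.rand(1000, 10).astype("float32")
y_train = np.random.rand(1000, 5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(10, activation="relu"),   # hidden layer (5/10/20 nodes tried in the paper)
    tf.keras.layers.Dense(5),                       # 5-day temperature output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, verbose=0)

pred = model.predict(X_train)
rmse = np.sqrt(np.mean((pred - y_train) ** 2))      # root mean square error
print("RMSE:", rmse)
```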

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H;Park, C.J.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.15-22 / 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as those for style transfer and the generation of non-existing data. Unlike 2D learning, it is not easy to acquire 3D learning data. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation/acquisition of 3D data is still very difficult. Even if 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models such as convolution networks owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

Suggestions for the Development of RegTech Based Ontology and Deep Learning Technology to Interpret Capital Market Regulations (레그테크 기반의 자본시장 규제 해석 온톨로지 및 딥러닝 기술 개발을 위한 제언)

  • Choi, Seung Uk;Kwon, Oh Byung
    • The Journal of Information Systems / v.30 no.1 / pp.65-84 / 2021
  • Purpose: Based on the development of artificial intelligence and big data technologies, RegTech has emerged to reduce regulatory costs and to enable efficient supervision by regulatory bodies. The word RegTech is a combination of regulation and technology, meaning the use of technological methods to facilitate the implementation of regulations and to make the surveillance and supervision of regulations efficient. The purpose of this study is to describe the recent adoption of RegTech and to provide basic examples of applying RegTech to capital market regulations. Design/methodology/approach: English-based ontology and deep learning technologies are quite developed in practice, and it will not be difficult to extend them to European or Latin American languages that are grammatically similar to English. However, it is not easy to use them in most Asian languages, such as Korean, which have different grammatical rules. In addition, in the early stages of adoption, companies, financial institutions, and regulators will not be familiar with this machine-based reporting system. There is a need to establish an ecosystem that facilitates the adoption of RegTech by consulting and supporting the stakeholders. In this paper, we provide a simple example that shows a procedure for applying RegTech to recognize and interpret Korean-language capital market regulations. Specifically, we present the process of converting sentences in regulations into a meta-language through morpheme analysis. We then conduct deep learning analyses to determine whether a regulatory sentence exists in each regulatory paragraph. Findings: This study illustrates the applicability of RegTech-based ontology and deep learning technologies to Korean-based capital market regulations.
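
As a rough illustration of the pipeline described above (morpheme analysis followed by a deep learning classifier that flags regulatory sentences), the sketch below uses KoNLPy's Okt analyzer and a tiny Keras text model; the analyzer choice, placeholder corpus, and architecture are all assumptions, since the paper does not specify its tools:

```python
import tensorflow as tf
from konlpy.tag import Okt

# Placeholder corpus: each entry would be a sentence from a regulatory paragraph,
# with label 1 if it contains a regulatory provision and 0 otherwise.
sentences = ["placeholder regulatory sentence", "placeholder non-regulatory sentence"]
labels = [1, 0]

okt = Okt()
tokenized = [" ".join(okt.morphs(s)) for s in sentences]       # morpheme analysis

vectorizer = tf.keras.layers.TextVectorization(output_mode="int",
                                               output_sequence_length=50)
vectorizer.adapt(tokenized)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=vectorizer.vocabulary_size(), output_dim=64),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # regulatory sentence or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(tf.constant(tokenized), tf.constant(labels), epochs=5)
```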

Defect Diagnosis and Classification of Machine Parts Based on Deep Learning

  • Kim, Hyun-Tae;Lee, Sang-Hyeop;Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.25 no.2_1 / pp.177-184 / 2022
  • Automatic defect sorting of machine parts is being introduced into the automation of manufacturing processes. In the final stage of this automation, it is necessary to apply computer vision rather than human visual judgment to determine whether a part is defective. In this paper, based on experimental results, we introduce deep learning methods to improve the classification performance for typical mechanical parts such as welded parts, galvanized round plugs, and electro-galvanized nuts. For poor welds, increasing the depth of the layers of the basic deep learning model was effective; for the round plugs, the surrounding data outside the defective target area affected the result, which could be resolved through an appropriate pre-processing technique. Finally, for the zinc-plated nuts, images are received from multiple cameras because of their three-dimensional structure, so classification is strongly affected by lighting and also by the background image. To solve this problem, methods such as two-dimensional connectivity were applied in the object segmentation preprocessing step. Although the experiments suggest that the proposed methods are effective, most of the provided good/defective image datasets are relatively small, which may cause a learning balance problem for the deep learning model, so we plan to secure more data in the future.
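
One common way to realize the two-dimensional connectivity idea mentioned in this abstract is connected-component labeling during segmentation preprocessing; the OpenCV sketch below is illustrative only (the file path, Otsu thresholding, and largest-component rule are assumptions, not the paper's exact pipeline):

```python
import cv2
import numpy as np

# Load a grayscale part image (path is a placeholder)
img = cv2.imread("nut_sample.png", cv2.IMREAD_GRAYSCALE)

# Separate the part from the background before classification
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Connected-component labeling with 8-connectivity
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
    binary, connectivity=8)

# Keep only the largest foreground component (the part), dropping background clutter
areas = stats[1:, cv2.CC_STAT_AREA]                 # skip label 0 (background)
largest = 1 + int(np.argmax(areas))
mask = np.where(labels == largest, 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(img, img, mask=mask)
```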

A Model of Recursive Hierarchical Nested Triangle for Convergence from Lower-layer Sibling Practices (하위 훈련 성과 융합을 위한 순환적 계층 재귀 모델)

  • Moon, Hyo-Jung
    • Journal of Digital Contents Society / v.19 no.2 / pp.415-423 / 2018
  • In recent years, computer-based learning, such as machine learning and deep learning, has been attracting attention in the computer field. Such methods start learning from the lowest level and propagate the results to the highest level to calculate the final result. The research literature has shown that systematic learning and growth can yield good results. However, compared to the variety and breadth of research attempts, systematic models of this kind are hard to find. To this end, this paper proposes the first TNT (Transitive Nested Triangle) model, a growth and fusion model that can be used in various contexts. It is a recursive model in which each function, formed geometrically, builds an organic hierarchical relationship, and the results are used again as they grow and converge toward the top. That is, it is an analytical method called 'Horizontal Sibling Merges and Upward Convergence'. This model is applicable in various contexts; in this study, we focus on explaining the TNT model itself.

Prediction of Power Consumptions Based on Gated Recurrent Unit for Internet of Energy (에너지 인터넷을 위한 GRU기반 전력사용량 예측)

  • Lee, Dong-gu;Sun, Young-Ghyu;Sim, Is-sac;Hwang, Yu-Min;Kim, Sooh-wan;Kim, Jin-Young
    • Journal of IKEEE / v.23 no.1 / pp.120-126 / 2019
  • Recently, the accurate prediction of power consumption based on machine learning techniques in the Internet of Energy (IoE) has been actively studied using the large amount of electricity data acquired from advanced metering infrastructure (AMI). In this paper, we propose a deep learning model based on the Gated Recurrent Unit (GRU) as an artificial intelligence (AI) network that can effectively perform pattern recognition on time series data such as power consumption, and we analyze the prediction performance based on real household power usage data. In the performance analysis, a comparison between the proposed GRU-based learning model and a conventional Long Short-Term Memory (LSTM) learning model is described. In the simulation results, the mean squared error (MSE), mean absolute error (MAE), forecast skill score, normalized root mean square error (RMSE), and normalized mean bias error (NMBE) are used as performance evaluation indexes, and we confirm that the prediction performance of the proposed GRU-based learning model is greatly improved.
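
A minimal Keras sketch of a GRU-based predictor for a power-consumption time series, for orientation only; the window length, layer width, and random placeholder series are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np
import tensorflow as tf

# Placeholder: hourly power-consumption series shaped into sliding windows
# (window length 24 is an illustrative choice, not from the paper)
series = np.random.rand(5000).astype("float32")
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.GRU(32),                 # GRU cell in place of an LSTM
    tf.keras.layers.Dense(1),                # next-step power consumption
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```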

Image Classification of Damaged Bolts using Convolution Neural Networks (합성곱 신경망을 이용한 손상된 볼트의 이미지 분류)

  • Lee, Soo-Byoung;Lee, Seok-Soon
    • Journal of Aerospace System Engineering / v.16 no.4 / pp.109-115 / 2022
  • The convolutional neural network (CNN) algorithm, which combines deep learning techniques with computer vision technology, makes image classification feasible on high-performance computing systems. In this study, the CNN algorithm is applied to the classification problem using TensorFlow, a typical deep learning framework, together with machine learning techniques. The dataset required for supervised learning is generated from bolts of the same type, some of which have undamaged threads while others have damaged threads. Even with a relatively small amount of data, the trained model showed good classification performance in detecting damage in bolt images. Additionally, model performance is reviewed by altering the number of convolution layers or by selectively applying algorithms that alleviate over- and under-fitting.
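
For reference, a small TensorFlow/Keras CNN for binary bolt-image classification in which the number of convolution layers can be varied, echoing the review described above; the image size, filter counts, and dropout rate are illustrative assumptions:

```python
import tensorflow as tf

def build_bolt_classifier(num_conv_layers=2, input_shape=(128, 128, 1)):
    """Small CNN for damaged/undamaged bolt images.

    num_conv_layers is exposed so the effect of adding or removing
    convolution layers can be tried out.
    """
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=input_shape)])
    for i in range(num_conv_layers):
        model.add(tf.keras.layers.Conv2D(16 * (2 ** i), 3,
                                         activation="relu", padding="same"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dropout(0.5))          # simple over-fitting mitigation
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_bolt_classifier(num_conv_layers=3)
model.summary()
```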

Prediction of the DO concentration using the machine learning algorithm: case study in Oncheoncheon, Republic of Korea

  • Lim, Heesung;An, Hyunuk;Choi, Eunhyuk;Kim, Yeonsu
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1029-1037 / 2020
  • Machine learning algorithms have been widely used in water-related fields such as water resources, water management, hydrology, atmospheric science, water quality, water level prediction, weather forecasting, water discharge prediction, and water quality forecasting. However, water quality prediction studies based on machine learning algorithms are limited compared to other water-related applications because of the limited water quality data. Most previous water quality prediction studies have predicted monthly water quality, which is useful information but not enough from a practical standpoint. In this study, we predicted the dissolved oxygen (DO) concentration using recurrent neural network with long short-term memory (RNN-LSTM) algorithms on hourly and daily datasets. Bugok Bridge in Oncheoncheon, located in Busan, where the data were collected in real time, was selected as the target for the DO prediction. Ten months of data (temperature, wind speed, and relative humidity) were used as inputs for the hourly prediction, and five years of data (temperature, wind speed, relative humidity, and rainfall) were used as inputs for the daily forecast. Missing data were filled by linear interpolation. The prediction model was coded based on TensorFlow, an open-source library developed by Google. The performance of the RNN-LSTM algorithm for hourly- or daily-based water quality prediction was tested and analyzed. The research results showed that hourly water quality data are useful for machine learning, and that the RNN-LSTM algorithm has the potential to be used for hourly- or daily-based water quality forecasting.
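
As an orientation sketch of the workflow described above (linear interpolation of missing data followed by an LSTM predictor in TensorFlow/Keras) with placeholder data; the 24-step window, layer width, and column names are assumptions, not the study's settings:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

# Placeholder hourly dataframe: weather inputs plus the DO target
df = pd.DataFrame(np.random.rand(2000, 4),
                  columns=["temperature", "wind_speed", "humidity", "DO"])
df.iloc[100:105] = np.nan
df = df.interpolate(method="linear")                  # fill missing data, as in the study

window = 24                                           # 24 past hours in, next DO value out
features = df[["temperature", "wind_speed", "humidity"]].values.astype("float32")
target = df["DO"].values.astype("float32")
X = np.stack([features[i:i + window] for i in range(len(df) - window)])
y = target[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 3)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                         # predicted DO concentration
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```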