• Title/Summary/Keyword: Deep Learning System


An Empirical Study on the Cryptocurrency Investment Methodology Combining Deep Learning and Short-term Trading Strategies (딥러닝과 단기매매전략을 결합한 암호화폐 투자 방법론 실증 연구)

  • Yumin Lee; Minhyuk Lee
    • Journal of Intelligence and Information Systems / v.29 no.1 / pp.377-396 / 2023
  • As the cryptocurrency market continues to grow, it has developed into a new financial market, and the need for research on investment strategies for this market is emerging. This study conducts an empirical analysis of a cryptocurrency investment methodology that combines short-term trading strategies and deep learning. Daily price data for Ethereum were collected through the API of Upbit, the Korean cryptocurrency exchange. The investment performance of each experimental model was analyzed by finding the optimal parameters on past data. The experimental models are a volatility breakout strategy (VBS), a Long Short-Term Memory (LSTM) model, a moving-average cross strategy, and a combined model. VBS is a short-term trading strategy that buys when volatility rises significantly on a daily basis and sells at the closing price of the day. Among deep learning models, LSTM is well suited to time-series data; the closing price predicted by the LSTM model was applied to a simple trading rule. The moving-average cross strategy decides whether to buy or sell when the moving averages cross. The combined model is a trading rule that joins variables derived from the VBS and the LSTM model with AND/OR buy conditions. The results show that the combined model achieves better investment performance than the single models. This study has academic significance in that it goes beyond simple deep learning-based cryptocurrency price prediction and improves investment performance by combining deep learning with short-term trading strategies, and practical significance in that it demonstrates applicability to actual investment.
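For illustration only, a minimal pandas sketch of how a volatility breakout condition and an LSTM prediction condition could be joined with AND/OR buy rules is given below; the column names, the breakout multiplier `k`, and the precomputed `lstm_pred_close` column are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of combining a volatility breakout (VBS) signal with an
# LSTM prediction signal, as described in the abstract above.
import pandas as pd

def combined_signals(df: pd.DataFrame, k: float = 0.5, mode: str = "AND") -> pd.Series:
    """Return a boolean daily buy signal from the VBS and LSTM conditions."""
    prev_range = (df["high"] - df["low"]).shift(1)        # yesterday's price range
    breakout_price = df["open"] + k * prev_range          # VBS breakout target
    vbs_buy = df["high"] >= breakout_price                # breakout occurred today
    lstm_buy = df["lstm_pred_close"] > df["open"]         # model expects a rise
    return (vbs_buy & lstm_buy) if mode == "AND" else (vbs_buy | lstm_buy)

# Example usage (assuming `ohlcv` has open/high/low/close and lstm_pred_close):
# signals = combined_signals(ohlcv, k=0.5, mode="AND")
```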

Estimation of fruit number of apple tree based on YOLOv5 and regression model (YOLOv5 및 다항 회귀 모델을 활용한 사과나무의 착과량 예측 방법)

  • Hee-Jin Gwak; Yunju Jeong; Ik-Jo Chun; Cheol-Hee Lee
    • Journal of IKEEE / v.28 no.2 / pp.150-157 / 2024
  • In this paper, we propose a novel algorithm for predicting the number of apples on an apple tree using a deep learning-based object detection model and a polynomial regression model. Measuring the number of apples on an apple tree can be used to predict apple yield and to assess losses for determining agricultural disaster insurance payouts. To measure apple fruit load, we photographed the front and back sides of apple trees. We manually labeled the apples in the captured images to construct a dataset, which was then used to train a one-stage object detection CNN model. However, when apples on an apple tree are obscured by leaves, branches, or other parts of the tree, they may not be captured in images. Consequently, it becomes difficult for image recognition-based deep learning models to detect or infer the presence of these apples. To address this issue, we propose a two-stage inference process. In the first stage, we utilize an image-based deep learning model to count the number of apples in photos taken from both sides of the apple tree. In the second stage, we conduct a polynomial regression analysis, using the total apple count from the deep learning model as the independent variable, and the actual number of apples manually counted during an on-site visit to the orchard as the dependent variable. The performance evaluation of the two-stage inference system proposed in this paper showed an average accuracy of 90.98% in counting the number of apples on each apple tree. Therefore, the proposed method can significantly reduce the time and cost associated with manually counting apples. Furthermore, this approach has the potential to be widely adopted as a new foundational technology for fruit load estimation in related fields using deep learning.
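The second-stage correction described above could be sketched, under assumptions, with a simple polynomial fit that maps the detector's front-plus-back count to the manually counted total; the example numbers and the quadratic degree are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the two-stage idea: detected
# counts from the front and back photos are summed and mapped to the true
# fruit count with a polynomial regression.
import numpy as np

def fit_count_corrector(detected_counts, true_counts, degree=2):
    """Fit a polynomial mapping detector counts -> manually counted apples."""
    coeffs = np.polyfit(detected_counts, true_counts, deg=degree)
    return np.poly1d(coeffs)

detected = np.array([35, 52, 48, 60, 41])   # hypothetical front+back detections
actual = np.array([61, 90, 83, 104, 72])    # hypothetical on-site manual counts
corrector = fit_count_corrector(detected, actual, degree=2)
print(corrector(50))                        # estimated apples for 50 detections
```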

Tomato Crop Diseases Classification Models Using Deep CNN-based Architectures (심층 CNN 기반 구조를 이용한 토마토 작물 병해충 분류 모델)

  • Kim, Sam-Keun; Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.7-14 / 2021
  • Tomato crops are highly affected by tomato diseases, and if not prevented, a disease can cause severe losses for the agricultural economy. Therefore, there is a need for a system that quickly and accurately diagnoses various tomato diseases. In this paper, we propose a system that classifies nine diseases as well as healthy tomato plants by applying various pretrained deep learning-based CNN models trained on the ImageNet dataset. The tomato leaf image dataset obtained from PlantVillage is provided as input to ResNet, Xception, and DenseNet, which have deep learning-based CNN architectures. The proposed models were constructed by adding a top-level classifier to the basic CNN model, and they were trained with a 5-fold cross-validation strategy. All three proposed models were trained in two stages: transfer learning (which freezes the layers of the basic CNN model and trains only the top-level classifier) and fine-tuning (which sets the learning rate to a very small value and trains after unfreezing the basic CNN layers). SGD, RMSprop, and Adam were applied as optimization algorithms. The experimental results show that the DenseNet model trained with the RMSprop algorithm produced the best results, with 98.63% accuracy.
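A minimal TensorFlow/Keras sketch of the two-stage scheme (transfer learning on a frozen ImageNet base, then fine-tuning with a very small learning rate) might look as follows; the DenseNet121 variant, input size, and learning rates are assumptions for illustration.

```python
# Sketch of transfer learning followed by fine-tuning, assuming TensorFlow 2.x.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                    # stage 1: freeze the base
outputs = tf.keras.layers.Dense(10, activation="softmax")(base.output)  # 9 diseases + healthy
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # transfer learning

base.trainable = True                                     # stage 2: unfreeze the base
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # fine-tuning
```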

Authorship Attribution of Web Texts with Korean Language Applying Deep Learning Method (딥러닝을 활용한 웹 텍스트 저자의 남녀 구분 및 연령 판별 : SNS 사용자를 중심으로)

  • Park, Chan Yub; Jang, In Ho; Lee, Zoon Ky
    • Journal of Information Technology Services / v.15 no.3 / pp.147-155 / 2016
  • With the rapid development of technology, web text is growing explosively and is drawing attention in many fields as a substitute for surveys. Facebook reaches up to 113 million users per month, and Twitter is used by various institutions and companies as a behavioral analysis tool. However, most research has focused on the meaning of the text itself, and studies of the text's author are lacking. This research therefore classifies the sex and age of writers using 20,187 Facebook users' posts for which the sex and age of the writer are known. It applies Convolutional Neural Networks, a type of deep learning algorithm that recently came into the spotlight for image classification, to web text analysis. The results show 92% accuracy, confirming its potential as a text classifier. In addition, the study minimizes Korean morpheme analysis and performs authorship attribution directly on Korean web text. Based on these features, the approach can support a range of uses, such as web text management as an information resource for practitioners and the analysis of non-grammatical text for researchers. Thus, this study proposes a new method for web text analysis.
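A rough sketch of the kind of convolutional text classifier the abstract describes is shown below, assuming TensorFlow/Keras; the vocabulary size, sequence length, and filter settings are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical text-CNN classifier over tokenized posts (e.g., male/female labels).
import tensorflow as tf

def build_text_cnn(vocab_size=50000, seq_len=200, num_classes=2):
    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")       # token ids
    x = tf.keras.layers.Embedding(vocab_size, 128)(inputs)
    x = tf.keras.layers.Conv1D(128, 5, activation="relu")(x)       # n-gram-style filters
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_text_cnn(num_classes=2)   # sex classification; reuse with more classes for age bands
```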

Parallel-Addition Convolution Algorithm in Grayscale Image (그레이스케일 영상의 병렬가산 컨볼루션 알고리즘)

  • Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.4 / pp.288-294 / 2017
  • Recently, deep learning using convolutional neural networks (CNNs) has been extensively studied for image recognition. Convolution consists of addition and multiplication. Multiplication is computationally expensive in hardware implementation relative to addition, and it is also an important factor limiting chip design in an embedded deep learning system. In this paper, I propose a parallel-addition processing algorithm that converts a grayscale image into a superposition of binary images and performs convolution with addition only. Experiments verifying the feasibility of the proposed algorithm confirm that the convolution can be performed by the parallel-addition method, which is capable of reducing the processing time.
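A conceptual NumPy sketch of the addition-only idea is given below: the grayscale image is split into binary bit-planes, kernel coefficients are added wherever a binary pixel is 1, and the planes are recombined with bit shifts; this is an interpretation of the abstract, not the paper's hardware-oriented implementation.

```python
# Conceptual addition-only convolution over the 8 bit-planes of a uint8 image.
import numpy as np

def addition_only_convolution(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = img.shape
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    out = np.zeros((h, w), dtype=np.int64)
    for bit in range(8):                                   # one binary image per bit
        plane = (img >> bit) & 1
        padded = np.pad(plane, ((pad_y, pad_y), (pad_x, pad_x)))
        acc = np.zeros((h, w), dtype=np.int64)
        for dy in range(kh):
            for dx in range(kw):
                window = padded[dy:dy + h, dx:dx + w]
                k = int(kernel[dy, dx])
                acc += np.where(window == 1, k, 0)         # add k where the binary pixel is 1
        out += acc << bit                                  # weight the plane by 2**bit via a shift
    return out
```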

Breast Mass Classification using the Fundamental Deep Learning Approach: To build the optimal model applying various methods that influence the performance of CNN

  • Lee, Jin; Choi, Kwang Jong; Kim, Seong Jung; Oh, Ji Eun; Yoon, Woong Bae; Kim, Kwang Gi
    • Journal of Multimedia Information System / v.3 no.3 / pp.97-102 / 2016
  • Deep learning enables machines to have perception and can potentially outperform humans in the medical field. It can save a great deal of time and reduce human error by automatically detecting certain patterns in medical images. The main goal of this paper is to build the optimal model for breast mass classification by applying various methods that influence the performance of a Convolutional Neural Network (CNN). Google's software library TensorFlow was used to build the CNN, and the mammogram dataset used in this study was obtained from 340 breast cancer cases. The best classification performance we achieved was an accuracy of 0.887, a sensitivity of 0.903, and a specificity of 0.869 for normal tissue versus malignant mass classification with augmented data, more convolutional filters, and the Adam optimizer. A limitation of this method, however, is that it only considered malignant masses, which are relatively easier to classify than benign masses. Therefore, further studies are required in order to properly classify any given data for medical use.
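The ingredients named in the abstract (data augmentation, additional convolutional filters, and the Adam optimizer) could be combined in TensorFlow/Keras roughly as below; the layer sizes, input shape, and augmentation parameters are assumptions and not the authors' network.

```python
# Hypothetical binary classifier (normal vs. malignant) with augmentation and Adam.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),
    augment,                                            # active only during training
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # "more convolutional filters"
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall(name="sensitivity")])
```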

Empirical Comparison of Deep Learning Networks on Backbone Method of Human Pose Estimation

  • Rim, Beanbonyka; Kim, Junseob; Choi, Yoo-Joo; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.5 / pp.21-29 / 2020
  • Accurate human pose estimation relies on a backbone network whose role is to extract feature maps. To date, backbone feature extraction has been performed with plain convolutional neural networks (CNNs) and residual neural networks (ResNets), both of which come in various architectures with different performance. Plain CNN networks such as VGG, well known for their architecture of multiple stacked hidden layers, are basic and simple, while ResNet, with its bottleneck-layer architecture, uses fewer parameters and tends to perform better. Both have achieved impressive results as backbone networks for human pose estimation, but they have typically been paired with different pose estimation networks, the so-called pose parsing modules. In this paper, we therefore present a comparison between a plain CNN backbone (VGG) and a bottleneck backbone (ResNet) under the same pose parsing module. We investigate their performance in terms of number of parameters, loss, precision, and recall. We experiment in a bottom-up human pose estimation system that adapts the pose parsing module of OpenPose. Our experimental results show that the VGG backbone outperforms the ResNet backbone, with fewer parameters, a lower loss, and higher precision and recall.
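As a quick way to see how the two backbone families compare in raw size, one could count parameters with Keras as below; the concrete variants (VGG19 and ResNet50) and the 368x368 input are assumptions, since the abstract does not specify exact depths.

```python
# Compare backbone parameter counts for a VGG-family and a ResNet-family model.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights=None,
                                  input_shape=(368, 368, 3))
resnet = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                        input_shape=(368, 368, 3))
print("VGG19 backbone parameters:   ", vgg.count_params())
print("ResNet50 backbone parameters:", resnet.count_params())
```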

Deep learning-based Automatic Weed Detection on Onion Field (딥러닝을 이용한 양파 밭의 잡초 검출 연구)

  • Kim, Seo jeong; Lee, Jae Su; Kim, Hyong Suk
    • Smart Media Journal / v.7 no.3 / pp.16-21 / 2018
  • This paper presents the design and implementation of a deep learning-based automatic weed detector for onion fields. The system is based on a convolutional neural network that selects proposed regions. The detector is trained on a dataset collected from agricultural onion fields, after which candidate regions with a sufficiently high confidence score are classified as weeds. Non-maximum suppression is applied to keep only the bounding boxes with little overlap. A dataset collected from different onion farms is evaluated with the proposed classifier. Classification accuracy is about 99% on this dataset, indicating the proposed method's strong performance for weed detection in onion fields.
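A generic non-maximum suppression routine of the kind referred to above is sketched here in NumPy; the IoU threshold is an assumed value.

```python
# Standard greedy non-maximum suppression over candidate boxes.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    order = scores.argsort()[::-1]                  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou <= iou_thresh]        # drop heavily overlapping boxes
    return keep
```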

A Method for Improving Resolution and Critical Dimension Measurement of an Organic Layer Using Deep Learning Superresolution

  • Kim, Sangyun; Pahk, Heui Jae
    • Current Optics and Photonics / v.2 no.2 / pp.153-164 / 2018
  • In semiconductor manufacturing, critical dimensions indicate the features of patterns formed by the semiconductor process. The purpose of measuring critical dimensions is to confirm whether patterns are made as intended. The deposition process for an organic light-emitting diode (OLED) forms a luminous organic layer on the thin-film transistor electrode. The position of this organic layer greatly affects the luminescent performance of an OLED. Thus, a system that measures the position of the organic layer from outside the vacuum chamber in real time is desired for monitoring the deposition process. Typically, imaging from large stand-off distances results in low spatial resolution because of diffraction blur, and it is difficult to attain an adequate industrial-level measurement. The proposed method produces a new superresolution single image using a conversion formula between two different optical systems obtained with a deep learning technique. This formula converts an image measured at long distance with low-resolution optics into an image as if it had been measured with high-resolution optics. The performance of the method is evaluated with various samples in terms of spatial resolution and measurement performance.
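A minimal SRCNN-style sketch of a network that learns to map images from low-resolution optics to images that look as if taken with high-resolution optics is shown below; the layer configuration follows the classic SRCNN and is not the authors' architecture.

```python
# SRCNN-like image-to-image regression network, assuming TensorFlow/Keras.
import tensorflow as tf

def build_sr_model(channels: int = 1) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(None, None, channels))        # any image size
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(channels, 5, padding="same")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")                   # pixel-wise regression
    return model

# Training pairs would be (image from low-resolution optics, image from high-resolution optics).
```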

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu; Chung, TaeChoong; Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, and necessary modifications are made to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by making the sizes of all objects uniform.
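The geometric step described above can be condensed into a short OpenCV sketch: four detected corner points are mapped to a rectangle of known real-world size, after which pixel coordinates convert to real distances; the corner coordinates and the 4 m x 3 m space are made-up example values.

```python
# Perspective correction and real-distance measurement from four known corners.
import cv2
import numpy as np

corners_px = np.float32([[102, 220], [830, 205], [870, 610], [80, 640]])  # detected corners
space_w_m, space_h_m = 4.0, 3.0                     # known real-space dimensions (metres)
scale = 200                                         # pixels per metre in the rectified view
dst = np.float32([[0, 0], [space_w_m * scale, 0],
                  [space_w_m * scale, space_h_m * scale], [0, space_h_m * scale]])

H = cv2.getPerspectiveTransform(corners_px, dst)    # homography: image -> rectified plane

def real_distance(p1, p2):
    """Distance in metres between two image points after perspective correction."""
    pts = np.float32([[p1, p2]])                    # shape (1, 2, 2) for perspectiveTransform
    rect = cv2.perspectiveTransform(pts, H)[0]
    return float(np.linalg.norm(rect[0] - rect[1])) / scale

# print(real_distance((400, 300), (600, 480)))
```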