• Title/Summary/Keyword: deep machine learning

Analysis of Credit Approval Data using Machine Learning Model (기계학습 모델을 이용한 신용 승인 데이터 분석)

  • Kim, Dong-Hyun; Kim, Se-Jun; Lee, Byung-Jun; Kim, Kyung-Tae; Youn, Hee-Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.41-42 / 2019
  • This paper describes techniques for analyzing credit data using various machine learning models. Machine learning models are broadly classified into canonical models, committee machines, and deep learning models. Based on a subset of these models, the benchmark Credit Approval dataset is analyzed and the performance of each model is evaluated. The k-fold evaluation method is used for performance evaluation, and accuracy, precision, recall, and F1-score are used to measure the average performance over the k-fold results.
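
The evaluation protocol in this abstract is standard and easy to reproduce. The following is a minimal sketch (mine, not the authors' code) of k-fold evaluation with averaged accuracy, precision, recall, and F1-score; the synthetic data merely stands in for the UCI Credit Approval benchmark (690 samples, 15 features).

```python
# Minimal sketch of k-fold evaluation averaging Accuracy/Precision/Recall/F1.
# Synthetic data stands in for the UCI Credit Approval benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=690, n_features=15, random_state=0)

scores = cross_validate(
    RandomForestClassifier(random_state=0), X, y,
    cv=5,  # k = 5 folds
    scoring=("accuracy", "precision", "recall", "f1"),
)
for metric in ("accuracy", "precision", "recall", "f1"):
    print(metric, scores[f"test_{metric}"].mean())
```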

Systematic Review of Bug Report Processing Techniques to Improve Software Management Performance

  • Lee, Dong-Gun; Seo, Yeong-Seok
    • Journal of Information Processing Systems / v.15 no.4 / pp.967-985 / 2019
  • Bug report processing is a key element of bug fixing in modern software maintenance. Bug reports are not processed immediately after submission; before fixing begins they pass through several stages, such as bug report deduplication and bug report triage, and this process is very inefficient when every stage is performed manually. Software engineers have persistently highlighted the need to automate these stages, and many automation techniques for bug report processing have been proposed; however, the accuracy of the existing methods is not yet satisfactory. This study therefore surveys existing bug report processing techniques with a view to improving their accuracy. The review of each method consists of a description, the techniques used, the experiments, and comparison results. The results indicate that research on bug report deduplication is still lacking and calls for further studies that integrate clustering and natural language processing, and that although all triage studies are based on machine learning, results from deep learning approaches remain insufficient.
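
To make the deduplication stage concrete, here is a hedged illustration, not taken from the survey, of the kind of clustering/NLP combination the authors call for: bug reports vectorized with TF-IDF and compared by cosine similarity, with a hypothetical threshold marking candidate duplicates.

```python
# Illustrative sketch (not from the survey): flagging candidate duplicate
# bug reports with TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "App crashes when opening settings page",
    "Crash on opening the settings screen",
    "Login button unresponsive on Android",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(reports)
sim = cosine_similarity(tfidf)

THRESHOLD = 0.5  # hypothetical cutoff marking a candidate duplicate pair
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if sim[i, j] > THRESHOLD:
            print(f"possible duplicates: #{i} and #{j} (sim={sim[i, j]:.2f})")
```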

Theories, Frameworks, and Models of Using Artificial Intelligence in Organizations

  • Alotaibi, Sara Jeza
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.357-366 / 2022
  • Artificial intelligence (AI) is the replication of human intelligence by computer systems and machines using tools like machine learning, deep learning, expert systems, and natural language processing. AI can be applied in administrative settings to automate repetitive processes, analyze and forecast data, foster social communication skills among staff, reduce costs, and boost overall operational effectiveness. To understand how AI is being used for administrative duties in various organizations, this paper offers a critical discussion of the topic and proposes a framework for using artificial intelligence in organizations. Additionally, it provides a list of specifications, attributes, and requirements that organizations planning to adopt AI should consider.

Assembly performance evaluation method for prefabricated steel structures using deep learning and k-nearest neighbors

  • Hyuntae Bang; Byeongjun Yu; Haemin Jeon
    • Smart Structures and Systems / v.32 no.2 / pp.111-121 / 2023
  • This study proposes an automated assembly performance evaluation method for prefabricated steel structures (PSSs) using machine learning methods. Assembly component images were segmented using a modified version of the receptive field pyramid. By factorizing the channel modulation and receptive field exploration layers of the convolution pyramid, highly accurate segmentation results were obtained. After segmentation, the positions of the bolt holes were calculated using various image processing techniques, such as fuzzy-based edge detection, Hough line detection, and image perspective transformation. By calculating the distance ratios between bolt holes, the assembly performance of the PSS was estimated using the k-nearest neighbors (kNN) algorithm. The effectiveness of the proposed framework was validated using a 3D-printed PSS model and a field test. The results indicated that this approach could recognize assembly components with an intersection over union (IoU) of 95% and evaluate assembly performance with an error of less than 5%.
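
The final kNN stage lends itself to a small sketch. The snippet below shows only that step under invented feature values: distance ratios between bolt holes serve as features and kNN labels the assembly quality. The paper's segmentation and image-processing pipeline is not reproduced here.

```python
# Hedged sketch of the final stage only: classifying assembly quality from
# bolt-hole distance ratios with kNN. Feature values and labels are invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each sample: ratios of distances between detected bolt holes.
X_train = np.array([[1.00, 0.98], [1.01, 1.00], [1.25, 0.80], [0.78, 1.22]])
y_train = ["good", "good", "poor", "poor"]  # assembly performance labels

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict([[0.99, 1.02]]))  # near-unit ratios -> likely "good"
```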

Korean Voice Phishing Text Classification Performance Analysis Using Machine Learning Techniques (머신러닝 기법을 이용한 한국어 보이스피싱 텍스트 분류 성능 분석)

  • Boussougou, Milandu Keith Moussavou; Jin, Sangyoon; Chang, Daeho; Park, Dong-Joo
    • Annual Conference of KIPS / 2021.11a / pp.297-299 / 2021
  • Text classification is one of the popular tasks in Natural Language Processing (NLP), used in applications such as sentiment analysis and email filtering. Nowadays, state-of-the-art (SOTA) Machine Learning (ML) and Deep Learning (DL) algorithms are the core engines used to perform these classification tasks with high accuracy, and they show satisfying results. This paper conducts a benchmark performance analysis of multiple SOTA algorithms on the first known labeled Korean voice phishing dataset, called KorCCVi. Experiments on a test set of 366 samples reveal which algorithm performs best when considering training time and metrics such as accuracy and F1 score.
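
A benchmarking loop in the spirit of this paper can be sketched as follows. The toy two-sentence corpus and the chosen classifiers are placeholders (the KorCCVi dataset is not reproduced here); the point is the pattern of timing each model and reporting accuracy and F1.

```python
# Sketch of a model benchmark: several classic ML classifiers timed and
# scored with accuracy and F1. The duplicated toy corpus is a placeholder.
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["경찰청입니다 계좌가 범죄에 연루되었습니다", "내일 회의 자료 보내드립니다"] * 50
labels = [1, 0] * 50  # 1 = voice phishing transcript, 0 = normal

X = TfidfVectorizer().fit_transform(texts)
for model in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()):
    start = time.perf_counter()
    model.fit(X[:80], labels[:80])          # first 80 samples for training
    pred = model.predict(X[80:])            # last 20 held out for testing
    print(type(model).__name__,
          f"time={time.perf_counter() - start:.3f}s",
          f"acc={accuracy_score(labels[80:], pred):.2f}",
          f"f1={f1_score(labels[80:], pred):.2f}")
```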

A review of artificial intelligence based demand forecasting techniques (인공지능 기반 수요예측 기법의 리뷰)

  • Jeong, Hyerin; Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.6 / pp.795-835 / 2019
  • Big data is now generated in many fields, and many companies try to profit by building systems that analyze it with artificial intelligence (AI) techniques; integrating AI technology has made analyzing and utilizing vast amounts of data increasingly valuable. In particular, demand forecasting with maximum accuracy is critical to government and business management in fields such as finance, procurement, production, and marketing, so it is important to apply a model appropriate to the demand pattern of each field. Real data often exhibit complex patterns that are hard to capture with a traditional time series or regression model, and choosing the right model among the many alternatives is difficult without prior knowledge. Many studies based on AI techniques such as machine learning and deep learning have been shown to overcome these problems, and demand forecasting based on both structured data and unstructured data such as images or text has also achieved high accuracy. This paper introduces the major areas where demand forecasting is particularly active, as well as the machine learning and deep learning techniques that suit the characteristics of each field.
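
As a concrete instance of the ML techniques such a review covers, the sketch below (mine, with invented seasonal data) casts demand forecasting as supervised learning on lag features, one common approach alongside classical time series models.

```python
# Minimal illustration: demand forecasting as supervised learning with
# lag features. The seasonal series is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
demand = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 2, 120)

LAGS = 12  # use the previous 12 periods to predict the next one
X = np.array([demand[i:i + LAGS] for i in range(len(demand) - LAGS)])
y = demand[LAGS:]

model = GradientBoostingRegressor().fit(X[:-12], y[:-12])
print("forecast:", model.predict(X[-12:]))  # last year held out
```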

A Distributed Scheduling Algorithm based on Deep Reinforcement Learning for Device-to-Device communication networks (단말간 직접 통신 네트워크를 위한 심층 강화학습 기반 분산적 스케쥴링 알고리즘)

  • Jeong, Moo-Woong; Kim, Lyun Woo; Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1500-1506 / 2020
  • In this paper, we study a scheduling problem based on reinforcement learning for overlay device-to-device (D2D) communication networks. Although various technologies for D2D communication networks using Q-learning, one of the reinforcement learning models, have been studied, Q-learning incurs tremendous complexity as the number of states and actions increases. To solve this problem, D2D communication technologies based on Deep Q Network (DQN) have been studied. In this paper, we design a DQN model that considers the characteristics of wireless communication systems and propose a distributed scheduling scheme based on this model that reduces feedback and signaling overhead. The proposed model trains all parameters in a centralized manner and transfers the final trained parameters to all mobiles, which then individually determine their actions using the transferred parameters. We analyze the performance of the proposed scheme by computer simulation and compare it with the optimal, opportunistic selection, and full transmission schemes.
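
The deployment pattern described in the abstract, centralized training followed by a parameter transfer to all mobiles, can be sketched as follows. The network shape, state layout, and two-action space (transmit or stay silent) are assumptions for illustration; the training loop itself is omitted.

```python
# Sketch: a DQN trained centrally, parameters copied to every mobile,
# which then selects actions locally from its own observations.
import torch
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_states=8, n_actions=2):  # transmit or stay silent
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)  # one Q-value per action

central = DQN()            # trained at a central node (training loop omitted)
mobiles = [DQN() for _ in range(4)]
for m in mobiles:          # transfer the final trained parameters
    m.load_state_dict(central.state_dict())

state = torch.randn(8)     # local channel/queue observations (assumed layout)
action = mobiles[0](state).argmax().item()  # each mobile decides on its own
print("mobile 0 action:", action)
```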

Diabetes prediction mechanism using machine learning model based on patient IQR outlier and correlation coefficient (환자 IQR 이상치와 상관계수 기반의 머신러닝 모델을 이용한 당뇨병 예측 메커니즘)

  • Jung, Juho; Lee, Naeun; Kim, Sumin; Seo, Gaeun; Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1296-1301 / 2021
  • With the recent increase in diabetes incidence worldwide, research has been conducted to predict diabetes using various machine learning and deep learning technologies. In this work, we present a model for predicting diabetes using machine learning techniques on German Frankfurt Hospital data. We apply outlier handling using the interquartile range (IQR) technique together with Pearson correlation, and compare diabetes prediction performance across decision tree, random forest, kNN (k-nearest neighbor), SVM (support vector machine), Bayesian network, and the ensemble techniques XGBoost, voting, and stacking. The XGBoost technique showed the best performance, with 97% accuracy across the various scenarios. This study is therefore meaningful in that the model can be used to accurately predict and help prevent diabetes, which is prevalent in modern society.
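
The preprocessing steps named in the abstract, IQR-based outlier removal and Pearson-correlation screening, look roughly like this. The column names and the toy data frame are placeholders for the Frankfurt hospital data.

```python
# Sketch of IQR outlier removal plus Pearson-correlation feature screening.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "glucose": rng.normal(120, 30, 500),   # placeholder clinical features
    "bmi": rng.normal(30, 6, 500),
    "outcome": rng.integers(0, 2, 500),    # 1 = diabetic, 0 = not
})

for col in ("glucose", "bmi"):             # 1.5 * IQR rule per feature
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    df = df[df[col].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

corr = df.corr(method="pearson")["outcome"].drop("outcome")
print(corr.abs().sort_values(ascending=False))  # screen features by |r|
```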

Calculated Damage of Italian Ryegrass in Abnormal Climate Based World Meteorological Organization Approach Using Machine Learning

  • Jae Seong Choi; Ji Yung Kim; Moonju Kim; Kyung Il Sung; Byong Wan Kim
    • Journal of The Korean Society of Grassland and Forage Science / v.43 no.3 / pp.190-198 / 2023
  • This study was conducted to calculate the damage to Italian ryegrass (IRG) caused by abnormal climate using machine learning and to present the damage on a map. A total of 1,384 IRG records were collected, and climate data were obtained from the Korea Meteorological Administration's open data portal. The machine learning model xDeepFM was used to detect IRG damage, which was calculated from climate data of the Automated Synoptic Observing System (ASOS, 95 sites). Damage was defined as the difference between the dry matter yield under normal climate (DMYnormal) and under abnormal climate (DMYabnormal). Normal climate was set using the 40 years of climate data corresponding to the years of the IRG data (1986~2020), and the level of abnormal climate was set as a multiple of the standard deviation following the World Meteorological Organization (WMO) standard. DMYnormal ranged from 5,678 to 15,188 kg/ha. The damage to IRG differed by region and by level of abnormal temperature, precipitation, and wind speed, ranging from -1,380 to 1,176, -3 to 2,465, and -830 to 962 kg/ha, respectively. The maximum damage was 1,176 kg/ha when the abnormal temperature was at the -2 level (+1.04℃), 2,465 kg/ha for all levels of abnormal precipitation, and 962 kg/ha when the abnormal wind speed was at the -2 level (+1.60 m/s). The damage calculated through the WMO method was presented as a map using QGIS. Some areas were left blank because no climate data were available; to calculate damage in those areas, the automatic weather system (AWS), which provides data from more sites than the ASOS, could be used.
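
The damage definition reduces to a simple subtraction, illustrated below with invented numbers. The mapping from an abnormality level to a temperature offset and the stand-in yield function are assumptions; the paper's xDeepFM model is not reproduced.

```python
# Invented numbers illustrating damage = DMY(normal) - DMY(abnormal),
# with abnormal climate set as a multiple of the standard deviation
# (WMO-style level). predicted_dmy stands in for the trained xDeepFM model.
import numpy as np

temps = np.random.default_rng(1).normal(12.0, 1.0, 40)  # 40-year record
mean, std = temps.mean(), temps.std()

level = 2                           # abnormality level (multiple of sigma)
abnormal_temp = mean + level * std  # assumed mapping of level to degrees

def predicted_dmy(temp_c):
    # Hypothetical yield response; the real model is learned from data.
    return 12000.0 - 800.0 * abs(temp_c - mean)

damage = predicted_dmy(mean) - predicted_dmy(abnormal_temp)
print(f"damage: {damage:.0f} kg/ha")
```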

A Study On Memory Optimization for Applying Deep Learning to PC (딥러닝을 PC에 적용하기 위한 메모리 최적화에 관한 연구)

  • Lee, Hee-Yeol; Lee, Seung-Ho
    • Journal of IKEEE / v.21 no.2 / pp.136-141 / 2017
  • In this paper, we propose a memory optimization algorithm for applying deep learning on a PC. The proposed algorithm minimizes memory use and computation time by reducing the amount of computation and data required by a conventional deep learning structure on a general PC. It consists of three steps: constructing a convolution layer using random filters with discriminative power, reducing the data with PCA, and building the CNN structure using an SVM. Because no training is needed to construct the convolution layer from discriminative random filters, the overall training time of the deep network is shortened. PCA reduces the memory footprint and computational throughput, and building the CNN structure with an SVM maximizes this reduction. To evaluate the proposed algorithm, we experimented with Yale University's Extended Yale B face database. The results show that the proposed algorithm achieves a recognition rate similar to that of an existing CNN while requiring far less memory and computation. Based on this algorithm, it is expected that deep learning algorithms with heavy data and computation demands can be implemented on an ordinary PC.
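
The three-step pipeline (fixed random convolution filters, PCA, then an SVM) can be sketched briefly. Scikit-learn's digits data and the filter shapes below are placeholders for the Extended Yale B faces; this is an illustration of the idea, not the authors' implementation.

```python
# Sketch: untrained random convolution filters as feature extractors,
# PCA for dimensionality reduction, and an SVM as the classifier.
import numpy as np
from scipy.signal import convolve2d
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

digits = load_digits()
rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, 3))  # random filters, never trained

def features(img):
    # 8x8 image -> concatenated responses of the fixed random filters
    return np.concatenate(
        [convolve2d(img, f, mode="valid").ravel() for f in filters])

X = np.array([features(img) for img in digits.images])
pipe = make_pipeline(PCA(n_components=30), SVC())
print(cross_val_score(pipe, X, digits.target, cv=3).mean())
```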