• Title/Summary/Keyword: Deep Learning

Comparison of Chlorophyll-a Prediction and Analysis of Influential Factors in Yeongsan River Using Machine Learning and Deep Learning (머신러닝과 딥러닝을 이용한 영산강의 Chlorophyll-a 예측 성능 비교 및 변화 요인 분석)

  • Shim, Sun-Hee;Kim, Yu-Heun;Lee, Hye Won;Kim, Min;Choi, Jung Hyun
    • Journal of Korean Society on Water Environment / v.38 no.6 / pp.292-305 / 2022
  • The Yeongsan River, one of the four largest rivers in South Korea, has faced difficulties in water quality management owing to algal blooms, a menace that has grown especially since the construction of two weirs in the river's mainstream. Prediction and factor analysis of Chlorophyll-a (Chl-a) concentration are therefore needed for effective water quality management. In this study, Chl-a prediction models were developed and their performance evaluated using machine and deep learning methods: Deep Neural Network (DNN), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). In addition, correlation analysis and feature importance results were compared to identify the major factors affecting Chl-a concentration. All models showed high prediction performance, with R2 values of 0.9 or higher; in particular, XGBoost achieved the highest accuracy of 0.95 on the test data. The feature importance results indicated that ammonia (NH3-N) and phosphate (PO4-P) were major factors common to all three models for managing Chl-a concentration. These results confirm that DNN, RF, and XGBoost are powerful methods for predicting water quality parameters, and that comparing feature importance against correlation analysis gives a more accurate assessment of the major factors.
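
A minimal sketch of the tabular workflow this abstract describes, assuming scikit-learn and xgboost; the file name, feature columns (NH3-N, PO4-P, and the rest), and hyperparameters are illustrative assumptions, not values taken from the paper:

```python
# Hedged sketch: Chl-a regression with RF and XGBoost, plus feature importances.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("yeongsan_wq.csv")              # hypothetical water-quality table
X = df[["NH3-N", "PO4-P", "T-N", "T-P", "BOD", "water_temp"]]
y = df["Chl-a"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in [("RF", RandomForestRegressor(n_estimators=500, random_state=42)),
                    ("XGBoost", XGBRegressor(n_estimators=500, learning_rate=0.05))]:
    model.fit(X_tr, y_tr)
    print(name, "test R2:", round(r2_score(y_te, model.predict(X_te)), 3))
    # Feature importances show which variables drive the prediction
    print(dict(zip(X.columns, model.feature_importances_.round(3))))
```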

Development of wound segmentation deep learning algorithm (딥러닝을 이용한 창상 분할 알고리즘)

  • Hyunyoung Kang;Yeon-Woo Heo;Jae Joon Jeon;Seung-Won Jung;Jiye Kim;Sung Bin Park
    • Journal of Biomedical Engineering Research / v.45 no.2 / pp.90-94 / 2024
  • Diagnosing wounds is a significant challenge in clinical settings because of their complexity and the subjectivity of clinicians' assessments. Deep learning algorithms can assess wounds quantitatively, overcoming these challenges, but a limitation of existing research is its reliance on specific datasets. To address this limitation, we created a comprehensive dataset by combining an open dataset with a self-produced dataset to enhance clinical applicability. In the annotation process, machine learning based on Gradient Vector Flow (GVF) was used to improve objectivity and efficiency. The deep learning model was a U-Net equipped with residual blocks. Significant improvements were observed when the input images were cropped to contain only the wound region of interest (ROI) rather than used at their original size: the Dice score increased markedly from 0.80 on the original dataset to 0.89 on the wound ROI crop dataset. This study highlights the need for diverse research using comprehensive datasets. In future studies, we aim to further enhance and diversify our dataset to encompass different environments and ethnicities.
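
A minimal sketch of the two ingredients the abstract names, a residual block suitable for a U-Net encoder/decoder and the Dice score, written here in PyTorch; the paper's exact architecture and hyperparameters are assumptions, not reproduced from the source:

```python
# Hedged sketch: residual block and Dice score for wound segmentation.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two Conv-BN layers with a skip connection (1x1 conv when widths differ)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out))
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

def dice_score(pred, target, eps=1e-6):
    """Dice = 2|A∩B| / (|A|+|B|) on binary masks, the metric the paper reports."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```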

A Deep Learning Approach for Covid-19 Detection in Chest X-Rays

  • Sk. Shalauddin Kabir;Syed Galib;Hazrat Ali;Fee Faysal Ahmed;Mohammad Farhad Bulbul
    • International Journal of Computer Science & Network Security / v.24 no.3 / pp.125-134 / 2024
  • The novel coronavirus disease 2019 (COVID-19) has spread swiftly worldwide, and early diagnosis is critical to controlling that spread. Medical imaging modalities, chest computed tomography and chest X-ray, play a vital role in identifying and testing for COVID-19 in the present epidemic. Chest X-ray is a cost-effective method for COVID-19 detection; however, manual X-ray analysis is time consuming given that the number of infected individuals keeps growing rapidly. For this reason, it is very important to develop an automated COVID-19 detection process to help control the pandemic. In this study, we address the task of automatic COVID-19 detection using a popular deep learning model, VGG19. We used 1300 healthy and 1300 confirmed COVID-19 chest X-ray images in this experiment. We performed three experiments by freezing different blocks and layers of VGG19 and finally used a machine learning classifier, an SVM, for detecting COVID-19. In every experiment, we used five-fold cross-validation to train and validate the model, ultimately achieving 98.1% overall classification accuracy. The experimental results show that the proposed method based on the VGG19 deep learning model can serve as a tool to aid radiologists and play a crucial role in the timely diagnosis of COVID-19.
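
A minimal sketch of the frozen-VGG19-plus-SVM pattern the abstract describes, assuming TensorFlow/Keras and scikit-learn; the data files, input size, and SVM kernel are illustrative assumptions:

```python
# Hedged sketch: frozen VGG19 as a feature extractor feeding an SVM classifier.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

base = VGG19(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False                       # freeze all convolutional blocks

def extract(images):
    """images: (N, 224, 224, 3) array of chest X-rays -> (N, 512) features."""
    return base.predict(preprocess_input(images.astype("float32")))

# Hypothetical pre-loaded arrays of X-ray images and 0/1 labels.
X_img = np.load("xrays.npy")
y = np.load("labels.npy")
feats = extract(X_img)
clf = SVC(kernel="rbf")
print("5-fold CV accuracy:", cross_val_score(clf, feats, y, cv=5).mean())
```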

Speech Recognition Model Based on CNN using Spectrogram (스펙트로그램을 이용한 CNN 음성인식 모델)

  • Won-Seog Jeong;Haeng-Woo Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.4 / pp.685-692 / 2024
  • In this paper, we propose a new CNN model to improve the recognition performance of spoken command signals. The method applies a short-time Fourier transform (STFT) to the input signal to obtain a spectrogram image, then performs supervised multi-class learning on those images with a CNN. Converting the time-domain voice signal to the frequency domain expresses its characteristics well, so deep learning on the spectrogram images classifies commands effectively. To verify the performance of the proposed speech recognition system, a simulation program was written using the TensorFlow and Keras libraries and a simulation experiment was carried out. The experiment confirmed that the proposed deep learning algorithm achieves an accuracy of 92.5%.
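
A minimal sketch of the STFT-spectrogram-to-CNN pipeline in TensorFlow; the frame sizes, class count, and layer widths are illustrative assumptions (the shape comment corresponds to a 1-second, 16 kHz clip), not the paper's configuration:

```python
# Hedged sketch: STFT spectrogram -> CNN multi-class command classifier.
import tensorflow as tf

def spectrogram(wave, frame_len=255, step=128):
    """Magnitude STFT of a 1-D waveform, with a trailing channel axis."""
    stft = tf.signal.stft(wave, frame_length=frame_len, frame_step=step)
    return tf.abs(stft)[..., tf.newaxis]        # (time, freq, 1)

n_classes = 10                                   # hypothetical number of commands
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(124, 129, 1)),  # 1-s, 16 kHz clip after STFT
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```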

Application Target and Scope of Artificial Intelligence Machine Learning Deep Learning Algorithms (인공지능 머신러닝 딥러닝 알고리즘의 활용 대상과 범위 시스템 연구)

  • Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.177-179 / 2022
  • In the Google DeepMind Challenge Match, AlphaGo defeated Korea's Lee Sedol (a human) four games to one at Go; artificial intelligence is finally moving beyond human intelligence. The Korean government's Digital New Deal budget for 2022 is 9 trillion won, and an additional 301 types of data construction projects for artificial intelligence learning will be secured. From 2023, the industrial paradigm will change as artificial intelligence learning is used and applied in all fields of industry. This paper conducts research on utilizing artificial intelligence algorithms. Focusing on the analysis and judgment of data in artificial intelligence learning, it studies the appropriate targets and scope of application for machine learning and deep learning algorithms. The study will provide basic data for artificial intelligence in 4th industrial revolution technology and for the use of artificial intelligence robots in 5th industrial revolution technology.

Predicting Dynamic Response of a Railway Bridge Using Transfer-Learning Technique (전이학습 기법을 이용한 철도교량의 동적응답 예측)

  • Minsu Kim;Sanghyun Choi
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.1 / pp.39-48 / 2023
  • Because a railway bridge is designed over a long period and covers a large site, its design involves various environmental factors and uncertainties. For this reason, design changes often occur even when the design was thoroughly reviewed in the initial design stage. Design changes to large-scale facilities such as railway bridges consume significant time and cost, and repeating all procedures each time is extremely inefficient. In this study, a technique was developed that improves the efficiency of learning after a design change by reusing the pre-change learning results through transfer learning, a deep-learning technique. For the analysis, scenarios were created and a database was built using a previously developed deep-learning-based prediction system for railway bridges. The proposed method attains, with only 1000 training data points in the new domain, accuracy similar to that obtained with the 8000 data points used in the old domain before the design change, and it was confirmed to converge faster.
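
A minimal sketch of the transfer-learning pattern the abstract describes, in Keras: pretrain a surrogate model on the old design's data, copy its weights, freeze early layers, and fine-tune on the smaller new-domain set. The layer sizes, feature width, and array names are illustrative assumptions; the paper's network is not specified here:

```python
# Hedged sketch: weight transfer from an old-domain model to a new-domain model.
import tensorflow as tf

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(16,)),           # hypothetical input features
        tf.keras.layers.Dense(64, activation="relu", name="feat1"),
        tf.keras.layers.Dense(64, activation="relu", name="feat2"),
        tf.keras.layers.Dense(1, name="out"),         # dynamic-response target
    ])

source = build_model()
source.compile(optimizer="adam", loss="mse")
# source.fit(X_old, y_old, epochs=100)    # ~8000 samples from the old design

target = build_model()
target.set_weights(source.get_weights())  # transfer the pre-change weights
target.get_layer("feat1").trainable = False   # keep early features fixed
target.compile(optimizer="adam", loss="mse")
# target.fit(X_new, y_new, epochs=30)     # ~1000 samples suffice after transfer
```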

The Agriculture Decision-making System(ADS) based on Deep Learning for improving crop productivity (농산물 생산성 향상을 위한 딥러닝 기반 농업 의사결정시스템)

  • Park, Jinuk;Ahn, Heuihak;Lee, ByungKwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.521-530 / 2018
  • This paper proposes an Agriculture Decision-making System (ADS) based on deep learning for improving crop productivity. The system collects location-based weather information in support of precision agriculture, predicts the current crop condition from the collected information and real-time crop data, and notifies the farmer of the result. It works as follows. The ICM (Information Collection Module) collects the location-based weather information. The DRCM (Deep learning based Risk Calculation Module) predicts whether the C, H, and N content and the moisture of the soil are appropriate for growing specific crops under the current weather. The RNM (Risk Notification Module) notifies the farmer of the DRCM's prediction result. Compared with existing systems, the proposed system is more stable because its accuracy degrades less as the amount of data increases, and it applies unsupervised learning in the analysis stage. Simulation results show that the ADS improved the success rate of data analysis by about 6%. The ADS predicts current crop growth conditions accurately, prevents crop diseases in advance in various environments, and provides optimized conditions for growing crops.
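
A minimal sketch of the three-module pipeline (ICM, DRCM, RNM) as plain Python classes; the weather stub, soil fields, risk rule, and threshold are all illustrative stand-ins, since the paper's trained model and data sources are not given here:

```python
# Hedged sketch: the ADS module pipeline wired together for one field reading.
from dataclasses import dataclass

@dataclass
class SoilReading:
    carbon: float
    hydrogen: float
    nitrogen: float
    moisture: float

class ICM:
    """Information Collection Module: location-based weather lookup (stubbed)."""
    def collect(self, lat, lon):
        return {"temp_c": 21.0, "rain_mm": 3.2}   # placeholder weather record

class DRCM:
    """Deep-learning Risk Calculation Module (a trivial rule stands in here)."""
    def predict_risk(self, soil: SoilReading, weather: dict) -> float:
        # A real DRCM would run a trained network; this returns a toy score.
        dryness = max(0.0, 0.3 - soil.moisture)
        return min(1.0, dryness * 2 + (0.2 if weather["rain_mm"] < 1 else 0.0))

class RNM:
    """Risk Notification Module: alerts the farmer above a threshold."""
    def notify(self, risk: float, threshold: float = 0.5):
        if risk >= threshold:
            print(f"ALERT: crop risk {risk:.2f} exceeds {threshold}")

weather = ICM().collect(35.0, 126.7)
risk = DRCM().predict_risk(SoilReading(0.5, 0.1, 0.3, 0.12), weather)
RNM().notify(risk)
```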

Antibiotics-Resistant Bacteria Infection Prediction Based on Deep Learning (딥러닝 기반 항생제 내성균 감염 예측)

  • Oh, Sung-Woo;Lee, Hankil;Shin, Ji-Yeon;Lee, Jung-Hoon
    • The Journal of Society for e-Business Studies / v.24 no.1 / pp.105-120 / 2019
  • The World Health Organization (WHO) and government agencies around the world have warned against antibiotic-resistant bacteria caused by the abuse of antibiotics and are strengthening care and monitoring to prevent infection. A rapid and accurate prediction method is nevertheless needed for preemptive measures: because culturing the infecting bacteria to identify an infection takes several days, quarantine and contact management alone cannot effectively prevent the spread of infection. In this study, the disease diagnoses and antibiotic prescriptions included in Electronic Health Records were embedded through a neural embedding model and matrix factorization, and a deep learning based classification model was proposed. The f1-score of the deep learning model increased from 0.525 to 0.617 when embedded information on diseases and antibiotics, the main causes of antibiotic resistance, was added to the patient's basic information and hospital use information, and the deep learning model outperformed traditional machine learning models. Analysis of the characteristics of antibiotic-resistant patients showed that resistant patients were more likely to use J01-class antibiotics than non-resistant patients diagnosed with the same diseases, and were prescribed 6.3 times more than the defined daily dose (DDD).
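
A minimal sketch of the embed-then-classify pattern the abstract describes, in Keras: diagnosis/antibiotic codes are embedded, pooled, concatenated with the patient's numeric features, and fed to a binary classifier. The vocabulary size, sequence length, embedding dimension, and feature width are illustrative assumptions:

```python
# Hedged sketch: code embeddings + numeric features -> resistance classifier.
import tensorflow as tf

n_codes, emb_dim, n_numeric = 5000, 32, 12       # hypothetical sizes

codes_in = tf.keras.Input(shape=(20,), dtype="int32")   # padded code sequence
numeric_in = tf.keras.Input(shape=(n_numeric,))          # demographics, visits

emb = tf.keras.layers.Embedding(n_codes, emb_dim, mask_zero=True)(codes_in)
code_vec = tf.keras.layers.GlobalAveragePooling1D()(emb)  # mean of code embeddings

x = tf.keras.layers.Concatenate()([code_vec, numeric_in])
x = tf.keras.layers.Dense(64, activation="relu")(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # resistant vs. not

model = tf.keras.Model([codes_in, numeric_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```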

Application Trends of Deep Learning Artificial Intelligence in Autonomous Things (자율사물을 위한 심층학습 인공지능 기술 적용 동향)

  • Cho, J.M.
    • Electronics and Telecommunications Trends / v.35 no.6 / pp.1-11 / 2020
  • Recently, autonomous things, equipment or devices that grasp the context of their circumstances on their own and act appropriately for the situation in the surrounding environment, have attracted much research interest. Autonomous things are expected to interact with humans more naturally, supersede humans in many tasks, and go further to solve problems by collaborating with each other without human intervention. This prospect leans heavily on AI, as deep learning has recently delivered astonishing breakthroughs and broadened its range of applications. This paper surveys application trends in deep learning-based AI techniques for autonomous things, especially autonomous driving vehicles, because they present a wide range of problems involving perception, decision-making, and action that are common to other autonomous things.

Deep Learning Based Security Model for Cloud based Task Scheduling

  • Devi, Karuppiah;Paulraj, D.;Muthusenthil, Balasubramanian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3663-3679 / 2020
  • Scheduling plays a dynamic role in cloud computing, both in generating and in efficiently distributing the resources for each task. The principal goal of scheduling is to limit resource starvation and to guarantee fairness among the parties using the resources. Because the demand for resources fluctuates dynamically, prearranging resources is challenging, and many task-scheduling approaches have been used in the cloud-computing environment. Security is also one of the core issues in distributed computing. We have designed a deep learning-based security model for scheduling tasks in cloud computing and implemented it using the CloudSim 3.0 simulator, written in Java. The results were verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, in comparison with the Round Robin (RR) and Weighted Round Robin (WRR) algorithms.
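
A minimal sketch of the two baseline schedulers the paper compares against, RR and WRR, on a toy task set in Python (the paper's own deep-learning security model is not specified here, and the task lengths and VM speeds below are made up):

```python
# Hedged sketch: Round Robin vs. Weighted Round Robin makespan on toy tasks.
from itertools import cycle

tasks = [5, 4, 6, 5, 4, 6]              # task lengths (arbitrary units)
vm_speed = {"vm0": 1.0, "vm1": 2.0}     # vm1 is twice as fast

def round_robin(tasks, vms):
    """Assign tasks to VMs in strict rotation, ignoring VM speed."""
    load = {v: 0.0 for v in vms}
    for t, v in zip(tasks, cycle(vms)):
        load[v] += t / vm_speed[v]
    return max(load.values())           # makespan = busiest VM's finish time

def weighted_round_robin(tasks, vms):
    """Rotate through VMs proportionally to speed (vm1 gets two slots per cycle)."""
    slots = [v for v in vms for _ in range(int(vm_speed[v]))]
    load = {v: 0.0 for v in vms}
    for t, v in zip(tasks, cycle(slots)):
        load[v] += t / vm_speed[v]
    return max(load.values())

print("RR makespan: ", round_robin(tasks, list(vm_speed)))
print("WRR makespan:", weighted_round_robin(tasks, list(vm_speed)))
```

On this toy input, WRR finishes sooner because it routes more work to the faster VM, which is the kind of baseline behavior the paper's metrics (makespan, utilization, execution time) are measured against.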