• Title/Summary/Keyword: MachineLearning

Prediction of ocean surface current: Research status, challenges, and opportunities. A review

  • Ittaka Aldini;Adhistya E. Permanasari;Risanuri Hidayat;Andri Ramdhan
    • Ocean Systems Engineering / v.14 no.1 / pp.85-99 / 2024
  • Ocean surface currents play an essential role in the Earth's climate system and significantly impact marine ecosystems, weather patterns, and human activities. However, predicting ocean surface currents remains challenging due to the complexity and variability of the oceanic processes involved. This review article provides an overview of the current research status, challenges, and opportunities in the prediction of ocean surface currents. We discuss the various observational and modeling approaches used to study ocean surface currents, including satellite remote sensing, in situ measurements, and numerical models. We also highlight the major challenges facing the prediction of ocean surface currents, such as data assimilation, model-observation integration, and the representation of sub-grid-scale processes. We suggest that future research should focus on developing advanced modeling techniques, such as machine learning, and on integrating multiple observational platforms to improve the accuracy and skill of ocean surface current predictions. We also emphasize the need to address the limitations of observing instruments, such as delays in receiving data, versioning errors, missing data, and undocumented data processing techniques; improving data availability and quality will be essential for enhancing prediction accuracy. Future research should also develop effective bias-correction methods, systematic data preprocessing procedures, and combined models and explainable AI (XAI) models that incorporate data from various sources. Advances in predicting ocean surface currents will benefit applications such as maritime operations, climate studies, and ecosystem management.

Harnessing the Power of Voice: A Deep Neural Network Model for Alzheimer's Disease Detection

  • Chan-Young Park;Minsoo Kim;YongSoo Shim;Nayoung Ryoo;Hyunjoo Choi;Ho Tae Jeong;Gihyun Yun;Hunboc Lee;Hyungryul Kim;SangYun Kim;Young Chul Youn
    • Dementia and Neurocognitive Disorders / v.23 no.1 / pp.1-10 / 2024
  • Background and Purpose: Voice, reflecting cerebral function, holds potential for analyzing and understanding brain function, especially in the context of cognitive impairment (CI) and Alzheimer's disease (AD). This study used voice data to distinguish between normal cognition and CI or Alzheimer's disease dementia (ADD). Methods: This study enrolled three groups of subjects: 1) 52 subjects with subjective cognitive decline; 2) 110 subjects with mild CI; and 3) 59 subjects with ADD. Voice features were extracted using Mel-frequency cepstral coefficients (MFCCs) and Chroma features. Results: A deep neural network (DNN) model showed promising performance, with an accuracy of roughly 81% across 10 trials in predicting ADD, which increased to an average of about 82.0%±1.6% when evaluated against an unseen test dataset. Conclusions: Although the results did not demonstrate the level of accuracy necessary for a definitive clinical tool, they provide a compelling proof of concept for the use of voice data in cognitive status assessment. DNN algorithms using voice offer a promising approach to early detection of AD and could improve the accuracy and accessibility of diagnosis, ultimately leading to better outcomes for patients.
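The MFCC-plus-Chroma feature extraction named in this abstract can be prototyped in a few lines. The sketch below is only an illustration of that general approach, assuming the librosa library; the file path, summary statistics, and commented classifier settings are assumptions for illustration, not the authors' pipeline.

```
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_voice_features(wav_path, sr=16000, n_mfcc=13):
    """Fixed-length MFCC + Chroma summary vector for one recording."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)         # (12, frames)
    # Mean and standard deviation over time give a fixed-length vector
    # that a dense network can consume regardless of recording length.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           chroma.mean(axis=1), chroma.std(axis=1)])

# Hypothetical usage: X is a matrix of feature vectors, y the group labels.
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```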

Artificial Intelligence-Based Colorectal Polyp Histology Prediction by Using Narrow-Band Image-Magnifying Colonoscopy

  • Istvan Racz;Andras Horvath;Noemi Kranitz;Gyongyi Kiss;Henriett Regoczi;Zoltan Horvath
    • Clinical Endoscopy / v.55 no.1 / pp.113-121 / 2022
  • Background/Aims: We have been developing an artificial intelligence-based polyp histology prediction (AIPHP) method that classifies Narrow Band Imaging (NBI) magnifying colonoscopy images to predict the hyperplastic or neoplastic histology of polyps. Our aim was to analyze the accuracy of AIPHP and narrow-band imaging international colorectal endoscopic (NICE) classification-based histology predictions, and to compare the results of the two methods. Methods: We studied 373 colorectal polyp samples taken by polypectomy from 279 patients. The documented NBI still images were analyzed by the AIPHP method and by the NICE classification in parallel. The AIPHP software was created using machine learning; it measures five geometrical and color features on each endoscopic image. Results: The accuracy of AIPHP was 86.6% (323/373) across all polyps. Comparing AIPHP accuracy for diminutive versus non-diminutive polyps gave 82.1% vs. 92.2% (p=0.0032). The accuracy of hyperplastic histology prediction was significantly better with NICE than with AIPHP, both in diminutive polyps (n=207) (95.2% vs. 82.1%; p<0.001) and in all evaluated polyps (n=373) (97.1% vs. 86.6%; p<0.001). Conclusions: Our artificial intelligence-based polyp histology prediction software could predict histology with high accuracy only in the large-size polyp subgroup.
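The abstract does not list which five geometrical and color features the AIPHP software measures, so the sketch below is purely illustrative: it computes five plausible stand-ins (area, perimeter, circularity, mean hue, saturation spread) with OpenCV, assuming a binary polyp mask is available. None of these feature choices are confirmed by the paper.

```
import cv2
import numpy as np

def polyp_features(bgr_image, mask):
    """Five illustrative geometric/color features; mask is uint8, polyp > 0."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)          # largest region = polyp
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue_mean = hsv[..., 0][mask > 0].mean()         # average color tone
    sat_std = hsv[..., 1][mask > 0].std()           # color-spread proxy
    return [area, perimeter, circularity, hue_mean, sat_std]
```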

Scoring systems for the management of oncological hepato-pancreato-biliary patients

  • Alexander W. Coombs;Chloe Jordan;Sabba A. Hussain;Omar Ghandour
    • Annals of Hepato-Biliary-Pancreatic Surgery / v.26 no.1 / pp.17-30 / 2022
  • Oncological scoring systems in surgery are used as evidence-based decision aids to support management by assessing prognosis, treatment effectiveness, and recurrence risk. Currently, the use of scoring systems in the hepato-pancreato-biliary (HPB) field is limited, as concerns over precision and applicability prevent their widespread clinical implementation. The aim of this review was to discuss clinically useful oncological scoring systems for the surgical management of HPB patients. A narrative review was conducted to appraise oncological HPB scoring systems. Original research articles on established and novel scoring systems were searched using Google Scholar, PubMed, Cochrane, and Ovid Medline. Selected models were determined by the authors. This review discusses nine scoring systems in cancers of the liver (CLIP, BCLC, ALBI grade, RETREAT, Fong's score), pancreas (Genç's score, mGPS), and biliary tract (TMHSS, MEGNA). Eight models used exclusively objective measurements to compute their scores, while one used a mixture of subjective and objective inputs. Seven models evaluated their scoring performance in external populations, with reported discriminatory c-statistics ranging from 0.58 to 0.82. Model variables were most frequently selected using a combination of univariate and multivariate analysis. Calibration, another determinant of model accuracy, was poorly reported amongst the nine scoring systems. A diverse range of HPB surgical scoring systems may facilitate evidence-based decisions on patient management and treatment. Future scoring systems should be developed using heterogeneous patient cohorts with improved stratification, and emerging approaches are likely to integrate machine learning and genetics to improve outcome prediction.

Study on Failure Classification of Missile Seekers Using Inspection Data from Production and Manufacturing Phases

  • Ye-Eun Jeong;Kihyun Kim;Seong-Mok Kim;Youn-Ho Lee;Ji-Won Kim;Hwa-Young Yong;Jae-Woo Jung;Jung-Won Park;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.30-39 / 2024
  • This study introduces a novel approach for identifying potential failure risks in missile manufacturing by leveraging Quality Inspection Management (QIM) data, addressing the challenges posed by a dataset comprising 666 variables and substantial class imbalance. Using SMOTE for data augmentation and Lasso regression for dimensionality reduction, followed by a Random Forest model, the approach achieves a 99.40% accuracy rate in classifying missiles with a high likelihood of failure (a minimal sketch of this pipeline follows below). Such measures enable the preemptive identification of missiles at heightened risk of failure, thereby mitigating the risk of field failures and extending missile life. The combination of Lasso regression and Random Forest is used to pinpoint the critical variables and test items that most strongly influence failure, with particular emphasis on variables related to performance and connection resistance. Moreover, the research highlights the potential for broadening the scope of data-driven decision-making within quality control systems, including the refinement of maintenance strategies and the adjustment of control limits for essential test items.
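A minimal sketch of the SMOTE → Lasso → Random Forest pipeline named above, run on synthetic stand-in data (the QIM inspection records are not public); the imbalance ratio and all hyperparameters are assumptions.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Stand-in for the 666-variable, imbalanced inspection dataset.
X, y = make_classification(n_samples=5000, n_features=666, n_informative=30,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Rebalance the rare failure class with synthetic minority samples.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# 2) Lasso drives uninformative coefficients to exactly zero,
#    shrinking the 666 variables to the subset carrying failure signal.
lasso = LassoCV(cv=5, random_state=0).fit(X_res, y_res)
keep = np.flatnonzero(lasso.coef_)

# 3) Random Forest on the reduced feature set.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_res[:, keep], y_res)
print(f"kept {keep.size} features, "
      f"test accuracy {rf.score(X_te[:, keep], y_te):.4f}")
```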

Predicting Traffic Accident Risk based on Driver Abnormal Behavior and Gaze

  • Ji-Woong Yang;Hyeon-Jin Jung;Han-Jin Lee;Tae-Wook Kim;Ellen J. Hong
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.1-9 / 2024
  • In this paper, we propose a new approach that analyzes driver behavior and gaze changes within the vehicle in real time to assess and predict the risk of traffic accidents. Utilizing data analysis and machine learning algorithms, this research precisely measures drivers' abnormal behaviors and gaze movement patterns in real time and aggregates them into an overall Risk Score to evaluate the potential for a traffic accident (a toy version of this aggregation is sketched below). The work underscores the significance of in-vehicle factors that earlier research has largely left unexplored, providing a novel perspective in traffic safety research. This approach suggests the feasibility of real-time predictive models for traffic accident prevention and safety enhancement, and is expected to provide foundational data for future traffic accident prevention strategies and policy formulation.
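The abstract describes combining behavior and gaze signals into a single Risk Score but does not specify the scoring model; the weighted sum below is only a toy illustration of that idea, with hypothetical weights and inputs.

```
def risk_score(abnormal_behavior, gaze_deviation,
               w_behavior=0.6, w_gaze=0.4):
    """Both inputs normalized to [0, 1]; weights are illustrative guesses."""
    return w_behavior * abnormal_behavior + w_gaze * gaze_deviation

# Hypothetical frame-by-frame usage:
print(risk_score(abnormal_behavior=0.8, gaze_deviation=0.3))  # 0.6
```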

Verification of the Suitability of Fine Dust and Air Quality Management Systems Based on Artificial Intelligence Evaluation Models

  • Heungsup Sim
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.165-170 / 2024
  • This study aims to verify the accuracy of the air quality management system in Yangju City using an artificial intelligence (AI) evaluation model. The consistency and reliability of fine dust data were assessed by comparing public data from the Ministry of Environment with data from Yangju City's air quality management system. To this end, we analyzed the completeness, uniqueness, validity, consistency, accuracy, and integrity of the data. Exploratory statistical analysis was employed to compare data consistency. The results of the AI-based data quality index evaluation revealed no statistically significant differences between the two datasets. Among AI-based algorithms, the random forest model demonstrated the highest predictive accuracy, with its performance evaluated through ROC curves and AUC. Notably, the random forest model was identified as a valuable tool for optimizing the air quality management system. This study confirms that the reliability and suitability of fine dust data can be effectively assessed using AI-based model performance evaluation, contributing to the advancement of air quality management strategies.
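A minimal sketch of the random-forest ROC/AUC evaluation described above, run on synthetic stand-in data rather than the Yangju City records; the feature count, sample size, and hyperparameters are assumptions.

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Stand-in for labeled air-quality records.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]      # scores for the positive class
fpr, tpr, _ = roc_curve(y_te, proba)      # points on the ROC curve
print("AUC:", roc_auc_score(y_te, proba))
```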

Seismic Data Processing Using BERT-Based Pretraining: Comparison of Shotgather Arrays

  • Youngjae Shin
    • Geophysics and Geophysical Exploration / v.27 no.3 / pp.171-180 / 2024
  • Seismic data processing analyzes seismic wave data to understand the internal structure and properties of the Earth, and it requires substantial computational power. Recently, machine learning (ML) techniques have been introduced to address these challenges and have been applied to tasks such as noise reduction and velocity model construction. However, most studies have focused on a single seismic processing task, leaving the similar features and structures shared across these datasets underexploited. In this study, we compared the efficacy of using receiver-wise time-series data ("receiver array") and synchronized receiver signals ("time array") from shotgathers for pretraining a Bidirectional Encoder Representations from Transformers (BERT) model; the two array layouts are sketched below. Shotgather data generated from a synthetic model containing faults were used to perform noise reduction, velocity prediction, and fault detection tasks. For random noise reduction, both the receiver and time arrays performed well. However, for tasks requiring the identification of spatial distributions, such as velocity estimation and fault detection, the time array gave superior results.
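A small sketch of the two tokenizations compared above, assuming a shotgather stored as a (time samples × receivers) NumPy array; the dimensions are illustrative, not from the paper.

```
import numpy as np

# Stand-in shotgather: 1000 time samples recorded at 96 receivers.
gather = np.random.randn(1000, 96)

# "Receiver array": each token is one receiver's full time series.
receiver_array = gather.T            # 96 tokens of length 1000

# "Time array": each token is one time slice across all receivers,
# preserving the spatial layout that velocity/fault tasks depend on.
time_array = gather                  # 1000 tokens of length 96
```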

Study on the Performance Evaluation of Encoding and Decoding Schemes in Vector Symbolic Architectures

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.4 / pp.229-235 / 2024
  • Recent years have seen active research on methods for efficiently processing and interpreting large volumes of data in the fields of artificial intelligence and machine learning. One such technology, Vector Symbolic Architecture (VSA), offers an innovative approach to representing complex symbols and data using high-dimensional vectors, and has attracted particular attention in applications such as natural language processing, image recognition, and robotics. This study quantitatively evaluates the characteristics and performance of VSA methodologies by applying five of them to the MNIST dataset and measuring key performance indicators, including encoding speed, decoding speed, memory usage, and recovery accuracy, across different vector lengths (a toy binding example follows below). BSC and VT demonstrated relatively fast encoding and decoding, while MAP and HRR were relatively slow. In terms of memory usage, BSC was the most efficient, whereas MAP used the most memory. Recovery accuracy was highest for MAP and lowest for BSC. These results provide a basis for selecting an appropriate VSA methodology for a given application area.
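As a toy illustration of what a VSA encodes and decodes, the sketch below shows elementwise binding, bundling, and unbinding with bipolar vectors in the style of the MAP family mentioned above; the dimensionality is arbitrary, and the five compared schemes each define these operations differently, so treat this only as the general flavor.

```
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                  # illustrative dimensionality
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

bound = a * b                               # binding: elementwise product
bundle = np.sign(a + b + bound)             # bundling: sum, squash to ±1

b_hat = bound * a                           # unbinding: a*a = 1 elementwise
print((b_hat == b).mean())                  # exact recovery: 1.0

# Recovery from a bundle is only approximate: agreement with b stays
# well above the 0.5 chance level but below 1.
print((bundle * a == b).mean())
```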

A counting-time optimization method for artificial neural network (ANN) based gamma-ray spectroscopy

  • Moonhyung Cho;Jisung Hwang;Sangho Lee;Kilyoung Ko;Wonku Kim;Gyuseong Cho
    • Nuclear Engineering and Technology / v.56 no.7 / pp.2690-2697 / 2024
  • With advancements in machine learning technologies, artificial neural networks (ANNs) are being widely used to improve the performance of gamma-ray spectroscopy based on NaI(Tl) scintillation detectors. Typically, the performance of ANNs is evaluated using test datasets composed of actual spectra, but generating test datasets that cover a wide range of actual spectra across varied scenarios is inefficient and time-consuming. Thus, instead of measuring actual spectra, we generated virtual spectra with diverse spectral features by sampling from categorical distribution functions derived from the base spectra of six radioactive isotopes: 54Mn, 57Co, 60Co, 134Cs, 137Cs, and 241Am. For practical applications, we determined the optimum counting time (OCT) as the point at which the change in the Kullback-Leibler divergence (ΔKLDV) between the synthetic spectra used for training the ANN and the virtual spectra approaches zero (a toy version of this stopping rule is sketched below). Classification accuracies on actual spectra improved significantly when the spectra were measured up to their respective OCTs. These outcomes demonstrate that the proposed method can effectively determine OCTs for ANN-based gamma-ray spectroscopy without the need to measure actual spectra.
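A toy version of the stopping rule described above: accumulate counts, track the KL divergence against a reference spectrum, and stop when its change per step approaches zero. The reference shape, count rate, and threshold here are illustrative assumptions, not the paper's values.

```
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two count vectors, normalized to distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
ref = rng.dirichlet(np.ones(512))            # stand-in reference spectrum shape
counts = np.zeros(512)
prev = np.inf
for t in range(1, 10_001):                   # t = counting time in seconds
    counts += rng.multinomial(1000, ref)     # accumulate one more second
    kld = kl_divergence(ref, counts)
    if abs(prev - kld) < 1e-4:               # ΔKLD ≈ 0 -> optimum counting time
        print("OCT ≈", t, "s")
        break
    prev = kld
```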