• Title/Summary/Keyword: Computer Training


A Methodology for Making Military Surveillance System to be Intelligent Applied by AI Model (AI모델을 적용한 군 경계체계 지능화 방안)

  • Changhee Han;Halim Ku;Pokki Park
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.57-64
    • /
    • 2023
  • The ROK military faces a significant challenge in its vigilance mission due to demographic problems, particularly the current aging population and population cliff. This study demonstrates the crucial role of the 4th industrial revolution and its core artificial intelligence algorithms in maximizing work efficiency within the Command & Control room by mechanizing simple tasks. To achieve a fully developed military surveillance system, we chose multi-object tracking (MOT) technology as an essential artificial intelligence component, aligning with our goal of an intelligent and automated surveillance system. Additionally, we prioritized data visualization and the user interface to ensure system accessibility and efficiency. These complementary elements come together to form a cohesive software application. The CCTV video data for this study was collected from the cameras installed at the 1st and 2nd main gates of the 00 unit, with the cooperation of the Command & Control room. Experimental results indicate that an intelligent and automated surveillance system enables the delivery of more information to the operators in the room. However, it is important to acknowledge the limitations of the software system developed in this study. By highlighting these limitations, we can present a future direction for the development of military surveillance systems.
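
As an illustration of the MOT component described above, the following is a minimal sketch of a detect-then-track loop over CCTV footage, assuming a generic per-frame detector. `detect_objects` is a hypothetical placeholder for any detector returning bounding boxes, and the greedy IoU matcher is a deliberately simplified association step, not the authors' implementation.

```python
# Simplified detect-then-track loop; unmatched tracks are dropped each frame,
# which real MOT systems would handle with persistence and motion models.
import cv2

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track_video(path, detect_objects, iou_thresh=0.3):
    cap = cv2.VideoCapture(path)
    tracks, next_id = {}, 0                    # track_id -> last box
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        candidates = dict(tracks)              # tracks available for matching
        tracks = {}
        for box in detect_objects(frame):      # list of (x1, y1, x2, y2)
            best_id, best_iou = None, iou_thresh
            for tid, prev in candidates.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:                # unmatched -> start a new track
                best_id, next_id = next_id, next_id + 1
            else:
                candidates.pop(best_id)        # each track matches at most once
            tracks[best_id] = box
        yield tracks                           # per-frame id -> box mapping
    cap.release()
```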

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.199-206
    • /
    • 2023
  • This study utilizes deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared for their performance in classifying construction waste: VGG-16, a convolutional neural network image classification algorithm, and ViT (Vision Transformer), an NLP-derived model that treats an image as a sequence of patches. Image data for construction waste was collected by crawling images from search engines worldwide; 3,000 images, 1,000 per category, were obtained after excluding images that were difficult to distinguish with the naked eye or that were duplicated and would interfere with the experiment. In addition, to improve the accuracy of the models, data augmentation was performed during training, yielding a total of 30,000 images. Despite the unstructured nature of the collected image data, the experimental results showed that VGG-16 achieved an accuracy of 91.5% and ViT achieved an accuracy of 92.7%. This suggests the possibility of practical application in actual construction waste data management work. If object detection or semantic segmentation techniques are utilized on the basis of this study, more precise classification will be possible even within a single image, resulting in more accurate waste classification.
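
To make the setup concrete, here is a hedged PyTorch sketch of a three-class transfer-learning classifier with the kind of augmentation the abstract describes. The directory layout, hyperparameters, and choice of transforms are illustrative assumptions, not the authors' settings.

```python
# Three-class waste classifier: ImageNet-pretrained VGG-16 fine-tuned with
# simple augmentation to enlarge the 3,000-image dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),           # augmentation transforms
    transforms.RandomRotation(15),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects waste_images/train/{wood,plastic,concrete}/... (hypothetical layout).
train_set = datasets.ImageFolder("waste_images/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.vgg16(weights="IMAGENET1K_V1")    # pretrained backbone
model.classifier[6] = nn.Linear(4096, 3)         # wood / plastic / concrete

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                    # one epoch shown
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Swapping the backbone for `models.vit_b_16(weights="IMAGENET1K_V1")` and replacing its `model.heads.head` with a three-way linear layer would give the corresponding ViT variant.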

Improving the Performance of Deep-Learning-Based Ground-Penetrating Radar Cavity Detection Model using Data Augmentation and Ensemble Techniques (데이터 증강 및 앙상블 기법을 이용한 딥러닝 기반 GPR 공동 탐지 모델 성능 향상 연구)

  • Yonguk Choi;Sangjin Seo;Hangilro Jang;Daeung Yoon
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.4
    • /
    • pp.211-228
    • /
    • 2023
  • Ground-penetrating radar (GPR) surveys, a nondestructive geophysical method, are commonly used to monitor embankments. The results of GPR surveys can be complex, depending on the situation, and data processing and interpretation depend on expert experience, potentially resulting in false detections; the process is also time-intensive. Consequently, various studies have been undertaken to detect cavities in GPR survey data using deep learning methods. Deep-learning-based approaches require abundant data for training, but GPR field survey data are often scarce due to cost and other factors constraining field studies. Therefore, in this study, a deep-learning-based model was developed for cavity detection in embankment GPR surveys using data augmentation strategies. A dataset was constructed by collecting survey data over several years from the same embankment. A You Only Look Once (YOLO) model, commonly used in computer vision for object detection, was employed for this purpose. By comparing and analyzing various strategies, the optimal data augmentation approach was determined. After initial model development, a stepwise process was employed, including box clustering, transfer learning, self-ensemble, and model ensemble techniques, to enhance the final model performance. The model performance was evaluated, with the results demonstrating its effectiveness in detecting cavities in embankment GPR survey data.
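
One step the abstract mentions, model ensembling, can be sketched at the bounding-box level: run several independently trained detectors on the same GPR image, pool their boxes, and suppress duplicates with non-maximum suppression (NMS). The `detectors` list below is a hypothetical abstraction over trained YOLO models; the paper's exact ensemble scheme may differ.

```python
# Box-level model ensemble: pool detections from several models, then NMS.
import torch
from torchvision.ops import nms

def ensemble_detect(image, detectors, iou_threshold=0.5):
    all_boxes, all_scores = [], []
    for detect in detectors:
        boxes, scores = detect(image)    # boxes: (N, 4) xyxy, scores: (N,)
        all_boxes.append(boxes)
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_threshold)   # drop overlapping duplicates
    return boxes[keep], scores[keep]
```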

Applying deep learning based super-resolution technique for high-resolution urban flood analysis (고해상도 도시 침수 해석을 위한 딥러닝 기반 초해상화 기술 적용)

  • Choi, Hyeonjin;Lee, Songhee;Woo, Hyuna;Kim, Minyoung;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.10
    • /
    • pp.641-653
    • /
    • 2023
  • As climate change and urbanization cause unprecedented natural disasters in urban areas, urban flood predictions with high fidelity and accuracy are crucial. However, conventional physically based and deep-learning-based urban flood modeling methods require large amounts of computational resources or data for high-resolution flooding analysis. In this study, we propose and implement a method for improving the spatial resolution of urban flood analysis using a deep-learning-based super-resolution technique. The proposed approach converts low-resolution flood maps produced by physically based modeling into high-resolution maps using a super-resolution deep learning model trained on high-resolution modeling data. When applied to two cases of retrospective flood analysis in part of the City of Portland, Oregon, U.S., the results of the 4-m resolution physical simulation were successfully converted into 1-m resolution flood maps through super-resolution. High structural similarity between the super-resolved images and the high-resolution originals was found, with image quality loss within acceptable limits: a PSNR of 22.80 dB and an SSIM of 0.73. The proposed super-resolution method enables efficient model training with a limited number of flood scenarios, significantly reducing data acquisition efforts and computational costs.
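
The two reported quality metrics, PSNR and SSIM, can be computed directly with scikit-image. This is a minimal evaluation sketch assuming the super-resolved and reference flood-depth maps are aligned NumPy arrays; the variable names are illustrative.

```python
# Compare a super-resolved 1-m flood map against the high-resolution reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_super_resolution(sr_depth: np.ndarray, hr_depth: np.ndarray):
    """sr_depth: model output at 1 m; hr_depth: 1-m physical simulation."""
    data_range = float(hr_depth.max() - hr_depth.min())
    psnr = peak_signal_noise_ratio(hr_depth, sr_depth, data_range=data_range)
    ssim = structural_similarity(hr_depth, sr_depth, data_range=data_range)
    return psnr, ssim   # the paper reports 22.80 dB and 0.73
```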

Mean Teacher Learning Structure Optimization for Semantic Segmentation of Crack Detection (균열 탐지의 의미론적 분할을 위한 Mean Teacher 학습 구조 최적화 )

  • Seungbo Shim
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.5
    • /
    • pp.113-119
    • /
    • 2023
  • Most infrastructure structures were completed during periods of economic growth. The number of structures reaching the end of their lifespan is increasing, and the proportion of old structures is gradually rising. The functions and performance these structures had at the time of design may deteriorate, which can even lead to safety accidents. To prevent such outcomes, accurate inspection and appropriate repair are required. To this end, demand is increasing for computer vision and deep learning technology that can accurately detect even minute cracks. However, deep learning algorithms require a large amount of training data, in particular label images indicating the location of cracks. Securing a large number of such label images consumes a great deal of labor and time. To reduce these costs as well as increase detection accuracy, this study proposed a learning structure based on the mean teacher method. The structure was trained on 900 labeled images and 3,000 unlabeled images. The crack detection network model was evaluated on over 300 labeled images, recording a mean intersection over union of 89.23% and an F1 score of 89.12%. This experiment confirmed that detection performance improved compared to supervised learning. The proposed method is expected to reduce the cost required to secure label images in the future.
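
The core of the mean teacher method is that the teacher network is an exponential moving average (EMA) of the student, and unlabeled images contribute a consistency loss between the two models' predictions. A minimal PyTorch sketch follows; the loss weighting and EMA decay are illustrative assumptions, not the paper's tuned values.

```python
# Mean Teacher for semi-supervised segmentation: EMA teacher + consistency loss.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)        # teacher is never updated by gradients
    return teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    # teacher <- alpha * teacher + (1 - alpha) * student
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def semi_supervised_loss(student, teacher, labeled, masks, unlabeled, lam=1.0):
    sup = F.cross_entropy(student(labeled), masks)       # supervised term
    with torch.no_grad():
        pseudo = teacher(unlabeled)                      # teacher targets
    cons = F.mse_loss(torch.softmax(student(unlabeled), 1),
                      torch.softmax(pseudo, 1))          # consistency term
    return sup + lam * cons
```

In a training loop, one would backpropagate the returned loss through the student, then call `ema_update` after each optimizer step.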

Digital Citizenship Library Programming in Award-Winning Libraries of the Future: A case review of public libraries in the United States (공공도서관의 디지털 시민성 프로그래밍: 미국의 미래 도서관 수상 도서관을 중심으로)

  • Jonathan M. Hollister;Jisue Lee
    • Journal of Korean Library and Information Science Society
    • /
    • v.54 no.4
    • /
    • pp.359-392
    • /
    • 2023
  • Digital citizenship includes an evolving set of knowledge and skills related to effectively and ethically using technology, especially when interacting with other people, information, and media in the online context. As public libraries have long provided access to and training with a variety of technologies, this study explores how digital citizenship has been covered in public library programming to identify potential trends and best practices. A purposive sample of public library recipients of the American Library Association (ALA) and Information Today Inc.'s Library of the Future Award over the past 11 years (2013-2023) identified seven case libraries to review. The titles and descriptions of 337 relevant library programs for audiences of school-aged children (5 years old and up) to seniors were collected over a 2-month period from each library's website and analyzed using Ribble & Parks' (2019) nine elements of digital citizenship. The findings suggest that programming related to digital citizenship most often addresses themes connected to digital access and digital fluency through coverage of topics related to computer and technology use. Based on themes and examples from the findings, public libraries are encouraged to expand upon existing programs to integrate all elements of digital citizenship, strive for inclusive and accessible digital citizenship education for all ages, and leverage resources and expertise from relevant stakeholders and community partnerships.

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.137-142
    • /
    • 2024
  • Recently, deep learning has become the de facto standard for medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities suffer from this problem as well, owing to their high variability. Research has therefore focused on designing deep neural network structures capable of effectively extracting the spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn their neurophysiological features. This paper analyzes methodologies used to handle small-scale data in emerging fields such as brain-computer interfaces and brain signal-based state prediction, and presents future directions for these technologies. First, the paper examines deep neural network structures for representing brain signals; it then analyzes self-supervised learning methodologies aimed at efficiently learning the characteristics of brain signals. Finally, the paper discusses key insights and future directions for deep-learning-based brain signal analysis.
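
As one concrete example of the kind of self-supervised pre-training such work surveys, the sketch below masks random time points of a multi-channel brain-signal window and trains a small encoder-decoder to reconstruct them, so that features are learned without labels. The architecture and masking scheme are assumptions for illustration only.

```python
# Masked-reconstruction pretext task for multi-channel brain signals.
import torch
import torch.nn as nn

class MaskedReconstructor(nn.Module):
    def __init__(self, channels=32, hidden=64):
        super().__init__()
        self.encoder = nn.Conv1d(channels, hidden, kernel_size=7, padding=3)
        self.decoder = nn.Conv1d(hidden, channels, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.decoder(torch.relu(self.encoder(x)))

def pretrain_step(model, x, mask_ratio=0.15):
    # Randomly mask a fraction of time points across all channels.
    mask = torch.rand(x.shape[0], 1, x.shape[2]) < mask_ratio
    corrupted = x.masked_fill(mask, 0.0)       # zero out masked time points
    recon = model(corrupted)
    # Reconstruction loss is computed only on the masked positions.
    return ((recon - x) ** 2)[mask.expand_as(x)].mean()
```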

Predicting the splitting tensile strength of manufactured-sand concrete containing stone nano-powder through advanced machine learning techniques

  • Manish Kewalramani;Hanan Samadi;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Ibrahim Albaijan;Hawkar Hashim Ibrahim;Saleh Alsulamy
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.375-394
    • /
    • 2024
  • The extensive utilization of concrete has given rise to environmental concerns, specifically the depletion of river sand. To address this issue, waste deposits can provide manufactured sand (MS) as a substitute for river sand. The objective of this study is to explore the application of machine learning techniques to facilitate the production of manufactured-sand concrete (MSC) containing stone nano-powder by estimating the splitting tensile strength (STS) from the compressive strength of cement (CSC), tensile strength of cement (TSC), curing age (CA), maximum size of the crushed stone (Dmax), stone nano-powder content (SNC), fineness modulus of sand (FMS), water-to-cement ratio (W/C), sand ratio (SR), and slump (S). To achieve this goal, a total of 310 data points, encompassing nine influential factors affecting the mechanical properties of MSC, were collected through laboratory tests. The gathered dataset was divided into two subsets, one for training and the other for testing, comprising 90% (280 samples) and 10% (30 samples) of the total data, respectively. Using the generated dataset, novel models were developed for evaluating the STS of MSC in relation to the nine input features. The analysis revealed significant correlations of CSC and CA with STS. Moreover, sensitivity analysis using an empirical model showed that parameters such as FMS and W/C exert minimal influence on STS. Various loss functions were employed to gauge the effectiveness and precision of the methodologies. The devised models exhibited commendable accuracy and reliability, with all models displaying an R-squared value surpassing 0.75 and near-zero loss values. To further simplify the estimation of STS for engineering applications, a user-friendly graphical interface for the machine learning models was also developed. These proposed models present a practical alternative to laborious, expensive, and complex laboratory techniques, thereby simplifying the production of mortar specimens.
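
A hedged scikit-learn sketch of the modeling setup follows: the nine listed factors as inputs, a 90/10 train/test split, and R-squared as one quality criterion. The gradient-boosting regressor is a stand-in; the paper develops and compares several ML techniques of its own.

```python
# Regression on the nine mix/curing factors to predict STS.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# The nine input features, in the order listed in the abstract.
FEATURES = ["CSC", "TSC", "CA", "Dmax", "SNC", "FMS", "W/C", "SR", "S"]

def fit_sts_model(X, y):
    """X: (310, 9) array of the nine factors; y: measured STS values."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                              random_state=0)  # 90/10 split
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    r2 = r2_score(y_te, model.predict(X_te))  # paper reports R^2 > 0.75
    return model, r2
```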

Machine Learning-Based Prediction of COVID-19 Severity and Progression to Critical Illness Using CT Imaging and Clinical Data

  • Subhanik Purkayastha;Yanhe Xiao;Zhicheng Jiao;Rujapa Thepumnoeysuk;Kasey Halsey;Jing Wu;Thi My Linh Tran;Ben Hsieh;Ji Whae Choi;Dongcui Wang;Martin Vallieres;Robin Wang;Scott Collins;Xue Feng;Michael Feldman;Paul J. Zhang;Michael Atalay;Ronnie Sebro;Li Yang;Yong Fan;Wei-hua Liao;Harrison X. Bai
    • Korean Journal of Radiology
    • /
    • v.22 no.7
    • /
    • pp.1213-1224
    • /
    • 2021
  • Objective: To develop a machine learning (ML) pipeline based on radiomics to predict Coronavirus Disease 2019 (COVID-19) severity and future deterioration to critical illness using CT and clinical variables. Materials and Methods: Clinical data were collected from 981 patients from a multi-institutional international cohort with real-time polymerase chain reaction-confirmed COVID-19. Radiomics features were extracted from chest CT of the patients. The data of the cohort were randomly divided into training, validation, and test sets using a 7:1:2 ratio. An ML pipeline, consisting of a model to predict severity and a time-to-event model to predict progression to critical illness, was trained on radiomics features and clinical variables. The receiver operating characteristic area under the curve (ROC-AUC), concordance index (C-index), and time-dependent ROC-AUC were calculated to determine model performance, which was compared with consensus CT severity scores obtained by visual interpretation by radiologists. Results: Among the 981 patients with confirmed COVID-19, 274 developed critical illness. Radiomics features and clinical variables yielded the best performance for the prediction of disease severity, with the highest test ROC-AUC of 0.76, compared with 0.70 for the visual CT severity score and clinical variables (p = 0.023). The progression prediction model achieved a test C-index of 0.868 when based on the combination of CT radiomics and clinical variables, compared with 0.767 when based on CT radiomics features alone (p < 0.001), 0.847 when based on clinical variables alone (p = 0.110), and 0.860 when based on the combination of visual CT severity scores and clinical variables (p = 0.549). Furthermore, the model based on the combination of CT radiomics and clinical variables achieved time-dependent ROC-AUCs of 0.897, 0.933, and 0.927 for the prediction of progression risk at 3, 5, and 7 days, respectively. Conclusion: CT radiomics features combined with clinical variables were predictive of COVID-19 severity and progression to critical illness with fairly high accuracy.
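
The severity half of such a pipeline can be sketched as follows: concatenate radiomics and clinical features, fit a classifier on the training portion, and report test ROC-AUC. The logistic-regression model and the simple two-way split are illustrative simplifications of the paper's 7:1:2 design, not its actual pipeline components.

```python
# Severity classifier on combined radiomics + clinical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def severity_auc(radiomics, clinical, severe):
    """radiomics, clinical: per-patient feature arrays; severe: 0/1 labels."""
    X = np.hstack([radiomics, clinical])          # combined feature matrix
    X_train, X_test, y_train, y_test = train_test_split(
        X, severe, test_size=0.2, random_state=0)  # 20% test, as in 7:1:2
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```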

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.1-16
    • /
    • 2015
  • Traffic accidents have been one of the major causes of death worldwide over the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. In order to reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insights derived from such analyses can be used in accident prevention approaches. Traffic accident data mining is an activity to find useful knowledge about such relationships that is not well known and that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most mainly focused on predicting accident risk using accident-related factors. Supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks are used for these predictions. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent relationships between the target feature and other features. Rules are fairly easy for humans to understand, and can therefore provide insight and comprehensible results. Association rule learning methods and subgroup discovery methods are representative rule-based learning methods for descriptive tasks. These two families of algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups, the discovery of key persons in social communities, and so on. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. The association rule learning method, one of the unsupervised learning methods, searches for frequent item sets in the data and translates them into rules. In contrast, the subgroup discovery method is a supervised learning method that discovers rules for user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus our attention on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the ruleset more compact and easier to understand by removing uninteresting or redundant rules.
We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its pure unsupervised mode, can discover hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires considerable effort to tune its parameters.
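
For the unsupervised side of the comparison, the association-rule step can be sketched with mlxtend: mine frequent item sets from one-hot-encoded accident records, then turn them into rules filtered by confidence. The column names, toy data, and thresholds below are hypothetical.

```python
# Association rule mining over binary accident attributes.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one accident; each column is a binary attribute flag.
accidents = pd.DataFrame({
    "night":         [1, 0, 1, 1, 0, 1],
    "rain":          [1, 0, 0, 1, 0, 1],
    "speeding":      [1, 1, 0, 1, 0, 1],
    "severe_injury": [1, 0, 0, 1, 0, 1],
}, dtype=bool)

frequent = apriori(accidents, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```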