• Title/Summary/Keyword: 한계입력 (limit input)

628 results found (processing time: 0.03 seconds)

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services
    • /
    • v.24 no.5
    • /
    • pp.17-27
    • /
    • 2023
  • Today, as AI (artificial intelligence) technology matures and becomes more practical, it is widely used across many application fields in real life. An AI model is trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly shifting data degrade the model's performance. In the security field in particular, where detecting drift signals from deployed models is essential for responding to constantly emerging, unknown attacks, lifecycle management of the entire model is becoming increasingly necessary. Drift can generally be detected through changes in a model's accuracy or error rate (loss), but this approach requires ground-truth labels for the model's predictions, and the point at which drift actually occurs remains uncertain: the error rate is strongly affected by external environmental factors, model selection, parameter settings, and new input data, so the moment of real data drift cannot be pinpointed from that value alone. This paper therefore proposes a method for detecting when actual drift occurs through an anomaly analysis technique based on XAI (eXplainable Artificial Intelligence). In tests on a classification model that detects DGA (domain generation algorithm) traffic, anomaly scores were computed from the SHAP (SHapley Additive exPlanations) values of post-deployment data, confirming that the drift point can be detected efficiently.
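The drift-scoring idea the abstract describes can be sketched as scoring post-deployment feature attributions (e.g. SHAP values) against a training-time baseline. This is an illustrative reconstruction, not the authors' method: it assumes attribution vectors are already available, and the z-score statistic and threshold are placeholder choices.

```python
from statistics import mean, stdev

def baseline_stats(attributions):
    """Per-feature mean/std of attribution vectors (e.g. SHAP values)
    collected on held-out data at deployment time."""
    cols = list(zip(*attributions))
    return [(mean(c), stdev(c)) for c in cols]

def anomaly_score(attr, stats, eps=1e-9):
    """Mean absolute z-score of one attribution vector w.r.t. the baseline."""
    return mean(abs(a - m) / (s + eps) for a, (m, s) in zip(attr, stats))

def drift_flags(window, stats, threshold=3.0):
    """Flag attribution vectors whose anomaly score exceeds the threshold."""
    return [anomaly_score(a, stats) > threshold for a in window]

# Toy baseline: attributions fluctuating around (0.5, -0.2)
baseline = [(0.5 + 0.01 * i, -0.2 - 0.01 * i) for i in range(-5, 6)]
stats = baseline_stats(baseline)
new_window = [(0.51, -0.21), (2.0, 1.5)]  # second vector is far off-baseline
flags = drift_flags(new_window, stats)    # → [False, True]
```

In practice the baseline would come from SHAP values computed on the training or validation set, and flags over a sliding window of production data would mark the candidate drift point.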

The Role of Relational Agency in a Need-reality Colliding Situation (욕구-현실 충돌 상황에서의 주체성의 역할)

  • Seheon Kim ;Taekyun Hur
    • Korean Journal of Culture and Social Issue
    • /
    • v.29 no.4
    • /
    • pp.617-636
    • /
    • 2023
  • The goal of this study was to explain efforts to overcome need-reality collisions as a cultural characteristic of Koreans. Specifically, we examined whether behavior varies with the degree of relational agency when one's needs conflict with reality. A total of 217 participants took part in an online experiment, and data from 156 participants were analyzed. After responding to the relational agency scale, participants were presented with decision-making scenarios containing conflicting factors. The scenarios concerned buying a house and contracting a wedding hall, and in each scenario two important values were set to conflict in the market. Participants read each scenario and entered the level they wanted for each value. They then encountered a situation in which no candidate matching their desired levels could be found, and reported their willingness to make additional effort themselves. The degree of relational agency showed a positive relationship with the degree of additional effort. In addition, the extent to which the desired level exceeded reality (expectancy discrepancy) had a nonlinear (inverted U-shaped) influence on additional effort after controlling for individual differences. Furthermore, the interaction between relational agency and expectancy discrepancy was significant: for individuals low in relational agency, expectancy discrepancy was not significantly related to the dependent variable, whereas for individuals high in relational agency the nonlinear relationship was significant. Based on these results, the role and function of Koreans' psychological characteristic of relational agency in managing need-reality collisions are discussed.

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.505-518
    • /
    • 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models cannot sufficiently fuse multi-modal features while preserving the characteristics of both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. In this paper, we therefore propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types through an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. It also extracts context-rich 3D geometric features from input point clouds of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% improvement over the PTv2 model, which uses only 3D geometric features, and a 12.12% improvement over the MVPNet model, which uses 2D-3D multi-modal features, demonstrating its effectiveness and usefulness.
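The cross-attention fusion named in the abstract can be illustrated with a bare scaled dot-product cross-attention in plain Python. The learned Q/K/V projections and multiple heads of the actual model are omitted, and the feature vectors are toy values, so this shows only the attention mechanic, not MMCA-Net itself.

```python
from math import exp, sqrt

def softmax(xs):
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query (e.g. a 3D point's
    geometric feature) attends over keys/values (e.g. 2D visual features
    projected to the same points). No learned projections, single head."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                 # one 3D-point feature
k = [[1.0, 0.0], [0.0, 1.0]]     # two 2D-pixel features
v = [[10.0, 0.0], [0.0, 10.0]]
fused = cross_attention(q, k, v)
```

The query most similar to the first key receives the larger attention weight, so the fused vector leans toward the first value row.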

Neural Network-Based Prediction of Dynamic Properties (인공신경망을 활용한 동적 물성치 산정 연구)

  • Min, Dae-Hong;Kim, YoungSeok;Kim, Sewon;Choi, Hyun-Jun;Yoon, Hyung-Koo
    • Journal of the Korean Geotechnical Society
    • /
    • v.39 no.12
    • /
    • pp.37-46
    • /
    • 2023
  • Dynamic soil properties are essential for predicting the detailed behavior of the ground, but there are limits to gathering soil samples and performing additional experiments. In this study, we used an artificial neural network (ANN) to predict dynamic soil properties from static soil properties. The selected static properties were soil cohesion, internal friction angle, porosity, specific gravity, and uniaxial compressive strength; the compressional and shear wave velocities were chosen as the dynamic properties. The Levenberg-Marquardt and Bayesian regularization methods were used to enhance the reliability of the ANN results, and the reliability associated with each optimization method was compared. The accuracy of the ANN model, expressed as the coefficient of determination, was greater than 0.9 in both the training and testing phases, indicating that the proposed model is highly reliable. The output values were further verified with new input data, and the results showed high accuracy.
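The static-to-dynamic mapping the study trains can be sketched as the forward pass of a one-hidden-layer network. The layer sizes and weights below are illustrative placeholders, not the fitted values, and the Levenberg-Marquardt / Bayesian-regularization training itself is not shown.

```python
from math import tanh

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer network with tanh activation: the kind of mapping
    the study trains from static soil properties to wave velocities."""
    hidden = [tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# 5 static inputs -> 3 hidden units -> 2 outputs (Vp, Vs), toy weights
x = [0.2, 0.5, 0.4, 0.6, 0.3]     # normalized static properties
w1 = [[0.1] * 5, [0.2] * 5, [-0.1] * 5]
b1 = [0.0, 0.1, 0.0]
w2 = [[1.0, 0.5, -0.5], [0.3, 0.3, 0.3]]
b2 = [0.0, 0.0]
vp_vs = mlp_forward(x, w1, b1, w2, b2)
```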

Measurement of the Plane Wave Reflection Coefficient for the Saturated Granular Medium in the Water Tank and Comparison to Predictions by the Biot Theory (수조에서 입자 매질의 평면파 반사계수 측정과 Biot 이론에 의한 예측)

  • Lee Keun-Hwa
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.246-256
    • /
    • 2006
  • The plane wave reflection coefficient is an acoustic property containing all the information concerning the ocean bottom and can be used as an input parameter to various acoustic propagation models. In this paper, we measure the plane wave reflection coefficient, the sound speed, and the attenuation for a saturated granular medium in a water tank. Three kinds of glass beads and natural sand are used as the granular media. The reflection experiment is performed with sinusoidal tone bursts of 100 kHz at incident angles from 28 to 53 degrees, and the sound speed and attenuation are measured with the same signal. From the measured reflection signal, the reflection coefficient is calculated with the self-calibration method, and the experimental uncertainties are discussed. The sound speed and attenuation measurements are used to estimate the porosity and permeability, the main Biot parameters. The estimated values are compared to directly measured values and used as inputs to the Biot theory to calculate the theoretical reflection coefficient. Finally, the reflection coefficient predicted by the Biot theory is compared to the measured one and their characteristics are discussed.
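The full Biot poroelastic reflection coefficient is involved; as a simpler reference point, the classical fluid-fluid (Rayleigh) plane-wave reflection coefficient with Snell's law can be computed as below. The densities and sound speeds are illustrative, not the paper's measured values, and a fluid half-space is only a crude stand-in for a poroelastic sediment.

```python
import cmath
import math

def fluid_reflection(theta_i_deg, rho1, c1, rho2, c2):
    """Plane-wave reflection coefficient between two fluid half-spaces:
    R = (Z2 - Z1) / (Z2 + Z1), with impedances Z = rho*c/cos(theta)
    and the transmission angle given by Snell's law."""
    ti = math.radians(theta_i_deg)
    sin_t = (c2 / c1) * math.sin(ti)       # Snell's law
    cos_t = cmath.sqrt(1 - sin_t ** 2)     # complex past the critical angle
    z1 = rho1 * c1 / math.cos(ti)
    z2 = rho2 * c2 / cos_t
    return (z2 - z1) / (z2 + z1)

# Water over a denser, faster sediment-like fluid (toy parameters)
r = fluid_reflection(30.0, 1000.0, 1500.0, 2000.0, 1700.0)
```

Past the critical angle (here arcsin(1500/1700) ≈ 62°), cos_t becomes imaginary and the coefficient has unit magnitude, i.e. total internal reflection.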

A Case Study on Metadata Extraction for Records Management Using ChatGPT (챗GPT를 활용한 기록관리 메타데이터 추출 사례연구)

  • Minji Kim;Sunghee Kang;Hae-young Rieh
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.24 no.2
    • /
    • pp.89-112
    • /
    • 2024
  • Metadata is a crucial component of records management, playing a vital role in properly managing and understanding records. When metadata cannot be assigned automatically, records professionals must enter it manually. This study aims to ease the burden of manual entry by proposing a method that harnesses ChatGPT technology to extract records-management metadata elements. A Python program using the LangChain library was developed to analyze PDF documents and extract metadata from records through questions, both with a locally installed model and with the ChatGPT online service. Multiple PDF documents were processed to test the effectiveness of the extraction. The results showed that LangChain with ChatGPT-3.5 turbo provided a secure environment but had some limitations in accurately retrieving metadata elements, whereas the ChatGPT-4 online service yielded relatively accurate results despite being unable to handle sensitive documents for security reasons. This exploration underscores the potential of ChatGPT technology for extracting metadata in records management. As ChatGPT-related technologies advance, safer and more accurate results are expected, which can significantly enhance the efficiency and productivity of records and metadata management in archives.
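The extraction workflow can be sketched without any API call as prompt construction plus JSON-response parsing; the model call in between would go through LangChain or the ChatGPT service. The metadata element list and prompt wording here are illustrative, not the study's actual prompts.

```python
import json

# Illustrative element list, not the study's metadata schema
METADATA_ELEMENTS = ["title", "creator", "date", "record_type"]

def build_prompt(document_text, elements=METADATA_ELEMENTS):
    """Compose an extraction prompt asking the model to answer in JSON only."""
    fields = ", ".join(f'"{e}"' for e in elements)
    return (
        "Extract the following records-management metadata elements from the "
        f"document and answer with a single JSON object with keys {fields}. "
        "Use null for any element that cannot be found.\n\n"
        f"Document:\n{document_text}"
    )

def parse_response(raw, elements=METADATA_ELEMENTS):
    """Parse the model's JSON reply, tolerating missing keys."""
    data = json.loads(raw)
    return {e: data.get(e) for e in elements}

prompt = build_prompt("Meeting minutes, 2024-03-05, recorded by M. Kim.")
# A hypothetical model reply, stubbed in place of a real API call:
reply = '{"title": "Meeting minutes", "creator": "M. Kim", "date": "2024-03-05"}'
meta = parse_response(reply)
```

Constraining the reply to a fixed JSON schema is what makes the extracted elements machine-checkable before they are written into the records system.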

A Basic Study on User Experience Evaluation Based on User Experience Hierarchy Using ChatGPT 4.0 (챗지피티 4.0을 활용한 사용자 경험 계층 기반 사용자 경험 평가에 관한 기초적 연구)

  • Soomin Han;Jae Wan Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.493-498
    • /
    • 2024
  • With the rapid advancement of generative artificial intelligence, there is growing interest in how to apply it in practice, and the importance of prompt engineering for generating results that meet user demands is newly highlighted; exploring these new possibilities of generative AI holds significant value. This study uses ChatGPT 4.0, a leading generative AI, to propose an effective method for evaluating user experience through the analysis of online customer review data. The evaluation was based on the six layers of the user experience hierarchy: 'functionality', 'reliability', 'usability', 'convenience', 'emotion', and 'significance'. A literature review was first conducted to deepen the understanding of prompt engineering and to establish a clear concept of the user experience hierarchy. Prompts were then crafted, and the evaluation method was tested on collected online customer review data. We found that, when given accurate definitions and descriptions of the classification process for the user experience factors, ChatGPT performed excellently at evaluating user experience; however, time constraints limited the analysis of large volumes of data. By proposing a way to use ChatGPT 4.0 for user experience evaluation, we expect to contribute to the advancement of the UX field.

Metadata extraction using AI and advanced metadata research for web services (AI를 활용한 메타데이터 추출 및 웹서비스용 메타데이터 고도화 연구)

  • Sung Hwan Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.499-503
    • /
    • 2024
  • Broadcast programs are delivered not only over the air but also to various media such as Internet replay, OTT, and IPTV services. Here it is very important to provide search keywords that represent the characteristics of the content well. Broadcasters mainly rely on manually entering key keywords during production and archiving; this method is insufficient, in terms of quantity, for securing core metadata, and it also shows limitations when content is recommended and used in other media services. This study supports securing a large volume of metadata by utilizing closed-caption data pre-archived through the DTV closed-captioning server developed at EBS. First, core metadata was automatically extracted by applying Google's natural-language AI technology. As the core of the research, we then propose a method of identifying key metadata that reflects priorities and content characteristics. To obtain differentiated metadata weights, importance was ranked by applying the TF-IDF calculation, and the experiment yielded successful weight data. When combined with future string-similarity studies, the string metadata obtained here becomes the basis for securing sophisticated content-recommendation metadata for content services provided to other media.
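The TF-IDF weighting step can be sketched from its standard definition (tf = term count / document length, idf = log(N / document frequency)); the caption tokens below are toy data, not EBS closed-caption text.

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weight per term per document. `docs` is a list of token
    lists, e.g. keyword candidates extracted from closed captions."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    weights = []
    for d in docs:
        tf = Counter(d)
        weights.append({t: (c / len(d)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

captions = [
    ["climate", "ocean", "climate"],
    ["ocean", "policy"],
    ["policy", "education"],
]
w = tfidf(captions)
```

A term frequent in one program but rare across the archive ("climate" above) outranks a term shared across programs ("ocean"), which is the differentiation the weighting is meant to provide.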

Utilizing deep learning algorithm and high-resolution precipitation product to predict water level variability (고해상도 강우자료와 딥러닝 알고리즘을 활용한 수위 변동성 예측)

  • Han, Heechan;Kang, Narae;Yoon, Jungsoo;Hwang, Seokhwan
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.7
    • /
    • pp.471-479
    • /
    • 2024
  • Flood damage is becoming more serious due to heavy rainfall caused by climate change. Physically based hydrological models have been used to predict stream water-level variability and provide flood forecasts. Recently, hydrological simulations using machine learning and deep learning algorithms, which exploit nonlinear relationships among hydrological data, have been attracting attention. In this study, the Long Short-Term Memory (LSTM) algorithm is used to predict the water level of the Seomjin River watershed. In addition, gridded precipitation data based on the Climate Prediction Center morphing method (CMORPH) is applied as input to the algorithm to overcome the limitations of ground data. The water-level predictions of the LSTM algorithm coupled with the CMORPH data showed a mean CC of 0.98, an RMSE of 0.07 m, and an NSE of 0.97. Deep learning and remotely sensed data are expected to be used together to overcome the shortcomings of ground observations and obtain reliable predictions.
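The skill scores the abstract reports (CC, RMSE, NSE) follow standard definitions and can be computed as below; the observation and simulation series are toy values, not the study's data.

```python
from math import sqrt

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
    than predicting the mean of the observations."""
    m = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - m) ** 2 for o in obs)
    return 1 - num / den

def cc(obs, sim):
    """Pearson correlation coefficient."""
    mo, ms = sum(obs) / len(obs), sum(sim) / len(sim)
    num = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    den = sqrt(sum((o - mo) ** 2 for o in obs) *
               sum((s - ms) ** 2 for s in sim))
    return num / den

obs = [1.2, 1.5, 1.9, 2.4, 2.0]  # toy water levels (m)
sim = [1.3, 1.4, 2.0, 2.3, 2.1]
```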

Enhancing machine learning-based anomaly detection for TBM penetration rate with imbalanced data manipulation (불균형 데이터 처리를 통한 머신러닝 기반 TBM 굴진율 이상탐지 개선)

  • Kibeom Kwon;Byeonghyun Hwang;Hyeontae Park;Ju-Young Oh;Hangseok Choi
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.5
    • /
    • pp.519-532
    • /
    • 2024
  • Anomaly detection for the penetration rate of tunnel boring machines (TBMs) is crucial for effective risk management in TBM tunnel projects. However, previous machine learning models for predicting the penetration rate have struggled with imbalanced data between normal and abnormal penetration rates. This study aims to enhance the performance of machine learning-based anomaly detection for the penetration rate by utilizing a data augmentation technique to address this data imbalance. Initially, six input features were selected through correlation analysis. The lowest and highest 10% of the penetration rates were designated as abnormal classes, while the remaining penetration rates were categorized as a normal class. Two prediction models were developed, each trained on an original training set and an oversampled training set constructed using SMOTE (synthetic minority oversampling technique): an XGB (extreme gradient boosting) model and an XGB-SMOTE model. The prediction results showed that the XGB model performed poorly for the abnormal classes, despite performing well for the normal class. In contrast, the XGB-SMOTE model consistently exhibited superior performance across all classes. These findings can be attributed to the data augmentation for the abnormal penetration rates using SMOTE, which enhances the model's ability to learn patterns between geological and operational factors that contribute to abnormal penetration rates. Consequently, this study demonstrates the effectiveness of employing data augmentation to manage imbalanced data in anomaly detection for TBM penetration rates.
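The SMOTE step behind the XGB-SMOTE model interpolates each synthetic sample between a minority-class point and one of its k nearest minority neighbours. The sketch below implements that interpolation from scratch with toy points; in practice a library such as imbalanced-learn would be used alongside the XGB classifier.

```python
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE sketch: each synthetic sample lies on the segment
    between a random minority point and one of its k nearest minority
    neighbours (squared Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()   # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + lam * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

# Toy abnormal-class feature vectors in the unit square
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority)
```

Because each synthetic point is a convex combination of two minority points, the augmented class stays inside the region the real abnormal samples occupy, which is why the oversampled model can learn the abnormal patterns without inventing outliers.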