• Title/Summary/Keyword: Model Generalization (모델 일반화)


Calculation Model of Time Varying Loudness by Using the Critical-banded Filters (임계 대역 필터를 이용한 과도음의 라우드니스 계산 모델)

  • Jeong, Hyuk;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea / v.19 no.5 / pp.65-70 / 2000
  • It is well known that loudness is one of the most important metrics for assessing sound quality, and a calculation method for loudness has been standardized for steady sounds. In this study, a new loudness model is suggested for dealing with transient sounds, enabling a unified analysis of various practical sounds. A signal processing technique required for band subdivision and for predicting the band-level change of transient sounds is introduced for this purpose. In addition, models for post-masking and temporal integration are adopted in the analysis of the loudness of transient sounds. To resolve the shortcomings of the conventional loudness model in pure-tone signal processing, a critical band filter bank is employed in the analysis, consisting of 47 critical-band filters spaced at half the critical bandwidth. To test the effectiveness of the present model, the predicted responses are compared with experimental data, and they are found to be in good agreement.
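The band layout described above can be roughly illustrated as follows; this is a toy sketch, not the authors' implementation, and the Zwicker-Terhardt Bark approximation, the bisection inversion, and the frequency range are all assumptions:

```python
import math

def hz_to_bark(f):
    # Zwicker-Terhardt approximation of the Bark (critical-band) scale
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def bark_to_hz(z, lo=1.0, hi=22000.0):
    # invert the monotone Bark map numerically by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hz_to_bark(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 47 filter centers spaced half a critical band (0.5 Bark) apart
centers_bark = [0.5 * (i + 1) for i in range(47)]
centers_hz = [bark_to_hz(z) for z in centers_bark]
```

With half-critical-band spacing, adjacent filters overlap, which is what allows the bank to resolve pure tones falling between band edges.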


Bio-data Classification using Modified Additive Factor Model (변형된 팩터 분석 모델을 이용한 생체데이타 분류 시스템)

  • Cho, Min-Kook;Park, Hye-Young
    • Journal of KIISE:Software and Applications / v.34 no.7 / pp.667-680 / 2007
  • Bio-data processing applies bio-signals obtained from human individuals to a suitable purpose. Recently, demand for applying bio-data to a wide range of applications has been increasing. However, owing to the nature of the problem domain, the number of samples per class is often limited while the number of classes is large. Consequently, conventional pattern recognition and classification systems suffer from low generalization performance, because a system trained on scarce data is strongly influenced by noise. To solve this problem, we propose a modified additive factor model for bio-data generation with two factors: a class factor, which determines the properties of each individual, and an environment factor, such as noise, which affects all classes. We then develop a classification system by defining a new similarity function based on the proposed model. The proposed method maximizes the use of class information, so good generalization performance robust to noise can be expected even from a small number of samples. Experimental results on real bio-data show that the proposed method significantly outperforms the conventional method.
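A minimal numerical sketch of such an additive two-factor generative model, with toy data and a nearest-class-factor similarity; the dimensions, noise level, and classifier below are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 5, 8
class_factors = rng.normal(size=(n_classes, dim))   # per-class factor

def generate(k, noise=0.1):
    # additive model: observation = class factor + environment factor (noise)
    return class_factors[k] + noise * rng.normal(size=dim)

def classify(x):
    # similarity = negative Euclidean distance to each class factor
    return int(np.argmin(np.linalg.norm(class_factors - x, axis=1)))

# accuracy over 20 generated samples per class
acc = np.mean([classify(generate(k)) == k
               for k in range(n_classes) for _ in range(20)])
```

Because the environment factor is shared across classes, separating it from the class factor is what lets a classifier generalize from few samples per class.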

Predicting the Future Price of Export Items in Trade Using a Deep Regression Model (딥러닝 기반 무역 수출 가격 예측 모델)

  • Kim, Ji Hun;Lee, Jee Hang
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.427-436 / 2022
  • The Korea Trade-Investment Promotion Agency (KOTRA) annually publishes trade data for South Korea under the guidance of the Ministry of Trade, Industry and Energy. The trade data usually contains gross domestic product (GDP), customs tariffs, business scores, and the prices of export items in the previous and current year, organized by trading item and country. However, it is challenging to extract meaningful insights and predict the future prices of trading items each year, owing to the significantly large amount of data accumulated over several years under limited human and computing resources. Within this context, this paper proposes a multilayer perceptron that can predict the future prices of potential trading items in the next year by training on large amounts of past data at a low computational and human cost.
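A small multilayer perceptron regressor of the kind described can be sketched in plain NumPy; the features, target, layer sizes, and learning rate below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy stand-ins for trade features (e.g. GDP, tariff, business score, last price)
X = rng.normal(size=(200, 4))
y = X @ np.array([0.5, -0.3, 0.2, 0.8]) + 0.1   # hypothetical "next-year price"

W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

losses = []
lr = 0.01
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)        # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # gradients of 0.5 * mean squared error, by backpropagation
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (h > 0)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

The training loss should fall steadily; in practice a framework such as PyTorch or scikit-learn would replace the hand-written backpropagation.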

A Survey on the Latest Research Trends in Retrieval-Augmented Generation (검색 증강 생성(RAG) 기술의 최신 연구 동향에 대한 조사)

  • Eunbin Lee;Ho Bae
    • The Transactions of the Korea Information Processing Society / v.13 no.9 / pp.429-436 / 2024
  • As Large Language Models (LLMs) continue to advance, effectively harnessing their potential has become increasingly important. LLMs, trained on vast datasets, are capable of generating text across a wide range of topics, making them useful in applications such as content creation, machine translation, and chatbots. However, they often face challenges in generalization due to gaps in specific or specialized knowledge, and updating these models with the latest information post-training remains a significant hurdle. To address these issues, Retrieval-Augmented Generation (RAG) models have been introduced. These models enhance response generation by retrieving information from continuously updated external databases, thereby reducing the hallucination phenomenon often seen in LLMs while improving efficiency and accuracy. This paper presents the foundational architecture of RAG, reviews recent research trends aimed at enhancing the retrieval capabilities of LLMs through RAG, and discusses evaluation techniques. Additionally, it explores performance optimization and real-world applications of RAG in various industries. Through this analysis, the paper aims to propose future research directions for the continued development of RAG models.
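The retrieve-then-generate flow of RAG can be sketched with a toy bag-of-words retriever; the documents, similarity measure, and prompt template below are illustrative assumptions (a real system would use dense embeddings and an LLM for the generation step):

```python
import math
from collections import Counter

docs = [
    "the capital of france is paris",
    "transformers use attention mechanisms",
    "retrieval augmented generation grounds answers in documents",
]

def vec(text):
    # bag-of-words term counts as a stand-in for an embedding
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query, k=1):
    # rank documents by similarity to the query, keep the top k
    q = vec(query)
    return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

def build_prompt(query):
    # the generator (an LLM in a real system) would answer from this context
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Grounding the answer in retrieved text is what reduces hallucination: the model is conditioned on the external database rather than only on its frozen training data.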

Improving Recall for Context-Sensitive Spelling Correction Rules using Conditional Probability Model with Dynamic Window Sizes (동적 윈도우를 갖는 조건부확률 모델을 이용한 한국어 문맥의존 철자오류 교정 규칙의 재현율 향상)

  • Choi, Hyunsoo;Kwon, Hyukchul;Yoon, Aesun
    • Journal of KIISE / v.42 no.5 / pp.629-636 / 2015
  • The types of errors corrected by a Korean spelling and grammar checker can be classified into isolated-term spelling errors and context-sensitive spelling errors (CSSEs). CSSEs are difficult to detect and correct, since they are correct words when examined in isolation; they can be corrected only by considering their semantic and syntactic relations to the context. CSSEs, which are frequently made even by expert writers, significantly affect the reliability of spelling and grammar checkers. An existing Korean spelling and grammar checker developed by P University (KSGC 4.5) adopts hand-made correction rules for CSSEs. KSGC 4.5 is designed to achieve very high precision, which results in an extremely low recall. The overall goal of our previous work was to improve the recall without considerably lowering the precision by generalizing CSSE correction rules that mainly depend on linguistic knowledge. A variety of rule-based methods have been proposed in previous work, with the best performance being 95.19% average precision and 37.56% recall. To further improve the recall, this study proposes a statistics-based method using a conditional probability model with dynamic window sizes. The proposed method obtained 97.23% average precision and 50.50% recall.
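The back-off idea behind a conditional probability model with dynamic window sizes can be sketched on a toy English confusion set; the corpus, the {their, there} confusion pair, and the window bound are illustrative assumptions, not the paper's Korean data:

```python
from collections import Counter

corpus = ("i went to their house . there is a dog over there . "
          "their dog barked . i will be there soon").split()

def context_counts(window):
    counts = Counter()
    for i, w in enumerate(corpus):
        left = tuple(corpus[max(0, i - window):i])
        counts[(left, w)] += 1   # context followed by word w
        counts[left] += 1        # context alone
    return counts

def score(candidate, left, max_window=2):
    # dynamic window: back off to a smaller window until the context was seen
    for w in range(max_window, 0, -1):
        counts = context_counts(w)
        ctx = tuple(left[-w:])
        if counts[ctx] > 0:
            return counts[(ctx, candidate)] / counts[ctx]
    return 0.0

def correct(left, confusion=("their", "there")):
    # pick the candidate with the highest conditional probability
    return max(confusion, key=lambda c: score(c, left))
```

Shrinking the window only when a context is unseen is what raises recall (more contexts get a score) without flooding the model with unreliable long-context estimates.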

Statistical Techniques to Detect Sensor Drifts (센서드리프트 판별을 위한 통계적 탐지기술 고찰)

  • Seo, In-Yong;Shin, Ho-Cheol;Park, Moon-Ghu;Kim, Seong-Jun
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.103-112 / 2009
  • In a nuclear power plant (NPP), periodic sensor calibrations are required to ensure that sensors operate correctly. However, only a few of the calibrated sensors are actually found to be faulty. For the safe operation of an NPP and the reduction of unnecessary calibration, on-line calibration monitoring is needed. In this paper, principal component-based auto-associative support vector regression (PCSVR) is proposed for sensor signal validation in an NPP. It combines the attractive merits of principal component analysis (PCA) for extracting predominant feature vectors with AASVR, which easily represents complicated processes that are difficult to model analytically or mechanistically. Using real plant startup data from Kori Nuclear Power Plant Unit 3, the SVR hyperparameters were optimized by response surface methodology (RSM). Moreover, statistical techniques are integrated with PCSVR for failure detection: the residuals between the estimated and measured signals are tested by the Shewhart control chart, exponentially weighted moving average (EWMA), cumulative sum (CUSUM), and generalized likelihood ratio test (GLRT) to detect whether a sensor has failed. This study shows that the GLRT is a promising candidate for the detection of sensor drift.
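Of the charts listed above, the EWMA test is easy to sketch on model residuals; the smoothing factor, control-limit multiplier, and synthetic drift signal below are illustrative assumptions:

```python
def ewma_drift(residuals, lam=0.2, L=3.0, sigma=1.0):
    # flag the first index where the EWMA of the residuals leaves +/- L*sigma_z
    z = 0.0
    limit = L * sigma * (lam / (2.0 - lam)) ** 0.5   # steady-state EWMA std
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z
        if abs(z) > limit:
            return i
    return None
```

Because the EWMA averages recent residuals, it reacts to a small sustained drift a few samples after onset, while pure noise around zero never trips the limit.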

State-based Peridynamic Modeling for Dynamic Fracture of Plane Stress (평면응력 문제의 상태 기반 페리다이나믹 동적파괴 해석 모델링)

  • Ha, Youn Doh
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.3 / pp.301-307 / 2015
  • A bond-based peridynamic model has been shown to be capable of analyzing many dynamic brittle fracture phenomena. However, it has known limitations in handling the constitutive models of various materials. In particular, it assumes that bonds act independently of each other, so Poisson's ratio is fixed at 1/4 for a 3D model, and accounting only for bond stretching captures volume change but not shear change. In this paper, a state-based peridynamic model of dynamic brittle fracture is presented. The state-based peridynamic model is a generalized peridynamic model that can directly use a constitutive model from the standard theory: it permits the response of the material at a point to depend collectively on the deformation of all bonds connected to that point. Thus, both the volume and shear changes of the material can be reproduced by the state-based peridynamic theory. For a linearly elastic solid, a plane stress model is introduced, and a damage model suitable for the state-based peridynamic model is discussed. The dynamic fracture model is verified through a convergence study in which the peridynamic nonlocal region is decreased ($\delta$-convergence). It is also shown that the state-based peridynamic model is reliable for modeling dynamic crack propagation.
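For reference, the state-based equation of motion that generalizes the bond-based one can be written, in Silling's force-state notation (a standard form, not quoted from the paper), as:

```latex
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}}
      \Big( \underline{\mathbf{T}}[\mathbf{x},t]\langle \mathbf{x}'-\mathbf{x}\rangle
          - \underline{\mathbf{T}}[\mathbf{x}',t]\langle \mathbf{x}-\mathbf{x}'\rangle \Big)
      \, dV_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t)
```

Here the force state $\underline{\mathbf{T}}$ replaces the pairwise force function of the bond-based model; because it may depend on the deformation of every bond in the horizon $H_{\mathbf{x}}$, Poisson's ratio is no longer fixed and shear response can be represented.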

Assessing Techniques for Advancing Land Cover Classification Accuracy through CNN and Transformer Model Integration (CNN 모델과 Transformer 조합을 통한 토지피복 분류 정확도 개선방안 검토)

  • Woo-Dam SIM;Jung-Soo LEE
    • Journal of the Korean Association of Geographic Information Studies / v.27 no.1 / pp.115-127 / 2024
  • This research aimed to construct models with various structures based on the Transformer module and to perform land cover classification, thereby examining the applicability of the Transformer module. For land cover classification, the Unet model, which has a CNN structure, was selected as the base model, and a total of four deep learning models were constructed by combining its encoder and decoder parts with the Transformer module. During training, each deep learning model was trained 10 times under the same conditions to evaluate generalization performance. The evaluation of classification accuracy showed that Model D, which utilized the Transformer module in both the encoder and decoder, achieved the highest overall accuracy, averaging approximately 89.4% with an average Kappa coefficient of about 73.2%. In terms of training time, CNN-based models were the most efficient; however, Transformer-based models improved classification accuracy by an average of 0.5% in terms of the Kappa coefficient. It is considered necessary to refine the model by examining variables such as hyperparameters and image patch sizes during integration with CNN models. A common issue identified in all models during land cover classification was the difficulty of detecting small-scale objects. To reduce this misclassification, the use of high-resolution input data and the integration of multidimensional data including terrain and texture information should be explored.
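The core operation the Transformer module contributes to the encoder/decoder is scaled dot-product self-attention, sketched here in NumPy for a single head with flattened image patches as rows; this is a generic illustration, not the study's model:

```python
import numpy as np

def self_attention(X):
    # single-head scaled dot-product self-attention over the rows of X
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # softmax over each row
    return w @ X                         # every patch attends to all patches
```

Unlike a convolution, each output row mixes information from every input row, which is why Transformer blocks capture long-range context that a CNN encoder alone misses.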

Management Automation Technique for Maintaining Performance of Machine Learning-Based Power Grid Condition Prediction Model (기계학습 기반 전력망 상태예측 모델 성능 유지관리 자동화 기법)

  • Lee, Haesung;Lee, Byunsung;Moon, Sangun;Kim, Junhyuk;Lee, Heysun
    • KEPCO Journal on Electric Power and Energy / v.6 no.4 / pp.413-418 / 2020
  • Managing the prediction accuracy of a machine learning model is necessary to prevent performance degradation of the grid condition prediction model caused by overfitting to the initial training data, and to keep the prediction model usable in the field by maintaining its accuracy. In this paper, we propose an automation technique for maintaining model performance that increases the accuracy and reliability of the prediction model by considering the characteristics of power grid state data, which change constantly due to various factors, and that enables quality maintenance at a level applicable in the field. The proposed technique models the series of tasks required to maintain the performance of the power grid condition prediction model as a workflow, applying workflow management technology, and then automates it to make the work more efficient. In addition, the reliability of the performance results is secured by evaluating the prediction model with respect to both the degree of change in the statistical characteristics of the data and the level of generalization of the predictions, which has not been attempted in existing techniques. As a result, the accuracy of the prediction model is maintained at a consistent level, and new predictive models with excellent performance can be developed. The proposed technique thus not only solves the problem of performance degradation of the predictive model but also improves the field utility of the condition prediction model in a complex power grid system.
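The decision step of such a maintenance workflow (retrain when either the data statistics shift or the generalization gap grows) can be sketched as a simple check; the statistics used and both thresholds are illustrative assumptions, not the paper's criteria:

```python
def needs_retraining(train_stats, live_stats, train_acc, live_acc,
                     shift_tol=0.5, gap_tol=0.05):
    # retrain when feature statistics drift or generalization degrades
    mean_shift = (abs(train_stats["mean"] - live_stats["mean"])
                  / (train_stats["std"] + 1e-9))
    gap = train_acc - live_acc
    return mean_shift > shift_tol or gap > gap_tol
```

In a workflow engine this predicate would gate the retraining branch, so the model is refreshed only when the monitored data or its generalization level actually changes.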

Dynamic Fracture Analysis of High-speed Impact on Granite with Peridynamic Plasticity (페리다이나믹 소성 모델을 통한 화강암의 고속 충돌 파괴 해석)

  • Ha, Youn Doh
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.37-44 / 2019
  • A bond-based peridynamic model has been reported to capture the dynamic fracture characteristics of brittle materials with a simple constitutive model. In that model, each bond is assumed to be a simple spring operating independently of the others. As a result, this simple bond-interaction modeling restricts the material behavior to a fixed Poisson's ratio of 1/4 and cannot express shear deformation. We consider state-based peridynamics as a generalized peridynamic model, in which constitutive models correspond to those of continuum theory. In state-based peridynamics, the response of a material particle depends collectively on the deformation of all bonds connecting it to other particles, so the theory can represent both the volume and shear changes of the material. In this paper, perfect plasticity is considered to express the plastic deformation of the material through a state-based peridynamic constitutive model with a perfectly plastic flow rule. The elastic-plastic behavior of the material is verified through the stress-strain curves of a flat plate example. Furthermore, we simulate a high-speed impact on a 3D granite model with nonlocal contact modeling. The damage patterns obtained by peridynamics are observed to be similar to experimental observations.
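In one dimension, a perfectly plastic flow rule reduces to an elastic-predictor/plastic-corrector update that caps the stress at the yield stress; the modulus and yield values below are illustrative assumptions, not granite properties:

```python
def update_stress(sigma, dstrain, E=200.0, sigma_y=1.0):
    # elastic predictor: assume the whole strain increment is elastic
    trial = sigma + E * dstrain
    if abs(trial) <= sigma_y:
        return trial
    # plastic corrector: perfect plasticity caps stress at the yield surface
    return sigma_y if trial > 0.0 else -sigma_y
```

Repeating this update along a loading path reproduces the flat plateau of the stress-strain curve that the flat plate example verifies; the state-based model applies the analogous return mapping to the force state.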