• Title/Summary/Keyword: System Performance Parameter


The Effect of Plant Coverage on the Constructed Wetlands Performance and Development and Management of Macrophyte Communities (식생피도가 인공습지의 질소 및 인 처리효율에 미치는 영향과 습지식물의 조성 및 관리)

  • Ham, Jong-Hwa;Kim, Hyung-Chul;Koo, Won-Seok;Shin, Hyun-Bhum;Yun, Chun-Gyeong
    • Korean Journal of Ecology and Environment
    • /
    • v.38 no.3 s.113
    • /
    • pp.393-402
    • /
    • 2005
  • A field-scale experiment was performed to examine the effect of plant coverage on constructed-wetland performance and to recommend optimal development and management of macrophyte communities. Four sets (each 0.88 ha) of wetland (0.8 ha) and pond (0.08 ha) systems were used. Water flowing into the Seokmoon estuarine reservoir from the Dangjin stream was pumped into the wetland systems. Water depth was maintained at 0.3~0.5 m, hydraulic retention time was managed at about 2~5 days, and emergent plants were allowed to grow in the wetlands. Three growing seasons after construction, plant coverage reached about 90%, even without planting, starting from bare soil surfaces. During the start-up period of a constructed wetland, lower water levels should be maintained to avoid flooding newly established plants if wetland vegetation is to be started from germinating seeds. Effluent T-N concentration in the low-plant-coverage wetland was higher in winter than in the high-plant-coverage wetland, whereas no difference in effluent T-P concentration or removal efficiency was observed within 15% plant coverage. Dead vegetation affected nitrogen removal during winter because it is a source of the organic carbon essential for denitrification. Biomass harvesting is not a realistic management option for most constructed wetland systems because it would only slightly increase the removal rate and, given the lack of organic carbon, provides only a minor nitrogen removal pathway.

An efficient 2.5D inversion of loop-loop electromagnetic data (루프-루프 전자탐사자료의 효과적인 2.5차원 역산)

  • Song, Yoon-Ho;Kim, Jung-Ho
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.1
    • /
    • pp.68-77
    • /
    • 2008
  • We have developed an inversion algorithm for loop-loop electromagnetic (EM) data, based on the localised non-linear or extended Born approximation to the solution of the 2.5D integral equation describing an EM scattering problem. The source and receiver configuration may be horizontal co-planar (HCP) or vertical co-planar (VCP), and both multi-frequency and multi-separation data can be incorporated. Our inversion code runs on a PC platform without heavy computational load. For stable and high-resolution performance of the inversion, we implemented an algorithm that determines an optimum spatially varying Lagrangian multiplier as a function of the sensitivity distribution, through parameter resolution matrix and Backus-Gilbert spread function analysis. Considering that the different source-receiver orientations cause inconsistent sensitivities to the resistivity structure in simultaneous inversion of HCP and VCP data, which affects the stability and resolution of the inversion result, we adopted a weighting scheme based on the variances of the misfits between the measured and calculated datasets. The accuracy of the modelling code we developed has been proven over the frequency, conductivity, and geometric ranges typically used in a loop-loop EM system through comparison with 2.5D finite-element modelling results. We first applied the inversion to synthetic data, from a model with resistive as well as conductive inhomogeneities embedded in a homogeneous half-space, to validate its performance. Applying the inversion to field data and comparing the result with that of dc resistivity data, we conclude that the newly developed algorithm provides a reasonable image of the subsurface.
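The regularized update at the heart of such an inversion can be sketched generically; the snippet below shows a damped least-squares step with a spatially varying regularization multiplier and inverse-variance weighting of two datasets. It is an illustration of the idea, not the authors' code, and all names, shapes, and values are invented.

```python
import numpy as np

def inversion_step(J_hcp, J_vcp, r_hcp, r_vcp, lam):
    """One damped least-squares model update from two weighted datasets.

    J_* : Jacobian (sensitivity) matrices, shape (n_data, n_model)
    r_* : data residuals (measured - calculated), shape (n_data,)
    lam : per-parameter regularization multipliers, shape (n_model,)
    """
    # Weight each dataset by the inverse variance of its misfit so one
    # configuration (e.g. HCP vs VCP) does not dominate the joint inversion.
    w_hcp = 1.0 / np.var(r_hcp)
    w_vcp = 1.0 / np.var(r_vcp)

    J = np.vstack([np.sqrt(w_hcp) * J_hcp, np.sqrt(w_vcp) * J_vcp])
    r = np.concatenate([np.sqrt(w_hcp) * r_hcp, np.sqrt(w_vcp) * r_vcp])

    # Spatially varying damping: a larger multiplier where sensitivity is low
    # stabilizes the solution in poorly resolved regions.
    A = J.T @ J + np.diag(lam)
    return np.linalg.solve(A, J.T @ r)

rng = np.random.default_rng(0)
J1, J2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
r1, r2 = rng.normal(size=6), rng.normal(size=6)
dm = inversion_step(J1, J2, r1, r2, lam=np.full(4, 0.1))
print(dm.shape)  # one model update for 4 parameters
```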

Design and Fabrication of Low Loss, High Power SP6T Switch Chips for Quad-Band Applications Using pHEMT Process (pHEMT 공정을 이용한 저손실, 고전력 4중 대역용 SP6T 스위치 칩의 설계 및 제작)

  • Kwon, Tae-Min;Park, Yong-Min;Kim, Dong-Wook
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.6
    • /
    • pp.584-597
    • /
    • 2011
  • In this paper, low-loss and high-power RF SP6T switch chips are designed, fabricated, and measured for GSM/EGSM/DCS/PCS applications using the WIN Semiconductors 0.5 μm pHEMT process. We utilized a combined configuration of series and series-shunt structures for optimized switch performance, and a common transistor structure on the receiver path to reduce chip area. The gate width and the number of stacked transistors are determined from the ON/OFF input power levels of the transceiver system. To improve switch performance, feed-forward capacitors, shunt capacitors, and resonance-based elimination of parasitic FET inductance are actively used. The fabricated chip size is 1.2 × 1.5 mm². S-parameter measurements show an insertion loss of 0.5~1.2 dB and isolation of 28~36 dB. The fabricated SP6T switch chips can handle 4 W of input power and suppress second and third harmonics by more than 75 dBc.
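The link between input power level and stack count mentioned above can be illustrated with back-of-envelope arithmetic: the peak RF voltage across an OFF branch must be shared by enough stacked devices to stay below each device's safe limit. The 4 W level is from the abstract; the 50 Ω impedance and per-device voltage limit below are assumptions for illustration only.

```python
import math

P_in = 4.0     # W, maximum handled input power (from the abstract)
Z0 = 50.0      # ohm, assumed system impedance
V_limit = 7.0  # V, assumed safe peak voltage per pHEMT (illustrative)

# Peak RF voltage swing across the OFF branch for a sinusoid of power P_in.
v_peak = math.sqrt(2 * P_in * Z0)

# Stacked devices divide the swing roughly equally, so size the stack to
# keep each device under its limit.
n_stack = math.ceil(v_peak / V_limit)
print(v_peak, n_stack)  # 20.0 V peak -> 3 stacked devices
```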

A Digital Phase-locked Loop design based on Minimum Variance Finite Impulse Response Filter with Optimal Horizon Size (최적의 측정값 구간의 길이를 갖는 최소 공분산 유한 임펄스 응답 필터 기반 디지털 위상 고정 루프 설계)

  • You, Sung-Hyun;Pae, Dong-Sung;Choi, Hyun-Duck
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.4
    • /
    • pp.591-598
    • /
    • 2021
  • The digital phase-locked loop (DPLL) is a circuit used for phase synchronization and is widely used in fields such as communications and circuit design. State estimators are used to design DPLLs, and infinite impulse response (IIR) state estimators such as the well-known Kalman filter have generally been employed. The performance of an IIR state-estimator-based DPLL is typically excellent, but sudden performance degradation may occur in unexpected situations such as inaccurate initial values, model errors, and disturbances. In this paper, we propose a minimum variance finite impulse response (FIR) filter with an optimal horizon for designing a new DPLL. A numerical method is introduced to obtain the horizon size (the length of the measurement interval), an important parameter of the proposed FIR filter; to obtain the gain, the error covariance matrix is set as a cost function and minimized using a linear matrix inequality. To verify the superiority and robustness of the proposed DPLL, simulations were performed comparing it with the existing method in a situation where the noise information was inaccurate.
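The finite-memory idea behind an FIR state estimator can be sketched simply: estimate the current phase and frequency from only the most recent N measurements by batch least squares, discarding everything older. This illustrates the horizon concept, not the paper's LMI-optimized minimum-variance gain; all numbers are illustrative.

```python
import numpy as np

def fir_phase_estimate(y, T):
    """Least-squares estimate of [phase, frequency] at the newest sample
    from an N-point horizon of noisy phase measurements y (sample spacing T)."""
    N = len(y)
    t = np.arange(-(N - 1), 1) * T        # sample times relative to newest
    H = np.column_stack([np.ones(N), t])  # constant-frequency phase model
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return x_hat                          # [phase_now, frequency]

rng = np.random.default_rng(1)
T = 1e-3                                  # s, assumed sample period
true_phase, true_freq = 0.5, 20.0         # rad, rad/s (illustrative)
t = np.arange(32) * T                     # horizon of N = 32 samples
y = true_phase + true_freq * (t - t[-1]) + 0.01 * rng.normal(size=32)

phase_hat, freq_hat = fir_phase_estimate(y, T)
print(phase_hat, freq_hat)                # close to 0.5 and 20.0
```

Because the estimate depends only on the last N samples, an error injected before the horizon (a bad initial value or a transient disturbance) is forgotten after N steps, which is the robustness property the abstract highlights.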

A Review on Measurement Techniques and Constitutive Models of Suction in Unsaturated Bentonite Buffer (불포화 벤토나이트 완충재의 수분흡입력 측정기술 및 구성모델 고찰)

  • Lee, Jae Owan;Yoon, Seok;Kim, Geon Young
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.17 no.3
    • /
    • pp.329-338
    • /
    • 2019
  • Suction of an unsaturated bentonite buffer is a very important input parameter for the hydro-mechanical performance assessment and design of an engineered barrier system. This study analyzed suction measurement techniques and constitutive models of unsaturated porous media reported in the literature, and suggested techniques and models suitable for the bentonite buffer in an HLW repository. The literature review showed that the suction of a bentonite buffer, measured as total suction (matric plus osmotic suction), is much higher than that of soil. Measurement methods using a relative humidity sensor (RH-Cell, RH-Cell/Sensor) were suitable for suction measurement of the bentonite buffer; the RH-Cell/Sensor method was preferred in consideration of the temperature change due to radioactive decay heat and the measurement time. Various water retention models of bentonite buffers have been proposed through experiments, but the van Genuchten model is mainly used as the constitutive model in hydro-mechanical performance assessment of unsaturated buffers. The water characteristic curve of bentonite buffers showed different tendencies according to bentonite type, dry density, temperature, salinity, sample state, and hysteresis. Selection of water retention models and determination of model input parameters should consider the effects of these controlling factors so as to improve overall reliability.
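The van Genuchten model named above relates effective saturation to suction through two fitting parameters. A minimal sketch follows; the alpha and n values are generic illustrative choices, not fitted bentonite parameters.

```python
def van_genuchten_saturation(psi, alpha=0.008, n=1.6):
    """Effective saturation Se for suction psi (kPa) under the van Genuchten
    model: Se = [1 + (alpha*psi)^n]^(-m), with m = 1 - 1/n.
    alpha (1/kPa) and n are fitting parameters (illustrative values here)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi) ** n) ** (-m)

# Saturation falls monotonically as suction rises over several decades.
for psi in (1.0, 100.0, 10000.0):
    print(psi, round(van_genuchten_saturation(psi), 3))
```

In a performance-assessment code, alpha and n would be fitted to measured water-retention data for the specific bentonite, dry density, and temperature, per the controlling factors listed in the abstract.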

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services
    • /
    • v.24 no.5
    • /
    • pp.17-27
    • /
    • 2023
  • Today, as artificial intelligence (AI) technology develops and its practicality increases, it is widely used in various application fields of real life. An AI model is trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly changing data cause a decrease in the model's performance. In the security field in particular, where new and unknown attacks are constantly created, finding drift signals in deployed models has become important, and the need for lifecycle management of the entire model is emerging. In general, drift can be detected through changes in the model's accuracy and error rate (loss), but this has limitations: actual labels for the model's predictions are required, and the point where drift actually occurs is uncertain, because the error rate is strongly influenced by external environmental factors, model selection and parameter settings, and new input data, so the error rate alone cannot precisely determine when drift occurs. Therefore, this paper proposes a method to detect when actual drift occurs through an anomaly-analysis technique based on eXplainable Artificial Intelligence (XAI). In tests on a classification model that detects Domain Generation Algorithm (DGA) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
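The detection idea can be sketched in a label-free way: score each post-deployment sample by how far its per-feature attribution vector sits from the attribution distribution observed at training time, and flag drift when the scores rise. The attribution vectors below are simulated stand-ins; in practice they would come from an explainer such as SHAP, and the specific score is an illustrative choice, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Attribution vectors (e.g. per-feature SHAP values) collected at training
# time; here simulated as standard normal for illustration.
train_attr = rng.normal(0.0, 1.0, size=(500, 8))
mu, sigma = train_attr.mean(axis=0), train_attr.std(axis=0)

def anomaly_score(attr):
    """Mean absolute z-score of one attribution vector vs the training set."""
    return np.abs((attr - mu) / sigma).mean()

# Post-deployment stream: first half in-distribution, second half drifted
# (the model starts attributing importance differently).
stream = np.vstack([
    rng.normal(0.0, 1.0, size=(200, 8)),
    rng.normal(1.5, 1.0, size=(200, 8)),
])
scores = np.array([anomaly_score(a) for a in stream])
print(scores[:200].mean(), scores[200:].mean())  # drift raises the score
```

No ground-truth labels are needed at any point, which is exactly the limitation of accuracy/loss-based monitoring that the abstract calls out.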

Application of Multiple Linear Regression Analysis and Tree-Based Machine Learning Techniques for Cutter Life Index(CLI) Prediction (커터수명지수 예측을 위한 다중선형회귀분석과 트리 기반 머신러닝 기법 적용)

  • Ju-Pyo Hong;Tae Young Ko
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.594-609
    • /
    • 2023
  • The TBM (Tunnel Boring Machine) method is gaining popularity in urban and underwater tunneling projects due to its ability to ensure excavation-face stability and minimize environmental impact. Among the prominent models for predicting disc cutter life, the NTNU model uses the Cutter Life Index (CLI) as a key parameter, but the complexity of the testing procedures and the rarity of the equipment make measurement challenging. In this study, CLI was predicted from rock properties using multiple linear regression analysis and tree-based machine learning techniques. Through a literature review, a database including rock uniaxial compressive strength, Brazilian tensile strength, equivalent quartz content, and Cerchar abrasivity index was built, and derived variables were added. The multiple linear regression analysis selected input variables based on statistical significance and multicollinearity, while the machine learning prediction model chose variables based on their importance. Dividing the data into 80% for training and 20% for testing, a comparative analysis of predictive performance was conducted, and XGBoost was identified as the optimal model. The validity of the multiple linear regression and XGBoost models derived in this study was confirmed by comparing their predictive performance with prior research.
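The regression-with-holdout workflow can be sketched with plain least squares; the synthetic features below loosely mimic the paper's inputs (UCS, BTS, equivalent quartz content, Cerchar index), but the target relation and all values are invented for illustration. The tree-based models (e.g. XGBoost) would be fit and compared on the same 80/20 split.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Synthetic rock-property features: UCS (MPa), BTS (MPa), equivalent quartz
# content (%), Cerchar abrasivity index; ranges are illustrative.
X = rng.uniform([50, 5, 10, 1], [250, 25, 90, 5], size=(n, 4))
# Invented linear target with noise, standing in for measured CLI.
y = 100 - 0.2 * X[:, 0] + 1.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 3, n)

split = int(0.8 * n)                   # 80% train / 20% test
Xb = np.column_stack([np.ones(n), X])  # add intercept column
beta, *_ = np.linalg.lstsq(Xb[:split], y[:split], rcond=None)

# Evaluate on the held-out 20% with the coefficient of determination R^2.
y_pred = Xb[split:] @ beta
ss_res = np.sum((y[split:] - y_pred) ** 2)
ss_tot = np.sum((y[split:] - y[split:].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```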

Generating Sponsored Blog Texts through Fine-Tuning of Korean LLMs (한국어 언어모델 파인튜닝을 통한 협찬 블로그 텍스트 생성)

  • Bo Kyeong Kim;Jae Yeon Byun;Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.1-12
    • /
    • 2024
  • In this paper, we fine-tuned KoAlpaca, a large-scale Korean language model, and implemented a blog text generation system based on it. Blogs on social media platforms are widely used as a marketing tool for businesses. We constructed training data of positive reviews through sentiment analysis and refinement of collected sponsored blog texts, and applied QLoRA for lightweight training of KoAlpaca. QLoRA is a fine-tuning approach that significantly reduces the memory required for training; in experiments with a 12.8B-parameter model, it showed up to a 58.8% decrease in memory usage compared to LoRA. To evaluate the generative performance of the fine-tuned model, texts generated from 100 inputs not included in the training data contained on average more than twice as many words as those from the pre-trained model, and texts of positive sentiment also appeared more than twice as often. In a survey conducted for qualitative evaluation, respondents judged the fine-tuned model's outputs to be more relevant to the given topics 77.5% of the time on average. This shows that the positive-review generation model for sponsored content presented in this paper can improve the efficiency of content-creation time and ensure consistent marketing effects. However, to reduce the generation of content that deviates from the category of positive reviews due to elements of the pre-trained model, we plan to continue fine-tuning with augmented training data.
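Much of QLoRA's memory saving comes from holding the frozen base weights in 4-bit precision while training only a small 16-bit adapter. The arithmetic below is a rough illustration for a 12.8B-parameter model with assumed dtype sizes and an assumed adapter fraction; it is not the paper's measured figure, which also depends on optimizer state and activations.

```python
params = 12.8e9                         # base model size (from the abstract)

# Weight storage only, under common dtype sizes.
fp16_weights_gb = params * 2 / 1e9      # 16-bit base weights: 2 bytes/param
nf4_weights_gb = params * 0.5 / 1e9     # 4-bit quantized base: 0.5 bytes/param

# LoRA-style adapter: assumed ~0.1% of parameters trainable, kept in 16-bit.
adapter_params = params * 0.001
adapter_gb = adapter_params * 2 / 1e9

print(round(fp16_weights_gb, 1), round(nf4_weights_gb + adapter_gb, 1))
```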

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
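The random subspace KNN ensemble described above can be sketched as follows. Here each base classifier's k and feature subset are drawn at random, whereas the proposed method optimizes both with a genetic algorithm; the two-class data are synthetic, with only the first 3 of 10 features informative.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_tr, y_tr, X_te, k):
    """Brute-force k-nearest-neighbors majority vote for binary labels."""
    d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]          # indices of k nearest neighbors
    return (y_tr[nn].mean(axis=1) >= 0.5).astype(int)

# Synthetic data: class depends on the sum of the first 3 of 10 features.
n = 200
X = rng.normal(size=(n, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Random subspace ensemble: each base KNN sees a random 5-feature subset.
preds = []
for _ in range(9):                             # 9 base classifiers
    feats = rng.choice(10, size=5, replace=False)
    k = rng.choice([3, 5, 7])                  # per-classifier k (odd)
    preds.append(knn_predict(X_tr[:, feats], y_tr, X_te[:, feats], k))

# Combine base predictions by majority vote.
ensemble = (np.mean(preds, axis=0) >= 0.5).astype(int)
print((ensemble == y_te).mean())               # ensemble test accuracy
```

Replacing the random draws of `feats` and `k` with a genetic-algorithm search over those choices, scored on a held-out portion of the training data, gives the simultaneous optimization the paper proposes.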

Parameter Estimation of the Aerated Wetland for the Performance of the Polluted Stream Treatment (오염하천 정화를 위한 호기성 인공습지의 운영인자 평가)

  • Kim, Dul-Sun;Lee, Dong-Keun
    • Clean Technology
    • /
    • v.25 no.4
    • /
    • pp.302-310
    • /
    • 2019
  • A constructed wetland with an aerobic tank and an anaerobic/anoxic tank connected in series was employed to treat highly polluted stream water. The aerobic tank was kept aerobic by a continuous supply of air through a natural-draft system. Five pilot plants with different residence times were operated together to obtain parameters for the best performance of the wetland. BOD and COD removal in the aerobic tank followed first-order kinetics, with COD removal rate constants slightly lower than those for BOD. The temperature dependence of COD (θ = 1.0079) and BOD (θ = 1.0083) removal was almost the same, but the temperature dependence of T-N removal was θN = 1.0189. The SS removal rate was as high as 98%, and removal efficiency tended to increase with increasing hydraulic loading rate (Q/A). The main mechanism of BOD and COD removal in the anaerobic/anoxic tank was entirely different from that in the aerobic tank: BOD and COD were consumed as the carbon source for biological denitrification. T-P was believed to be removed through cation exchange between orthophosphate and the gravels within the anaerobic and anoxic tanks. The wetland could be operated successfully without being blocked by the filtered solids, which subsequently decomposed at an extremely fast rate.
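The first-order, temperature-corrected kinetics used in such designs take the form k_T = k_20 · θ^(T−20), with removal efficiency 1 − e^(−k_T·t). A small sketch using the θ values reported in the abstract; the 20 °C base rate constant and retention time are illustrative assumptions, not the paper's fitted values.

```python
import math

def removal_efficiency(k20, theta, T_celsius, t_days):
    """First-order removal over time t with the usual temperature
    correction k_T = k_20 * theta**(T - 20)."""
    k = k20 * theta ** (T_celsius - 20.0)
    return 1.0 - math.exp(-k * t_days)

k20, t = 0.5, 3.0  # assumed base rate (d^-1) and retention time (days)

# theta values from the abstract: larger theta means stronger sensitivity
# to temperature, so T-N removal drops more in cold water.
for theta, name in [(1.0083, "BOD"), (1.0079, "COD"), (1.0189, "T-N")]:
    print(name, round(removal_efficiency(k20, theta, 10.0, t), 3))
```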