• Title/Summary/Keyword: Verification & Validation


Development and Validation of Change Motivation Scale for Growth and Development (성장 및 발전을 위한 변화동기 척도 개발 및 타당화)

  • Lee Eun Joo; Tak Jin kook
    • The Korean Journal of Coaching Psychology, v.7 no.1, pp.59-89, 2023
  • In this study, change motivation for growth and development is defined as 'the power to set a specific direction of action for change, based on the perception of one's current behavior, in order to achieve a goal one considers important, and the willingness to act on it'. The purpose of this study was to develop and validate a scale measuring this motivation for change in general adults. To develop preliminary items, interviews were conducted with 7 coaching experts and 9 experienced coaches, and an open-ended questionnaire was administered to 55 adults. Through three rounds of item classification and content validity verification, 7 factors and 83 items were selected; a preliminary survey of 321 general adults followed, and exploratory factor analysis reduced the scale to 4 factors and 42 items. Finally, the main survey was conducted with 631 adults to verify the construct and criterion-related validity of the change motivation scale. The sample was divided into two groups: exploratory factor analysis was conducted on group 1 (n=315) and confirmatory factor analysis on group 2 (n=316). The factor analysis of group 1 indicated that a 3-factor structure with 31 items was appropriate, and the confirmatory factor analysis of group 2 confirmed the goodness of fit of a modified 3-factor model, demonstrating the construct validity of the change motivation scale. Correlation analyses with various variables, conducted to assess convergent and criterion-related validity, showed that each of the three factors was significantly related to most variables. Finally, the significance, implications, and limitations of this study and directions for future research were discussed.

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu; Myoung-Seok Suh
    • Korean Journal of Remote Sensing, v.39 no.6_3, pp.1779-1790, 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) satellite offers excellent temporal resolution (10 min) and good spatial resolution (500 m), while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) provides excellent spatial resolution (250 m) but poor temporal resolution (1 h), with only visible channels. To enhance the fog detection level (10 min, 250 m), we developed a fused GK2A-GK2B fog detection algorithm (GK2AB FDA). The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is used to detect fog, considering various optical and physical characteristics. In the second step, GK2B data are extrapolated to 10-min intervals by matching GK2A pixels based on the closest time and location when GK2B observes the KorPen. For reflectance, GK2B normalized visible (NVIS) is corrected using the GK2A NVIS of the same time, considering the differences in wavelength range and observation geometry, and is then extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, solar zenith angle, and outputs of the GK2A FDA are used as input to a machine learning model (decision tree), yielding the GK2AB FDA, which detects fog at a 250 m resolution and 10-min interval based on geographical location. Six cases were used for training and four for validation of the GK2AB FDA. Quantitative verification used ground observations of visibility, wind speed, and relative humidity. Compared to the GK2A FDA, the GK2AB FDA exhibits a fourfold increase in spatial resolution, giving more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, its probability of detection (POD) and Hanssen-Kuipers skill score (KSS) are higher than or similar to those of the GK2A FDA, indicating that it better detects previously undetected fog pixels. However, the GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias.
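The second step above, carrying the hourly GK2B reflectance forward at 10-minute intervals using the changes observed by GK2A, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function and variable names are assumptions.

```python
def extrapolate_nvis(gk2b_nvis_t0, gk2a_nvis_series):
    """Extrapolate a GK2B NVIS (reflectance) value to 10-min intervals.

    gk2b_nvis_t0     : GK2B NVIS at its last (hourly) observation time
    gk2a_nvis_series : collocated GK2A NVIS values at 10-min steps,
                       the first entry at the same observation time
    Returns one extrapolated GK2B NVIS value per subsequent GK2A step.
    """
    out = []
    for i in range(1, len(gk2a_nvis_series)):
        # cumulative 10-min reflectance change seen by GK2A,
        # applied to the last true GK2B value
        delta = gk2a_nvis_series[i] - gk2a_nvis_series[0]
        out.append(gk2b_nvis_t0 + delta)
    return out

# Illustrative numbers: GK2B saw 0.42 at the top of the hour, while the
# collocated GK2A pixel brightened by about 0.01 per 10-min step.
steps = extrapolate_nvis(0.42, [0.40, 0.41, 0.42, 0.43])
```

The extrapolated values (together with solar zenith angle and the GK2A FDA outputs) would then feed the decision-tree classifier in the final step.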

Optimization and Applicability Verification of Simultaneous Chlorogenic acid and Caffeine Analysis in Health Functional Foods using HPLC-UVD (HPLC-UVD를 이용한 건강기능식품에서 클로로겐산과 카페인 동시분석법 최적화 및 적용성 검증)

  • Hee-Sun Jeong; Se-Yun Lee; Kyu-Heon Kim; Mi-Young Lee; Jung-Ho Choi; Jeong-Sun Ahn; Jae-Myoung Oh; Kwang-Il Kwon; Hye-Young Lee
    • Journal of Food Hygiene and Safety, v.39 no.2, pp.61-71, 2024
  • In this study, we analyzed chlorogenic acid as an indicator component in preparation for the additional listing of green coffee bean extract in the Health Functional Food Code, and optimized a simultaneous analysis method that also covers caffeine. We extracted chlorogenic acid and caffeine using 30% methanol, phosphoric acid solution, and phosphoric acid-containing acetonitrile, and analyzed them by liquid chromatography at 330 and 280 nm, respectively. Method validation yielded a coefficient of determination (R²) of at least 0.999 over the linear quantitative range. The detection and quantification limits were 0.5 and 1.4 µg/mL for chlorogenic acid and 0.2 and 0.4 µg/mL for caffeine, respectively. Precision and accuracy were confirmed to be suitable under the AOAC validation guidelines. Finally, we established a simultaneous chlorogenic acid and caffeine analysis method and confirmed, by examining its applicability to prototypes and distributed products of each formulation, that it can quantify both compounds simultaneously. In conclusion, the results of this study indicate that the standardized method is expected to improve the reliability of quality control for chlorogenic acid-containing health functional foods.
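The detection and quantification limits quoted above are conventionally derived from the calibration curve. A minimal sketch under the common ICH signal-based convention (LOD = 3.3·σ/S, LOQ = 10·σ/S); the abstract does not state which convention the authors used, and the numbers below are illustrative, not from the paper.

```python
def detection_limits(sigma, slope):
    """Return (LOD, LOQ) from the ICH signal-based convention:
    LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where sigma is the
    standard deviation of the response (e.g. of blank signals or the
    calibration intercept) and S is the calibration-curve slope."""
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

# Illustrative values only: sigma and slope in matching signal/concentration units.
lod, loq = detection_limits(sigma=0.15, slope=1.0)
```

Note that LOQ is always about three times LOD under this convention, which is roughly consistent with the paper's reported 0.5 vs. 1.4 µg/mL for chlorogenic acid.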

Improvements for Atmospheric Motion Vectors Algorithm Using First Guess by Optical Flow Method (옵티컬 플로우 방법으로 계산된 초기 바람 추정치에 따른 대기운동벡터 알고리즘 개선 연구)

  • Oh, Yurim; Park, Hyungmin; Kim, Jae Hwan; Kim, Somyoung
    • Korean Journal of Remote Sensing, v.36 no.5_1, pp.763-774, 2020
  • Wind data forecast by a numerical weather prediction (NWP) model is generally used as the first guess in the target tracking step of atmospheric motion vector (AMV) derivation, because it increases tracking accuracy and reduces computation time. However, there is a contradiction: the NWP model used as the first guess is also used as the reference in the AMV verification process. To overcome this problem, a model-independent first guess is required. In this study, we propose deriving AMVs with the Lucas-Kanade optical flow method and using the result as the first guess. To retrieve AMVs, Himawari-8/AHI geostationary satellite level-1B data were used at 00, 06, 12, and 18 UTC from August 19 to September 5, 2015. To evaluate the impact of the optical flow method on AMV derivation, cross-validation was conducted in three ways: (1) without a first guess, (2) with NWP (KMA/UM) forecast winds as the first guess, and (3) with optical-flow-based winds as the first guess. In verification against ECMWF ERA-Interim reanalysis data, the highest precision (RMSVD: 5.296-5.804 m/s) was obtained using optical-flow-based winds as the first guess. In addition, computation was slowest without a first guess, while the other two configurations performed similarly. Thus, this study showed that applying the optical flow method in the target tracking step of the AMV algorithm is very effective for model-independent AMV derivation.
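The RMSVD (root-mean-square vector difference) used for the verification above is a standard AMV metric: the RMS of the magnitude of the vector difference between derived and reference winds. A minimal generic implementation (not the authors' code):

```python
import math

def rmsvd(amv_winds, ref_winds):
    """Root-mean-square vector difference (m/s) between derived AMVs
    and reference winds; each wind is a (u, v) component pair."""
    sq_diffs = [
        (ua - ub) ** 2 + (va - vb) ** 2
        for (ua, va), (ub, vb) in zip(amv_winds, ref_winds)
    ]
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))

# Two sample wind pairs: vector differences of magnitude 5 and 0 m/s.
example = rmsvd([(3.0, 4.0), (1.0, 1.0)], [(0.0, 0.0), (1.0, 1.0)])
```

With these two samples the squared differences are 25 and 0, so the result is the square root of 12.5.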

The Alignment Evaluation for the Patient Positioning System (PPS) of Gamma Knife Perfexion™ (감마나이프 퍼펙션의 자동환자이송장치에 대한 정렬됨 평가)

  • Jin, Seong Jin; Kim, Gyeong Rip; Hur, Beong Ik
    • Journal of the Korean Society of Radiology, v.14 no.3, pp.203-209, 2020
  • The purpose of this study is to assess the mechanical stability and alignment of the patient positioning system (PPS) of the Leksell Gamma Knife Perfexion (LGK PFX). The alignment of the PPS was evaluated by measuring the deviation of the coincidence of the Radiological Focus Point (RFP) and the PPS Calibration Center Point (CCP) with different weights on the couch (0, 50, 60, 70, 80, and 90 kg). A service diode test tool with three diode detectors, used biannually during routine preventive maintenance, was employed. The test measured the radial deviations for all three collimators (4, 8, and 16 mm) and for three positions of the PPS under the varying weights. To evaluate the alignment of the PPS, the radial deviations of the coincidence between the radiation focus and the LGK calibration center point of multiple beams were averaged using the calibrated service diode test tool at three university hospitals in Busan and Gyeongnam. With no weight on the PPS, the mean radial deviations for the 4, 8, and 16 mm collimator settings at the center diode were 0.058 ± 0.023, 0.079 ± 0.023, and 0.097 ± 0.049 mm, respectively; with the 4 mm collimator applied to the center, short, and long diodes, the means were 0.058 ± 0.023, 0.078 ± 0.01, and 0.070 ± 0.023 mm, respectively. The mean radial deviations when irradiating the 8 and 16 mm collimators on the short and long diodes without weight were 0.07 ± 0.003 (8 mm, short diode) and 0.153 ± 0.002 mm (16 mm, short diode), and 0.031 ± 0.014 (8 mm, long diode) and 0.175 ± 0.01 mm (16 mm, long diode), respectively.
    With weights of 50 to 90 kg on the PPS, the mean radial deviation at the center diode for the 4, 8, and 16 mm collimators ranged over 0.061 ± 0.041 to 0.075 ± 0.015, 0.023 ± 0.004 to 0.034 ± 0.003, and 0.158 ± 0.08 to 0.17 ± 0.043 mm, respectively. Under the same conditions, the mean radial deviations at the short diode were 0.063 ± 0.024 to 0.07 ± 0.017, 0.037 ± 0.006 to 0.059 ± 0.001, and 0.154 ± 0.03 to 0.165 ± 0.07 mm, and at the long diode 0.102 ± 0.029 to 0.124 ± 0.036, 0.035 ± 0.004 to 0.054 ± 0.02, and 0.183 ± 0.092 to 0.202 ± 0.012 mm, respectively. All verification results met the manufacturer's allowable deviation criteria. Measurement of the alignment under various weights, mimicking the actual treatment environment, showed that the weight dependence was negligible. In particular, no further adjustment or recalibration of the PPS was required during the verification. The verification test of the PPS under various weights was confirmed to be suitable for routine quality assurance of the LGK PFX.

Verification of Gated Radiation Therapy: Dosimetric Impact of Residual Motion (여닫이형 방사선 치료의 검증: 잔여 움직임의 선량적 영향)

  • Yeo, Inhwan; Jung, Jae Won
    • Progress in Medical Physics, v.25 no.3, pp.128-138, 2014
  • In gated radiation therapy (gRT), residual motion causes beam delivery to irradiate not only the true extent of the disease but also neighboring normal tissues. The delivery should cover the true extent (the clinical target volume, or CTV) at a minimum, even though the target moves during dose delivery. The objectives of our study were to validate whether the intended dose is actually delivered to the true target in gRT, and to quantitatively understand the trend of dose delivered to the target and neighboring normal tissues as the gating window (GW), motion amplitude (MA), and CTV size change. To fulfill these objectives, experimental and computational studies were designed and performed. A custom-made phantom with rectangle- and pyramid-shaped targets (CTVs) on a moving platform was scanned for four-dimensional imaging. Various GWs were selected, and image integration was performed to generate planning targets (internal target volumes, or ITVs) that included the CTVs and internal margins (IMs). Planning was done conventionally for the rectangular target and with IMRT optimization for the pyramid target. Dose evaluation was then performed on a diode array aligned perpendicular to the gated beams, through measurements and computational modeling of dose delivery under motion. This study quantitatively demonstrated and analytically interpreted the impact of residual motion, including penumbral broadening for both targets, perturbed but secured dose coverage of the CTV, and significant doses delivered to neighboring normal tissues. Dose-volume histogram analyses also demonstrated and interpreted the trend of dose coverage: for the ITV, coverage increased as GW or MA decreased or CTV size increased; for the IM, it increased as GW or MA decreased; the neighboring normal tissue showed the opposite trend to the IM.
    This study provides a clear understanding of the impact of residual motion and shows that, if breathing is reproducible, gRT is secure despite discontinuous delivery and target motion. The procedures and computational model can be used for commissioning, routine quality assurance, and patient-specific validation of gRT. More work is needed on patient-specific dose reconstruction on CT images.

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob; Rim, BeanBonyka; Sung, Nak-Jun; Hong, Min
    • Journal of Internet Computing and Services, v.21 no.4, pp.17-23, 2020
  • Biometric information, which measures human physical characteristics, has attracted great attention as a security technology with high reliability, since there is no fear of theft or loss. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image is difficult to authenticate because of a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present and to select an appropriate algorithm to improve the fingerprint image. In this study, we built a database of 17,080 fingerprints by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 fingerprints from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established to determine whether injuries or wrinkles were present in the database, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2; the data of the 98 Korean students were used as a validation set. Using the constructed data set, five CNN-based architectures (classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3) were implemented to find the model that performed best on the classification task. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
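The 8:2 train/test split described above can be sketched generically as follows; this is an illustrative helper, not the authors' pipeline, and the fixed seed is an assumption added for reproducibility.

```python
import random

def split_8_2(samples, seed=0):
    """Shuffle a list of samples and split it 8:2 into (train, test),
    mirroring the ratio the study used for the Cambodian + Sokoto data."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    cut = int(len(samples) * 0.8)      # 80% of samples go to training
    train = [samples[i] for i in indices[:cut]]
    test = [samples[i] for i in indices[cut:]]
    return train, test

# Toy example with 100 dummy samples.
train, test = split_8_2(list(range(100)))
```

A separately collected set (here, the 98 Korean students' fingerprints) is then held out entirely as validation data, which avoids leaking its distribution into training.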

Independent Verification Program for High-Dose-Rate Brachytherapy Treatment Plans (고선량률 근접치료계획의 정도보증 프로그램)

  • Han Youngyih; Chu Sung Sil; Huh Seung Jae; Suh Chang-Ok
    • Radiation Oncology Journal, v.21 no.3, pp.238-244, 2003
  • Purpose: The planning of high-dose-rate (HDR) brachytherapy treatments is becoming individualized and more dependent on the treatment planning system. Therefore, computer software was developed to perform independent point dose calculations, with an isodose distribution display integrated into the patient anatomy images. Materials and Methods: As primary input data, the program takes the patient's planning data, including the source dwell positions, dwell times, and doses at reference points, computed by an HDR treatment planning system (TPS). Dosimetric calculations were performed in a 10×12×10 cm³ grid space using the Interstitial Collaborative Working Group (ICWG) formalism and an anisotropy table for the HDR Iridium-192 source. The computed doses at the reference points were automatically compared with the corresponding TPS results. MR and simulation film images were then imported, and the isodose distributions on the axial, sagittal, and coronal planes intersecting a user-selected point were superimposed on the imported images and displayed. The accuracy of the software was tested on three benchmark plans generated by the Gamma-Med 12i TPS (MDS Nordion, Germany). Nine patients' plans generated by Plato (Nucletron Corporation, The Netherlands) were verified by the developed software. Results: In the benchmark plans, the absolute doses computed by the developed software agreed with the commercial TPS results within 2.8%. The isodose distribution plots showed excellent agreement, except for a slight deviation observed at the tip region of the source's longitudinal axis. In the clinical plans, the secondary dose calculations deviated from the TPS plans by about 3.4% on average.
    Conclusion: Accurate validation of complicated treatment plans is possible with the developed software, and the quality of HDR treatment plans can be improved with the isodose display integrated into the patient anatomy information.
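The core of such an independent check is summing dwell-position contributions at a point. A minimal sketch using only a point-source inverse-square geometry factor; the paper's software implements the full ICWG formalism with radial dose and anisotropy corrections, which are omitted here, and the air-kerma strength and dose-rate constant values below are illustrative assumptions, not clinical data.

```python
def point_dose(dwells, point, sk=40680.0, dose_rate_const=1.109):
    """Rough independent point-dose sum for an HDR source.

    dwells          : list of ((x, y, z), dwell_time_s), coordinates in cm
    point           : (x, y, z) calculation point in cm
    sk              : air-kerma strength in U (illustrative value)
    dose_rate_const : dose-rate constant in cGy/(h*U) (illustrative value)
    Only the inverse-square geometry factor of a point source is applied;
    radial dose and anisotropy functions are deliberately omitted.
    """
    total = 0.0
    for (x, y, z), t_s in dwells:
        r2 = (x - point[0]) ** 2 + (y - point[1]) ** 2 + (z - point[2]) ** 2
        # dose rate (cGy/h) at distance r, times dwell time converted to hours
        total += sk * dose_rate_const * (1.0 / r2) * (t_s / 3600.0)
    return total

# One dwell of 3600 s at the origin, evaluated 1 cm away.
dose = point_dose([((0.0, 0.0, 0.0), 3600.0)], (1.0, 0.0, 0.0))
```

In practice such a secondary calculation is compared point-by-point against the TPS reference doses, flagging deviations beyond a tolerance (the paper observed about 3.4% on average in clinical plans).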

Numerical Modelling for the Dilation Flow of Gas in a Bentonite Buffer Material: DECOVALEX-2019 Task A (벤토나이트 완충재에서의 기체 팽창 흐름 수치 모델링: DECOVALEX-2019 Task A)

  • Lee, Jaewon; Lee, Changsoo; Kim, Geon Young
    • Tunnel and Underground Space, v.30 no.4, pp.382-393, 2020
  • The engineered barrier system of a high-level radioactive waste repository must maintain its performance over the long term, because it must slow the rate of leakage into the surrounding rock mass even if radionuclides leak from the canister. In particular, it is very important to clarify the gas dilation flow phenomenon, which occurs only in media containing a large amount of clay, such as a bentonite buffer, and which can affect the buffer's long-term performance. Accordingly, DECOVALEX-2019 Task A was conducted to identify the hydro-mechanical mechanism of dilation flow and to develop and verify a new numerical analysis technique for quantitative evaluation of gas migration. In this study, starting from conventional two-phase flow and mechanical behavior with effective stresses in a porous medium, a hydro-mechanical model was developed that incorporates a damage concept to simulate the formation of micro-cracks, the expansion of the medium, and the corresponding changes in hydraulic properties. Model verification and validation were conducted through comparison with the results of 1D and 3D gas injection tests. The numerical analysis could reproduce the sudden increases in pore water pressure, stress, and gas inflow and outflow rates caused by gas-pressure-induced dilation flow; however, the influence of the hydro-mechanical interaction was underestimated. Nevertheless, this study provides a preliminary model of dilation flow and a basis for developing an advanced model, which can be used not only for analyzing data from laboratory and field tests but also for long-term performance evaluation of high-level radioactive waste disposal systems.

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff (강수-일유출량 추정 LSTM 모형의 구축을 위한 자료 수집 방안)

  • Kim, Dongkyun; Kang, Seokkoo
    • Journal of Korea Water Resources Association, v.54 no.10, pp.795-805, 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, the accuracy of the model was investigated for various combinations of model structure and input data. The model was built on a database consisting of average daily precipitation, average daily temperature, and average daily wind speed (inputs) and daily average flow rate (output) during the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the latter 12 years (2009.1.1-2020.12.31). The highest accuracy was obtained when all available input data (12 years of daily precipitation, temperature, and wind speed) were used with an LSTM structure of 64 hidden units; the NSE and RMSE over the verification period were 0.862 and 76.8 m³/s, respectively. When the number of LSTM hidden units exceeds 500, performance degradation due to overfitting begins to appear, and beyond 1,000 hidden units the overfitting problem becomes prominent. A model with very high performance (NSE=0.8-0.84) could be obtained when only 12 years of daily precipitation were used for training, and a model with reasonably high performance (NSE=0.63-0.85) when only one year of input data was used. In particular, an accurate model (NSE=0.85) could be obtained if that one year of training data contained a wide range of flow magnitudes, including extreme flows and droughts as well as normal events. When the training data included both normal and extreme flows, training data longer than 5 years did not significantly improve model performance.
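The NSE used for the validation above has a standard definition: one minus the ratio of the model's squared error to the variance of the observations about their mean (NSE = 1 for a perfect model, 0 for a model no better than the observed mean). A generic implementation, not the authors' code:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of simulated vs. observed flows.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    1.0 is a perfect fit; 0.0 means the model is only as good as
    predicting the observed mean; negative values are worse than that.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    obs_variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / obs_variance

# Toy checks: a perfect simulation and one with a single unit error.
nse_perfect = nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
nse_off = nse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

By this definition the study's best model (NSE = 0.862) explains about 86% of the observed daily-flow variance over the 12-year verification period.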