• Title/Summary/Keyword: Model Verification System (모델검증시스템)


Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.173-198, 2020
  • For a long time, many academic studies have been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded due to the rapid growth of online business, companies are carrying out campaigns of various types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing while the cost of investing in them increases, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data contain many features, selecting appropriate features is very important. If all of the input data are used to classify a large amount of data, learning time grows as the classification problem expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step when analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, they have the limitation of poor classification performance and long learning time. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underlie machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are first derived and features with a negative effect are removed; the sequential method is then applied to improve search efficiency and yield an algorithm capable of generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms: campaign success prediction was higher than with the original data set, the greedy algorithms, the genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to help in analyzing and interpreting prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known statistically to be important.
Features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, were unexpectedly selected as important for campaign response. It was also confirmed that base attributes can be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
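
The floating search that this abstract builds on can be sketched briefly. The following is a minimal illustration of plain SFFS with a hypothetical feature set and a toy additive scoring function; the paper's statistically guided pre-filtering and removal of negative features are not reproduced here, and a production implementation would also guard against add/remove cycles.

```python
def sffs(features, score, k):
    """Sequential Floating Forward Selection (minimal sketch).

    features : list of candidate feature names
    score    : callable mapping a frozenset of features to a number
    k        : target subset size
    """
    selected = []
    while len(selected) < k:
        # Forward step: add the feature that most improves the score.
        remaining = [f for f in features if f not in selected]
        best = max(remaining, key=lambda f: score(frozenset(selected + [f])))
        selected.append(best)
        # Floating (backward) step: drop a feature only if doing so
        # strictly improves the score of the current subset.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            current = score(frozenset(selected))
            for f in list(selected):
                if score(frozenset(s for s in selected if s != f)) > current:
                    selected.remove(f)
                    improved = True
                    break
    return selected

# Toy example: additive feature weights standing in for model accuracy.
weights = {'age': 3, 'rating': 2, 'sales': 1, 'noise': -1}
print(sffs(list(weights), lambda s: sum(weights[f] for f in s), 3))
# → ['age', 'rating', 'sales']
```

In a real campaign setting, `score` would be cross-validated prediction accuracy of a classifier trained on the candidate subset, which is where the learning-time cost the abstract mentions comes from.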

Development of New 4D Phantom Model in Respiratory Gated Volumetric Modulated Arc Therapy for Lung SBRT (폐암 SBRT에서 호흡동조 VMAT의 정확성 분석을 위한 새로운 4D 팬텀 모델 개발)

  • Yoon, KyoungJun;Kwak, JungWon;Cho, ByungChul;Song, SiYeol;Lee, SangWook;Ahn, SeungDo;Nam, SangHee
    • Progress in Medical Physics, v.25 no.2, pp.100-109, 2014
  • In stereotactic body radiotherapy (SBRT), the accurate localization of treatment sites must be guaranteed despite the respiratory motion of patients, and many studies on this topic have been conducted. In this letter, a new verification method simulating the real respiratory motion of heterogeneous treatment regions is proposed to investigate the accuracy of lung SBRT delivered with Volumetric Modulated Arc Therapy (VMAT). Based on CT images of lung cancer patients, lung phantoms were fabricated with a 3D printer and mounted in a QUASAR™ respiratory motion phantom. Each phantom was bisected so that 2D dose distributions could be measured by inserting EBT3 film. To verify dose calculation accuracy under heterogeneous conditions, a homogeneous plastic phantom was also used. Two dose calculation algorithms, the Analytical Anisotropic Algorithm (AAA) and AcurosXB (AXB), were applied in the planning process. To evaluate treatment accuracy under respiratory motion, we analyzed the gamma index between the plan dose and the film dose measured under various conditions: a static target and a moving target with or without gating. The CT number of the GTV region was 78 HU for the real patient and 92 HU for the homemade lung phantom. With the AAA algorithm, the gamma pass rates (3%/3 mm criteria) between the plan dose and the film doses measured in the heterogeneous lung phantom under gated and non-gated delivery with respiratory motion were 88% and 78%, respectively; in the static case, the pass rate was 95%. In all cases with the homogeneous phantom, the gamma pass rates exceeded 99%. With the AcurosXB algorithm, pass rates of more than 98% for the heterogeneous phantom and more than 99% for the homogeneous phantom were achieved. Since the respiratory amplitude was relatively small and the breathing pattern had a longer exhale phase than inhale, the gamma pass rates at the 3%/3 mm criteria did not differ significantly across motion conditions.
In this study, a new phantom model for 4D dose distribution verification, using patient-specific lung phantoms moving in real breathing patterns, was successfully implemented. The model was shown to provide the capability to verify dose distributions delivered under more realistic conditions, as well as the accuracy of dose calculation.
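
The gamma analysis used above compares two dose distributions with combined dose-difference and distance-to-agreement tolerances. A minimal 1D sketch follows; global normalization to the maximum reference dose is an assumption here, and clinical tools search a densely interpolated 2D/3D reference grid rather than discrete points.

```python
import math

def gamma_pass_rate(ref_pos, ref_dose, meas_pos, meas_dose, dd=0.03, dta=3.0):
    """1D gamma analysis: for each measured point, take the minimum over
    reference points of sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2); the point
    passes if that gamma value is <= 1. Positions are in mm; dd is a
    fraction of the maximum reference dose (3%/3 mm by default)."""
    dmax = max(ref_dose)
    passed = 0
    for xm, dm in zip(meas_pos, meas_dose):
        gamma = min(
            math.hypot((xr - xm) / dta, (dr - dm) / (dd * dmax))
            for xr, dr in zip(ref_pos, ref_dose)
        )
        passed += gamma <= 1.0
    return passed / len(meas_pos)
```

For example, an identical measured profile yields a 100% pass rate, while a uniform 10% over-dose fails every point at the 3%/3 mm criterion.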

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems, v.18 no.4, pp.43-57, 2012
  • To maintain a competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information, so providing accurate information plays a prominent role in decision making. Intuitively, historical data can provide feasible estimates through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as staff, parts, and facilities; the production department can likewise build a load map for improving product quality. Obtaining an accurate service forecast therefore appears critical to manufacturing companies. Numerous investigations of this problem have generally employed statistical methods, such as regression or autoregressive moving average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In this research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with clustering based on an unsupervised artificial neural network (Self-Organizing Maps, SOM). We believe this is one of the first attempts to apply unsupervised neural-network-based machine learning in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM to a new forecasting domain, service demand forecasting;
(2) we combined CBR and SOM to overcome limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach. We conducted an empirical study on a real digital TV manufacturer (Company A) and evaluated the proposed approach and tool using the company's real sales and service data. In our experiments, we compared the performance of the proposed service forecasting framework with two other methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each framework. To evaluate forecasting accuracy, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure. A one-way ANOVA on the 144 MAPE measurements for the three approaches yielded an F-ratio of 67.25 with p < 0.001, indicating that the differences between the three approaches are statistically significant. We therefore conducted Tukey's HSD post hoc test to determine exactly which MAPE means differ significantly from one another.
In terms of MAPE, Tukey's HSD test grouped the three approaches into three different subsets in the following order: our proposed approach > traditional CBR-based forecasting > the existing approach used by Company A. Consequently, our experiments show that the proposed approach outperformed both the traditional CBR-based model and Company A's existing model. The rest of this paper is organized as follows. Section 2 provides background on CBR and SOM. Section 3 presents the hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps, and the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
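
The MAPE criterion used in the evaluation above is straightforward to compute; a minimal sketch (the demand figures are illustrative):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent. Assumes no actual
    value is zero, since each error is scaled by the actual demand."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

# Two periods, each forecast off by 10% of actual demand:
print(mape([100, 200], [110, 180]))  # → 10.0
```

Lower MAPE means a more accurate forecast, so the subsets found by Tukey's HSD test correspond to significantly different mean error levels across the three approaches.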

Design and Optimization of Pilot-Scale Bunsen Process in Sulfur-Iodine (SI) Cycle for Hydrogen Production (수소 생산을 위한 Sulfur-Iodine Cycle 분젠반응의 Pilot-Scale 공정 모델 개발 및 공정 최적화)

  • Park, Junkyu;Nam, KiJeon;Heo, SungKu;Lee, Jonggyu;Lee, In-Beum;Yoo, ChangKyoo
    • Korean Chemical Engineering Research, v.58 no.2, pp.235-247, 2020
  • A simulation study and validation of a 50 L/hr pilot-scale Bunsen process were carried out in order to investigate thermodynamic parameters, the suitable reactor type, the separator configuration, and the optimal conditions of the reactors and separation. The Sulfur-Iodine cycle is a thermochemical process that uses iodine and sulfur compounds to produce hydrogen, with the decomposition of water as the net reaction. Understanding the phase separation and reaction of the Bunsen process is crucial since it acts as the intermediate process among the cycle's three reactions. The Electrolyte Non-Random Two-Liquid model was implemented as the thermodynamic model in the simulation, and the simulation results were validated against the thermodynamic parameters and the 50 L/hr pilot-scale experimental data. The SO2 conversions of a PFR and a CSTR were compared while varying the temperature and reactor volume in order to identify the suitable reactor type. Impurities in the H2SO4 phase and the HIx phase were investigated for a 3-phase separator (vapor-liquid-liquid) and two 2-phase separators (vapor-liquid and liquid-liquid) in order to select the separation configuration with better performance. Process optimization of the reactor and phase separator was carried out to find the operating and feed conditions that reach the maximum SO2 conversion and the minimum H2SO4 impurity in the HIx phase. For reactor optimization, a maximum SO2 conversion of 98% was obtained with fixed iodine and water inlet flow rates when the diameter and length of the PFR are 0.20 m and 7.6 m. The inlet water and iodine flow rates were reduced by 17% and 22% to reach the maximum 10% SO2 conversion with fixed temperature and PFR size (diameter: 3/8", length: 3 m). When the temperature (121℃) and PFR size (diameter: 0.2 m, length: 7.6 m) were applied to the feed composition optimization, the inlet water and iodine flow rates were likewise reduced by 17% and 22% to reach the maximum 10% SO2 conversion.

Is Fertility Rate Proportional to the Quality of Life? An Exploratory Analysis of the Relationship between Better Life Index (BLI) and Fertility Rate in OECD Countries (출산율은 삶의 질과 비례하는가? OECD 국가의 삶의 질 요인과 출산율의 관계에 관한 추이분석)

  • Kim, KyungHee;Ryu, SeoungHo;Chung, HeeTae;Gim, HyeYeong;Park, HeongJoon
    • International Area Studies Review, v.22 no.1, pp.215-235, 2018
  • Policy concerns related to raising fertility rates are common interests among OECD countries, and they are issues of great concern to South Korea, whose fertility rate is the lowest in the world. The fertility rate in South Korea continues to decline even though a substantial national budget has been spent on countermeasures and many studies have been conducted on increasing fertility rates. In this regard, this study aims to verify the effectiveness of the detailed factors affecting the fertility rate that have been discussed in previous studies, and to investigate the overall relationship between quality of life and fertility through a macroscopic, structural analysis, informed by the policy approaches of European countries. Toward this end, this study investigated whether a high quality of life in advanced countries contributes to an increase in the fertility rate, which countries serve as models with both a high quality of life and a high fertility rate, and what kind of social and policy environment such countries have with regard to childbirth. An analysis of the OECD Better Life Index (BLI) and CIA fertility rate data showed that countries whose people enjoy a high quality of life do not necessarily have high fertility rates. In addition, on the premise that a country with a high quality of life and a high birth rate serves as a model that South Korea should aim for, the social characteristics of Iceland, Ireland, and New Zealand, which have both a high quality of life and a high fertility rate, were compared with those of Germany, which shows a high quality of life but a low fertility rate. According to the comparison, the three former countries showed higher awareness of gender equality, and their gender wage gaps were accordingly small.
It was also confirmed that the governments of these countries support various policies that promote shared childcare between both parents. In Germany, by contrast, the gender wage gap was large and the fertility rate was low; the German government has, however, made active efforts toward a paradigm shift to gender equality. Fertility rises when there is synergy in the relationship between parents and children; therefore, awareness of gender equality should be firmly established both at home and in the labor market. For this reason, the government should support the childbirth and child-rearing environment through appropriate family policies, and exert greater effort to enhance the effectiveness of the relevant systems rather than simply building new ones. Furthermore, it is necessary to help people make their own childbearing decisions in the process of creating a better society, by changing the national goal from 'raising the fertility rate' to 'creating a healthy society made of happy families'.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.167-181, 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNNs (Convolutional Neural Networks), known as an effective solution for recognizing and classifying images or voices, have been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes applying a CNN to stock market prediction, one of the most challenging tasks in machine learning research. Since CNNs have strength in interpreting images, the model proposed in this study adopts a CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning model that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn as a 40 × 40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing its color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max-pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for predicting an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and the one for the output layer was the softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To balance the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These results imply that converting time series business data into graphs and building CNN-based classification models on them can be effective from the perspective of prediction accuracy.
Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
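
The graph-to-matrix conversion in steps 2-3 above can be sketched as a simple rasterization. This is a simplified stand-in for the paper's fluctuation graphs, assuming indicator values pre-scaled to [0, 1] and at most three indicators per image, one per RGB channel (the paper draws line graphs and uses more colors):

```python
def series_to_image(window, size=40):
    """Rasterize up to three 5-day indicator series into an RGB pixel
    grid (a simplified stand-in for the paper's fluctuation graphs).

    window : list of series, each a list of values scaled to [0, 1]
    returns: size x size grid of [r, g, b] values in 0..255
    """
    img = [[[0, 0, 0] for _ in range(size)] for _ in range(size)]
    n = len(window[0])
    xs = [round(i * (size - 1) / (n - 1)) for i in range(n)]
    for ch, series in enumerate(window[:3]):
        for x, v in zip(xs, series):
            y = round((1.0 - v) * (size - 1))  # image y axis grows downward
            img[y][x][ch] = 255
    return img
```

Stacking the three channels gives exactly the 40 × 40 × 3 matrix form that a CNN's convolution layer consumes.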

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing, v.39 no.6_2, pp.1565-1576, 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere radiances observed by the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption by dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly impacts the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance in the blue wavelengths can be more than ten times higher than the water-leaving radiance. This makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause errors of more than 10% in Rrs. Quality assessment of Rrs after atmospheric correction is therefore essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) was applied and modified to account for the different spectral characteristics of GOCI-II. This method is officially employed in the ocean color satellite data processing system of the National Oceanic and Atmospheric Administration (NOAA). It provides quality scores for Rrs ranging from 0 to 1 and classifies the water into 23 types. When the QA algorithm was applied to initial-phase GOCI-II data with less calibration, the most frequent score was a relatively low 0.625.
However, when the algorithm was applied to the improved GOCI-II atmospheric correction results with updated calibrations, the most frequent score rose to 0.875. The water-type analysis using the QA algorithm indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs results with significant errors but also in performing more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
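
QA scoring of this kind generally matches a normalized Rrs spectrum to reference water-type spectra and then scores how many bands fall inside that type's envelope. A minimal sketch under that assumption follows; the band count, reference spectra, and envelope bounds here are purely illustrative, not the operational NOAA/SeaBASS tables.

```python
import math

def qa_score(rrs, ref_spectra, lower, upper):
    """Assign a normalized Rrs spectrum to the nearest reference water
    type by cosine similarity, then score the fraction of bands that
    fall inside that type's lower/upper envelope (score in [0, 1])."""
    norm = math.sqrt(sum(v * v for v in rrs))
    nrrs = [v / norm for v in rrs]

    def cos(a, b):
        return sum(x * y for x, y in zip(a, b)) / (
            math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # Nearest water type (index into the reference table).
    wtype = max(range(len(ref_spectra)), key=lambda i: cos(nrrs, ref_spectra[i]))
    inside = sum(lo <= v <= hi
                 for v, lo, hi in zip(nrrs, lower[wtype], upper[wtype]))
    return wtype, inside / len(rrs)
```

With 23 reference types and per-band envelopes derived from in-situ spectra, this yields exactly the kind of 0-to-1 score and water-type label described above.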

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.57-73, 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming important. System monitoring data are multidimensional time series, which are difficult to handle because both the multidimensional structure and the time series characteristics must be considered. With multidimensional data, correlations between variables should be considered, but existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are commonly preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and studies applying machine learning and artificial neural networks are now active. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or the data contain noise or outliers, and they are restricted by the need for clean training data. An autoencoder using artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability distribution or linearity assumptions.
In addition, it can learn in an unsupervised manner, without labeled training data. However, anomaly detection with autoencoders is still limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the time series characteristics. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve local outlier identification in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck, through which their correlations are learned. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction quality differs by variable: in all three autoencoders, the loss values for the Memory, Disk, and Network modalities were small, indicating normal reconstruction. The Process modality showed no significant difference across the three models, while the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, and UAE.
In particular, the recall of CMAE was 0.9828, confirming that it detects almost all abnormalities. The accuracy of the model also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference. The proposed model avoids these, making it easy to apply in practice with respect to inference speed and model management.
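
The time condition described above can be sketched as a cyclical encoding appended to each monitoring sample, so the model sees the phase of the day without the extra lagged dimensions that sliding windows introduce. Hour-of-day granularity is an assumption here; the paper does not specify the time resolution.

```python
import math

def time_condition(hour):
    """Cyclical encoding of the hour of day used as the conditional
    input, so midnight (hour 0/24) and hour 23 are close together."""
    angle = 2.0 * math.pi * hour / 24.0
    return [math.sin(angle), math.cos(angle)]

def conditioned_sample(features, hour):
    """Append the time condition to one multivariate monitoring sample,
    yielding the input fed to the conditional autoencoder."""
    return list(features) + time_condition(hour)
```

At detection time, the reconstruction error of such conditioned samples is thresholded: inputs the conditional autoencoder cannot reconstruct well for that time of day are flagged as anomalies.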

Microbiological Hazard Analysis for HACCP System Application to Vinegared Pickle Radishes (식초절임 무의 HACCP 시스템 적용을 위한 미생물학적 위해 분석)

  • Kwon, Sang-Chul
    • Journal of Food Hygiene and Safety, v.28 no.1, pp.69-74, 2013
  • This study was performed over 150 days, from February 1 to June 31, 2012, to analyze biological hazard factors in order to develop a HACCP system for vinegared pickled radishes. A process chart was prepared, as shown in Fig. 1, by referring to the manufacturing process of a typical producer of vinegared pickled radishes, covering warehousing of the raw agricultural product, water, additives, and packing materials; storage; careful selection; washing; peeling; cutting; sorting; stuffing (filling); internal packing; metal detection; external packing; storage; and consignment (delivery). Coliforms, Staphylococcus aureus, Salmonella spp., Bacillus cereus, Listeria monocytogenes, E. coli O157:H7, Clostridium perfringens, and yeasts and molds were measured before and after washing the raw radishes. Bacillus cereus was 5.00×10 CFU/g before washing but was not detected after washing; yeasts and molds were 3.80×10² CFU/g before washing but were reduced to 10 CFU/g after washing; no other pathogenic bacteria were detected. In tests of microbial variation by pH (2-5) of the seasoning fluid, no bacteria were detected at pH 3-4, so pH 3-4 was adopted for the seasoning fluid. Tests of airborne microorganisms (general bacteria, coliforms, fungi) in each workplace detected 10 CFU/plate in the internal packing room, 2 CFU/plate in the seasoning fluid processing room, 60 CFU/plate in the washing room, and 20 CFU/plate in the storage room. Tests of workers' palm condition showed high counts of general bacteria and coliforms, 346 CFU/cm² and 23 CFU/cm² respectively, so education and training for individual sanitation control were considered necessary.
Inspection of the surface contamination of manufacturing facilities and devices detected no coliforms in any specimen, but general bacteria were most dominant on the PP packing machine and the Siuping machine (PE bulk), at 4.2×10³ CFU/cm² and 2.6×10³ CFU/cm², respectively. From the analysis of these hazard factors, the seasoning fluid processing step, where pathogenic bacteria can be prevented, reduced, or removed, should be controlled as a biological CCP (CCP-B), with the critical limit set at pH 3-4. Therefore, a thorough HACCP control plan is considered necessary, including control criteria for the seasoning fluid processing step, corrective actions in case of deviation, verification methods, education and training, and record control.

Evaluation of Planning Dose Accuracy in Case of Radiation Treatment on Inhomogeneous Organ Structure (불균질부 방사선치료 시 계획 선량의 정확성 평가)

  • Kim, Chan Yong;Lee, Jae Hee;Kwak, Yong Kook;Ha, Min Yong
    • The Journal of Korean Society for Radiation Therapy, v.25 no.2, pp.137-143, 2013
  • Purpose: This study aims to determine the difference between the dose calculated by a treatment planning system (TPS) and the measured dose in inhomogeneous organ structures. Materials and Methods: An inhomogeneous phantom was made from a solid water phantom and a cork plate, and a CT image of the phantom was acquired. A treatment plan was made with the TPS (Pinnacle3 9.2, Royal Philips Electronics, Netherlands) and the calculated dose at each point of interest was obtained. The plan was delivered to the inhomogeneous phantom with an ARTISTE accelerator (Siemens AG, Germany), and the dose at each point of interest was measured with Gafchromic EBT2 film (International Specialty Products, US) placed in the gaps between the solid water phantom and the cork plate. To simulate lung cancer radiation treatment, an artificial paraffin tumor target was inserted in the cork volume of the inhomogeneous phantom, and calculated and measured doses were acquired as above. Results: In the inhomogeneous phantom experiment, the measured dose was about 8.5% lower than the calculated dose at the solid water-cork interface and about 7% lower at the cork-solid water interface. In the experiment with the inserted paraffin target, the measured dose was about 5% lower at the cork-paraffin interface. There was no significant difference at interfaces between identical materials in either experiment. Conclusion: The radiation dose at the interface between two materials with different electron densities is significantly lower than the dose calculated by the TPS. Therefore, we must be aware of dose calculation errors in the TPS, and great care is suggested when planning radiation treatment of inhomogeneous organ structures.