• Title/Summary/Keyword: System Optimization (시스템 최적화)


Comparison between the Calculated and Measured Doses in the Rectum during High Dose Rate Brachytherapy for Uterine Cervical Carcinomas (자궁암의 고선량율 근접 방사선치료시 전산화 치료계획 시스템과 in vivo dosimetry system 을 이용하여 측정한 직장 선량 비교)

  • Chung, Eun-Ji;Lee, Sang-Hoon
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.396-404
    • /
    • 2002
  • Purpose: Many papers support a correlation between rectal complications and rectal doses in uterine cervical cancer patients treated with radical radiotherapy. In vivo dosimetry in the rectum, following ICRU Report 38, contributes to quality assurance in HDR brachytherapy, especially in minimizing side effects. This study compares the rectal doses calculated in the radiation treatment planning system with those measured with a silicon-diode in vivo dosimetry system. Methods: Nine patients with uterine cervical carcinoma, treated with Iridium-192 high dose rate brachytherapy between June 2001 and February 2002, were retrospectively analyzed. Six to eight fractions of high dose rate (HDR) intracavitary radiotherapy (ICR) were delivered twice per week, with a total dose of 28 to 32 Gy to point A. In 44 applications to the 9 patients, the measured rectal doses were analyzed and compared with the rectal doses calculated using the radiation treatment planning system. Using graphic approximation methods in conjunction with localization radiographs, the expected dose values at the detector points of an intrarectal semiconductor dosimeter were calculated. Results: There were significant differences between the rectal doses calculated from the simulation radiographs and those calculated from the radiographs taken at each fraction of the HDR ICR. There were also significant differences between the calculated rectal doses and those measured with the in vivo diode dosimetry system. The rectal reference point on the anteroposterior line drawn through the lower end of the uterine sources, defined according to ICRU Report 38, received the maximum rectal dose in only 2 of the nine patients (22.2%). Conclusion: In HDR ICR planning for cervical cancer, optimizing the rectal dose with the computer-assisted planning system using simulation radiographs alone is inadequate. This study showed that in vivo rectal dosimetry using a diode detector during HDR ICR can play a useful role in quality control of HDR brachytherapy for cervical carcinomas, and it underscores the importance of individual dosimetry for each HDR ICR fraction. In departments that do not have an in vivo dosimetry system, the radiation oncologist should verify the location of the rectal marker on lateral fluoroscopy before each fractionated HDR brachytherapy, which is a necessary and important step of HDR brachytherapy for cervical cancer.
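The comparison in this abstract reduces to a per-fraction percentage difference between the TPS-calculated and diode-measured rectal doses. A minimal sketch of that calculation follows; the dose values and the %Diff convention are illustrative assumptions, not data from the study.

```python
# Minimal sketch (not from the paper): comparing TPS-calculated and diode-measured
# rectal doses per HDR-ICR fraction with a simple percentage-difference metric.
# The dose values below are made-up placeholders, not data from the study.

def percent_diff(calculated_cgy: float, measured_cgy: float) -> float:
    """%Diff = (measured - calculated) / calculated * 100."""
    return (measured_cgy - calculated_cgy) / calculated_cgy * 100.0

# Hypothetical per-fraction rectal doses (cGy) at one detector point.
fractions = [
    {"fraction": 1, "calculated": 250.0, "measured": 231.0},
    {"fraction": 2, "calculated": 250.0, "measured": 268.0},
]

for f in fractions:
    d = percent_diff(f["calculated"], f["measured"])
    print(f"Fraction {f['fraction']}: %Diff = {d:+.1f}%")
```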

Optimization of Separation Process of Bioflavonoids and Dietary Fibers from Tangerine Peels using Hollow Fiber Membrane (중공사 막을 이용한 감귤 과피 bioflavonoids 분리 및 식이 섬유 회수 공정 최적화)

  • Lee, Eun-Young;Woo, Gun-Jo
    • Korean Journal of Food Science and Technology
    • /
    • v.30 no.1
    • /
    • pp.151-160
    • /
    • 1998
  • Tangerine peel is mostly discarded as waste in citrus processing. However, besides dietary fibers, tangerine peel contains bioflavonoids such as naringin and hesperidin, which act as an antimicrobial and a blood pressure depressant, respectively. A continuous membrane separation process for the production of bioflavonoids was optimized with respect to feed flow rate, transmembrane pressure, temperature, and pH. The tangerine peel was blended with 7.5 volumes of water, and the extract was prefiltered through a prefiltration system. The prefiltered extract was then ultrafiltered in a hollow fiber membrane system. The flux and feed flow rate showed no apparent correlation, but a mass-transfer-controlled region was observed above 8 psi. When the temperature increased from 9°C to 25°C, the flux increased by about 10 L/m²/hr (LMH), but between 25°C and 33°C the flux increased by only 2 LMH. At every transmembrane pressure, the flux at pH 4.8 was the highest, and the flux at pH 3.0 was lower than that at pH 6.0, 7.0, or 9.0. Therefore, the optimum operating conditions were 49.3 L/hr, 10 psi, 25°C, and pH 4.8. Under these optimum conditions, the flux gradually decreased and finally reached a steady state after 1 hr 50 min. The amount of dietary fiber in 1.0 g of retentate at each separation step was analyzed, and the bioflavonoid concentration in each permeate was measured. The contents of total dietary fiber in the 170-mesh retentate and of soluble dietary fiber in the prefiltered retentate were the highest. Naringin and hesperidin concentrations in the permeate were 0.45~0.65 mg/g and 5.15~6.86 mg/g, respectively, 15~22 times and 79~93 times higher than those in the tangerine peel. Therefore, the PM 10 hollow fiber membrane separation system may be a very effective method for recovering bioflavonoids from tangerine peel.

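The flux values quoted in this abstract are in LMH, i.e. liters of permeate per square meter of membrane per hour. The short sketch below shows that conversion; the volume, area, and time figures are hypothetical, not measurements from the tangerine-peel experiments.

```python
# Illustrative sketch only: computing ultrafiltration flux in LMH
# (liters per square meter of membrane per hour). Values are hypothetical,
# not measurements from the tangerine-peel study.

def flux_lmh(permeate_volume_l: float, membrane_area_m2: float, time_h: float) -> float:
    """Flux (LMH) = permeate volume [L] / (membrane area [m^2] * time [h])."""
    return permeate_volume_l / (membrane_area_m2 * time_h)

# Example: 1.2 L of permeate collected over 0.5 h on a 0.09 m^2 hollow-fiber module.
print(f"Flux = {flux_lmh(1.2, 0.09, 0.5):.1f} LMH")
```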

Development of Optimum Grip System in Developing Design Tensile Strength of GFRP Rebars (GFRP 보강근의 설계 인장강도 발현을 위한 적정 그립시스템 개발)

  • You Young-Chan;Park Ji-Sun;You Young-Jun;Park Young-Hwan;Kim Keung-Hwan
    • Journal of the Korea Concrete Institute
    • /
    • v.17 no.6 s.90
    • /
    • pp.947-953
    • /
    • 2005
  • Previous test results showed that the current ASTM (American Society for Testing and Materials) grip adapter for GFRP (Glass Fiber Reinforced Polymer) rebar was not fully successful in developing the design tensile strength of GFRP rebars with reasonable accuracy. This is because the current ASTM grip adapter, which is composed of a pair of rectangular metal blocks whose inner faces are grooved along the longitudinal direction, does not take into account the various geometric characteristics of GFRP rebar, such as surface treatment and the shape of the bar cross section, or physical characteristics such as the Poisson effect and the elastic modulus in the transverse direction. The objective of this paper is to show how to proportion the optimum diameter of the inner groove in the ASTM grip adapter so as to develop the design tensile strength of GFRP rebar. The proportioning of the inner groove is based on the force equilibrium between the tensile capacity of the GFRP rebar and the minimum frictional resistance required along the grip adapter. The frictional resistance of the grip adapter is calculated from the radial compressive strain compatibility induced by the difference between the diameter of the GFRP rebar and that of the inner groove in the ASTM grip. All testing procedures followed the CSA S806-02 recommendations. Preliminary test results on round-type GFRP rebars showed that the maximum tensile loads obtained under the same testing conditions are highly affected by the diameter of the inner groove in the ASTM grip adapter. The grip adapter with dimensions proportioned by the proposed method recorded the highest tensile strength among them.
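The proportioning rule described above rests on a force equilibrium: the frictional resistance generated by the radial interference between the rebar and the groove must at least equal the bar's tensile capacity. The sketch below illustrates that check with a deliberately simplified friction model; the bar diameter, grip length, transverse modulus, and friction coefficient are invented values, and the formulas are an approximation of the idea, not the paper's actual proportioning equations.

```python
# Simplified illustration of the force-equilibrium idea (NOT the paper's equations):
# does the friction developed by radial interference exceed the bar's tensile capacity?
import math

def friction_capacity_kn(d_bar_mm, d_groove_mm, grip_len_mm, e_transverse_mpa, mu):
    """Approximate frictional resistance developed along one grip adapter."""
    radial_strain = max(d_bar_mm - d_groove_mm, 0.0) / d_bar_mm   # interference / bar diameter
    radial_pressure = e_transverse_mpa * radial_strain            # MPa = N/mm^2
    contact_area = math.pi * d_bar_mm * grip_len_mm               # mm^2 of gripped surface
    return mu * radial_pressure * contact_area / 1000.0           # kN

# Hypothetical 12.7 mm GFRP rebar with 700 MPa design tensile strength.
d_bar, f_u = 12.7, 700.0
tensile_demand_kn = f_u * math.pi * d_bar ** 2 / 4.0 / 1000.0

for d_groove in (12.7, 12.5, 12.3, 12.1):
    resistance = friction_capacity_kn(d_bar, d_groove, grip_len_mm=350.0,
                                      e_transverse_mpa=8000.0, mu=0.3)
    verdict = "develops full strength" if resistance >= tensile_demand_kn else "slips"
    print(f"groove {d_groove:.1f} mm: {resistance:6.0f} kN vs {tensile_demand_kn:.0f} kN -> {verdict}")
```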

The Perception of Gifted Science Teachers Regarding a Individualized Instruction for Scientifically Gifted (영재 개별화 교육에 관한 과학영재 지도교사들의 인식)

  • Kim, Su-yeon;Han, Shin;Jeong, Jinwoo
    • Journal of the Korean Society of Earth Science Education
    • /
    • v.9 no.2
    • /
    • pp.199-216
    • /
    • 2016
  • The purpose of this study is to determine how strongly the teachers in charge of gifted science classes recognize the need for individualized curricula and programs for scientifically gifted students, to identify the problems of gifted science education institutions by exploring them in depth in light of the reality of those institutions, and to draw implications for a more active application of individualized curricula and programs for the scientifically gifted. Fifteen incumbent teachers who have taught scientifically gifted students and who hold a degree in gifted education or science education were chosen as study participants, and in-depth interviews were conducted with them. According to the results, 14 of the 15 participants recognized the necessity of individualized education in science, saying that it should address the personal needs arising from each gifted student's tendencies and should be a form of study led by the students themselves. Among the problems in gifted science education, teachers regarded the reduction in financial support as the biggest, and the vocation and professionalism of teachers were mentioned as very important factors. Given constraints of time and space, many expressed the opinion that the influence of the educational environment associated with the university entrance examination cannot be ignored. Many also pointed to the excessive expansion of institutions and of the population identified as gifted, the absence of standardized measurement tools and programs, and the lack of a system for coherent observation by teachers. The uniform curricula of gifted science education institutions were also pointed out as a problem, and the individualized programs already under way have many weaknesses and are offered only marginally. Therefore, to apply individualized education for the scientifically gifted, teachers demanded optimized educational conditions and consistent policy support, and expressed the opinion that a system enabling continuous observation is needed. In addition, curricula and programs matched to the needs of the students should be given the highest priority, and some answered that peer learning within cooperative learning can be an alternative to individualization. Along with that, many noted that measures to help students overcome the feelings of inferiority that individualization may cause should follow.

Evaluation of beam delivery accuracy for Small sized lung SBRT in low density lung tissue (Small sized lung SBRT 치료시 폐 실질 조직에서의 계획선량 전달 정확성 평가)

  • Oh, Hye Gyung;Son, Sang Jun;Park, Jang Pil;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.7-15
    • /
    • 2019
  • Purpose: The purpose of this study is to evaluate beam delivery accuracy for small-sized lung SBRT through experiment. To assess the accuracy, the Eclipse TPS (treatment planning system) with the Acuros XB algorithm and radiochromic film were used to obtain the dose distributions. By comparing the calculated and measured dose distributions, the margin for the PTV (planning target volume) in lung tissue was evaluated. Materials and Methods: CT images of a Rando phantom were acquired, and virtual target volumes of several sizes (diameter 2, 3, 4, and 5 cm) were planned in the right lung. All plans were normalized so that the prescribed dose covered 95% of the target volume, using 6 MV FFF VMAT with two arcs. To compare the calculated and measured dose distributions, film was inserted into the Rando phantom and irradiated in the axial direction. The evaluation indexes were the percentage difference (%Diff) for absolute dose, the RMSE (root mean square error) for relative dose, and the coverage ratio and average dose in the PTV. Results: The maximum difference at the center point was -4.65% for the 2 cm diameter target. The RMSE between the calculated and measured off-axis dose distributions indicated that the measured distribution for the 2 cm diameter target deviated from the calculation and was less accurate than that for the 5 cm diameter target. In addition, the prescribed 95% dose (D95) did not cover the PTV for the 2 cm diameter target, and its average dose was the lowest of all sizes. Conclusion: This study demonstrated that a small PTV in low-density lung tissue is not sufficiently covered by the prescribed dose. All evaluation indexes for the 2 cm diameter target differed markedly from those of the other sizes, showing that a very small PTV is delivered less accurately, which affects the results of radiation therapy. An extended margin for small PTVs in low-density lung tissue is considered necessary to enhance the target center dose, and the maximum dose does not need to be constrained during optimization.
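The two evaluation indexes named in this abstract, %Diff for the absolute center-point dose and RMSE for the off-axis profiles, can be expressed compactly as below. The arrays and dose values are synthetic placeholders, not the film or TPS data from this study.

```python
# Illustrative sketch of the two comparison metrics used above: point %Diff for
# absolute dose and RMSE for the off-axis (relative) dose profiles.
import numpy as np

def percent_diff(measured: float, calculated: float) -> float:
    """%Diff = (measured - calculated) / calculated * 100."""
    return (measured - calculated) / calculated * 100.0

def rmse(measured_profile, calculated_profile) -> float:
    """Root mean square error between two dose profiles."""
    m = np.asarray(measured_profile, dtype=float)
    c = np.asarray(calculated_profile, dtype=float)
    return float(np.sqrt(np.mean((m - c) ** 2)))

# Hypothetical center-point doses (Gy) and normalized off-axis profiles.
print(f"center %Diff = {percent_diff(5.72, 6.00):+.2f}%")
print(f"profile RMSE = {rmse([1.00, 0.98, 0.80, 0.40], [1.00, 0.99, 0.85, 0.45]):.3f}")
```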

Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.10
    • /
    • pp.261-268
    • /
    • 2021
  • Cloud computing is a computing paradigm in which users can utilize computing resources in a pay-as-you-go manner. In a cloud system, resources can be dynamically scaled up and down according to the user's demand, so the total cost of ownership can be reduced. Modeling and Simulation (M&S) is a well-known simulation-based method for obtaining engineering analyses and results through CAE software without actual experiments. In general, M&S is utilized in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization. The work procedure in M&S is divided into pre-processing, analysis, and post-processing steps. The pre- and post-processing steps are GPU-intensive jobs consisting of 3D modeling via CAE software, whereas the analysis step is CPU- or GPU-intensive. Because a general-purpose desktop takes a long time to analyze complicated 3D models, CAE software requires a high-end CPU- and GPU-based workstation to run smoothly. In other words, executing M&S requires high-performance computing resources. To mitigate the cost of equipping such substantial computing resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment. The HEMOS-Cloud service provides CAE software and computing resources to users in industry or academia who want to use M&S. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using inter-industry analysis. Using expert-guided coefficients, the estimated results are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment-inducing effect of 50 persons per KRW 1 billion.
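The ripple-effect figures above come from an inter-industry (input-output) analysis, in which induced production is obtained from a Leontief inverse and then multiplied by value-added and employment coefficients. The toy calculation below shows the mechanics with an invented two-sector coefficient matrix; none of the numbers are the coefficients used in the paper.

```python
# Toy input-output (Leontief) calculation of the kind used in inter-industry
# (economic ripple) analysis. The 2-sector coefficient matrix and demand vector
# are invented for illustration only.
import numpy as np

A = np.array([[0.20, 0.10],                  # inter-industry input coefficients
              [0.15, 0.25]])
final_demand = np.array([3.0, 1.0])          # new final demand, billion KRW

leontief_inverse = np.linalg.inv(np.eye(2) - A)
induced_production = leontief_inverse @ final_demand

value_added_ratio = np.array([0.45, 0.55])   # hypothetical value-added coefficients
employment_coef = np.array([6.0, 4.0])       # hypothetical persons per billion KRW of output

print("production inducement:", round(induced_production.sum(), 2), "billion KRW")
print("value-added effect   :", round(float(value_added_ratio @ induced_production), 2), "billion KRW")
print("employment effect    :", round(float(employment_coef @ induced_production), 1), "persons")
```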

Dosimetric Evaluation of a Small Intraoral X-ray Tube for Dental Imaging (치과용 초소형 X-선 튜브의 선량평가)

  • Ji, Yunseo;Kim, YeonWoo;Lee, Rena
    • Progress in Medical Physics
    • /
    • v.26 no.3
    • /
    • pp.160-167
    • /
    • 2015
  • Radiation exposure to patients from diagnostic imaging procedures is one of the most significant concerns in diagnostic x-ray systems. A miniature intraoral x-ray tube that can be inserted into the mouth for imaging was developed for the first time. Dose evaluation must be carried out before such an imaging device can be used clinically. In this study, dose evaluation of the new x-ray unit was performed by 1) using a custom-made in vivo pig phantom, 2) determining exposure conditions for clinical use, and 3) measuring the patient dose of the new system. On the basis of the DRLs (diagnostic reference levels) recommended by the KFDA (Korea Food and Drug Administration), the ESD (entrance skin dose) and DAP (dose area product) measurements for the new x-ray imaging device were designed and performed. The maximum voltage and current of the x-ray tube used in this study were 55 kVp and 300 mA. The active area of the detector was 72×72 mm with a pixel size of 48 μm. To determine the operating conditions of the new system, pig jaw phantom images showing the major tooth-associated tissues, such as the crown and pulp cavity, were acquired at 1 frame/sec. Changing the beam current from 20 to 80 μA, 50 frames of x-ray images were obtained for each beam current at the optimum exposure setting. Pig jaw phantom images were also acquired with two commercial x-ray imaging units and compared with the new x-ray device: the CS 2100 (Carestream Dental LLC) and the EXARO (HIOSSEN, Inc.), operated at 60 kV, 7 mA and 60 kV, 2 mA, respectively. Comparing the new x-ray device with the conventional units, the images of the new device around the teeth and their neighboring tissues turned out to be better despite its small x-ray field size. The ESD of the new x-ray device was 1.369 mGy at the beam condition giving the best image quality, 0.051 mAs, which is much less than the DRLs recommended by both the IAEA (International Atomic Energy Agency) and the KFDA. The dose distribution within the x-ray field was uniform, with a standard deviation of 5~10%. The DAP of the new x-ray device was 82.4 mGy·cm², which is less than the DRL established by the KFDA even though its x-ray field size was small. This study shows that the new x-ray imaging device offers better image quality and a lower radiation dose than conventional intraoral units. In addition, methods and know-how for studies of x-ray characteristics were accumulated through this work.
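DAP, one of the two dose quantities reported above, is simply the dose at the field plane multiplied by the irradiated area. The sketch below shows that arithmetic with placeholder numbers; it does not reproduce the measured 82.4 mGy·cm² value or the actual field size of the device.

```python
# Back-of-the-envelope sketch of the dose area product reported above.
# Numbers are placeholders, not the measured values for the intraoral tube.

def dose_area_product(dose_mgy: float, field_width_cm: float, field_height_cm: float) -> float:
    """DAP (mGy*cm^2) = dose at the field plane * irradiated area."""
    return dose_mgy * field_width_cm * field_height_cm

# Hypothetical entrance dose of 1.4 mGy over a 6 x 6 cm field.
dap = dose_area_product(1.4, 6.0, 6.0)
print(f"DAP = {dap:.1f} mGy*cm^2")
```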

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a suitable tool for market timing because, through its control function, it does not generate a trading signal when the market pattern is uncertain. Numeric data for rough set analysis must be discretized because the rough set approach only accepts categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds candidate categorical values by naïve scaling of the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
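Of the four discretization methods compared above, equal frequency scaling is the easiest to show concretely: cuts are placed so that roughly the same number of samples falls into each interval. The sketch below does this on synthetic data; the series, the number of intervals, and the helper names are assumptions, not the study's KOSPI 200 setup.

```python
# Minimal example of equal-frequency discretization: each numeric variable is split
# so that roughly the same number of samples falls into every interval.
# Uses synthetic data, not KOSPI 200 values.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points so that each interval holds ~len(values)/n_intervals samples."""
    quantiles = np.linspace(0, 100, n_intervals + 1)[1:-1]
    return np.percentile(values, quantiles)

rng = np.random.default_rng(0)
indicator = rng.normal(size=660)              # e.g., one technical-indicator series
cuts = equal_frequency_cuts(indicator, 4)
codes = np.digitize(indicator, cuts)          # categorical codes 0..3 fed to the rough set
print("cuts:", np.round(cuts, 3), "| samples per bin:", np.bincount(codes))
```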

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded due to the rapid growth of online channels, companies are carrying out many types of campaigns on a scale incomparable to the past. However, as campaign fatigue from duplicate exposure increases, customers tend to perceive campaigns as spam. From a corporate standpoint, the effectiveness of campaigns is also decreasing while investment costs increase, which leads to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system ultimately aims to increase the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data have many features, it is very important to select appropriate ones. If all of the input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step for analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method by using statistical characteristics of the data processed in the campaign system when searching for the feature subsets that underpin machine learning model performance. Features with a strong influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, which improves search efficiency and enables generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm, and campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners rarely used to select campaign targets, such as the combined product name, the average data consumption rate over three months, and wireless data usage over the last three months, were also selected as important features for campaign response. It was confirmed that basic attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
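The improved procedure described above can be approximated as a two-stage pipeline: a statistical pre-filter that drops clearly uninformative features, followed by a sequential forward search over the survivors. The scikit-learn sketch below illustrates that shape on synthetic data; it is an approximation of the approach, not the authors' exact algorithm, and the estimator, scores, and feature counts are arbitrary choices.

```python
# Hedged sketch of a two-stage feature selection: statistical pre-screening,
# then a sequential (forward) search over the surviving features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, n_informative=6, random_state=42)

# Stage 1: statistical pre-filter (ANOVA F-score) keeps the 15 most promising features.
prefilter = SelectKBest(score_func=f_classif, k=15).fit(X, y)
X_reduced = prefilter.transform(X)

# Stage 2: sequential forward search over the reduced set, scored by cross-validation.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=6, direction="forward", cv=5)
sfs.fit(X_reduced, y)

kept = prefilter.get_support(indices=True)[sfs.get_support(indices=True)]
print("selected original feature indices:", kept)
```

A full SFFS would additionally try to drop previously added features after each forward step; the sketch keeps only the forward direction for brevity.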

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. Failures in IT facilities are particularly irregular because of interdependence, and it is difficult to identify their cause. Previous studies on data center failure prediction treated each server as a single, independent state and did not assume that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: a failure may cause failures in other servers or be triggered by failures elsewhere. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within those sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server setting, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that each server contributes differently to a complex failure. This algorithm increases prediction accuracy by giving more weight to servers with a greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were treated as both a single-server state and a multi-server state, and the results were compared and analyzed. The second experiment improved prediction accuracy for the multi-server case by optimizing the threshold of each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. These results support the hypothesis that servers affect one another. This study thus confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server differs, played a role in improving the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
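The co-failure definition used in this work, events on different devices occurring within 5 minutes of each other, can be sketched as a simple chronological grouping pass. The example below uses invented log entries and field names, and chains events whose gaps are under 5 minutes as one approximation of that definition.

```python
# Small sketch of the 5-minute co-failure grouping described above.
# Event data are invented; the tuple layout is an assumption, not the actual schema.
from datetime import datetime, timedelta

events = [  # (device, failure_type, timestamp) -- hypothetical log entries
    ("server-01", "Server Down",       datetime(2020, 3, 1, 10, 0, 10)),
    ("db-01",     "DBMS Service Down", datetime(2020, 3, 1, 10, 3, 42)),
    ("net-07",    "Network Node Down", datetime(2020, 3, 1, 10, 30, 5)),
]

events.sort(key=lambda e: e[2])
window = timedelta(minutes=5)

groups, current = [], [events[0]]
for prev, cur in zip(events, events[1:]):
    if cur[2] - prev[2] <= window:      # within 5 minutes of the previous event
        current.append(cur)
    else:                               # gap too large: start a new co-failure group
        groups.append(current)
        current = [cur]
groups.append(current)

for i, g in enumerate(groups, 1):
    print(f"co-failure group {i}:", [(device, ftype) for device, ftype, _ in g])
```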