• Title/Summary/Keyword: generating

Search results: 7,329

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting our recent empirical results using financial data of Korean venture companies listed on the KOSDAQ market of the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. 
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, the hyperplane giving the maximum separation between classes; the support vectors are the training points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, mapping the original input space into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM for developing a data mining-based efficiency prediction model. 
We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one approach, a binary classification-based method, and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also present accuracy results within one-class errors, and the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification of efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size per class.
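The core of the pipeline above — a Gaussian-kernel SVM with a one-against-one multi-class decomposition, scored both on exact hits and on "within one class" accuracy — can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn; the paper's KIS/KOSDAQ financial variables and its two all-together MSVM formulations are not reproduced, and integer class indices stand in for ordered efficiency ratings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the paper's financial ratios (the real
# KOSDAQ data set is not public here).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
# SVC with a Gaussian (RBF) kernel; scikit-learn performs multi-class
# classification via the one-against-one decomposition internally.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(scaler.transform(X_tr), y_tr)

pred = clf.predict(scaler.transform(X_te))
exact_hit = np.mean(pred == y_te)                # exact-class hit ratio
within_one = np.mean(np.abs(pred - y_te) <= 1)   # "within one class" accuracy
print(f"exact: {exact_hit:.3f}, within-1: {within_one:.3f}")
```

The "within one class" score is never below the exact hit ratio, which is why the paper reports it separately as the investor-friendly metric.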

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies applying MSVMs to credit ratings. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea. The application is corporate bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. 
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
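Two of the decomposition schemes compared in this study, one-against-one and one-against-all, can be sketched with generic multiclass wrappers around a linear SVM. This is an illustrative comparison on synthetic data assuming scikit-learn; DAGSVM, ECOC, the all-together formulations, and the NICE bond-rating data set are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in for a multi-grade bond-rating data set.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=1)

results = {}
for name, wrapper in [("one-against-one", OneVsOneClassifier),
                      ("one-against-all", OneVsRestClassifier)]:
    # Each wrapper decomposes the 5-class problem into binary SVMs:
    # k(k-1)/2 pairwise classifiers for OvO, k classifiers for OvA.
    clf = wrapper(LinearSVC(max_iter=10000))
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```

Which scheme wins is data-dependent, which is precisely why the study benchmarks all six techniques on the same rating data.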

Catastrophic Art and Its Instrumentalized Selection System : From work by Hunter Jonakin and Dan Perjovschi (재앙적 예술과 그 도구화된 선별체계: 헌터 조너킨과 댄 퍼잡스키의 작품으로부터)

  • Shim, Sang-Yong
    • The Journal of Art Theory & Practice
    • /
    • no.13
    • /
    • pp.73-95
    • /
    • 2012
  • In terms of element and process, art today has already been fully systemized, yet tends to become even more so. All phases of creation and exhibition, appreciation and education, promotion and marketing are planned, adjusted, and decided within the order of a globalized, networked system, and each phase is executed through the system of management and control and the diverse means corresponding to it. From the stage of education onward, artists are guided to determine their styles and not to be motivated by the desire to become star artists or to run counter to mainstream tendency and fashion. In the process of planning an exhibition, the level of an artist's renown is considered more significant than the quality of the work. It is impossible to avoid such systems and institutions today; no one can escape or be freed from their influence. This discussion addresses a serious distortion in the selection system, part of the system connotatively called the "art museum system," especially as it evaluates artistic achievement and aesthetic quality. Called the "studio system" or "art star system," it distinguishes the successful minority from the failed absolute majority, justifies the results, and decides the discriminative compensations. The discussion begins from work by Hunter Jonakin and Dan Perjovschi. Its key point is not their art worlds as such but the shared truth referred to by the two as the collusive "art market" and "art star system." Through works based on their experiences, the two artists refer to the systems which restrict and confine them. Jonakin's Jeff Koons Must Die! is a video game conveying a critical comment on the authoritative operation of the museum system and star system. In this work, participants, whether viewers or artists, are destined to lose: the game is unwinnable. Players take the role of a person locked in a museum where a retrospective of the artist Jeff Koons is being held. 
The player can either look around and quietly observe the works, which causes a game over, or he can blow the classical paintings to pieces, causing the artist Koons to come out and reprimand him, also resulting in a game over. Like Jonakin's work, Dan Perjovschi's drawings also focus on the status of the artist, shrunken by the system. Most artists are ruined in the competition to survive within the museum system. As John Berger aptly pointed out, among the art systems of today, public collections (art museums) and private collections have become "something unbearable." The system justifies the selection of art stars and its frame of reference, disregarding the problem of producing innumerable victims in the process. What should be underlined above all else is that the present selection system seriously shrinks art's creative function and its function of generating meaning. In this situation, art might fall to the level of entertainment, accessible to more people and compromising with popularity. This discussion is based on the assumption that this situation might bring catastrophic results not only for the explicit victims of the system but also for the winners, or those defined as winners. The system of art is probably possible only through desire, or the distortion stemming from such desire, and it can flourish only under an economic system of avarice: a quantitatively expanding economy, abundance of style, the resort economies of Venice and Miami, and luxurious shopping malls with up-to-date facilities. The catastrophe here is ongoing rather than a sudden emergence, and dynamic, leading the system itself to a devastating end.


Study on the True Nature of Fung (風) and Its Application to Medicine (풍(風)의 본질(本質)과 의학(醫學)에서의 운용(運用)에 관(關)한 고찰(考察))

  • Back, Sang Ryong;Park, Chan Kug
    • Journal of Korean Medical classics
    • /
    • v.7
    • /
    • pp.198-231
    • /
    • 1994
  • Having examined the relation between the origins of Fung (風) and Gi (氣) and the meaning of Fung in medicine, I reached the following conclusions. First, Fung means a flux of Gi, and Gi shows its process by virtue of the form of Fung; that is, Fung means the motion of Gi. In other words, it is a flow of power, and accordingly the process of any power can be given the name Fung. Second, Samul (事物, things) ceaselessly interchange with the external world to sustain their existence and life, and they mount an adequate confrontation against the pressure of the outside. This motive power of life activity (生命活動) is Gi, and it shows its process through Fung. Third, Samul incessantly release the power they hold to the outside. The power thus released forms a territory of established power in their environment and keeps up their substance (實體) in space-time (時空). This can be named Fung because the field (場) of this power incessantly flows. Fourth, man sustains life on the ground of the creation of his own vigor (生氣), as an independent and self-supporting life body (生命體). The generation of this vigor and its adjustment process (調節作用) are supervised by Gan (肝). That is to say, Gan, as the Fung-Zang (風臟), plays the role of regulating and managing the process of Fung, or the action of vigor. Fifth, because the Gi-Gi adjustment process (氣機調節作用) of Gan is the same as the process of Fung, Fung operating as a cause of disease is attributed to disharmony in the process of the human body's Gi-Gi. The resulting pathological change is therefore attributed to abnormality of function caused by the incongruity of Gi-Gi (氣機) or by disorder in the direct motion of Gi-Hyul (氣血), because the incongruity of the body's Gi-Gi gives rise to abnormality of Zung-Gi (正氣), so that the body cannot properly cope with the invasion of Oi-Sa (外邪). 
Furthermore, Fung serves as the mediating body for the invasion of other Sa-Gi (邪氣) because of its dynamics; by virtue of this, Fung is named the head of all diseases. And because the incongruity of Gi-Gi takes a different form according to Zang-Bu (臟腑), Kyung-Lak (經絡), and region, the symptoms of a disease appear differently in line with them as well. Sixth, Fung-byung (風病) is roughly divided into Zung-Fung (中風) and Fung-byung in the narrow sense (狹義의 風病), attributed mainly to the failure of Jung-Gi (正氣) and to the invasion of Fung-Sa (風邪), respectively. Both kinds, however, cause problems in the direct motion of Gi-Hyul (氣血) and the harmony of Gi-Gi in the human body. In treatment, therefore, Zung-Fung requires rectifying Gi-Gi and the circulation of Gi-Hyul on the basis of supplementing Jung-Gi (正氣), while narrow-sense Fung-byung must restore the harmony of Gi-Gi together with Gu-Fung (驅風) - Jo-Gi (調氣), Sun-Gi (順氣), Hang-Gi (行氣). All existing living things, man included, maintain life on the ground of a pertinent harmony between the soul (精神) and the body (肉體); as soon as that harmony collapses, life simultaneously disappears. And Fung, which means the outer process between Gi and Gi, makes the action of their life cooperative and unified. Accordingly, the understanding of Fung has first to start from the holistic thought that not only all Samul (事物) but also the soul and the body are one.


Impacts of Energy Tax Reform on Electricity Prices and Tax Revenues by Power System Simulation (전력계통 모의를 통한 에너지세제 개편의 전력가격 및 조세수입에 대한 영향 연구)

  • Kim, Yoon Kyung;Park, Kwang Soo;Cho, Sungjin
    • Environmental and Resource Economics Review
    • /
    • v.24 no.3
    • /
    • pp.573-605
    • /
    • 2015
  • This study proposed scenarios of tax reform regarding taxation of bituminous coal for power generation from July 2014 and July 2015, and estimated their impact on the SMP (system marginal price), settlement price, and tax revenue from 2015 to 2029, in comparison with a standard scenario. To estimate these, a power system simulation was performed based on government plans, such as the demand-supply program, using a model customized to Korea's power system and operation. When a tax is imposed on bituminous coal for power generation while tax neutrality is maintained by reducing the tax rate on LNG, the short-term SMP is lower than that of the standard scenario. Because the cost of nuclear power generation is still smaller than the costs of other types of generation, and nuclear power rarely sets the SMP, the impact of taxing nuclear power on the SMP is almost nonexistent. Thus, if the SMP and the settlement price are closely related, it is difficult to slow the electrification of energy consumption by taxing power plant bituminous coal in the short term. In the mid to long term, however, if coal plant capacity becomes large enough, the taxation of power plant bituminous coal will increase the SMP. Therefore, if the tax reform imposes a tax on power plant bituminous coal in the short term, and the tax rate on LNG is revised after sufficiently large new bituminous coal plants come online, energy demand would be reduced through the higher electric charges brought about by the energy tax reform. Both imposing a tax on power plant bituminous coal and reducing the tax rate on LNG raise the settlement price above that of the standard scenario. In the mid to long term, the utilization of LNG combined-cycle plants would be lower due to an expansion of generating plants, and thus the tax rate on LNG would not affect the settlement price. 
Unlike its impact on the SMP, the taxation of nuclear power plants increases the settlement price due to the settlement adjustment factor. The net impact of energy taxation will depend on the degree of offset between the settlement price decrease from the expansion of energy supply and the settlement price increase from imposing a tax on energy. Among the taxable items, the tax on nuclear power plants generates the largest additional tax revenue. Considering tax revenues under the energy tax scenarios, the higher the tax rates on bituminous coal and nuclear power, the bigger the tax revenues.
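The SMP mechanics behind these results — a tax on coal passes through to the wholesale price only when coal is the marginal, price-setting unit — can be illustrated with a toy merit-order dispatch. The capacities, costs, and tax level below are hypothetical, not the study's calibrated model of the Korean system.

```python
# Toy merit-order dispatch: the SMP is the marginal cost of the last
# unit dispatched to meet demand. All figures are illustrative only.

def smp(plants, demand):
    """plants: list of (name, capacity_MW, marginal_cost); returns the SMP."""
    served = 0.0
    for name, cap, cost in sorted(plants, key=lambda p: p[2]):
        served += cap
        if served >= demand:
            return cost
    raise ValueError("demand exceeds total capacity")

base = [("nuclear", 20000, 5.0), ("coal", 30000, 50.0), ("lng", 25000, 120.0)]
coal_tax = 10.0  # hypothetical per-unit tax on bituminous coal
taxed = [(n, c, cost + coal_tax if n == "coal" else cost) for n, c, cost in base]

# High demand: LNG is marginal, so taxing coal leaves the SMP unchanged.
print(smp(base, 60000), smp(taxed, 60000))
# Lower demand: coal is marginal, so the tax passes through to the SMP.
print(smp(base, 40000), smp(taxed, 40000))
```

This is the short-term vs mid-term distinction in the abstract: while LNG sets the price, a coal tax (offset by an LNG tax cut) does not raise the SMP; once coal capacity is large enough to be price-setting, it does.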

Development of Optimum Traffic Safety Evaluation Model Using the Back-Propagation Algorithm (역전파 알고리즘을 이용한 최적의 교통안전 평가 모형개발)

  • Kim, Joong-Hyo;Kwon, Sung-Dae;Hong, Jeong-Pyo;Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.35 no.3
    • /
    • pp.679-690
    • /
    • 2015
  • In order to minimize damage from traffic accidents, the causes of accidents must be eliminated through technological improvements to vehicles and road systems. Generally, traffic accidents occur more often on roads that lack safety measures, which can be improved only with tremendous time and cost. In particular, traffic accidents at intersections are on the rise due to inappropriate environmental factors and are causing great losses for the nation as a whole. This study aims to present safety countermeasures against the causes of accidents by developing an intersection traffic safety evaluation model, and to diagnose vulnerable traffic points using the back-propagation algorithm (BPA), one of the artificial neural network methods recently investigated in the area of artificial intelligence. Furthermore, it aims to support more efficient traffic safety improvement projects in terms of operating signalized intersections and establishing traffic safety policies. As a result of this study, the approximate mean square error between the values predicted by the BPA and the actual measured numbers of traffic accidents is estimated to be 3.89, and the BPA appeared to have excellent traffic safety evaluation ability compared to a multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policies.
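A back-propagation network benchmarked against a multiple regression baseline, as in the evaluation above, can be sketched as follows. The intersection features and accident response below are synthetic and purely illustrative, assuming scikit-learn; the study's actual variables and its reported MSE of 3.89 are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic intersection features (e.g. traffic volume, conflict points)
# with a nonlinear accident-count response; purely illustrative data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))
y = 5 * X[:, 0] ** 2 + 3 * X[:, 1] * X[:, 2] + rng.normal(0, 0.2, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Back-propagation network (an MLP trained by gradient descent on the
# squared error) versus a multiple regression baseline.
bpa = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
mlr = LinearRegression().fit(X_tr, y_tr)

mse_bpa = mean_squared_error(y_te, bpa.predict(X_te))
mse_mlr = mean_squared_error(y_te, mlr.predict(X_te))
print(f"BPA MSE: {mse_bpa:.3f}, regression MSE: {mse_mlr:.3f}")
```

On a nonlinear response like this one, the network's MSE typically comes out below the linear model's, mirroring the comparison reported in the abstract.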

Biological Functions of N- and O-linked Oligosaccharides of Equine Chorionic Gonadotropin and Lutropin/Chorionic Gonadotropin Receptor

  • Min, K.S.
    • Korean Journal of Animal Reproduction
    • /
    • v.24 no.4
    • /
    • pp.357-364
    • /
    • 2000
  • Members of the glycoprotein hormone family, which includes CG, LH, FSH, and TSH, comprise two noncovalently linked $\alpha$- and $\beta$-subunits. Equine chorionic gonadotropin (eCG), also known as PMSG, has a number of interesting and unique characteristics, since it appears to be a single molecule that possesses both LH- and FSH-like activities in species other than the horse. This dual activity of eCG in heterologous species is of fundamental interest to the study of the structure-function relationships of gonadotropins and their receptors. The CG and LH $\beta$ genes are different in primates; in the horse, however, a single gene encodes both the eCG and eLH $\beta$-subunits. The subunit mRNA levels seem to be independently regulated, and their imbalance may account for differences in the quantities of $\alpha$- and $\beta$-subunits in the placenta and pituitary. The dual activities of eCG could be separated by removal of the N-linked oligosaccharide on the $\alpha$-subunit Asn 56 or of the CTP-associated O-linked oligosaccharides. The tethered eCG was efficiently secreted and showed LH-like activity similar to that of the dimeric eCG. Interestingly, the FSH-like activity of the tethered eCG was increased markedly in comparison with the native and wild-type eCG. These results suggest that this molecule can display particular modes of FSH-like activity rather than LH-like activity, and indicate that tethered constructs will be useful in the study of mutants that affect subunit association and/or secretion. A single-chain analog can also be constructed to include additional hormone-specific bioactivity, generating potentially efficacious compounds that have only FSH-like activity. The LH/CG receptor (LH/CGR), a membrane glycoprotein present on testicular Leydig cells and on ovarian theca, granulosa, luteal, and interstitial cells, plays a pivotal role in the regulation of gonadal development and function in males as well as in nonpregnant and pregnant females. 
The LH/CGR is a member of the family of G protein-coupled receptors, and its structure is predicted to consist of a large extracellular domain connected to a bundle of seven membrane-spanning $\alpha$-helices. LH/CGR phosphorylation can be induced with a phorbol ester but not with a calcium ionophore. The truncated form of the LHR was also down-regulated normally in response to hCG stimulation. In contrast, the cell lines expressing LHR-t631 or LHR-628, the two phosphorylation-negative receptor mutants, showed a delay in the early phase of hCG-induced desensitization, a complete loss of PMA-induced desensitization, and an increase in the rate of hCG-induced receptor down-regulation. These results clearly show that residues 632~653 in the C-terminal tail of the LHR are involved in PMA-induced desensitization, hCG-induced desensitization, and hCG-induced down-regulation. Recently, constitutively activating mutations of the receptor have been identified that are associated with familial male precocious puberty. Cells expressing LHR-D556Y bind hCG with normal affinity, exhibit a 25-fold increase in basal cAMP, and respond to hCG with a normal increase in cAMP accumulation. This mutation enhances the internalization of the free and agonist-occupied receptors ~2- and ~17-fold, respectively. We conclude that the state of activation of the LHR can modulate its basal and/or agonist-stimulated internalization. Since the internalization of hCG is involved in the termination of hCG actions, we suggest that the lack of responsiveness detected in cells expressing LHR-L435R is due to the fast rate of internalization of the bound hCG. This conclusion is supported by the finding that hCG responsiveness is restored when the cells are lysed and signal transduction is measured in a subcellular fraction (membranes) that cannot internalize the bound hormone.


Study on Genetic Evaluation using Genomic Information in Animal Breeding - Simulation Study for Estimation of Marker Effects (가축 유전체정보 활용 종축 유전능력 평가 연구 - 표지인자 효과 추정 모의실험)

  • Cho, Chung-Il;Lee, Deuk-Hwan
    • Journal of Animal Science and Technology
    • /
    • v.53 no.1
    • /
    • pp.1-6
    • /
    • 2011
  • This simulation study was performed to investigate the accuracy of breeding values estimated using genomic information (GEBV) within a Bayesian framework. Genomic information in the form of single nucleotide polymorphisms (SNPs) on a chromosome 100 cM in length was simulated with different marker distances (0.1 cM, 0.5 cM), heritabilities (0.1, 0.5), and half-sib family sizes (20 head, 4 head). For generating the simulated population in which animals carry genomic polymorphisms, we assumed that the number of quantitative trait loci (QTLs) was equal to the number of no-effect markers; markers were positioned at even distances, and QTLs were scattered. The accuracies of the estimated breeding values, measured as the correlation between true and estimated breeding values, were compared across the different marker distances, heritabilities, and family sizes. The accuracies of breeding values for animals having only genomic information were 0.87 and 0.81 at marker distances of 0.1 cM and 0.5 cM, respectively. These accuracies were also influenced by heritability (0.87 at $h^2$ = 0.10, 0.94 at $h^2$ = 0.50). According to half-sib family size, the accuracies were 0.87 and 0.84 for family sizes of 20 and 4, respectively; as the half-sib family size increased, the accuracy of the breeding values increased. Based on these results, it is concluded that the amount of marker information, the heritability, and the family size would influence the accuracy of estimated breeding values in genomic selection for animal breeding.
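The simulation logic above — simulate SNP genotypes and QTL effects, generate phenotypes at a chosen heritability, estimate marker effects, and measure accuracy as the correlation between true and estimated breeding values — can be sketched in a few lines. Ridge regression (SNP-BLUP) is used here as a simple stand-in for the paper's Bayesian estimation, and all population sizes are illustrative.

```python
import numpy as np

# Minimal genomic-selection simulation; ridge-regression (SNP-BLUP)
# estimates of marker effects stand in for a full Bayesian analysis.
rng = np.random.default_rng(1)
n_animals, n_markers, n_qtl = 500, 200, 100

M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # genotypes 0/1/2
qtl_idx = rng.choice(n_markers, n_qtl, replace=False)  # half the loci are QTLs
effects = np.zeros(n_markers)
effects[qtl_idx] = rng.normal(0, 1, n_qtl)

tbv = M @ effects                                        # true breeding values
h2 = 0.5
noise_var = tbv.var() * (1 - h2) / h2
y = tbv + rng.normal(0, np.sqrt(noise_var), n_animals)   # phenotypes

# Ridge (SNP-BLUP) solution: (M'M + lambda*I) b = M'y
lam = n_markers * (1 - h2) / h2
b_hat = np.linalg.solve(M.T @ M + lam * np.eye(n_markers), M.T @ y)
gebv = M @ b_hat

accuracy = np.corrcoef(tbv, gebv)[0, 1]
print(f"accuracy (corr of TBV and GEBV): {accuracy:.2f}")
```

Re-running this with denser markers, higher heritability, or more relatives per family raises the correlation, which is the pattern the study reports.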

Effect of Ozone Application on Sulfur Compounds and Ammonia Exhausted from Aerobic Fertilization System of Livestock Manure (가축분뇨 호기적 퇴.액비화시 발생하는 기체 중의 황 화합물과 암모니아에 대한 오존처리 효과)

  • Jeong, Kwang Hwa;Whang, Ok Hwa;Khan, Modabber Ahmed;Lee, Dong Hyun;Choi, Dong Yoon;Yu, Yong Hee
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.20 no.4
    • /
    • pp.118-126
    • /
    • 2012
  • In this study, two types of ozone-generating experimental instruments were installed in a commercial livestock manure fertilization facility that can treat one hundred tons of pig manure per day. The gas samples to be treated were collected from the upper part of the liquid fertilization system and the composting system of the facility. Each gas sample was drawn through a pipeline into the oxidation reactor by a suction blower, where it came into contact with ozone. The ammonia and sulfur compounds in the gas samples collected at the inlet and outlet points of the experimental instrument were analyzed. The oxidation effect of contact with ozone was higher for sulfur compounds than for ammonia: the ammonia content was reduced by only about 10%, whereas the sulfur compounds were reduced significantly. In the gas sample collected from the liquid fertilization system, the inlet concentrations of hydrogen sulfide ($H_2S$), methyl mercaptan (MM), dimethyl sulfide (DMS), and dimethyl disulfide (DMDS) were 50.091, 4.9089, 27.8109, and 0.4683 ppv, respectively; after oxidation by ozone, the concentrations were 1.2317, 0.3839, 14.7279, and 0.3145 ppv, respectively. Another sample, collected from the aerobic composting system, was oxidized under the same conditions. The inlet concentrations of $H_2S$, MM, DMS, and DMDS were 40.6682, 1.3675, 24.2458, and 0.8289 ppv, respectively; after oxidation, they were reduced to 3.013, ND, 8.8998, and 0.3651 ppv, respectively. With the other type of ozone generator, the inlet concentrations of $H_2S$, MM, DMS, and DMDS were reduced from 43.397, 1.4559, 3.6021, and 0.4061 to ND, ND, ND, and 0.21 ppv, respectively.
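From the inlet and outlet concentrations reported for the liquid fertilization stream, the removal efficiency of each sulfur compound follows directly. A short calculation using the figures quoted above makes the contrast with the roughly 10% ammonia reduction concrete.

```python
# Removal efficiency of each sulfur compound, computed from the reported
# inlet and outlet concentrations of the liquid-fertilization gas stream.
inlet  = {"H2S": 50.091, "MM": 4.9089, "DMS": 27.8109, "DMDS": 0.4683}
outlet = {"H2S": 1.2317, "MM": 0.3839, "DMS": 14.7279, "DMDS": 0.3145}

removal = {k: 100 * (inlet[k] - outlet[k]) / inlet[k] for k in inlet}
for k, v in removal.items():
    print(f"{k}: {v:.1f}% removed")
```

Hydrogen sulfide and methyl mercaptan are removed almost completely (over 90%), while DMS is reduced by roughly half, consistent with the study's conclusion that ozone oxidation is far more effective on sulfur compounds than on ammonia.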

Study on the Various Size Dependence of Ionization Chamber in IMRT Measurement to Improve Dose-accuracy (세기조절 방사선치료(IMRT)의 환자 정도관리에서 다양한 이온전리함 볼륨이 정확도에 미치는 영향)

  • Kim, Sun-Young;Lee, Doo-Hyun;Cho, Jung-Keun;Jung, Do-Hyeung;Kim, Ho-Sick;Choi, Gye-Sook
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.18 no.1
    • /
    • pp.1-5
    • /
    • 2006
  • Purpose: IMRT quality assurance (QA) consists of absolute dosimetry using an ionization chamber and relative dosimetry using film. We have generally used a 0.015 cc ionization chamber because of its small size and its ability to measure point dose, but such a chamber may be too small to give an accurate measurement. In this study, we examined the difference between calculated and measured doses in intensity modulated radiotherapy (IMRT), based on the observed/expected ratio, using the various kinds of ion chambers employed for absolute dosimetry. Materials and Methods: We performed six cases of sliding-window IMRT for head and neck patients. Radiation was delivered by a Clinac 21EX unit (Varian, USA) generating a 6 MV x-ray beam, equipped with an integrated multileaf collimator. The dose rate for IMRT treatment was set to 300 MU/min. The ion chamber was located 5 cm below the surface of the phantom at a source-axis distance (SAD) of 100 cm. Three types of ion chambers were used: 0.015 cc (pin-point type 31014, PTW, Germany), 0.125 cc (micro type 31002, PTW, Germany), and 0.6 cc (Farmer type 30002, PTW, Germany). The measurement point was carefully chosen to lie in a low-gradient area. Results: The experimental results show that the average differences between planned and measured values are ${\pm}0.91%$ for the 0.015 cc pin-point chamber, ${\pm}0.52%$ for the 0.125 cc micro type chamber, and ${\pm}0.76%$ for the 0.6 cc Farmer type chamber. The 0.125 cc micro type chamber is an appropriate size for dose measurement in IMRT. Conclusion: IMRT QA is an important procedure. Based on measurements with various types of ion chambers, we have demonstrated that the discrepancy between the calculated and measured dose distributions for IMRT plans depends on the size of the ion chamber. 
The reason is that a small ionization chamber has a low signal-to-noise ratio, while a large ionization chamber cannot be located precisely at the measurement point. Therefore, our results suggest that the 0.125 cc micro type chamber is an appropriate size for dose measurement in IMRT.
