• Title/Summary/Keyword: 모형실험시스템 (model experiment system)

Search Result 855

Characteristics of Greenhouse Gas Emissions from Charcoal Kiln (숯가마에서 발생하는 온실가스 배출 특성)

  • Lee, Seul-Ki; Jeon, Eui-Chan; Park, Seong-Kyu; Choi, Sang-Jin
    • Journal of Climate Change Research / v.4 no.2 / pp.115-126 / 2013
  • Recently, Korea has been considering biomass burning as an emission source reflecting national characteristics and including it in the national emission inventory, but preceding research has rarely been conducted in Korea. Therefore, a study on the characteristics of greenhouse gas emissions from biomass burning is necessary, and it will also enable effective source management when the climate-atmospheric management system takes effect. In this study, a manufactured charcoal kiln was used, and the experiment was repeated three times to obtain reliable results. The sampling times were decided according to the degree of change in the charcoal kiln and the charcoal manufacturing process. The calculated greenhouse gas emission factors for the charcoal kiln were 668 g CO2/kg, 20 g CH4/kg, and 0.01 g N2O/kg. Using the emission factors developed in this study, the emissions from charcoal kilns in Korea were estimated as 46,040 ton CO2/yr, 1,378 ton CH4/yr, and 0.69 ton N2O/yr. Applying global warming potentials (GWP), CH4 emissions were 28,947 ton CO2eq./yr and N2O emissions were 214 ton CO2eq./yr. As a result, the gross emissions of charcoal kilns in Korea were 75,201 ton CO2eq./yr; however, since the oak used in this study is biomass, CO2 emissions are excluded, and the net emissions of charcoal kilns in Korea were therefore 29,161 ton CO2eq./yr.
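The aggregation in the abstract can be sketched as a short calculation. The GWP values used here (CH4 = 21, N2O = 310, from the IPCC Second Assessment Report) are an assumption, since the abstract does not state which GWP set was applied; the totals come out close to, but not exactly at, the reported figures.

```python
# Sketch of the emissions aggregation described above.
# GWP values (IPCC SAR: CH4 = 21, N2O = 310) are an assumption;
# the abstract does not state which GWP set was applied.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

# National annual emissions estimated from the kiln emission factors (ton/yr)
emissions = {"CO2": 46_040, "CH4": 1_378, "N2O": 0.69}

co2eq = {gas: amount * GWP[gas] for gas, amount in emissions.items()}
gross = sum(co2eq.values())

# The oak feedstock is biomass, so direct CO2 is excluded from the net total
net = gross - co2eq["CO2"]

print(f"gross ~ {gross:,.0f} ton CO2eq./yr")  # close to the reported 75,201
print(f"net   ~ {net:,.0f} ton CO2eq./yr")    # close to the reported 29,161
```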

Dose verification for Gated Volumetric Modulated Arc Therapy according to Respiratory period (호흡연동 용적변조 회전방사선치료에서 호흡주기에 따른 선량전달 정확성 검증)

  • Jeon, Soo Dong; Bae, Sun Myung; Yoon, In Ha; Kang, Tae Young; Baek, Geum Mun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.1 / pp.137-147 / 2014
  • Purpose : The purpose of this study is to verify the accuracy of dose delivery according to the patient's breathing cycle in gated Volumetric Modulated Arc Therapy (VMAT). Materials and Methods : A TrueBeam STx™ (Varian Medical Systems, Palo Alto, CA) was used in this experiment. Computed tomography (CT) images acquired with a RANDO phantom (Alderson Research Laboratories Inc., Stamford, CT, USA) were used in a computerized treatment planning system (Eclipse 10.0, Varian, USA) to create VMAT plans using a 10 MV FFF beam with 1,500 cGy/fx (cases 1, 2, 3) and 220 cGy/fx (cases 4, 5, 6) at a dose rate of 1,200 MU/min. Regular respiratory periods of 1.5, 2.5, 3.5, and 4.5 sec and patient respiratory periods of 2.2 and 3.5 sec were reproduced with the QUASAR™ Respiratory Motion Phantom (Modus Medical Devices Inc.), and the system was set up to deliver radiation in phase mode within the 30-70% range. The results were measured under the respective respiratory conditions with a 2-dimensional ion chamber array detector (I'mRT MatriXX, IBA Dosimetry, Germany) and a MultiCube phantom (IBA Dosimetry, Germany), and the gamma pass rates (3 mm, 3%) were compared with an IMRT analysis program (OmniPro I'mRT system software version 1.7b, IBA Dosimetry, Germany). Results : The gamma pass rates of cases 1-6 were 100.0, 97.6, 98.1, 96.3, 93.0, and 94.8% at a regular respiratory period of 1.5 sec; 98.8, 99.5, 97.5, 99.5, 98.3, and 99.6% at 2.5 sec; 99.6, 96.6, 97.5, 99.2, 97.8, and 99.1% at 3.5 sec; and 99.4, 96.3, 97.2, 99.0, 98.0, and 99.3% at 4.5 sec. When patient respiration was reproduced, the rates were 97.7, 95.4, 96.2, 98.9, 96.2, and 98.4% at an average respiratory period of 2.2 sec, and 97.3, 97.5, 96.8, 100.0, 99.3, and 99.8% at 3.5 sec. Conclusion : The experiment showed clinically reliable gamma pass rates of 95% or more when a regular breathing period of 2.5 sec or longer, or the patients' breathing, was reproduced.
While cases 5 and 6 showed results of 93.0% and 94.8% at a regular breathing period of 1.5 sec, accurate dose delivery appears possible under most respiratory conditions, because an analysis of 100 patients' respiratory periods found that no patient sustained a respiration period of 1.5 sec. However, pretreatment dose verification should still precede treatment, since the possibility of errors due to extremely short respiratory periods cannot be excluded; in addition, training at simulation and careful monitoring are necessary so that patients maintain stable breathing. With these measures, more reliable and accurate treatments can be administered.
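The gamma pass rate reported above can be illustrated with a minimal one-dimensional sketch using the 3%/3 mm criteria. The dose arrays and grid spacing here are hypothetical, and clinical software such as OmniPro I'mRT performs a far more complete 2D/3D search; this only shows the core metric.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma index <= 1.

    ref, meas : 1-D dose arrays on the same grid (hypothetical data);
    dd is the dose-difference criterion (fraction of max reference dose),
    dta_mm the distance-to-agreement criterion.
    """
    positions = np.arange(len(ref)) * spacing_mm
    norm = dd * ref.max()  # global dose-difference normalization
    pass_count = 0
    for i, d_ref in enumerate(ref):
        # Gamma at point i: minimum over all measured points of the
        # combined dose-difference / distance metric.
        dose_term = ((meas - d_ref) / norm) ** 2
        dist_term = ((positions - positions[i]) / dta_mm) ** 2
        gamma = np.sqrt(np.min(dose_term + dist_term))
        pass_count += gamma <= 1.0
    return pass_count / len(ref)

# Hypothetical example: measured profile shifted by one 1 mm voxel
ref = np.array([10, 20, 80, 100, 80, 20, 10], dtype=float)
meas = np.array([10, 10, 20, 80, 100, 80, 20], dtype=float)
print(f"{gamma_pass_rate(ref, meas, spacing_mm=1.0):.1%}")
```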

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, detecting market timing means determining when to buy and sell to get excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, by using a control function, it does not generate a trading signal when the pattern of the market is uncertain. Numeric values must be discretized for rough set analysis because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or expert interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first produces candidate categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance with rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method on the validation sample. In addition, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
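Equal frequency scaling, the first of the four discretization methods described above, can be sketched as follows; the indicator values here are hypothetical.

```python
def equal_frequency_cuts(values, n_intervals):
    """Choose cuts so roughly the same number of samples fall in each interval."""
    ordered = sorted(values)
    n = len(ordered)
    cuts = []
    for k in range(1, n_intervals):
        idx = round(k * n / n_intervals)
        # Place each cut halfway between the neighboring sorted samples
        cuts.append((ordered[idx - 1] + ordered[idx]) / 2)
    return cuts

def discretize(value, cuts):
    """Map a numeric value to its interval index (its categorical code)."""
    return sum(value > c for c in cuts)

# Hypothetical technical-indicator values (e.g., an RSI series)
rsi = [12, 35, 48, 51, 55, 60, 71, 74, 88]
cuts = equal_frequency_cuts(rsi, 3)          # two cuts -> three intervals
codes = [discretize(v, cuts) for v in rsi]   # three samples per interval
print(cuts, codes)
```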

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim; Kim, Ji Hui; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: first, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as optimized parameters for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods and methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can address unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec.
Also, the product group clustering could be replaced by other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
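The grouping-and-summing step of the pipeline above can be sketched as follows. The embedding vectors here are toy stand-ins for what a trained Word2Vec model would produce, and the product names, sales figures, and the 0.8 similarity threshold are all hypothetical.

```python
import numpy as np

# Toy stand-ins for Word2Vec embeddings of product names (in practice these
# would come from a model trained on the product-name corpus, e.g. with a
# vector dimension of 300 and window size of 15 as in the study).
embeddings = {
    "led lamp":     np.array([0.9, 0.1, 0.0]),
    "led lighting": np.array([0.8, 0.2, 0.1]),
    "steel pipe":   np.array([0.0, 0.9, 0.4]),
}
sales = {"led lamp": 120, "led lighting": 80, "steel pipe": 300}  # hypothetical

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def market_size(query_vec, threshold=0.8):
    """Sum sales over products whose name vector is similar to the query."""
    group = [name for name, vec in embeddings.items()
             if cosine(query_vec, vec) >= threshold]
    return group, sum(sales[name] for name in group)

# Query with a vector representing an "LED" product group (hypothetical)
query = np.array([0.85, 0.15, 0.05])
group, size = market_size(query)
print(group, size)
```

Raising or lowering the threshold directly widens or narrows the product group, which is how the level of market category is adjusted in the proposed approach.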

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung; Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, due to their different capital structures and debt-to-equity ratios, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways. However, such models are intended for companies in general, and they may not be appropriate for forecasting the bankruptcies of construction companies, which typically have high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. With this unique capital structure, criteria used to judge the financial risk of companies in general can be difficult to apply to the construction industry.
The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the results into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as belonging to either the bankruptcy risk group or the safe group. Representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models.
Existing studies using the traditional Z-score technique or machine learning-based bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies by company size. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
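The AdaBoost idea referenced above can be sketched minimally with one-dimensional decision stumps as weak learners. The financial-ratio data and labels below are hypothetical, and an actual study would use a library implementation (e.g., scikit-learn's AdaBoostClassifier) over many financial features rather than this single-feature toy.

```python
import math

def stump_predict(x, threshold, polarity):
    # A decision stump: +polarity above the threshold, -polarity below
    return polarity if x > threshold else -polarity

def train_adaboost(xs, ys, n_rounds=5):
    """AdaBoost with 1-D decision stumps.

    xs: feature values (a hypothetical debt-to-equity ratio),
    ys: labels (+1 = solvent, -1 = bankrupt).
    """
    n = len(xs)
    weights = [1 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    candidates = sorted(set(xs))
    for _ in range(n_rounds):
        # Pick the stump with the lowest weighted error
        best = None
        for t in candidates:
            for pol in (+1, -1):
                err = sum(w for x, y, w in zip(xs, ys, weights)
                          if stump_predict(x, t, pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Reweight: boost the weight of misclassified samples, then normalize
        weights = [w * math.exp(-alpha * y * stump_predict(x, t, pol))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Hypothetical debt-to-equity ratios with solvency labels
xs = [0.5, 0.8, 1.1, 1.9, 2.5, 3.0, 3.8, 4.2]
ys = [+1,  +1,  +1,  +1,  -1,  -1,  -1,  -1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])
```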