• Title/Summary/Keyword: size-optimization

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang; Kim, Jin-soo
    • Asia Pacific Journal of Information Systems / v.24 no.4 / pp.443-472 / 2014
  • To survive in a globally competitive environment, enterprises must be able to solve diverse problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its varied problem-solving and advanced predictive capabilities, and big data systems have accordingly been adopted by many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to confer competitive superiority: whereas conventional IT has largely reached the limits of what it can enable, big data goes beyond technological feasibility and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without first deriving the strategic value to be achieved through it, firms struggle to realize that value and to utilize their data. According to a survey of 1,800 IT professionals in 18 countries, only 28% of corporations were utilizing big data well, and many respondents reported difficulty deriving strategic value and operating big data. Successful adoption requires that the strategic value be clearly identified and that environmental conditions, such as internal and external regulations and systems, be considered, but these factors were generally not reflected. Failures occurred because big data was adopted as an IT trend, in situations where the preconditions for adoption were not in place. Clearly understanding the strategic value obtainable through big data and systematically analyzing its applicability are essential for successful adoption, but corporations considering only partial benefits and technological aspects have not adopted it successfully. Prior research on big data has focused mostly on concepts, cases, and practical suggestions, without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying factors that influence successful implementation, and analyzing empirical models. To this end, factors that can affect the intention to adopt big data were derived by reviewing information-system success factors, strategic value perception factors, environmental considerations for information-system adoption, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data within corporations, and the responses were statistically analyzed.
The statistical analysis showed that strategic value perception and intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications are as follows. Theoretically, first, this study proposes factors affecting big data adoption intention by reviewing strategic value perception, environmental factors, and prior big data studies, and empirically analyzes and verifies the proposed variables and measurement items; it measures the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, it defines the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (industry type and firm size), and, by developing measurement items with demonstrated reliability and validity, lays a theoretical foundation for subsequent empirical research in the big data field. Third, by verifying the significance of the strategic value perception and environmental factors proposed in prior studies, it supports future empirical work on the factors affecting big data adoption. The practical implications are as follows. First, the study establishes an empirical basis for the big data field by investigating the causal influence of strategic value perception and environmental factors on adoption intention and by proposing measurement items with demonstrated reliability and validity. Second, it shows that strategic value perception positively affects big data adoption intention, highlighting the importance of that perception. Third, it proposes that a corporation adopting big data should do so only after precise analysis of its industry's internal environment. Fourth, by showing that the influencing factors differ with firm size and industry type, it argues that these should be considered when adopting big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached through products and services, productivity, decision making, and other areas, and exploited across all business fields; yet major domestic corporations consider only parts of the product and service fields. When adopting big data, firms should therefore review utilization in detail and design the system to maximize its use. Second, the study identifies the burdens corporations face at the adoption stage: the cost of system introduction, difficulty of use, and lack of trust in supplier firms.
Because global IT corporations dominate the big data market, domestic corporations' adoption of big data cannot help but depend on foreign vendors. Considering that Korea, despite being a leading IT country, lacks global IT corporations, big data can be seen as an opportunity to foster world-class companies, and the government should nurture such companies through active policy support. Third, corporations lack internal and external professionals for big data adoption and operation. With big data, deriving valuable insight from data matters more than system construction itself; this requires talent with academic knowledge and experience across fields such as IT, statistics, strategy, and management, developed through systematic education. By identifying and verifying the main variables that affect big data adoption intention, this study provides a theoretical basis for empirical research in the field and is expected to offer useful guidelines for corporations and policy makers considering big data implementation.
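
The structural-equation step described above lends itself to a short illustration. Below is a minimal sketch of such a model using the semopy package; the construct and item names (sv1, env1, ai1, and so on) are hypothetical placeholders, not the authors' actual survey instrument.

```python
# Minimal SEM sketch with semopy; construct and item names are hypothetical
# placeholders for the survey instrument, which is not reproduced here.
import pandas as pd
import semopy

desc = """
StrategicValue =~ sv1 + sv2 + sv3
Environment =~ env1 + env2 + env3
AdoptionIntention =~ ai1 + ai2 + ai3
AdoptionIntention ~ StrategicValue + Environment
"""

data = pd.read_csv("survey_responses.csv")  # one column per measured item
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # loadings, path coefficients, p-values
```

Moderation by firm size and industry type could then be examined by estimating the model separately within each subgroup and comparing path coefficients.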

Optimization of Tube Voltage according to Patient's Body Type during Limb examination in Digital X-ray Equipment (디지털 엑스선 장비의 사지 검사 시 환자 체형에 따른 관전압 최적화)

  • Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology / v.11 no.5 / pp.379-385 / 2017
  • This study identifies the optimal tube voltages for limb examinations on a digital radiography (DR) system as the patient's body type changes. The dose area product (DAP) was fixed at 5.06 dGy·cm² for the upper-limb examination and at 5.04 dGy·cm² for the lower-limb examination. The tube voltage was then set to four different stages, and images were acquired three times at each stage. To simulate changes in body type, the thickness of the limbs was increased by 10 mm up to 30 mm. For quantitative evaluation, ImageJ was used to calculate the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of the four groups defined by tube voltage, and statistically significant differences were tested with the Kruskal-Wallis test at a 95% confidence level. For qualitative analysis, predetermined items were scored on a 5-point Likert scale. In both the upper-limb and lower-limb examinations, CNR and SNR decreased as tube voltage increased; they also decreased as limb thickness increased. In the qualitative evaluation of the upper limbs, the score rose with tube voltage, from 3.6 at 40 kV to a maximum of 4.6 at 55 kV; the mean score for the lower limbs was 4.4 regardless of tube voltage. As either the upper or lower limbs became thicker, scores generally decreased, dropping sharply at 40 kV for the upper limbs and at 50 kV for the lower limbs. For patients of standard thickness, optimized images are obtained at 45 kV for the upper limbs and 50 kV for the lower limbs; when limb thickness increases, it is best to set the tube voltage to 50 kV for the upper limbs and 55 kV for the lower limbs.
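
As a rough illustration of the quantitative step, the following numpy sketch computes CNR and SNR from ROI statistics using common definitions; the abstract's values came from ImageJ, whose exact formulas may differ.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of pixel values inside a boolean ROI."""
    vals = image[mask]
    return vals.mean(), vals.std()

def snr(image, signal_mask):
    """SNR as mean signal over its standard deviation (one common definition)."""
    mean_s, std_s = roi_stats(image, signal_mask)
    return mean_s / std_s

def cnr(image, signal_mask, background_mask):
    """CNR as ROI contrast normalized by background noise."""
    mean_s, _ = roi_stats(image, signal_mask)
    mean_b, std_b = roi_stats(image, background_mask)
    return abs(mean_s - mean_b) / std_b

# usage: image is a 2-D array exported from the radiograph; the two masks
# are boolean arrays selecting the object and background regions
```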

Dose Verification Using Pelvic Phantom in High Dose Rate (HDR) Brachytherapy (자궁경부암용 팬톰을 이용한 HDR (High dose rate) 근접치료의 선량 평가)

  • 장지나; 허순녕; 김회남; 윤세철; 최보영; 이형구; 서태석
    • Progress in Medical Physics / v.14 no.1 / pp.15-19 / 2003
  • High dose rate (HDR) brachytherapy for treating cervical carcinoma has become popular because it eliminates many of the problems associated with conventional brachytherapy. To improve the clinical effectiveness of HDR brachytherapy, the dose calculation algorithm, optimization procedures, and image registration need to be verified by comparing the dose distributions from the planning computer with those measured in a phantom. In this study, a phantom was fabricated to verify absolute doses and relative dose distributions, and the doses measured in the phantom were compared with the treatment planning system. The phantom was designed so that dose distributions could be quantitatively evaluated with high-spatial-resolution dosimeters: small thermoluminescent dosimeter (TLD) chips less than 1/8" in size and film dosimetry with a spatial resolution below 1 mm were used to measure the radiation doses in the phantom. The phantom, called a pelvic phantom, was made from water and tissue-equivalent acrylic plates. To hold the HDR applicators firmly in the water phantom, they were inserted into grooves in an applicator holder. The dose distributions around the applicators, including points A and B, were measured by placing a series of TLD chips (TLD-to-TLD distance: 5 mm) in three TLD holders and three verification films in orthogonal planes. A Nucletron Plato treatment planning system and a Microselectron Ir-192 source unit were used. The results showed good agreement between the treatment plan and the measurements: absolute doses agreed within ±4.0% at points A and B and at the bladder and rectum points, and the relative dose distributions from film dosimetry agreed well with those calculated by the planning computer. This pelvic phantom could be useful for verifying the dose calculation algorithm and the accuracy of the image localization algorithm in an HDR planning computer. Dose verification with film dosimetry and TLDs as quality assurance (QA) tools is currently being undertaken at the Catholic University, Seoul, Korea.
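
The ±4.0% agreement is a percent difference between TLD-measured and planned point doses. A minimal sketch of that check follows; the point doses below are illustrative, not the paper's data.

```python
# Percent difference between planned and TLD-measured doses at reference
# points; the point doses below are illustrative, not the paper's data.
planned = {"A": 8.50, "B": 2.10, "bladder": 5.60, "rectum": 4.90}   # Gy
measured = {"A": 8.62, "B": 2.05, "bladder": 5.48, "rectum": 5.02}  # Gy

for point, d_plan in planned.items():
    diff = 100.0 * (measured[point] - d_plan) / d_plan
    status = "within tolerance" if abs(diff) <= 4.0 else "investigate"
    print(f"point {point}: {diff:+.1f}% ({status})")
```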

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan; Jung, Woo-Jin; Lee, Sang-Yong Tom
    • Asia Pacific Journal of Information Systems / v.24 no.2 / pp.233-253 / 2014
  • Can the stock market really be predicted? Stock market prediction has attracted attention from many fields, including business, economics, statistics, and mathematics. Early research was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are driven largely by new information rather than by present and past prices; being unpredictable, they follow a random walk. Despite these theories, Schumaker [2010] observed that people keep trying to predict the stock market using artificial intelligence, statistical estimation, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms for modeling future prices. Artificial intelligence approaches that deal with optimization and machine learning include genetic algorithms, support vector machines (SVM), and neural networks, while statistical approaches typically predict the future from past market data. Recently, financial engineers have started predicting stock price movements using SNS data. SNS is where people's opinions and ideas flow freely and shape others' beliefs: through word of mouth, people share product experiences, subjective feelings, and the accompanying sentiment or mood. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web, and opinion mining is the data mining domain that extracts such public opinions from SNS. There have been many studies on opinion mining from web sources such as product reviews, forum posts, and blogs. Building on this literature, we examine the effects of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposure on stock returns, using Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics tracks trends and public opinion in Twitter and blogs using natural language processing and analysis tools: it collects Twitter sentences in real time, breaks them into word units, and extracts keywords. We classify firms' SNS exposure into two groups, positive and negative. To test the correlation and causation between SNS exposure and stock returns, we collected the stock prices of 252 firms and the KRX100 index on the Korea Exchange (KRX) from May 25, 2012 to September 1, 2012, along with public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We ran regressions of stock returns on the number of SNS exposures and, having checked the correlation, performed Granger causality tests to establish the direction of causation. We find that the total number of SNS exposures is positively related to stock returns, as is the number of positive mentions; the number of negative mentions is negatively related, but this relationship is not statistically significant, meaning that the impact of positive mentions is statistically larger than that of negative mentions.
We also investigate whether the impacts are moderated by industry type and firm size, and find that SNS exposure effects are larger for IT firms than for non-IT firms, and larger for small firms than for large firms. The Granger causality tests show that changes in stock returns are caused by SNS exposure, while causation in the opposite direction is not significant; the relationship between SNS exposure and stock prices is therefore uni-directional. The more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes do not appear to cause more SNS mentions.
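
The Granger causality step can be illustrated with statsmodels; in the sketch below the file and column names are placeholders, not the study's actual dataset.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# File and column names are placeholders for one firm's daily series.
df = pd.read_csv("firm_series.csv", parse_dates=["date"], index_col="date")

# statsmodels tests whether the SECOND column Granger-causes the FIRST.
# H0: SNS mentions do not Granger-cause stock returns.
grangercausalitytests(df[["stock_return", "sns_mentions"]], maxlag=5)

# Reverse direction: do stock returns Granger-cause SNS mentions?
grangercausalitytests(df[["sns_mentions", "stock_return"]], maxlag=5)
```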

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has long been an important topic in accounting and finance. Because the costs associated with bankruptcy are high, the accuracy of bankruptcy prediction matters greatly to financial institutions, and many researchers have addressed the topic over the past three decades. This study uses ensemble models to improve bankruptcy prediction performance. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have proven very useful for improving the generalization ability of classifiers. Bagging is the most commonly used method for constructing ensemble classifiers: different training subsets are drawn randomly with replacement from the original training set, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful ones from the original set. Although instance selection and bagging are both well known in data mining, few studies have integrated them. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) to improve SVM performance. GA is an efficient optimization procedure based on the theory of natural selection and evolution: it embodies survival of the fittest by progressively accepting better solutions, searching via a population of solutions from which better ones are created rather than incrementally changing a single solution. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation, with string-coded solutions evaluated by a fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA selects the optimal instance subset used as input to the bagging model. The chromosome is encoded as a binary string representing the instance subset; the population size was 100, the maximum number of generations 150, and the crossover and mutation rates 0.7 and 0.1, respectively. Model prediction accuracy served as the GA fitness function: an SVM was trained on the selected instance subset, and its accuracy on the test set was used as the fitness value to avoid overfitting. In the second phase, the optimal instance subset from the first phase was fed into the bagging model, with SVM as the base classifier and majority voting as the combining method. The proposed model was applied to bankruptcy prediction using a real data set of 1,832 externally non-audited Korean firms, of which 916 filed for bankruptcy and 916 did not. Financial ratios covering stability, profitability, growth, activity, and cash flow were examined through a literature review and basic statistical methods, and 8 ratios were selected as the final input variables.
The data were split into training, test, and validation sets. The proposed model was compared with several baselines, including a single SVM, a simple bagging model, and an instance-selection-based SVM, using McNemar tests to examine whether it significantly outperforms them. The experimental results show that the proposed model outperforms the other models.
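
The two-phase design above can be sketched compactly. The following Python outline implements GA-based instance selection with an SVM fitness function, followed by a bagged SVM on the selected subset. It uses the stated GA settings where practical; the data loading, mutation granularity, and ensemble size are illustrative, and this is a simplified reading of the method, not the paper's exact implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_te, y_te):
    """Accuracy on held-out data of an SVM trained on the selected instances
    (scoring on a separate set avoids overfitting, as described above)."""
    sel = mask.astype(bool)
    if sel.sum() < 2 or len(np.unique(y_tr[sel])) < 2:
        return 0.0
    return SVC().fit(X_tr[sel], y_tr[sel]).score(X_te, y_te)

def ga_select(X_tr, y_tr, X_te, y_te, pop_size=100, gens=150, cx_rate=0.7,
              mut_prob=0.01):
    """Phase 1: GA search over binary instance-selection chromosomes."""
    n = len(X_tr)
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(gens):
        scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])
        # tournament selection: each slot keeps the better of two random rivals
        pairs = rng.integers(0, pop_size, size=(pop_size, 2))
        pop = pop[np.where(scores[pairs[:, 0]] >= scores[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])].copy()
        # one-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < cx_rate:
                cut = int(rng.integers(1, n))
                pop[i, cut:], pop[i + 1, cut:] = (pop[i + 1, cut:].copy(),
                                                  pop[i, cut:].copy())
        # bit-flip mutation (per-bit probability is an illustrative choice;
        # the paper quotes a 0.1 mutation rate without this level of detail)
        flip = rng.random(pop.shape) < mut_prob
        pop[flip] ^= 1
    scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])
    return pop[scores.argmax()].astype(bool)

# Phase 2 (sketch): bagged SVMs with majority voting, trained on the
# GA-selected instances. X, y would hold the 8 financial ratios and labels.
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# best = ga_select(X_tr, y_tr, X_te, y_te)
# model = BaggingClassifier(SVC(), n_estimators=10).fit(X_tr[best], y_tr[best])
```

A held-out validation set, as in the paper, would then be used for the final model comparison rather than the test set that drove the GA fitness.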

The Preparation of Magnetic Chitosan Nanoparticles with GABA and Drug Adsorption-Release (GABA를 담지한 자성 키토산 나노입자 제조와 약물의흡수 및 방출 연구)

  • Yoon, Hee-Soo; Kang, Ik-Joong
    • Korean Chemical Engineering Research / v.58 no.4 / pp.541-549 / 2020
  • A drug delivery system (DDS) is a technology for designing existing or new drug formulations and optimizing drug treatment: it is intended to deliver drugs efficiently for the treatment of disease, minimize side effects, and maximize efficacy. In this study, the effect of tripolyphosphate (TPP) concentration on the size of chitosan nanoparticles (CNPs) produced by crosslinking with chitosan was optimized. The characteristics of Fe3O4-CNPs were also measured as a function of iron oxide (Fe3O4) loading, confirming that higher Fe3O4 content yielded better characteristics as a magnetic drug carrier. Through the ninhydrin reaction, calibration curves relating absorbance to γ-aminobutyric acid (GABA) concentration were obtained: Y = 0.00373exp(179.729X) - 0.0114 (R² = 0.989) at low concentrations (0.004 to 0.02 wt%) and Y = 21.680X - 0.290 (R² = 0.999) at high concentrations (0.02 to 0.1 wt%). Adsorption was constant at about 62.5% above 0.04 g of initial GABA. Measuring the amount of GABA released from GABA-Fe3O4-CNPs over time confirmed that drug release was complete after about 24 h. Finally, GABA-Fe3O4-CNPs prepared under the optimal conditions were spherical particles of about 150 nm that retained the desired particle properties, indicating their suitability as drug carriers.
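
The two reported calibration curves can be evaluated and inverted directly to read GABA concentration back from absorbance. A small numpy sketch follows; the equations are taken from the abstract, and the round-trip check is illustrative.

```python
import numpy as np

def absorbance(conc_wt_pct):
    """Reported ninhydrin calibration curves (X: GABA wt%, Y: absorbance)."""
    c = np.asarray(conc_wt_pct, dtype=float)
    low = 0.00373 * np.exp(179.729 * c) - 0.0114   # valid 0.004-0.02 wt%
    high = 21.680 * c - 0.290                      # valid 0.02-0.1 wt%
    return np.where(c < 0.02, low, high)

def concentration(y, high_range=True):
    """Invert a curve to estimate GABA concentration from absorbance."""
    y = np.asarray(y, dtype=float)
    if high_range:
        return (y + 0.290) / 21.680
    return np.log((y + 0.0114) / 0.00373) / 179.729

print(concentration(absorbance(0.05)))  # round trip: ~0.05 wt%
```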

Mass Screening of Lovastatin High-yielding Mutants through Statistical Optimization of Sporulation Medium and Application of Miniaturized Fungal Cell Cultures (Lovastatin 고생산성 변이주의 신속 선별을 위해 통계적 방법을 적용한 Sporulation 배지 개발 및 Miniature 배양 방법 개발)

  • Ahn, Hyun-Jung; Jeong, Yong-Seob; Kim, Pyeung-Hyeun; Chun, Gie-Taek
    • KSBB Journal / v.22 no.5 / pp.297-304 / 2007
  • For large-scale, rapid screening of high-yielding lovastatin mutants of the filamentous fungus Aspergillus terreus, one of the most important stages is testing as many mutated strains as possible. For this purpose, we developed a miniaturized cultivation method using 7 mL culture tubes instead of traditional 250 mL flasks (working volume 50 mL). To obtain the large quantities of conidiospores needed as inocula for the miniaturized cultures, four components, glucose, sucrose, yeast extract, and KH2PO4, which a Plackett-Burman design experiment had shown to enhance spore production, were investigated intensively. When these components were used at the optimum concentrations determined by response surface methodology (RSM) based on a central composite design (CCD), a maximum of 1.9×10^10 spores/plate was obtained, approximately a 190-fold increase over the commonly used PDA sporulation medium. Using the miniaturized cultures, intensive strain development programs were carried out to screen for high-yielding and highly reproducible lovastatin producers. We observed that, for maximum lovastatin production, the producers should be activated through a 'PaB' adaptation process during the early solid culture stage and then proliferated in condensed filamentous form in miniaturized growth cultures, so that optimal amounts of highly active cells can be transferred to the production culture tubes as reproducible inocula. Under these tightly controlled fermentation conditions, a compact pelleted morphology of optimal size (less than 1 mm in diameter) was successfully induced in the miniaturized production cultures, which proved essential for fully exploiting the producers' physiology and significantly enhancing lovastatin production. With continuous screening in the miniaturized cultures, 81% of the daughter cells derived from the high-yielding producers showed lovastatin production within 80~120% of that of parallel flask cultures. These results demonstrate that the miniaturized cultivation method developed here is an efficient high-throughput system for large-scale, rapid screening of stable, productive strains.
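
As a rough illustration of the Plackett-Burman screening step, the sketch below builds a two-level orthogonal design for the four components from a Hadamard matrix and estimates main effects; the response values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.linalg import hadamard

factors = ["glucose", "sucrose", "yeast extract", "KH2PO4"]

# Two-level orthogonal screening design (Plackett-Burman type) for 4 factors
# built from an 8-run Hadamard matrix; +1/-1 encode high/low concentrations.
design = hadamard(8)[:, 1:5]

# Hypothetical responses for the 8 runs, e.g. log10(spores/plate).
response = np.array([8.1, 9.6, 8.4, 9.9, 8.0, 9.7, 8.3, 10.1])

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = design.T @ response / (len(response) / 2)
for name, eff in zip(factors, effects):
    print(f"{name}: effect = {eff:+.2f}")
```

Factors with the largest positive effects would then be carried into the CCD/RSM optimization stage.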

Optimization of Production Medium by Response Surface Method and Development of Fermentation Condition for Monascus pilosus Culture (Monascus pilosus 배양을 위한 반응표면분석법에 의한 생산배지 최적화 및 발효조건 확립)

  • Yoon, Sang-Jin; Shin, Woo-Shik; Chun, Gie-Taek; Jeong, Yong-Seob
    • KSBB Journal / v.22 no.5 / pp.288-296 / 2007
  • Submerged culture of Monascus pilosus (KCCM 60160) was optimized with respect to the culture medium and fermentation conditions. Monacolin-K (lovastatin), a cholesterol-lowering agent produced by Monascus pilosus, may help maintain healthy lipid levels by inhibiting cholesterol biosynthesis. A Plackett-Burman design and response surface methodology were employed to optimize the culture medium for monacolin-K production. The optimized production medium components and concentrations (g/L) were: soluble starch 96, malt extract 44.5, beef extract 30.23, yeast extract 15, (NH4)2SO4 4.03, Na2HPO4·12H2O 0.5, L-histidine 3.0, KHSO4 1.0. Monacolin-K production improved approximately threefold compared with shake flask fermentation on the basic production medium. The effect of agitation speed (300, 350, 400, and 450 rpm) on monacolin-K production was also examined in a batch fermenter; the maximum production with the basic medium was 68 mg/L at an agitation speed of 500 rpm, and spherical pellets (average diameter 1.0~1.5 mm) were dominant throughout the fermentation. Based on these results, a maximum monacolin-K production of 185 mg/L was obtained with the optimized medium at a controlled pH of 6.5, an agitation rate of 400 rpm, an aeration rate of 1 vvm, and an inoculum size of 3%.
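
The RSM step fits a second-order model to the design points and locates its optimum. Below is a two-factor sketch with numpy/scipy; the coded design and titer values are hypothetical, not the paper's raw data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical coded central-composite data for two medium factors against
# monacolin-K titer (mg/L); not the paper's raw measurements.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]])
y = np.array([95, 120, 110, 150, 90, 140, 100, 135, 170, 168, 172])

def features(x):
    """Full quadratic model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Maximize the fitted surface within the experimental region.
res = minimize(lambda x: -(features(np.asarray(x)) @ beta),
               x0=[0.0, 0.0], bounds=[(-1.41, 1.41)] * 2)
print("optimum (coded units):", res.x, "predicted titer:", -res.fun)
```

The coded optimum would then be decoded back to actual concentrations and verified with confirmation runs.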

The Optimization of Reconstruction Method Reducing Partial Volume Effect in PET/CT 3D Image Acquisition (PET/CT 3차원 영상 획득에서 부분용적효과 감소를 위한 재구성법의 최적화)

  • Hong, Gun-Chul; Park, Sun-Myung; Kwak, In-Suk; Lee, Hyuk; Choi, Choon-Ki; Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.13-17 / 2010
  • Purpose: The partial volume effect (PVE) lowers image accuracy through underestimation in PET/CT 3D image acquisition; the error grows as resolution declines and lesions get smaller, which can influence test results. We studied the optimal image reconstruction method by varying the parameters that influence the PVE. Materials and Methods: Images were acquired for 10 minutes on a GE Discovery STE 16 using a NEMA 2001 IEC phantom with spheres of each size filled with 18F-FDG at a 4:1 hot-sphere-to-background ratio. Iterative reconstruction was used, varying the number of iterations from 2 to 50 and the subset number from 1 to 56. Fixed regions of interest were drawn on detailed parts of the image, and the % difference and signal-to-noise ratio (SNR) were computed from SUVmax. Results: For the 10 mm sphere, with the iteration number fixed at 2 and the subset number changed to 2, 5, 8, 20, and 56, the SNR was 0.19, 0.30, 0.40, 0.48, and 0.45, respectively; the total SNR over all spheres was 2.73, 3.38, 3.64, 3.63, and 3.38. Conclusion: From 6 to 20 iterations, the % difference and SNR were similar (3.47±0.09); beyond 20 iterations, SUVmax was increasingly underestimated owing to noise. At the same iteration number, SNR was highest for subset numbers of 8 to 20. Therefore, to reduce the partial volume effect for small lesions while keeping reconstruction time in mind, 6 iterations with subset numbers of 8~20 are recommended.
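
The SNR figures quoted for the 10 mm sphere can be tabulated against subset number with a few lines of Python; the percent-difference formula below is a common convention, not necessarily the authors' exact definition.

```python
import numpy as np

# SNR of the 10 mm sphere at each subset number, as quoted in the abstract.
subsets = np.array([2, 5, 8, 20, 56])
snr_10mm = np.array([0.19, 0.30, 0.40, 0.48, 0.45])

def pct_difference(a, b):
    """Symmetric percent difference (a common convention)."""
    return 100.0 * (a - b) / ((a + b) / 2.0)

ref = snr_10mm[subsets == 20][0]  # best-performing setting as reference
for s, v in zip(subsets, snr_10mm):
    print(f"subsets={s:2d}: SNR={v:.2f}, diff vs 20 subsets = "
          f"{pct_difference(v, ref):+.1f}%")
```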

Development of a lumped model to analyze the hydrological effects of landuse change (토지이용 변화에 따른 수문 특성의 변화를 추적하기 위한 Lumped모형의 개발)

  • Son, Ill
    • Journal of the Korean Geographical Society / v.29 no.3 / pp.233-252 / 1994
  • One major advantage of a lumped model is its ability to simulate extended flows; another is that it requires only conventional, readily available hydrological data (rainfall, evaporation, and runoff). These two advantages commend this type of model for analyzing the hydrological effects of landuse change. The experimental catchment (K11) at the Kimakia site in Kenya experienced three phases of landuse change over sixteen and a half years, and the Institute of Hydrology provided the catchment's hydrological data for this research. On the basis of Blackie's (1972) 9-parameter model, a new model (R1131) was constructed to reflect the hydrological characteristics of the catchment, considering: 1) the evapotranspiration required for landuse hydrology, 2) highly permeable soils, 3) the small catchment, 4) an input option for the initial soil moisture deficit, and 5) additional modules for water budget analysis. The new model has 11 parameters, 3 storages, and 1 input option. Using a number of initial conditions, the model was optimized to the data of the three landuse phases; model efficiencies were 96.78%, 97.20%, and 94.62%, and total-flow errors were -1.78%, -3.36%, and -5.32%. The bias of the optimized models was tested by several techniques. Extended flows were then simulated in prediction mode using the optimized model and the full experimental record, and used to analyze the changes in daily high and low flows caused by landuse change. The relative water use ratio of the clearing and seedling phase was 60.21%, while those of the next two phases were 81.23% and 83.78%. The annual peak flows of the second and third phases at a 1.5-year return period decreased by 31.3% and 31.2% relative to the first phase; at a 50-year return period, the annual peak flow increased by only 4.8% in the second phase and by 12.9% in the third. The annual minimum flow at a 1.5-year return period decreased by 34.2% in the second phase and 34.3% in the third; the decreases were smaller at larger return periods, 20.2% and 20.9% respectively at a 50-year return period. Two conclusions follow. First, the flow regime in catchment K11 changed with the landuse conversion from the clearing and seedling phase to the intermediate stage of pine plantation, but it was little affected after the pine trees reached a certain height. Second, the effects of the pine plantation on daily high and low flows diminished with increasing flood size and drought severity.
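
A lumped conceptual model of this kind routes rainfall through a few storages governed by a handful of calibrated parameters. The sketch below is a deliberately minimal single-storage water balance, far simpler than the 11-parameter, 3-storage R1131 model, with illustrative parameters and synthetic forcing.

```python
import numpy as np

def lumped_runoff(rain, pet, k=0.05, smax=150.0, s0=50.0):
    """One-storage daily water balance: the soil store fills with rain,
    loses actual ET scaled by wetness, and drains through a linear
    reservoir plus saturation excess. Illustrative only; R1131 itself
    uses 3 storages and 11 parameters."""
    s, flows = s0, []
    for p, e in zip(rain, pet):
        s = max(s + p - e * min(s / smax, 1.0), 0.0)  # rain in, actual ET out
        q = k * s                                     # linear reservoir flow
        spill = max(s - smax, 0.0)                    # saturation excess
        s -= q + spill
        flows.append(q + spill)
    return np.array(flows)

# Synthetic forcing: 100 days of rainfall and potential ET in mm/day.
rng = np.random.default_rng(1)
rain = rng.gamma(0.5, 8.0, size=100)
pet = np.full(100, 3.0)
q = lumped_runoff(rain, pet)
print(f"total rain {rain.sum():.0f} mm, total simulated flow {q.sum():.0f} mm")
```

Calibration then amounts to adjusting the parameters (here k and smax) until simulated flows match the observed record, after which the model can be run in prediction mode across landuse phases.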
