• Title/Summary/Keyword: Statistical Methodology

Three Dimensional Quantitative Structure-Activity Relationship Analyses on the Fungicidal Activities of New Novel 2-Alkoxyphenyl-3-phenylthioisoindoline-1-one Derivatives Using the Comparative Molecular Similarity Indices Analyses (CoMSIA) Methodology Based on the Different Alignment Approaches (상이한 정렬에 따른 비교분자 유사성 지수분석(CoMSIA) 방법을 이용한 새로운 2-Alkoxyphenyl-3-phenylthioisoindoline-1-one 유도체들의 살균활성에 관한 3차원적인 정량적 구조와 활성과의 관계)

  • Sung, Nack-Do;Yoon, Tae-Yong;Song, Jong-Hwan;Jung, Hoon-Sung
    • The Korean Journal of Pesticide Science
    • /
    • v.9 no.1
    • /
    • pp.26-34
    • /
    • 2005
  • 3D-QSAR studies on the fungicidal activities of a series of new 2-alkoxyphenyl-3-phenylthioisoindoline-1-one derivatives (A & B) against resistant (RPC; 95CC7303) and sensitive (SPC; 95CC7105) strains of Phytophthora capsici were carried out using the comparative molecular similarity indices analyses (CoMSIA) methodology. Based on the results, two CoMSIA models, R5 and S1, were derived as the best models. The statistical results showed that these models had the best predictability and fitness for the fungicidal activities, based on the cross-validated value ($q^2=0.714{\sim}0.823$) and the non-cross-validated value ($r^2_{ncv.}=0.918{\sim}0.954$), respectively. The model R5 for the fungicidal activity against RPC was generated from the field fit alignment and a combination of the electrostatic field, the H-bond acceptor field, and the LUMO molecular orbital field. The model S1 (or S5) for the fungicidal activity against SPC was generated from the atom-based fit alignment and a combination of the steric field and the HOMO molecular orbital field. The models also show that inclusion of the H-bond acceptor field (A) improved their statistical significance. Graphical analyses of the CoMSIA contribution maps revealed that novel selectivity in fungicidal activity between the two fungi should be achievable by modifying the X-substituent on the N-phenyl group and the R-substituent on the S-phenyl group.

Online news-based stock price forecasting considering homogeneity in the industrial sector (산업군 내 동질성을 고려한 온라인 뉴스 기반 주가예측)

  • Seong, Nohyoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.1-19
    • /
    • 2018
  • Since stock movement forecasting is an important issue both academically and practically, studies related to stock price prediction have been actively conducted. Stock price forecasting research is classified by whether it uses structured or unstructured data, and in detail it is divided into technical analysis, fundamental analysis, and media-effect analysis. In the big data era, research on stock price prediction that incorporates big data is actively underway. Built on large amounts of data, stock prediction research mainly focuses on machine learning techniques. In particular, research methods that incorporate the effects of media have been attracting attention recently, among which studies that analyze online news and use it to forecast stock prices are becoming mainstream. Previous studies predicting stock prices through online news mostly perform sentiment analysis of the news, building a separate corpus for each company and constructing a dictionary that predicts stock prices by recording responses according to past stock prices. These existing studies have therefore examined the impact of online news on individual companies; for example, stock movements of Samsung Electronics are predicted with only online news about Samsung Electronics. In addition, a method of considering influences among highly relevant companies has also been studied recently; for example, stock movements of Samsung Electronics are predicted with news about Samsung Electronics and a highly related company such as LG Electronics. These previous studies examine the effects of news from a homogeneous industrial sector on the individual company. In the previous studies, homogeneous industries are classified according to the Global Industrial Classification Standard; in other words, the existing studies were conducted under the assumption that the industries divided by the Global Industrial Classification Standard are homogeneous.
However, existing studies have limitations in that they do not take into account influential companies with high relevance, nor do they reflect the existence of heterogeneity within the same Global Industrial Classification Standard sectors. Our examination of various sectors shows that some industrial sectors are not homogeneous groups. To overcome these limitations, our study suggests a methodology that reflects the heterogeneous effects of the industrial sector on the stock price by applying k-means clustering. Multiple Kernel Learning is mainly used to integrate data with various characteristics; it has several kernels, each of which receives different data and contributes to the prediction. To incorporate the effects of the target firm and its relevant firms simultaneously, we used Multiple Kernel Learning. Each kernel was assigned to predict stock prices with variables from the financial news of the industrial group to which the target firm belongs, as divided by k-means cluster analysis. In order to prove that the suggested methodology is appropriate, experiments were conducted on three years of online news and stock prices. The results of this study are as follows. (1) We confirmed that information about the industrial sectors related to the target company also contains meaningful information for predicting its stock movements, and that the machine learning algorithm has better predictive power when the news of the relevant companies is considered together with the target company's news. (2) It is important to predict stock movements with a varying number of clusters according to the level of homogeneity in the industrial sector. In other words, when stock prices are homogeneous within an industrial sector, it is important to use the relational effect at the level of the whole industry group, without cluster analysis or with a small number of clusters.
When stock prices are heterogeneous within an industry group, it is important to cluster the firms into groups. A contribution of this study is that we demonstrated that firms classified under the Global Industrial Classification Standard exhibit heterogeneity, and suggested that it is necessary to define relevance through machine learning and statistical analysis rather than simply taking it from the Global Industrial Classification Standard. It also contributes by proving the efficiency of a prediction model that reflects this heterogeneity.
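The clustering step described above can be sketched as k-means on firms' return series within one nominal sector; only the cluster containing the target firm would then feed the news-based predictor. The six firms and two latent sub-groups below are synthetic, not data from the study:

```python
# Minimal sketch of grouping firms in one nominal sector by k-means on their
# (synthetic) daily-return series; the target firm's cluster defines relevance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# two latent sub-groups inside one "sector": 6 firms, 250 trading days
base_a, base_b = rng.normal(size=250), rng.normal(size=250)
returns = np.vstack([base_a + 0.3 * rng.normal(size=250) for _ in range(3)] +
                    [base_b + 0.3 * rng.normal(size=250) for _ in range(3)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(returns)
print(labels)  # the two sub-groups receive different cluster labels
```

In the paper's pipeline the number of clusters would itself vary with the sector's level of homogeneity, which this toy fixes at two for illustration.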

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.93-107
    • /
    • 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of data collection by companies about customer needs. Most companies have failed to uncover such needs for products or services properly in terms of demographic data such as age, income levels, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with appropriate business information for current business circumstances. However, part of the problem is the increasing regulation of personal data gathering and privacy. This makes demographic or transaction data collection more difficult, and is a significant hurdle for traditional recommendation approaches because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper to academia is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the proposed approach in this paper attempts to construct double two-mode networks, such as a user-news network and news-issue network, and to integrate these into one quasi-network as the input for issue clustering. One of the contributions of this research is the development of a methodology utilizing enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks of news logs, we need some tools such as text mining and topic analysis. We used not only SAS Enterprise Miner 12.1, which provides a text miner module and cluster module for textual data analysis, but also NetMiner 4 for network visualization and analysis. 
Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network. For simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare these with the clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build a multi-layered two-mode network, after which we compared the results of issue clustering from SAS with those of the network analysis. The experimental dataset came from a web-site-ranking service and the biggest portal site in Korea. The sample dataset contains 150 million transaction logs and 13,652 news articles from 5,000 panels over one year. User-article and article-issue networks are constructed and merged into a user-issue quasi-network using NetMiner. Our issue-clustering results, obtained with the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), are consistent with the results from SAS clustering. In spite of extensive efforts to provide user information through recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support for decision-making in companies because it enriches user-related data with unstructured textual data.
To overcome the problem of insufficient data in traditional approaches, our methodology infers customers' real interests by utilizing web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
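The network-merging phase described above reduces to incidence-matrix multiplication: a user-article matrix times an article-issue matrix yields the user-issue quasi-network. A toy sketch with made-up counts (not the study's data):

```python
# Sketch of merging double two-mode networks into a user-issue quasi-network
# via matrix multiplication (toy incidence data; real logs come from web
# transaction records and topic analysis).
import numpy as np

U = np.array([[1, 1, 0],    # 2 users x 3 articles (visit counts)
              [0, 1, 1]])
A = np.array([[1, 0],       # 3 articles x 2 issues (topic assignments)
              [1, 1],
              [0, 1]])

user_issue = U @ A          # 2 users x 2 issues quasi-network
print(user_issue)
```

Each entry counts the article-mediated paths from a user to an issue, which is exactly the weight the merged quasi-network carries into the clustering phase.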

Optimization of Medium for the Carotenoid Production by Rhodobacter sphaeroides PS-24 Using Response Surface Methodology (반응 표면 분석법을 사용한 Rhodobacter sphaeroides PS-24 유래 carotenoid 생산 배지 최적화)

  • Bong, Ki-Moon;Kim, Kong-Min;Seo, Min-Kyoung;Han, Ji-Hee;Park, In-Chul;Lee, Chul-Won;Kim, Pyoung-Il
    • Korean Journal of Organic Agriculture
    • /
    • v.25 no.1
    • /
    • pp.135-148
    • /
    • 2017
  • Response surface methodology (RSM), combining a Plackett-Burman design and a Box-Behnken experimental design, was applied to optimize the ratios of the nutrient components for carotenoid production by Rhodobacter sphaeroides PS-24 in liquid-state fermentation. Nine nutrient ingredients, comprising yeast extract, sodium acetate, NaCl, $K_2HPO_4$, $MgSO_4$, mono-sodium glutamate, $Na_2CO_3$, $NH_4Cl$ and $CaCl_2$, were finally selected for optimizing the medium composition based on their statistical significance and positive effects on carotenoid yield. The Box-Behnken design was employed for further optimization of the selected nutrient components in order to increase carotenoid production. Based on the Box-Behnken assay data, a second-order coefficient model was set up to investigate the relationship between carotenoid productivity and the nutrient ingredients. The important factors influencing the optimal medium constituents for carotenoid production by Rhodobacter sphaeroides PS-24 were determined as follows: yeast extract 1.23 g, sodium acetate 1 g, $NH_4Cl$ 1.75 g, NaCl 2.5 g, $K_2HPO_4$ 2 g, $MgSO_4$ 1.0 g, mono-sodium glutamate 7.5 g, $Na_2CO_3$ 3.71 g, $NH_4Cl$ 3.5 g, $CaCl_2$ 0.01 g, per liter. A maximum carotenoid yield of 18.11 mg/L was measured in a confirmatory experiment in liquid culture using a 500 L fermenter.
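The second-order coefficient model fitted to the Box-Behnken data can be sketched with two coded factors instead of nine; the surface and coefficients below are synthetic, chosen only to show that least squares recovers the quadratic, interaction, and linear terms:

```python
# Minimal sketch of a second-order (quadratic) response-surface fit of the
# kind used in Box-Behnken analysis, on a synthetic two-factor yield surface.
import numpy as np

# coded factor levels (-1, 0, +1) for two factors
X1, X2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = X1.ravel(), X2.ravel()
y = 18 - 2 * x1**2 - 3 * x2**2 + 0.5 * x1 * x2   # synthetic yield surface

# design matrix for y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
print(np.round(coef, 3))  # recovers [18, 0, 0, -2, -3, 0.5]
```

With the fitted coefficients in hand, the optimal medium composition is read off from the stationary point of the surface, which is how the reported gram-per-liter optimum would be obtained.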

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.1-17
    • /
    • 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research is classified by whether it uses structured or unstructured data. With structured data such as historical stock prices and financial statements, past studies usually took a technical-analysis or fundamental-analysis approach. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured data type that accounts for a large share of this information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from unstructured data by applying text mining to online news. The methodology adopted in many papers is to forecast stock prices with the news of the target companies to be forecast. However, according to previous research, not only the news of a target company but also the news of related companies can affect its stock price. Finding a highly relevant company is not easy, though, because of market-wide effects and random signals. Thus, existing studies have identified highly relevant companies based primarily on pre-determined international industry classification standards. However, according to recent research, the Global Industry Classification Standard sectors differ in their internal homogeneity, which leads to a limitation: forecasting stock prices by taking all companies in a sector together, rather than only the truly relevant ones, can adversely affect predictive performance. To overcome this limitation, we used random matrix theory together with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable, because statistical efficiency is reduced.
Therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlations between companies. With the true correlations, we perform cluster analysis to find relevant companies. Based on the clustering, we used a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously. Each kernel was assigned to predict stock prices with features from the financial news of the target firm and its relevant firms. The results of this paper are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for a relevant company, looking in the wrong way can lower the AI's prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlations obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, this study shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, and it extends existing research that integrated artificial intelligence with complex-systems theory through transfer entropy. Second, this study stressed that finding the right companies in the stock market is an important issue. This suggests that it is important not only to study artificial intelligence algorithms but also to theoretically justify the input values.
Third, we confirmed that firms classified together under the Global Industrial Classification Standard (GICS) may have low relevance to one another, and suggested that it is necessary to define relevance theoretically rather than simply taking it from the GICS.
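The random-matrix filtering step described above can be sketched with the Marchenko-Pastur law: eigenvalues of an empirical correlation matrix inside the theoretical noise band are discarded, and only eigenmodes above the upper edge are treated as true inter-firm structure. The returns below are synthetic, with one genuinely correlated group planted:

```python
# Hedged sketch of Marchenko-Pastur filtering of a correlation matrix, as used
# in econophysics to separate true co-movement from sampling noise.
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_days = 50, 500
returns = rng.normal(size=(n_days, n_firms))
returns[:, :10] += rng.normal(size=(n_days, 1))    # one truly correlated group

corr = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

q = n_firms / n_days                               # aspect ratio N/T
lam_plus = (1 + np.sqrt(q)) ** 2                   # Marchenko-Pastur upper edge
signal = eigvals[eigvals > lam_plus]               # eigenmodes carrying structure
print(len(signal), round(float(eigvals.max()), 2))
```

The paper's pipeline would then rebuild a cleaned correlation matrix from the surviving eigenmodes and run the cluster analysis on that, rather than on the raw correlations.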

Optimization of the Indole-3-Acetic Acid Production Medium of Pantoea agglomerans SRCM 119864 using Response Surface Methodology (반응표면분석법을 활용한 Pantoea agglomerans SRCM 119864의 Indole-3-acetic acid 생산 배지 최적화)

  • Ho Jin, Jeong;Gwangsu, Ha;Su Ji, Jeong;Myeong Seon, Ryu;JinWon, Kim;Do-Youn, Jeong;Hee-Jong, Yang
    • Journal of Life Science
    • /
    • v.32 no.11
    • /
    • pp.872-881
    • /
    • 2022
  • In this study, we optimized the composition of the indole-3-acetic acid (IAA) production medium for Pantoea agglomerans SRCM 119864, isolated from soil, using response surface methodology. The IAA-producing P. agglomerans SRCM 119864 was identified by 16S rRNA gene sequencing. Eleven medium components are known to affect IAA production, so the effect of each component on IAA production was investigated using a Plackett-Burman design (PBD). Based on the PBD, sucrose, tryptone, and sodium chloride were selected as the main factors enhancing IAA production at the optimal L-tryptophan concentration. The predicted maximum IAA production (64.34 mg/l) was obtained at a sucrose concentration of 13.38 g/l, tryptone of 18.34 g/l, sodium chloride of 9.71 g/l, and L-tryptophan of 6.25 g/l, using a hybrid-design experimental model. In the experiment, the nutrient broth medium supplemented with 0.1% L-tryptophan as the basal medium produced 45.24 mg/l of IAA, whereas the optimized medium produced 65.40 mg/l of IAA, a 44.56% increase in efficiency. It was confirmed that the IAA production of the designed optimal medium was very similar to the predicted IAA production. The statistical significance and suitability of the experimental model were verified through analysis of variance (ANOVA). Therefore, in this study, we determined the optimal growth-medium composition for maximum production of IAA, which can contribute to sustainable agriculture and increased crop yield.
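The Plackett-Burman screening step used above can be sketched with the standard 12-run design for 11 factors: each factor's main effect is the mean response at its high level minus the mean at its low level. The factor names and yields below are synthetic stand-ins for the paper's medium components:

```python
# Sketch of Plackett-Burman screening: a 12-run design for 11 factors, with
# synthetic responses in which only factors 0 and 1 (standing in for, say,
# sucrose and tryptone) truly matter.
import numpy as np

first = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])  # standard PB-12 generator row
design = np.vstack([np.roll(first, i) for i in range(11)] + [-np.ones(11, int)])

rng = np.random.default_rng(3)
y = 40 + 8 * design[:, 0] + 5 * design[:, 1] + rng.normal(0, 1, 12)  # synthetic yields

# main effect of each factor: (mean at +1) - (mean at -1); 6 runs at each level
effects = design.T @ y / 6
print(np.round(effects, 2))
```

Because the 12 columns are mutually orthogonal, the two planted effects stand out against near-zero estimates for the inert factors, which is exactly how the PBD singles out the significant medium components.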

Cases of Ethical Violation in Research Publications: Through Editorial Decision Making Process (편집심사업무 관점에서 학술지 윤리강화를 위한 표절 검증사례)

  • Hwang, Hee-Joong;Lee, Jung-Wan;Kim, Dong-Ho;Shin, Dong-Jin;Kim, Byoung-Goo;Kim, Tae-Joong;Lee, Yong-Ki;Kim, Wan-Ki;Youn, Myoung-Kil
    • Journal of Distribution Science
    • /
    • v.15 no.5
    • /
    • pp.49-52
    • /
    • 2017
  • Purpose - To improve and strengthen existing publication and research ethics, KODISA has identified and presented various cases that have violated publication and research ethics and principles in recent years. The editorial office of KODISA has been providing, and continues to provide, advice and feedback on publication ethics to researchers during the peer review and editorial decision-making process. Providing such advice and feedback ensures that researchers have an opportunity to correct their mistakes or make appropriate decisions and avoid violations of research ethics. The purpose of this paper is to identify different cases of ethical violation in research and to inform and educate researchers so that they can avoid violations of publication and research ethics. Furthermore, this article demonstrates how KODISA journals identify and penalize ethical violations and strengthen their publication ethics and practices. Research design, data and methodology - This paper examines different types of violations of publication and research ethics. The paper identifies and analyzes the ethical violations and combines them into five general categories, which are thoroughly examined and discussed. Results - Ethical violations occur in various forms at regular intervals; in other words, unethical researchers tend to commit different types of ethical violations repeatedly at the same time. The five categories of ethical violation in research are as follows: (1) Arbitrary changes or additions of author(s) happen frequently in thesis/dissertation-related publications.
(2) Self-plagiarism, submitting the same work or a mixture of previous works with or without proper citations, also occurs frequently; the most common form is altering previously reported statistical results and presenting them as the results of a new empirical analysis. (3) Translation plagiarism, another ethical violation in publication, is difficult to detect but occurs frequently. (4) Fabrication of data or statistical analysis also occurs frequently; KODISA requires authors to submit the results of the empirical analysis of the paper (the output of the statistical program) to prevent this type of violation. (5) Mashup or aggregator plagiarism, submitting a mix of several different works with or without proper citations and without alteration, is very difficult to detect, and KODISA journals consider this type of plagiarism the worst ethical violation. Conclusions - There are some individual cases of ethical violation in research and publication that could not be included in the five categories presented throughout the paper. KODISA and its editorial office should continue to develop, revise, and strengthen their publication ethics; learn and share different ways to detect ethical violations in research and publication; train and educate their editorial members and researchers; and analyze and share different cases of ethical violations with the scholarly community.

Determination of the Distribution Types of Point Rainfall Data and a Study of the Probable Rainfall Depth According to the Safe Project Life - Focusing on Three Domestic Stations: Seoul, Busan, and Daegu - (지점우량 자료의 분포형 설정과 내용안전년수에 따르는 확률강우량에 관한 고찰 - 국내 3개지점 서울, 부산 및 대구를 중심으로 -)

  • Lee, Won-Hwan;Lee, Gil-Chun;Jeong, Yeon-Gyu
    • Water for future
    • /
    • v.5 no.1
    • /
    • pp.27-36
    • /
    • 1972
  • This thesis is a study of the rainfall probability depth in three major areas of Korea: Seoul, Pusan, and Taegu. The purpose of the paper is to analyze rainfall in connection with the safe planning of hydraulic structures and their project life. The methodology is the statistical treatment of the rainfall data from the three areas, organized as follows. 1. Complementation of the rainfall data. We selected the maximum values among those obtained by three methods (the Fourier series method, the trend diagram method, and the mean value method) and used them to complement missing rainfall data in order to prevent calamities. 2. Statistical treatment of the data. The data were ordered from the smallest values, transformed into $\log$, $\sqrt{x}$, $\sqrt[3]{x}$, $\sqrt[4]{x}$, and $\sqrt[5]{x}$, and their statistical values were calculated by electronic computer. 3. Examination of the distribution types and determination of the optimum distribution types. The distribution types of the rainfall data were examined by the $\chi^2$-test, and part of the data was rejected in order to obtain normal rainfall distribution types; in this way, the optimum distribution types were determined. 4. Computation of the rainfall probability depth for the safe project life. We studied the interrelation between the return period and the safe project life, and present the rainfall probability depth for the safe project life. In conclusion, we set up the optimum distribution types of the rainfall depths, formulated the optimum distributions, and presented a chart of the rainfall probability depth as a function of the factor of safety and the project life.
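The interrelation between return period and project life that the study tabulates follows from the standard hydrologic risk formula: the probability $R$ that a $T$-year event is exceeded at least once in an $n$-year project life is $R = 1 - (1 - 1/T)^n$. A small sketch:

```python
# The return-period / project-life relation: risk of at least one exceedance
# of the T-year event during an n-year project life, and its inverse.
def exceedance_risk(T, n):
    """Probability of at least one exceedance of the T-year event in n years."""
    return 1.0 - (1.0 - 1.0 / T) ** n

def design_return_period(R, n):
    """Return period needed to keep the exceedance risk at R over n years."""
    return 1.0 / (1.0 - (1.0 - R) ** (1.0 / n))

# e.g. a structure with a 50-year project life designed to the 100-year
# rainfall still runs a ~39.5% chance of an exceedance during its life
print(round(exceedance_risk(100, 50), 3))
```

This is why a design return period must grow much faster than the project life if the acceptable risk is to stay small, which is the trade-off the study's charts present.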


A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2010
  • We investigated the decomposition characteristics and by-products of N-nitrosodimethylamine (NDMA) in a UV process using a design of experiment (DOE) based on the Box-Behnken design. The main factors were UV intensity ($X_1$, range: $1.5{\sim}4.5\;mW/cm^2$), NDMA concentration ($X_2$, range: 100~300 uM), and pH ($X_3$, range: 3~9), each at 3 levels, and four responses ($Y_1$, % of NDMA removal; $Y_2$, dimethylamine (DMA) formation (uM); $Y_3$, dimethylformamide (DMF) formation (uM); $Y_4$, $NO_2$-N formation (uM)) were set up to estimate the prediction models and the optimization conditions. The prediction models and optimization points obtained using canonical analysis were: $Y_1$ [% of NDMA removal] = $117+21X_1-0.3X_2-17.2X_3+2.43X_1^2+0.001X_2^2+3.2X_3^2-0.08X_1X_2-1.6X_1X_3-0.05X_2X_3$ ($R^2$ = 96%, Adjusted $R^2$ = 88%), with a maximum of 99.3% ($X_1:\;4.5\;mW/cm^2$, $X_2:\;190\;uM$, $X_3:\;3.2$); $Y_2$ [DMA conc.] = $-101+18.5X_1+0.4X_2+21X_3-3.3X_1^2-0.01X_2^2-1.5X_3^2-0.01X_1X_2+0.07X_1X_3-0.01X_2X_3$ ($R^2$ = 99.4%, Adjusted $R^2$ = 95.7%), with a maximum of 35.2 uM ($X_1:\;3\;mW/cm^2$, $X_2:\;220\;uM$, $X_3:\;6.3$); $Y_3$ [DMF conc.] = $-6.2+0.2X_1+0.02X_2+2X_3-0.26X_1^2-0.01X_2^2-0.2X_3^2-0.004X_1X_2+0.1X_1X_3-0.02X_2X_3$ ($R^2$ = 98%, Adjusted $R^2$ = 94.4%), with a maximum of 3.7 uM ($X_1:\;4.5\;mW/cm^2$, $X_2:\;290\;uM$, $X_3:\;6.2$); and $Y_4$ [$NO_2$-N conc.] = $-25+12.2X_1+0.15X_2+7.8X_3+1.1X_1^2+0.001X_2^2-0.34X_3^2+0.01X_1X_2+0.08X_1X_3-3.4X_2X_3$ ($R^2$ = 98.5%, Adjusted $R^2$ = 95.7%), with a maximum of 74.5 uM ($X_1:\;4.5\;mW/cm^2$, $X_2:\;220\;uM$, $X_3:\;3.1$). This study demonstrates that response surface methodology and the Box-Behnken statistical experimental design can provide statistically reliable results for the decomposition and by-products of NDMA under UV photolysis and for the determination of optimum conditions. Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
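The canonical-analysis step that locates each response's optimum can be sketched analytically: writing the fitted quadratic as $y = b_0 + \mathbf{b}^\top\mathbf{x} + \mathbf{x}^\top B\mathbf{x}$, the stationary point is $\mathbf{x}^* = -\tfrac{1}{2}B^{-1}\mathbf{b}$. The coefficients below are illustrative toys, not the fitted NDMA models:

```python
# Sketch of canonical analysis of a fitted quadratic response surface:
# the stationary point is x* = -(1/2) B^{-1} b (toy coefficients, not the
# paper's fitted NDMA models).
import numpy as np

b0 = 50.0
b = np.array([4.0, -2.0, 1.0])        # linear coefficients b1..b3
B = np.array([[-2.0, 0.5, 0.0],       # quadratic matrix: diagonal = pure
              [0.5, -1.0, 0.2],       # quadratic terms, off-diagonal = half
              [0.0, 0.2, -0.5]])      # of each interaction coefficient

x_star = np.linalg.solve(-2.0 * B, b)              # stationary point
y_star = b0 + b @ x_star + x_star @ B @ x_star     # response at the optimum
print(np.round(x_star, 3), round(float(y_star), 2))
```

Whether the stationary point is a maximum, minimum, or saddle is read from the signs of the eigenvalues of B, which is the part of canonical analysis that distinguishes the removal optimum from the by-product minima.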

A Survey on Fish Habitat Conditions of Domestic Rivers and Construction of Its Database (국내 어류 서식환경 조사 및 데이터베이스 구축)

  • Jung, Jin-Hong;Park, Ji-Young;Yoon, Young-Han;Lim, Hyun-Man;Kim, Weon-Jae
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.36 no.3
    • /
    • pp.221-230
    • /
    • 2014
  • In order to restore an ecologically damaged river, the freshwater fish that inhabit the target aquatic ecosystem have great applicability as essential indicators. Although information about the habitat conditions of freshwater fish is a key element reflecting the biological, physical, and chemical properties of the aquatic environment, it has not been applied effectively in ecological river restoration projects in Korea because of the lack of preceding research and an insufficient, scattered database. To cope with these problems, based on a nation-wide detailed investigation of domestic freshwater fish habitat conditions, we selected 70 species considering their potential as candidate flagship species, constructed a database of their population and their physical and chemical habitat properties, and suggested a methodology for applying it to river restoration projects. In particular, the utility of the database has been enhanced by additional statistical analysis presenting the resistance and optimum ranges of the physical and chemical habitat properties. It is expected that the database constructed in this study can be utilized for calculating and evaluating the appropriate ecological flow rate and target water quality for the selected flagship species (fish), and as basic data for the restoration of the river environment.
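One plausible way to derive the "optimum" and "resistance" ranges mentioned above is from central percentiles of the measured habitat values at sites where a species occurs; the dissolved-oxygen data below are synthetic and the percentile cutoffs are an assumption, not the paper's exact statistical procedure:

```python
# Hedged sketch: deriving optimum and resistance ranges for one habitat
# variable from occurrence-site measurements (synthetic dissolved-oxygen
# data; percentile cutoffs are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(4)
do_at_sites = rng.normal(8.0, 1.5, 200)           # DO (mg/L) at 200 occurrence sites

optimum = np.percentile(do_at_sites, [25, 75])    # narrow "optimum" range
resistance = np.percentile(do_at_sites, [5, 95])  # wider "resistance" range
print(np.round(optimum, 2), np.round(resistance, 2))
```

Stored per species and per variable, such range pairs are exactly the kind of entries that would let the database answer ecological-flow and target-water-quality queries for a chosen flagship species.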