• Title/Summary/Keyword: Input Parameters


Water Quality Variation Dynamics between Artificial Reservoir and the Effected Downstream Watershed: the Case Study (인공댐과 그 영향을 받는 하류하천의 수질변동 역동성 : 사례 연구)

  • Han, Jung-Ho;An, Kwang-Guk
    • Korean Journal of Ecology and Environment
    • /
    • v.41 no.3
    • /
    • pp.382-394
    • /
    • 2008
  • The objective of this study was to analyze temporal trends of water chemistry and spatial heterogeneity between the dam site (Daecheong Reservoir, S1) and the downstream sites (S2~S4) using a water quality dataset (obtained from the Ministry of Environment, Korea) covering 2000~2007. Water quality, based on eight physical and chemical parameters, varied largely with year, sampling site, and discharge volume. Conductivity and nutrients (TN and TP) showed a stronger decreasing trend at the downstream site (S4) than at the dam site during the monsoon. Spatial variation increased from Daecheong Reservoir (S1) toward the downstream site (S4), and BOD and COD likewise increased downstream. Because of nutrient and pollutant inputs near S1, a lentic ecosystem during the monsoon, BOD and COD increased slightly there, whereas they decreased at S4, a lotic ecosystem during the monsoon, owing to dilution of nutrients and pollutants by discharge from the upstream dam (S1). Spatial variation of SS also increased from Daecheong Reservoir (S1) toward the downstream site (S4). Based on the dataset, efficient water quality management of point-source tributary streams is required for better downstream water quality. Monthly DO was lowest during the monsoon, when water temperature tends to increase. DO was lowest in October at S1 because turbid water entering Daecheong Reservoir during the monsoon affects the post-monsoon period. In contrast, water temperature increased toward the summer monsoon, despite some differences between the S1 and S4 environments. Overall, the water quality characteristics of the downstream region correlated closely with the discharge volume of Daecheong Reservoir. Thus, discharge control at the upstream dam mainly governs water quality variation in the downstream reach.

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns for investors, generally making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed the SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, the SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. The SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the training points closest to this hyperplane. When the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied the SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study we employed the SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach based on binary classification and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within one class of error when the accurate class is difficult to determine in the actual market. So we also present accuracy within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class problems.
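The one-against-one scheme described above can be sketched in a few lines: one binary classifier per unordered pair of rating classes, with majority voting at prediction time. To keep the sketch dependency-free, a plain perceptron stands in for the max-margin SVM solver, and the two-feature data are a toy example, not the KOSDAQ dataset; the pairwise wiring is the same either way.

```python
from itertools import combinations

def train_binary(X, y, epochs=100):
    """Linear classifier on labels in {-1, +1} (perceptron updates).
    A stand-in for an SVM solver; the one-vs-one wiring is unchanged."""
    w = [0.0] * (len(X[0]) + 1)                # last entry is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            feats = list(xi) + [1.0]
            score = sum(wj * xj for wj, xj in zip(w, feats))
            if yi * score <= 0:                # misclassified: update weights
                for j, xj in enumerate(feats):
                    w[j] += yi * xj
    return w

def ovo_fit(X, y):
    """One-against-one: one binary model per pair of classes."""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        idx = [i for i, yi in enumerate(y) if yi in (a, b)]
        Xp = [X[i] for i in idx]
        yp = [1 if y[i] == a else -1 for i in idx]
        models[(a, b)] = train_binary(Xp, yp)
    return models

def ovo_predict(models, x):
    """Each pairwise classifier casts one vote; the majority class wins."""
    votes = {}
    for (a, b), w in models.items():
        score = sum(wj * xj for wj, xj in zip(w, list(x) + [1.0]))
        winner = a if score > 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# Toy two-feature data with three well-separated "efficiency ratings" 0, 1, 2.
X = [(0, 0), (1, 0), (0, 1), (5, 0), (6, 0), (5, 1), (0, 5), (1, 5), (0, 6)]
y = [0, 0, 0, 1, 1, 1, 2, 2, 2]
models = ovo_fit(X, y)
```

With k classes this trains k(k-1)/2 binary models, which is exactly why the one-vs-one decomposition scales well when each pairwise problem is small.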

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.77-86
    • /
    • 2018
  • For the past few years, KISTI has been servicing an online simulation execution platform, called EDISON, allowing users to conduct simulations on various scientific applications supplied by diverse computational science and engineering disciplines. Typically, these simulations accompany large-scale computation and accordingly produce a huge volume of output data. One critical issue arising when conducting those simulations on an online platform stems from the fact that a number of users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, the same computing jobs lead to duplicate consumption of computing and storage resources at an undesirably fast pace. To overcome excessive resource usage by such identical simulation requests, in this paper we introduce a novel framework, called IceSheet, to efficiently manage simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are utilized not only for inspecting duplicate simulation requests but also for searching existing simulation results via an open-source search engine, ElasticSearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search and reuse of the stored simulation results. We implemented the proposed framework as a prototype using ElasticSearch in conjunction with the online simulation execution platform. Our evaluation of the framework was performed on real simulation execution-provenance records collected on the platform.
Once the prototyped IceSheet framework fully functions with the platform, users can quickly search for past parameter values entered into the desired simulation software and, if available, receive existing results for the same input parameter values. Therefore, we expect that the proposed framework will contribute to eliminating duplicate resource consumption and significantly reducing execution time for requests identical to previously-executed simulations.
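The duplicate-request check described above can be sketched as a provenance-keyed cache: hash the solver name together with a canonical form of the input parameters, and actually run the solver only on a cache miss. The class and solver below are hypothetical illustrations, not the IceSheet API, which stores much richer provenance records in ElasticSearch.

```python
import hashlib
import json

class SimulationCache:
    """Sketch of provenance-based deduplication for simulation requests."""

    def __init__(self):
        self.store = {}   # provenance key -> stored simulation result
        self.runs = 0     # how many times the solver actually executed

    def key(self, solver, params):
        # Canonical JSON so parameter ordering does not change the hash.
        blob = json.dumps({"solver": solver, "params": params}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def submit(self, solver, params, run):
        k = self.key(solver, params)
        if k not in self.store:          # cache miss: run the simulation
            self.runs += 1
            self.store[k] = run(params)
        return self.store[k]             # cache hit: reuse the stored result

cache = SimulationCache()
# "demo_solver" and its parameters are made up for illustration.
result1 = cache.submit("demo_solver", {"mesh": 64, "dt": 0.1},
                       run=lambda p: p["mesh"] * p["dt"])
result2 = cache.submit("demo_solver", {"dt": 0.1, "mesh": 64},  # same inputs, reordered
                       run=lambda p: p["mesh"] * p["dt"])
```

The second, identical request never reaches the solver, which is the resource saving the framework targets.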

Influence of Land Cover Map and Its Vegetation Emission Factor on Ozone Concentration Simulation (토지피복 지도와 식생 배출계수가 오존농도 모의에 미치는 영향)

  • Kyeongsu Kim;Seung-Jae Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.48-59
    • /
    • 2023
  • Ground-level ozone affects human health and plant growth. Ozone is produced by chemical reactions between oxides of nitrogen (NOx) and volatile organic compounds (VOCs) from anthropogenic and biogenic sources. In this study, two different land cover and emission factor datasets were input to the MEGAN v2.1 emission model to examine how these parameters contribute to biogenic emissions and ozone production. Input sensitivity scenarios were generated from combinations of land cover and vegetation emission factors, and the effects of BVOC emissions by scenario were investigated. From the air quality modeling results using CAMx, maximum 1-hour ozone concentrations were estimated at 62 ppb, 60 ppb, 68 ppb, 65 ppb, and 55 ppb for scenarios A, B, C, D, and E, respectively. For the maximum 8-hour ozone concentration, 57 ppb, 56 ppb, 63 ppb, 60 ppb, and 53 ppb were estimated by scenario. The difference attributable to land cover was up to 25 ppb, and that attributable to the emission factor was up to 35 ppb. From the model performance evaluation using ground ozone measurements over six regions (East Seoul, West Seoul, Incheon, Namyangju, Wonju, and Daegu), the model performed well in terms of the correlation coefficient (0.6 to 0.82). For the four urban regions (East Seoul, West Seoul, Incheon, and Namyangju), ozone simulations were not very sensitive to changes in BVOC emissions. For the rural regions (Wonju and Daegu), however, BVOC emissions affected ozone concentrations much more than in the previously mentioned regions, especially in scenario C. This implies the importance of biogenic emissions to ozone production over suburban to rural regions.
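The skill score used in the evaluation above, the Pearson correlation coefficient between measured and simulated ozone, can be computed directly; the hourly ozone values below are made up for illustration, not taken from the six monitoring regions.

```python
import math

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observations and simulations."""
    n = len(obs)
    mo = sum(obs) / n
    ms = sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

# Hypothetical hourly ozone values (ppb): observed vs. modeled.
obs = [30.0, 42.0, 55.0, 61.0, 48.0, 35.0]
sim = [28.0, 45.0, 52.0, 64.0, 50.0, 33.0]
r = pearson_r(obs, sim)
```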

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.2
    • /
    • pp.80-98
    • /
    • 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data processed with several statistical distributions, and a dimensionality reduction algorithm selected features through rankings of correlation coefficients between the observed data and the input data. The training model in this study was designed with the TimeDistributed wrapper applied to all convolutional layers of U-Net: the TimeDistributed wrapper allows a U-Net convolutional layer to be applied directly to 5-dimensional time series data while maintaining the time axis of the data, whereas every input to U-Net itself must be at least 3D. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. The dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
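The TimeDistributed idea above, applying one layer identically at every step along the time axis of 5-D (batch, time, height, width, channel) data, can be sketched without any deep learning dependency. The toy "layer" here is a global mean standing in for a U-Net convolution; a real model would wrap tf.keras layers instead.

```python
def time_distributed(layer, batch):
    """Apply the same layer function independently to every time step of a
    5-D input shaped (batch, time, height, width, channels), mirroring the
    semantics of the Keras TimeDistributed wrapper: the time axis survives
    untouched while the per-step layer sees only (height, width, channels)."""
    return [[layer(frame) for frame in sample] for sample in batch]

def global_mean(frame):
    """Toy stand-in for a convolutional layer: collapse H x W x C to a mean."""
    vals = [c for row in frame for cell in row for c in cell]
    return sum(vals) / len(vals)

# One sample, 3 time steps, each a 2x2 grid with 1 channel.
batch = [[[[[1.0], [3.0]], [[5.0], [7.0]]],      # t = 0, mean 4.0
          [[[2.0], [2.0]], [[2.0], [2.0]]],      # t = 1, mean 2.0
          [[[0.0], [4.0]], [[4.0], [8.0]]]]]     # t = 2, mean 4.0
out = time_distributed(global_mean, batch)
```

Note that the output keeps one entry per time step, which is exactly the property the postprocessing model relies on when feeding 5-D S2S data through 2-D convolutions.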

Prediction of Ammonia Emission Rate from Field-applied Animal Manure using the Artificial Neural Network (인공신경망을 이용한 시비된 분뇨로부터의 암모니아 방출량 예측)

  • Moon, Young-Sil;Lim, Youngil;Kim, Tae-Wan
    • Korean Chemical Engineering Research
    • /
    • v.45 no.2
    • /
    • pp.133-142
    • /
    • 2007
  • As the environmental pollution caused by excessive use of chemical fertilizers and pesticides worsens, organic farming using pasture and livestock manure is becoming increasingly necessary. The application rate of organic farming materials to the field is determined as a function of crop and soil types, weather, and cultivation surroundings. When livestock manure is used as an organic farming material, the volatilization of ammonia from field-spread manure is a major source of atmospheric pollution and leads to a significant reduction in the fertilizer value of the manure. Therefore, an ammonia emission model is needed to reduce ammonia emissions and to determine the appropriate application rate of manure. In this study, the ammonia emission rate from field-applied pig manure is predicted using an artificial neural network (ANN), where the Michaelis-Menten equation is employed as the ammonia emission rate model. The two parameters of the model (the total ammonia loss and the time to reach half of the total loss) are predicted using a feedforward-backpropagation ANN on the basis of the ALFAM (Ammonia Loss from Field-applied Animal Manure) database in Europe. The relative importance among the 15 input variables influencing ammonia loss is identified using the weight partitioning method. As a result, the ammonia emission is influenced mostly by the weather and the state of the manure.
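The Michaelis-Menten emission model above has exactly the two parameters the ANN predicts: the total ammonia loss and the time at which half of that loss is reached. A minimal sketch with illustrative parameter values, not values drawn from the ALFAM database:

```python
def ammonia_loss(t, n_total, k_m):
    """Cumulative ammonia loss at time t (hours after spreading) in the
    Michaelis-Menten form: n_total is the total (asymptotic) loss and k_m
    is the time at which half of the total loss has occurred."""
    return n_total * t / (k_m + t)

# Illustrative parameters: 40% of applied ammoniacal N lost, half after 12 h.
n_total, k_m = 40.0, 12.0
half = ammonia_loss(k_m, n_total, k_m)    # by construction, half of n_total
early = ammonia_loss(3.0, n_total, k_m)   # cumulative loss after 3 hours
```

The curve rises steeply at first and saturates at n_total, which is why the two ANN-predicted parameters fully characterize the emission history.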

Efficient Structural Safety Monitoring of Large Structures Using Substructural Identification (부분구조추정법을 이용한 대형구조물의 효율적인 구조안전도 모니터링)

  • Yun, Chung-Bang;Lee, Hyung-Jin
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.1 no.2
    • /
    • pp.1-15
    • /
    • 1997
  • This paper presents substructural identification methods for the assessment of local damage in complex and large structural systems. For this purpose, an auto-regressive moving average with exogenous input (ARMAX) model is derived for a substructure to process measurement data impaired by noise. Using the substructural methods, the number of unknown parameters in each identification can be significantly reduced, so the convergence and accuracy of estimation can be improved. Secondly, for damage assessment, the damage index is defined as the ratio of the current stiffness to the baseline value at each element. Damage is then estimated indirectly from the system matrices obtained through the substructural identification. To demonstrate the proposed techniques, several simulation and experimental example analyses are carried out on structural models of a 2-span truss, a 3-span continuous beam, and a 3-story building. The results indicate that the present substructural identification and damage estimation methods are effective and efficient for local damage estimation in complex structures.
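The damage index defined above, identified current stiffness over baseline stiffness at each element, can be sketched directly. The stiffness values and the 0.9 screening threshold below are illustrative assumptions, not numbers from the paper:

```python
def damage_indices(k_current, k_baseline):
    """Damage index per element: ratio of identified current stiffness to
    the baseline stiffness; values well below 1 indicate stiffness loss."""
    return [kc / kb for kc, kb in zip(k_current, k_baseline)]

def damaged_elements(indices, threshold=0.9):
    """Hypothetical screening rule: flag elements whose index falls below
    the threshold (the paper does not prescribe a specific cut-off)."""
    return [i for i, d in enumerate(indices) if d < threshold]

# Illustrative identified stiffnesses for a 4-element substructure.
k_base = [100.0, 100.0, 80.0, 120.0]
k_curr = [98.0, 70.0, 79.0, 119.0]   # element 1 has lost 30% of its stiffness
idx = damage_indices(k_curr, k_base)
flagged = damaged_elements(idx)
```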


Estimation on the Radius of Maximum Wind Speed using RSMC Best Track Data (RSMC 최적경로 자료를 이용한 태풍의 최대풍속반경 산정)

  • Ko, Dong Hui;Jeong, Shin Taek;Cho, Hongyeon;Jun, Ki Cheon;Kim, Yoon Chil
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.25 no.5
    • /
    • pp.291-300
    • /
    • 2013
  • Typhoon simulation methods are widely used to estimate sea surface wind speeds during typhoon periods, and the Holland (1980) model has been regarded as agreeing relatively well with observed wind data. Both JTWC and RSMC best track data are available for typhoon modeling, but the two datasets differ slightly because their generation processes differ. In this paper, a Newton-Raphson method is used to solve the two nonlinear equations, based on the Holland model, that are formed by two typhoon parameters: the longest radii of the 25 m/s and 15 m/s wind speeds, respectively. The solution is the radius of maximum wind speed, which is important for typhoon modeling. The method is based on the typhoon wind profile of the JMA, and it shows that the Holland model fits the temporal and spatial characteristics of typhoons better than the other models. When RSMC best track data are used, the method suggested in this study gives better and more reasonable estimates of the radius of maximum wind speed because the consistency of the input data is assured.
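The core computation above, Newton-Raphson iteration on the two Holland-profile equations V(r25) = 25 m/s and V(r15) = 15 m/s for the unknown radius of maximum wind (and the profile shape parameter B), can be sketched as follows. The pressure deficit, air density, and input radii are illustrative assumptions, not RSMC values, and the Coriolis term of the gradient wind is neglected to keep the sketch short.

```python
import math

RHO = 1.15      # air density, kg/m^3 (assumed)
DP = 5000.0     # central pressure deficit, Pa (illustrative)

def holland_speed(r, rm, b):
    """Holland (1980) wind profile (Coriolis term neglected)."""
    x = (rm / r) ** b
    return math.sqrt(b * DP / RHO * x * math.exp(-x))

def solve_rmax(r25, r15, rm0=30e3, b0=1.5, iters=30):
    """Newton-Raphson on V(r25)=25 and V(r15)=15 for the unknowns (Rmax, B),
    with a finite-difference Jacobian and simple physical clamps."""
    rm, b = rm0, b0
    for _ in range(iters):
        f1 = holland_speed(r25, rm, b) - 25.0
        f2 = holland_speed(r15, rm, b) - 15.0
        h1, h2 = rm * 1e-6, b * 1e-6
        # Finite-difference Jacobian entries dV/dRmax and dV/dB.
        j11 = (holland_speed(r25, rm + h1, b) - 25.0 - f1) / h1
        j21 = (holland_speed(r15, rm + h1, b) - 15.0 - f2) / h1
        j12 = (holland_speed(r25, rm, b + h2) - 25.0 - f1) / h2
        j22 = (holland_speed(r15, rm, b + h2) - 15.0 - f2) / h2
        det = j11 * j22 - j12 * j21
        rm -= (f1 * j22 - f2 * j12) / det     # 2x2 Newton step by Cramer's rule
        b -= (f2 * j11 - f1 * j21) / det
        rm = max(rm, 1e3)                     # keep the iterate physical
        b = min(max(b, 0.5), 2.5)
    return rm, b

# Illustrative radii: 25 m/s winds out to 100 km, 15 m/s winds out to 200 km.
rm, b = solve_rmax(r25=100e3, r15=200e3)
```

At the returned (Rmax, B) both profile equations are satisfied, so Rmax is the radius of maximum wind consistent with the two reported wind radii.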

Calculation of the Electromagnetic Fields Distribution around the Human Body and Study of Transmission Loss Related with the Human Body Communication (인체 통신에 따른 인체 주변에서의 전기장 분포 계산 및 전송 손실 연구)

  • Ju, Young-Jun;Gimm, Youn-Myoung
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.23 no.2
    • /
    • pp.251-257
    • /
    • 2012
  • Human body communication means transmitting and receiving data through the human body medium or through free space along the skin of the human body. Electric field distributions around the human body between the transmitter and the receiver were calculated at five frequencies, at 5 MHz intervals between 10 MHz and 30 MHz. A commercial electromagnetic simulation tool was used to calculate the E-field distributions, applying the Korean standard male model, which includes 29 different kinds of human tissue. After calculating specific absorption rate (SAR) values on the back of the hand, they were compared with the International Commission on Non-Ionizing Radiation Protection (ICNIRP) human protection guideline. With the conductivities (σ) and relative permittivities (ε_r) of the human tissues at each frequency given as input parameters, the electric field intensities near both hands were integrated along the line between the nearby electrodes to calculate the transmitting and receiving voltages, whose ratio was defined as the channel loss. The calculated channel losses were about (75±1) dB and showed a nearly flat response across the evaluated frequencies.
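The channel loss defined above is a voltage ratio, so in decibels it is 20 log10(V_tx/V_rx). A quick check with illustrative voltages (not values from the paper) that land near the reported 75 dB:

```python
import math

def channel_loss_db(v_tx, v_rx):
    """Channel loss as the transmitting-to-receiving voltage ratio in dB;
    a voltage ratio uses the factor 20 (power ratios would use 10)."""
    return 20.0 * math.log10(v_tx / v_rx)

# Illustrative: a 1 V drive received as ~178 uV gives roughly 75 dB of loss.
loss = channel_loss_db(1.0, 178e-6)
```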

Development and Application of the Catchment Hydrologic Cycle Assessment Tool Considering Urbanization (I) - Model Development - (도시화에 따른 물순환 영향 평가 모형의 개발 및 적용(I) - 모형 개발 -)

  • Kim, Hyeon-Jun;Jang, Cheol-Hee;Noh, Seong-Jin
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.2
    • /
    • pp.203-215
    • /
    • 2012
  • The objective of this study is to develop a catchment hydrologic cycle assessment model that can assess the impact of urban development and support the design of water cycle improvement facilities. The developed model may help minimize the damage caused by urban development and establish sustainable urban environments. Existing conceptual lumped models have a potential limitation in their capacity to simulate the hydrologic impacts of land use changes and to assess diverse urban designs, while the distributed physics-based models under active study are data-demanding: much time is required to gather and check input data, and both the cost of setting up a simulation and the computational demand are high. The Catchment Hydrologic Cycle Assessment Tool (hereinafter the CAT) is a water cycle analysis model based on physical parameters with a link-node model structure. The CAT model can assess the characteristics of short- and long-term changes in the water cycle of a catchment before and after urbanization. It supports the effective design of water cycle improvement facilities by combining the strengths of existing conceptual parameter-based lumped hydrologic models and physical parameter-based distributed hydrologic models. The model was applied to the Seolma-cheon catchment and calibrated and validated using six years (2002~2007) of hourly streamflow data at the Jeonjeokbigyo station; the Nash-Sutcliffe model efficiencies were 0.75 (2002~2004) and 0.89 (2005~2007).
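The Nash-Sutcliffe efficiency used above for calibration and validation can be computed directly: one minus the ratio of the squared simulation error to the variance of the observations about their mean, with 1.0 a perfect fit. The streamflow values below are hypothetical, not the Jeonjeokbigyo record.

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency:
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical hourly streamflow (m^3/s): observed vs. simulated.
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.9]
nse = nash_sutcliffe(obs, sim)
```

Values above roughly 0.75, like those reported for the CAT model, are conventionally read as good agreement; an NSE of 0 means the model is no better than predicting the observed mean.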