• Title/Summary/Keyword: 함수 표현 (function expression)

Search Results: 1,664

Comparison of Disk Tension Infiltrometer and van Genuchten-Mualem Model on Estimation of Unsaturated Hydraulic Conductivity (장력 침투계(Disk Tension Infiltrometer)와 van Genuchten-Mualem 모형 적용에 따른 불포화수리 전도도의 비교 해석)

  • Hur, Seung-Oh;Jung, Kang-Ho;Park, Chan-Won;Ha, Sang-Keun;Kim, Geong-Gyu
    • Korean Journal of Soil Science and Fertilizer, v.39 no.5, pp.259-267, 2006
  • Hydraulic conductivity is the rate of water flux per unit hydraulic gradient. The van Genuchten-Mualem (VGM) model, which is frequently used to describe the unsaturated state of soils, is formulated as a function of soil water potential and soil water content and requires various parameters. This study obtained the VGM parameter values with the Rosetta computer program, which is based on a neural network analysis method. The VGM parameters included $K_o$ (effective saturated hydraulic conductivity), $\theta_r$ (residual soil water content), $\theta_s$ (saturated soil water content), $L$, $n$ and $m$. The unsaturated hydraulic conductivity at 10 kPa was calculated using the Rosetta program. Unsaturated hydraulic conductivities of 17 soil series at 1, 3, 5 and 7 kPa were also obtained by applying the saturated hydraulic conductivity measured with a disk tension infiltrometer to Gardner and Wooding's equation. Water flow at a water potential of 3 kPa was very low except for the Namgye, Hagog, Baegsan, Sangju, Seogcheon and Yesan soil series. Unsaturated hydraulic conductivity at 1 kPa was highest for the Samgag soil series, followed in order by the Yesan, Hwabong, Hagog and Baegsan soil series; those of the Gacheon, Seocheon and Ugog soil series were very low. When the VGM values were compared with the disk tension infiltrometer values, an exponential relationship appeared for soils without gravel, but no consistent relationship for soils containing gravel. In conclusion, the applicability of the VGM model for unsaturated hydraulic conductivity analysis appears limited on Korean agricultural land that contains gravel and has steep slopes and shallow soil depths.
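For context on the VGM model above, its closed-form conductivity function is standard and can be sketched in a few lines of Python. The parameter values below are illustrative loam-like textbook numbers, not the study's fitted Rosetta output.

```python
import numpy as np

def vgm_conductivity(h, Ks, alpha, n, L=0.5):
    """van Genuchten-Mualem closed-form unsaturated hydraulic conductivity K(h).
    h: matric suction (positive, in the same length unit as 1/alpha);
    Ks: saturated hydraulic conductivity."""
    m = 1.0 - 1.0 / n                                  # Mualem restriction m = 1 - 1/n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)      # effective saturation
    return Ks * Se ** L * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Hypothetical parameters (not the study's values); suctions of ~1, 3, 5, 7
# and 10 kPa expressed in cm of water.
h_cm = np.array([10.2, 30.6, 51.0, 71.4, 102.0])
print(vgm_conductivity(h_cm, Ks=50.0, alpha=0.036, n=1.56))   # cm/day
```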

Upper Boundary Line Analysis of Rice Yield Response to Meteorological Condition for Yield Prediction I. Boundary Line Analysis and Construction of Yield Prediction Model (최대경계선을 이용한 벼 수량의 기상반응분석과 수량 예측 I. 최대경계선 분석과 수량예측모형 구축)

  • 김창국;이변우;한원식
    • KOREAN JOURNAL OF CROP SCIENCE, v.46 no.3, pp.241-247, 2001
  • The boundary line method was adopted to analyze the relationships between rice yield and meteorological conditions during the rice growing period. Boundary lines of yield responses to mean temperature ($T_a$), sunshine hours ($S_h$) and diurnal temperature range ($T_r$) were well fitted by the hyperbolic functions $f(T_a)=\beta_{0t}(1-\exp(-\beta_{1t} \times T_a))$ and $f(S_h)=\beta_{0s}(1-\exp(-\beta_{1s} \times S_h))$, and by the quadratic function $f(T_r)=\beta_{0r}(1-(T_r-\beta_{1r})^2)$, respectively. To take into account the sterility caused by low temperature during the reproductive stage, cooling degree days, $T_c=\sum(20-T_a)$, were calculated for the 30 days before heading. Boundary lines of yield responses to $T_c$ were well fitted by the exponential function $f(T_c)=\beta_{0c}\exp(-\beta_{1c} \times T_c)$. Excluding the constants $\beta_0$ from the boundary line functions yields relative function values in the range of 0 to 1, and these were used as yield indices of the meteorological elements, indicating their degree of influence on rice yield. Assuming that the meteorological elements act multiplicatively and independently of each other, a meteorological yield index (MIY) was calculated as the geometric mean of the indices of the meteorological elements. The MIY in each growth period showed a good linear relationship with rice yield. The MIYs during 31 to 45 days after transplanting (DAT) in the vegetative stage, 30 to 16 days before heading (DBH) in the reproductive stage, and 20 days after heading (DAH) in the ripening stage showed the greatest explanatory power for yield variation in each growth stage. The MIY for the whole growth period was calculated by three methods based on the geometric means of the indices for the vegetative stage (MIVG), reproductive stage (MIRG) and ripening stage (MIRS). $MIY_I$ was calculated as the geometric mean of the meteorological indices showing the highest coefficient of determination in each growth stage of rice. $MIY_{II}$ (equation omitted) was calculated as the geometric mean of all the MIYs for the growth periods divided into 15- to 20-day intervals from transplanting to 40 DAH. $MIY_{III}$ was calculated as the geometric mean of the MIYs for the 45 days of the vegetative stage ($MIVG_{0-45}$), the 30 days of the reproductive stage ($MIRG_{30-0}$) and the 40 days of the ripening stage ($MIRS_{0-40}$). $MIY_I$, $MIY_{II}$ and $MIY_{III}$ showed good linear relationships with grain yield, with coefficients of determination of 0.651, 0.670 and 0.613, respectively.
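The relative-index and geometric-mean construction of the MIY described above can be sketched as follows; the coefficients are hypothetical placeholders, not the paper's fitted boundary-line values.

```python
import numpy as np

def relative_index(x, f, beta0):
    """Relative boundary-line index in [0, 1]: f(x) divided by its plateau beta0."""
    return float(np.clip(f(x) / beta0, 0.0, 1.0))

# Hypothetical coefficients for illustration only.
b0t, b1t = 10.0, 0.12   # mean temperature:     f(Ta) = b0t * (1 - exp(-b1t * Ta))
b0s, b1s = 10.0, 0.25   # sunshine hours:       f(Sh) = b0s * (1 - exp(-b1s * Sh))
b0c, b1c = 10.0, 0.05   # cooling degree days:  f(Tc) = b0c * exp(-b1c * Tc)

idx_Ta = relative_index(22.0, lambda x: b0t * (1 - np.exp(-b1t * x)), b0t)
idx_Sh = relative_index(6.5, lambda x: b0s * (1 - np.exp(-b1s * x)), b0s)
idx_Tc = relative_index(8.0, lambda x: b0c * np.exp(-b1c * x), b0c)

# MIY: geometric mean of the element indices, assuming multiplicative,
# independent effects of the meteorological elements (as in the paper).
MIY = (idx_Ta * idx_Sh * idx_Tc) ** (1.0 / 3.0)
print(round(MIY, 3))
```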


The Development of Theoretical Model for Relaxation Mechanism of Superparamagnetic Nano Particles (초상자성 나노 입자의 자기이완 특성에 관한 이론적 연구)

  • 장용민;황문정
    • Investigative Magnetic Resonance Imaging, v.7 no.1, pp.39-46, 2003
  • Purpose : To develop a theoretical model for the magnetic relaxation behavior of a superparamagnetic nano-particle agent, which demonstrates multi-functionality such as liver- and lymph node-specificity, and, based on the developed model, to perform computer simulations clarifying the relationship between relaxation time and the applied magnetic field strength. Materials and Methods : The ultrasmall superparamagnetic iron oxide (USPIO) was encapsulated with a biocompatible polymer, and a relaxation model was developed based on the outer-sphere mechanism, which results from diffusion and/or electron spin fluctuation. In addition, the Brillouin function was introduced to describe the full magnetization, considering that the low-field approximation adopted in the paramagnetic case is no longer valid. The developed model therefore describes the T1 and T2 relaxation behavior of superparamagnetic iron oxide both at low field and at high field. Based on our model, computer simulations were performed to test the relaxation behavior of the superparamagnetic contrast agent over various magnetic fields using MathCad (MathCad, U.S.A.), a symbolic computation software. Results : For the T1 and T2 magnetic relaxation characteristics of ultrasmall superparamagnetic iron oxide, the theoretical model showed that at low field (<1.0 MHz), $\tau_{S1}$ ($\tau_{S2}$ in the case of T2), a correlation time in the spectral density function, plays the major role. This suggests that realignment of the nano-magnetic particles is most important at low magnetic field. At high field, on the other hand, $\tau$, another correlation time in the spectral density function, plays the major role. Since $\tau$ is closely related to particle size, this suggests that the difference in R1 and R2 across particle sizes at high field results not from the realignment of the particles but from the particle size itself. Within the normal body temperature range, the T1 and T2 relaxation times showed no change at high field; in particular, T1 showed less temperature dependence than T2. Conclusion : We developed a theoretical model of the magnetic relaxation behavior of ultrasmall superparamagnetic iron oxide (USPIO), which has been reported to show clinical multi-functionality, by utilizing the physical properties of nano-magnetic particles. In addition, based on the developed model, computer simulations were performed to investigate the relationship between the relaxation time of USPIO and the applied magnetic field strength.
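The Brillouin function introduced above replaces the low-field (linear) approximation used for paramagnetic agents; a minimal sketch follows, with an arbitrary illustrative spin value J rather than any parameter from the paper.

```python
import numpy as np

def brillouin(J, x):
    """Brillouin function B_J(x): reduced magnetization of a system with total
    angular momentum J, valid beyond the low-field (Curie-law) regime."""
    a = (2.0 * J + 1.0) / (2.0 * J)
    b = 1.0 / (2.0 * J)
    return a / np.tanh(a * x) - b / np.tanh(b * x)   # a*coth(ax) - b*coth(bx)

# x = g*muB*J*B / (kB*T); B_J(x) approaches 1 as the field saturates the moments.
x = np.array([0.1, 0.5, 1.0, 3.0, 10.0])
print(brillouin(J=2.5, x=x))
```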


Comparison of Breeding Value by Establishment of Genomic Relationship Matrix in Pure Landrace Population (유전체 관계행렬 구성에 따른 Landrace 순종돈의 육종가 비교)

  • Lee, Joon-Ho;Cho, Kwang-Hyun;Cho, Chung-Il;Park, Kyung-Do;Lee, Deuk Hwan
    • Journal of Animal Science and Technology, v.55 no.3, pp.165-171, 2013
  • A genomic relationship matrix (GRM) was constructed using whole-genome SNP markers of swine, and genomic breeding values were estimated by substituting the GRM for the pedigree-based numerator relationship matrix (NRM). Genotypes of 40,706 SNP markers from 448 pure Landrace pigs were used in this study, and five GRM construction methods, G05, GMF, GOF, $GOF^*$ and GN, were compared with each other and with the NRM. Coefficients of GOF, which uses each observed allele frequency, showed the lowest deviation from the NRM coefficients, whereas coefficients of GMF, which uses the average minor allele frequency, deviated greatly from the NRM coefficients, so a shift of the mean is expected depending on how allele frequencies are taken into account. All GRM construction methods except $GOF^*$ showed normally distributed Mendelian sampling. From the estimation of breeding values (BV) for days to 90 kg (D90KG) and average back-fat thickness (ABF) using the NRM and GRM, the correlation between the BVs of the NRM and GRM was highest for GOF, and since the genetic variance was overestimated by $GOF^*$, it was confirmed that the scale of the GRM is closely related to the estimation of genetic variance. With the same amount of phenotype information, the accuracy of BVs based on genomic information was higher than that of BVs based on pedigree information, and this tendency was more obvious for ABF than for D90KG. Genetic evaluation using a relationship matrix built from genomic information could be useful when phenotype or pedigree information is lacking, and for predicting the BVs of young animals without phenotypes.
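As a reference point for the GRM variants compared above, a VanRaden-style G built from observed allele frequencies (in the spirit of the GOF variant; the precise G05/GMF/GOF/GN definitions are those of the paper) can be sketched as:

```python
import numpy as np

def grm_observed_freq(M):
    """Genomic relationship matrix from an (animals x SNPs) genotype matrix M
    coded 0/1/2, centered and scaled with observed allele frequencies."""
    p = M.mean(axis=0) / 2.0                        # observed allele frequencies
    Z = M - 2.0 * p                                 # center each SNP column
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))  # VanRaden scaling

# Toy example: 4 animals x 6 SNPs with made-up genotypes.
rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(4, 6)).astype(float)
print(np.round(grm_observed_freq(M), 3))   # diagonal = genomic self-relationships
```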

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising (SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.167-194, 2019
  • This research starts from four basic concepts that confront decision making in keyword bidding: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, from the sponsor's perspective, a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA), validated through empirical tests, that can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable, but it faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make keyword-bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model framework is designed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid with Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing the keywords, selects representative keywords for each category, shows the non-linear relationships, screens the scenarios for CTR and CPC estimation, selects the best-fit model through Goodness-of-Fit (GOF) tests, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Optimization models using these CTR/CPC estimation models are empirically tested with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization test results confirm significant improvements under the suggested SSA model, which is therefore valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit at present.
To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect (CTR, CVR, expected profit, etc.), but the generic keywords turn out to be the CTKs, with spillover potential that may increase consumers' awareness and lead them to brand keywords; this is why generic keywords should be a focus of keyword bidding. The contributions of the thesis are to propose the novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of the keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and, through empirical tests, to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
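As a toy illustration of Rank as the decision variable: once clicks and cost have been estimated per keyword and Rank, portfolio construction reduces to choosing one Rank per keyword under a budget, as in the sketch below. All numbers and keyword names are hypothetical, and the paper's estimation models and constraints are not reproduced here.

```python
from itertools import product

# Hypothetical per-keyword estimates: {rank: (expected clicks, expected cost)}.
keywords = {
    "brand":   {1: (120, 300), 2: (90, 180), 3: (60, 100)},
    "generic": {1: (200, 500), 2: (150, 320), 3: (100, 180)},
    "niche":   {1: (40, 60),   2: (30, 40),  3: (20, 25)},
}
BUDGET = 700

best = None
names = list(keywords)
for ranks in product([1, 2, 3], repeat=len(names)):   # enumerate rank choices
    clicks = sum(keywords[k][r][0] for k, r in zip(names, ranks))
    cost = sum(keywords[k][r][1] for k, r in zip(names, ranks))
    if cost <= BUDGET and (best is None or clicks > best[0]):
        best = (clicks, cost, dict(zip(names, ranks)))

print(best)   # (total clicks, total cost, chosen rank per keyword)
```

Exhaustive enumeration is only workable for a handful of keywords; a portfolio of realistic size needs the statistical estimation and optimization machinery the paper develops.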

Evaluating Reverse Logistics Networks with Centralized Centers: Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.55-79, 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which can easily produce the initial population of the GA. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover and mutation, respectively. For the hybrid concept of the GA, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space onto which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks, and exactly one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are considered for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1 and 8.9 for collection centers 1, 2 and 3, respectively) and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at the centers and secondary markets opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA approach has no local search technique such as the IHCM of the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers and secondary markets, are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are a total number of generations of 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations are made to eliminate the randomness of the searches of the HGA and GA approaches.
The performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of RLNCC show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on the two types of the RLNCC, since the former has a local search process in addition to the GA search process, while the latter has the GA search process alone. In a future study, much larger RLNCCs will be tested to establish the robustness of our approach.
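The hybrid structure described above, a GA whose search loop embeds an iterative hill-climbing local search, can be sketched as follows. The fitness function, operators and generation count here are toy stand-ins, not the paper's MIP objective or the exact Gen-Chang and Michalewicz procedures; population size 20, crossover rate 0.5 and mutation rate 0.1 mirror the experiment's settings.

```python
import random

def fitness(bits):
    """Toy objective; a real RLNCC would return the negative total cost
    (transportation + fixed + handling) of the MIP model."""
    return sum(bits)

def hill_climb(bits, tries=20):
    """Iterative hill climbing: flip one random bit at a time, keep improvements."""
    best = bits[:]
    for _ in range(tries):
        cand = best[:]
        cand[random.randrange(len(cand))] ^= 1
        if fitness(cand) > fitness(best):
            best = cand
    return best

def hga(n_bits=20, pop_size=20, gens=200, pc=0.5, pm=0.1):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        pop[0] = hill_climb(pop[0])            # hybrid step: refine the GA's best
        nxt = pop[:2]                          # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)  # selection from the better half
            if random.random() < pc:           # two-point crossover
                c1, c2 = sorted(random.sample(range(n_bits), 2))
                child = a[:c1] + b[c1:c2] + a[c2:]
            else:
                child = a[:]
            child = [g ^ 1 if random.random() < pm else g for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(fitness(hga()))
```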

Biosynthesis of the extracellular enzymes in de novo during the differentiation of Aspergillus niger (검정곰팡이의 형태분화에 따른 세포외성효소의 신생적생합성에 관한 연구)

  • Kim, Jong-Hyup
    • The Korean Journal of Mycology, v.6 no.2, pp.1-10, 1978
  • The de novo biosynthesis of the extracellular enzymes (proteinases, alpha-amylase and gluc-amylase) during the synchronized differentiation of Aspergillus niger in submerged culture and surface liquid culture was investigated. Gluc-amylase was synthesized in the presporulation stage, in which phialide formation occurs. Proteinase was synthesized both in the conidiophore formation and presporulation stages. Alpha-amylase was synthesized during the presporulation and sporulation stages, and its activity lasted for seven days in surface liquid culture. The synthesis appeared to occur de novo, partly repressed by catabolites, and its nature was found to be constitutive, since the enzyme is produced in a non-starch medium. Polyacrylamide gel electrophoresis showed that the presporulating and sporulating bodies produced diverse types of proteins, whereas the earlier vegetative stages showed simpler profiles. The uptake of C-14 uracil into RNA and of C-14 glutamate into protein was more vigorous in the presporulating body than in the sporulating body. The coincidence of de novo alpha-amylase biosynthesis and sporulation may be significant in the study of differentiation in which gene expression is involved.


Development and Application of a Methodology for Climate Change Vulnerability Assessment - Sea Level Rise Impact on a Coastal City (기후변화 취약성 평가 방법론의 개발 및 적용 해수면 상승을 중심으로)

  • Yoo, Ga-Young;Park, Sung-Woo;Chung, Dong-Ki;Kang, Ho-Jeong;Hwang, Jin-Hwan
    • Journal of Environmental Policy, v.9 no.2, pp.185-205, 2010
  • Climate change vulnerability assessment based on local conditions is a prerequisite for establishing climate change adaptation policies. While some studies have developed methodologies for vulnerability assessment at the national level using statistical data, few attempts, whether domestic or overseas, have been made to develop methods for local vulnerability assessment that are easily applicable to a single city. Accordingly, the objective of this study was to develop a conceptual framework for climate change vulnerability and a general assessment methodology at the regional level, applied to a single coastal city, Mokpo, in Jeolla province, Korea. We followed the conceptual framework of climate change vulnerability proposed by the IPCC (1996), which consists of "climate exposure," "systemic sensitivity," and "systemic adaptive capacity." "Climate exposure" was designated as sea level rises of 1, 2, 3, 4, and 5 meter(s), a simple sea level rise scenario; should more complex forecasts of sea level rise be required later, the methodology developed herein can easily be scaled and transferred to other projects. Mokpo, on the southwest coast of Korea where all cities have experienced rising sea levels, was chosen because it has experienced the largest sea level increases of all and is a region where abnormal high tide events have become a significant threat, especially subsequent to the construction of an estuary dam and breakwaters. Sensitivity to sea level rise was measured as the percentage of flooded area in each administrative region of Mokpo, evaluated via simulations using GIS techniques; population density, particularly that of senior citizens, was also factored in. Adaptive capacity was considered from both the "hardware" and "software" aspects. "Hardware" adaptive capacity incorporated the presence (or absence) of breakwaters and seawalls, as well as their height. "Software" adaptive capacity was measured using a survey method; the questionnaire covered economic status, awareness of climate change impact and adaptation, governance, and policy, and was distributed to 75 governmental officials working for Mokpo. Vulnerability to sea level rise was assessed by subtracting the adaptive capacity index from the sensitivity index. Application of the methodology to Mokpo indicated high vulnerability in seven of the 20 administrative districts. The results provide significant policy implications for the development of climate change adaptation policy: 1) regions with high priority for climate change adaptation measures can be selected through a correlation diagram between vulnerability and records of previous flood damage, and 2) after reviewing existing short-, mid-, and long-term plans or projects in high-priority areas, appropriate adaptation measures can be taken as per this study. Future studies should expand the analysis of climate change exposure from sea level rise to other adverse climate-related events, including heat waves, torrential rain, and drought.
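The final scoring step described above is simple arithmetic: vulnerability is the sensitivity index minus the adaptive-capacity index, computed per district. A minimal sketch with hypothetical district names, 0-1-scaled indicator values, and equal weights assumed for aggregating the sub-indices (none of which are the study's data):

```python
# Hypothetical districts; all indicators pre-scaled to the 0-1 range.
districts = {
    "district_A": dict(sens_flood=0.80, sens_pop=0.60, cap_hard=0.30, cap_soft=0.50),
    "district_B": dict(sens_flood=0.20, sens_pop=0.40, cap_hard=0.70, cap_soft=0.60),
}

for name, d in districts.items():
    sensitivity = (d["sens_flood"] + d["sens_pop"]) / 2   # flooded area %, population
    adaptive = (d["cap_hard"] + d["cap_soft"]) / 2        # seawalls + survey score
    print(f"{name}: vulnerability = {sensitivity - adaptive:+.2f}")
```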


Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery of Non-Accessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Proceedings of the KSRS Conference, 2001.03a, pp.140-148, 2001
  • A satellite sensor model is typically established using ground control points acquired by ground survey or from existing topographic maps. Where the targeted area cannot be accessed and topographic maps are not available, it is difficult to obtain ground control points, so geospatial information cannot be derived from the satellite image. This paper presents several satellite sensor models and satellite image decomposition methods for non-accessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations represents the six exterior orientation parameters as 1st-, 2nd- and 3rd-order functions of the satellite image row. Among these parameters, the rotational angles $\omega$ (omega) and $\phi$ (phi), which are highly correlated with the positional parameters, could be assigned constant values. For the non-accessible area, satellite images were decomposed, meaning that two consecutive images, one with ground control points and the other without, were combined into a single image. In addition, a pseudo image, an imaginary image bridging two consecutive images, was prepared from one satellite image with ground control points and another without. For the experiments, SPOT satellite images exposed over similar areas in different passes were used. In conclusion, the 10 satellite sensor models and 5 decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with 1st-order functions of image row for the positional orientation parameters and the rotational angle $\kappa$ (kappa), with the rotational angles $\omega$ and $\phi$ held constant, provided the best result, a maximum error of 60 m at the check points, with the pseudo image arrangement.
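The extended sensor model described above can be sketched as follows: each exterior orientation parameter is a polynomial in the image row, with $\omega$ and $\phi$ held constant, and image coordinates follow from the collinearity equations. All coefficients below are placeholders, not calibrated SPOT values, and the rotation is simplified to kappa only (consistent with $\omega=\phi=0$).

```python
import numpy as np

def orientation(row, coeffs):
    """Evaluate an exterior orientation parameter as a polynomial of image row."""
    return sum(c * row ** i for i, c in enumerate(coeffs))

def collinearity_xy(ground, Xs, Ys, Zs, R, f):
    """Collinearity equations: project ground point (X, Y, Z) to image (x, y)."""
    d = R @ (np.asarray(ground) - np.array([Xs, Ys, Zs]))
    return -f * d[0] / d[2], -f * d[1] / d[2]

row = 1500.0
Xs = orientation(row, [1000.0, 0.5])     # positional parameters: 1st order in row
Ys = orientation(row, [2000.0, -0.3])
Zs = orientation(row, [830000.0, 0.0])
kappa = orientation(row, [0.01, 1e-6])   # rotational parameter: 1st order in row
omega, phi = 0.0, 0.0                    # held constant (correlate with position)

c, s = np.cos(kappa), np.sin(kappa)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # kappa-only rotation
print(collinearity_xy([100.0, 200.0, 50.0], Xs, Ys, Zs, R, f=1082.0))
```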


A Stochastic User Equilibrium Transit Assignment Algorithm for Multiple User Classes (다계층을 고려한 대중교통 확률적사용자균형 알고리즘 개발)

  • Yu, Soon-Kyoung;Lim, Kang-Won;Lee, Young-Ihn;Lim, Yong-Taek
    • Journal of Korean Society of Transportation, v.23 no.7 s.85, pp.165-179, 2005
  • The objective of this study is the development of a stochastic user equilibrium transit assignment algorithm for multiple user classes that considers the stochastic characteristics and heterogeneous attributes of passengers. Existing transit assignment algorithms are limited in attaining realistic results because they assume the characteristics of passengers to be homogeneous; although a group with transit information and a group without it show different trip patterns, past studies could not explain the differences. To overcome these problems, we use the following methods. First, we apply a stochastic transit assignment model to capture differences in perceived travel cost between passengers, and a multiple user class assignment model to capture the heterogeneity of the groups, so as to obtain realistic results. Second, we assume that person trips influence the travel cost function in the development of the model. Third, we use a C-logit model to address the IIA (independence of irrelevant alternatives) problem. Over the iterations, the assigned trips and equivalent path costs differ by group and by path; the result converges to stochastic user equilibrium, and the convergence speed is very fast. The algorithm of this study is expected to serve as a useful evaluation tool for transit policies, since it incorporates heterogeneous attributes and OD data.
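The C-logit model mentioned above corrects plain logit's IIA problem by adding a commonality factor $CF_k$ to the cost of overlapping paths. A minimal sketch, with illustrative costs and commonality values rather than any calibrated network data:

```python
import numpy as np

def c_logit_probs(costs, commonality, theta=1.0):
    """C-logit path choice probabilities:
    P_k = exp(-theta*(c_k + CF_k)) / sum_j exp(-theta*(c_j + CF_j))."""
    u = -theta * (np.asarray(costs) + np.asarray(commonality))
    e = np.exp(u - u.max())            # numerically stable softmax
    return e / e.sum()

# Three transit paths; paths 1 and 2 share a long common segment, so they get
# a larger commonality factor and their combined share is damped.
costs = [30.0, 31.0, 33.0]   # perceived travel costs (e.g., minutes)
cf = [2.0, 2.0, 0.0]         # commonality factors
print(c_logit_probs(costs, cf, theta=0.2))
```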