• Title/Summary/Keyword: Term-Structure Interest Rate


A STUDY ON THE DIRECTION OF THE FUTURE WELFARE SYSTEM (미래 복지체계의 방향성에 관한 연구)

  • Kim, Youn-Jae;Keum, Ki-Youn
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.6 no.4
    • /
    • pp.153-171
    • /
    • 2011
  • Although the external environment of national economies worldwide, including Korea's, has been changing rapidly, the welfare system and the direction of welfare policy have remained largely stuck in the framework of the past. As a result, welfare, economic, and social issues have ended in ideological collisions over policy goals, coordination between policies has been lacking, and intervention and conflict among interest groups concerned with policy have continued. In the past, Korean social policy merely supplemented what could not be resolved through growth: employment creation was sustained by high growth rates, and a young population structure with low aging and a high birth rate made this possible. The new economic, social, and welfare environment at home and abroad demands a new direction in welfare policy. The goal of economic and social policy is to build a secure economic and social system, and what is required is policy oriented toward a welfare state in the form of social investment and welfare expansion. Strengthening Korea's welfare system should balance protection and activation, shift from post-hoc welfare to preventive welfare, and distribute roles among the public sector, the market, the third sector, and individuals. A transition from the paradigm of residual welfare to universal welfare is also needed. Such improvements will raise the long-term sustainability of the welfare system.

  • PDF

Engineering Approach to Crop Production in Space (우주에서 작물 생산을 위한 공학적 접근)

  • Kim Yong-Hyeon
    • Journal of Bio-Environment Control
    • /
    • v.14 no.3
    • /
    • pp.218-231
    • /
    • 2005
  • This paper reviews the engineering approach needed to support humans during long-term missions in space. This approach includes closed plant production systems under microgravity or low pressure, mass recycling, air revitalization, water purification, waste management, elimination of trace contaminants, lighting, and nutrient delivery systems in a controlled ecological life support system (CELSS). Requirements of crops for space use are high production, edibility, digestibility, many culinary uses, capability of automation, short stems, and high transpiration. Low pressure on Mars is considered a major obstacle for the design of greenhouses for crop production. However, interest in Mars inflatable greenhouses applicable to planetary surfaces has increased. Structure, internal pressure, material, method of lighting, and shielding are the principal design parameters for an inflatable greenhouse. An inflatable greenhouse operating at low pressure can reduce the structural mass and the atmosphere leakage rate. Plants growing at reduced pressure show increased transpiration rates and high water loss. Vapor pressure increases as moisture is added to the air through transpiration or through evaporation from leaks in the hydroponic system. Fluctuations in vapor pressure significantly influence total pressure in a closed system. Thus, hydroponic systems should be as tight as possible to reduce the quantity of water that evaporates from leaks, and an environmental control system that maintains high relative humidity at low pressure should be developed. The essence of the technologies associated with CELSS can support human life even under extremely harsh conditions such as deserts, polar regions, and the ocean floor on Earth, as well as in space.
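  To make the point about vapor pressure concrete, here is a small illustrative calculation (ours, not from the paper) using the Tetens approximation for saturation vapor pressure and Dalton's law: at the same temperature and humidity, water vapor accounts for a much larger share of total pressure in a hypothetical low-pressure greenhouse than at Earth-ambient pressure, so transpiration-driven swings in vapor pressure move the total pressure much more.

    import math

    def saturation_vapor_pressure_kpa(temp_c):
        # Tetens approximation for the saturation vapor pressure of water (kPa).
        return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

    def vapor_fraction(total_pressure_kpa, temp_c, relative_humidity):
        # Dalton's law: water-vapor partial pressure divided by total pressure.
        e = relative_humidity * saturation_vapor_pressure_kpa(temp_c)
        return e / total_pressure_kpa

    # 25 C, 80% relative humidity; Earth-ambient vs. hypothetical low-pressure greenhouses.
    for p_total in (101.3, 25.0, 10.0):
        share = 100 * vapor_fraction(p_total, 25.0, 0.80)
        print(p_total, "kPa ->", round(share, 1), "% of total pressure is water vapor")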

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common practice [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to other pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, while not restricting the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values over the elements of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits needed to represent the index of a membership function. In our case, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to store in each memory row the membership value for every fuzzy set, with a word dimension of 8 × 5 bits.
Therefore, the dimension of the memory would have been 128 × 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows: The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
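  As an illustration of the memory sizing described above, the following Python sketch (function and variable names are ours, not from the paper) reproduces the word-length formula Length = nfm × (dm(m) + dm(fm)) and the 128 × 24 bit versus 128 × 40 bit comparison for the example term set.

    import math

    def antecedent_memory_bits(universe_size=128, n_fuzzy_sets=8,
                               truth_levels=32, max_non_null=3):
        # Bits needed for one membership value and for one fuzzy-set index.
        dm_m = math.ceil(math.log2(truth_levels))     # 5 bits for 32 truth levels
        dm_fm = math.ceil(math.log2(n_fuzzy_sets))    # 3 bits to index 8 fuzzy sets
        # Word length per the abstract: Length = nfm * (dm(m) + dm(fm)).
        word_length = max_non_null * (dm_m + dm_fm)   # 3 * (5 + 3) = 24 bits
        sparse_total = universe_size * word_length             # 128 * 24 bits
        vectorial_total = universe_size * n_fuzzy_sets * dm_m  # 128 * 40 bits
        return word_length, sparse_total, vectorial_total

    word_length, sparse_bits, vectorial_bits = antecedent_memory_bits()
    print(word_length, sparse_bits, vectorial_bits)   # 24 3072 5120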

  • PDF

Estimation of the Korean Yield Curve via Bayesian Variable Selection (베이지안 변수선택을 이용한 한국 수익률곡선 추정)

  • Koo, Byungsoo
    • Economic Analysis
    • /
    • v.26 no.1
    • /
    • pp.84-132
    • /
    • 2020
  • A central bank infers market expectations of future yields based on yield curves. The central bank needs to precisely understand the changes in market expectations of future yields in order to have a more effective monetary policy. This need explains why a range of models have attempted to produce yield curves and market expectations that are as accurate as possible. Alongside the development of bond markets, the interconnectedness between them and macroeconomic factors has deepened, and this has rendered understanding of what macroeconomic variables affect yield curves even more important. However, the existence of various theories about determinants of yields inevitably means that previous studies have applied different macroeconomic variables when estimating yield curves. This indicates model uncertainties and naturally poses a question: Which model better estimates yield curves? Put differently, which variables should be applied to better estimate yield curves? This study employs the Dynamic Nelson-Siegel Model and takes the Bayesian approach to variable selection in order to ensure precision in estimating yield curves and market expectations of future yields. Bayesian variable selection may be an effective estimation method because it is expected to alleviate problems arising from a priori selection of the key variables comprising a model, and because it is a comprehensive approach that efficiently reflects model uncertainties in estimations. A comparison of Bayesian variable selection with the models of previous studies finds that the question of which macroeconomic variables are applied to a model has considerable impact on market expectations of future yields. This shows that model uncertainties exert great influence on the resultant estimates, and that it is reasonable to reflect model uncertainties in the estimation. Those implications are underscored by the superior forecasting performance of Bayesian variable selection models over those models used in previous studies. Therefore, the use of a Bayesian variable selection model is advisable in estimating yield curves and market expectations of yield curves with greater exactitude in consideration of the impact of model uncertainties on the estimation.
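  For reference, the Nelson-Siegel factor loadings that underlie the Dynamic Nelson-Siegel model mentioned above can be sketched as follows; the decay parameter follows the common Diebold-Li convention (0.0609 with maturities in months) and the factor values are purely illustrative, not estimates from the paper.

    import numpy as np

    def nelson_siegel_yield(maturities_months, level, slope, curvature, lam=0.0609):
        # y(tau) = L + S * (1 - exp(-lam*tau)) / (lam*tau)
        #            + C * [(1 - exp(-lam*tau)) / (lam*tau) - exp(-lam*tau)]
        # In the dynamic (DNS) version the factors (L, S, C) evolve over time
        # and may be driven by macroeconomic variables.
        tau = np.asarray(maturities_months, dtype=float)
        f1 = (1.0 - np.exp(-lam * tau)) / (lam * tau)
        f2 = f1 - np.exp(-lam * tau)
        return level + slope * f1 + curvature * f2

    # Illustrative factor values only.
    maturities = np.array([3, 12, 36, 60, 120, 240])  # months
    print(nelson_siegel_yield(maturities, level=3.0, slope=-1.5, curvature=0.5))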

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars are competing with each other, we would run the risk of obtaining biased estimates of cross elasticity between them if we focus on only new cars or on only used cars. Unfortunately, most previous studies on the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. However, there are some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launching on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices. Some of the conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used the actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launching on the pricing policy of the used car in terms of reinforcement of brand equity. The current work also used actual prices, like Park (1998), but the quantitative aspect of competitive price promotion between new and used cars of the same model was explored. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that among new and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January to June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, a used car of the same model keeps decreasing its price until it regains the lost market share to maintain the status quo. The new car then settles down to a lowered market share due to the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see what the best response for Elantra's new and used cars could be. Interestingly, Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, then the NUB model might fit the data as well as the BNU model. Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
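  As a rough illustration of the two-stage structure described above (car model choice at the first stage, new versus used at the second), the following sketch computes nested logit choice probabilities; the utilities and the dissimilarity parameter are made-up values for illustration, not estimates from the paper.

    import numpy as np

    def nested_logit_probabilities(utilities, dissimilarity):
        # utilities: dict mapping car model -> [utility of new, utility of used]
        # dissimilarity: inclusive-value parameter; values in (0, 1] are consistent
        # with utility maximization, while values above 1 signal mis-specification,
        # as noted in the abstract.
        iv = {m: np.log(np.sum(np.exp(np.asarray(u) / dissimilarity)))
              for m, u in utilities.items()}                  # inclusive value per nest
        denom = sum(np.exp(dissimilarity * v) for v in iv.values())
        p_model = {m: np.exp(dissimilarity * v) / denom        # first stage: model choice
                   for m, v in iv.items()}
        probs = {}
        for m, u in utilities.items():                          # second stage: new vs. used
            within = np.exp(np.asarray(u) / dissimilarity)
            within = within / within.sum()
            probs[(m, "new")] = p_model[m] * within[0]
            probs[(m, "used")] = p_model[m] * within[1]
        return probs

    # Made-up utilities for two of the models studied.
    print(nested_logit_probabilities({"Jetta": [1.0, 0.6], "Elantra": [0.8, 0.9]}, 0.7))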

  • PDF