• Title/Summary/Keyword: Dynamic Feature


A Multi-agent System to Assess Land-use and Cover Changes Caused by Forest Management Policy Scenarios (다행위자시스템을 이용한 산림정책별 토지이용 변화와 영향 분석)

  • Park, Soojin;An, Yoo Soon;Shin, Yujin;Lee, Sooyoun;Sim, Woojin;Moon, Jiyoon;Jeong, Gwan Young;Kim, Ilkwon;Shin, Hyesop;Huh, Dongsuk;Sung, Joo Han;Park, Chan Ryul
    • Journal of the Korean Geographical Society / v.50 no.3 / pp.255-276 / 2015
  • This paper presents a multi-agent system model of land-use and cover changes, developed for and applied to Gariwang-san and its vicinity, located in Pyeongchang and Jeongseon-gun, Gangwon province, Korea. The Land Use Dynamics Simulator (LUDAS) framework used in this study is well suited to representing spatial heterogeneity and the dynamic interactions between humans and the natural environment, and to capturing the impacts of forest-opening policy interventions on future socio-economic and environmental change. The model consists of four components: (1) a system of the human population, (2) a system of the landscape environment, (3) decision-making procedures that integrate human (or household), environmental, and policy information into forest land-use decisions, and (4) a set of policy scenarios related to forest opening. The simulation results for different combinations of forest management scenarios are assessed in terms of household income, ecosystem service value, and income inequality in the study region. The optimal forest-opening scenario for the study region turns out to be opening the forest to the local residential community for recreation, given the area's distinctive topography. The model developed in this research is expected to contribute to a decision support system for sustainable forest management and various land-use policies in Korea.
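
To make the agent-based mechanism concrete, the following is a minimal Python sketch of an agent-based land-use simulation loop in the spirit of the framework described above: household agents choose a land use under a policy switch, and the outcomes are summarized by mean income and a Gini coefficient. The per-cell returns, the decision rule, and the policy switch are hypothetical placeholders, not the published LUDAS parameterization.

    # Minimal agent-based land-use sketch; all parameters below are hypothetical.
    import random
    import statistics

    RETURNS = {"forest": 1.0, "crop": 1.4, "recreation": 1.8}  # assumed per-cell returns

    def gini(values):
        # Gini coefficient of household incomes (0 = perfect equality).
        xs = sorted(values)
        n = len(xs)
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * sum(xs)) - (n + 1) / n

    class Household:
        # Household agent that chooses one land use for the cells it manages.
        def __init__(self, n_cells):
            self.n_cells = n_cells
            self.income = 0.0

        def decide(self, forest_open):
            allowed = dict(RETURNS)
            if not forest_open:          # scenario switch: recreation requires an opened forest
                allowed.pop("recreation")
            use = max(allowed, key=allowed.get)
            self.income = self.n_cells * allowed[use]
            return use

    def simulate(forest_open, n_households=50, seed=0):
        random.seed(seed)
        agents = [Household(random.randint(1, 10)) for _ in range(n_households)]
        for agent in agents:
            agent.decide(forest_open)
        incomes = [a.income for a in agents]
        return statistics.mean(incomes), gini(incomes)

    for scenario in (False, True):
        mean_income, g = simulate(forest_open=scenario)
        print(f"forest_open={scenario}: mean income={mean_income:.2f}, Gini={g:.3f}")

In the full model, the decision procedure would also draw on household attributes and spatial environmental layers rather than a single return table.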


A Study on Soviet Constructive Fashion in 1920s (1920년대 소비에트 구성주의 패션에 관한 연구)

  • 조윤경;금기숙
    • Journal of the Korean Society of Costume / v.36 / pp.183-203 / 1998
  • The avant-garde wave swept through the unique social background of the 'October Revolution' and an early-1900s Russian society that was ready to absorb and accept almost anything. The Russian avant-garde was influenced by Cubism and Futurism, which had appeared early in the twentieth century, and spread into three spheres: Suprematism, Rayonism, and Constructivism. Russian Constructivism emerged against this background, expressed the ideology of the revolution concretely and ideally in artistic form, and exerted a huge influence on Russian society as a whole. Constructivists such as Tatlin, Naum Gabo, Pevsner, Rodchenko, Stepanova, Popova, and Exter strongly affected Soviet Constructivist fashion design in the 1920s, after the Revolution. Soviet costume of the 1920s shared the characteristics of Constructivist graphics: geometrical and abstract form, energy, and movement. In fashion design these graphic qualities appeared as the application of geometrical forms and architectural imagery, and as physical distortion and transformation; in textile design they appeared as simple, dynamic presentation. The Soviet costume of this period can be classified into three phases. The first, from the late 1910s to the mid-1920s, marks the shift from folk costume design to modern design. Beginning with Lamanova, and using folk motifs, the Constructivist expression of simple form was gradually revealed in design; designers such as Makarova, Pribylskaia, and Mukhina produced plain, simple chemise styles decorated with traditional Russian motifs. The second phase, from the early to the late 1920s, is the peak of Constructivist design activity: Constructivist design was represented in every available field, not only in everyday costume but also in theatrical costume and textiles, and many Constructivists, including Stepanova, Popova, Exter, and Rodchenko, took part in textile and costume design in order to evolve their aesthetic concept. The third phase runs from the late 1920s to the early 1930s, when socialist realism dominated culture and art as a whole and revolutionary, dynamic motifs also appeared in textile design. The formative features of Soviet Constructivist fashion design are silhouette, form, motif, color, and fabric. First, silhouette: a straight rectangular silhouette was expressed throughout the period, and a more voluminous one with a distorted human body shape was introduced in theatrical costume design. Second, form: many elongated rectangular forms appeared at the beginning, but toward the middle period geometrical, architectural forms appeared more often, with energy and movement in the designs; in the last period only a partial feature-division is seen. Third, motif: at the beginning there was either no pattern or a partial use of ethnic motifs; figures such as circles and triangles gradually appeared in textile design, and in the later period real-existing motifs such as airplanes were represented with graphic simplicity.
Fourth, color: because dyeing was insufficient, neutral colors such as black or grey predominated, but after the middle period primary colors or pastel tones appeared in contrast. Fifth, fabric: with little development of the textile industry after the Revolution, thick and durable fabrics were the mainstream, but toward the last period fabrics such as linen, cotton, velvet, and silk were chosen more variously, and in theatrical costume new materials such as plastics and metals were used to accentuate the form. The pursuit of popularity, simplicity, and functionalism, the basic concept of Constructivist fashion, is one of the kinds of "beauty" that modern fashion has been searching for, and it shows how innovative and epochal the Soviet Constructivist fashion movement was.


Study for making movie poster applied Augmented Reality (증강현실 영화포스터 제작연구)

  • Lee, Ki Ho
    • Cartoon and Animation Studies / s.48 / pp.359-383 / 2017
  • The first poster of humanity appeared in Egypt 3,000 years ago; since then, the invention of printing and the development of civilization have accelerated poster production technology. In keeping with this, poster expression has also developed, from attempts to express artistic sensibility in a simple arrangement of characters to an art form that is now the domain of professional designers. However, technological development in poster expression has remained two-dimensional and dependent on printing alone, untouched by the changes in the modern, multimedia-based ICT environment. In particular, among the many kinds of posters, movie posters, the only ones whose subject is video, are still printed on paper; although many attempts have been made, the movie industry still does not consider ICT integration at all. This study starts from the fact that the subject of a movie poster is a moving image, and introduces augmented reality to apply the dynamic image of the movie to the static poster. For the graduation exhibition of the media design major of a university in Korea, a poster promoting each student's video work was designed and printed in the form of a commercial film poster. Among them, six works considered suitable for augmented reality were selected, and augmented reality was introduced and exhibited. The content matched to the poster through a mobile device plays a scene of the video on the poster while keeping the text information of the original poster intact, producing a moving poster reminiscent of the wanted posters in the movie "Harry Potter". To produce this augmented reality poster, we applied augmented reality to posters of existing commercial films produced in two different formats and looked for ways to strengthen the characteristics of AR content. Through this we were able to understand poster design suitable for AR representation, and the technical expressions needed for stable operation of augmented reality could be summarized in the matching process of augmented reality content production.
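
The paper does not reproduce its implementation, but the core matching idea, recognizing the printed poster in the camera image and overlaying a video onto it, can be sketched on a desktop with OpenCV as below. The file names, feature counts, and thresholds are hypothetical; the paper keeps the poster's text regions visible, whereas this sketch simply replaces the whole detected poster area.

    # Sketch of image-target AR: detect the poster, estimate a homography, warp a video frame onto it.
    # poster.jpg and trailer.mp4 are hypothetical placeholders.
    import cv2
    import numpy as np

    poster = cv2.imread("poster.jpg", cv2.IMREAD_GRAYSCALE)   # reference image target
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(poster, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    camera = cv2.VideoCapture(0)            # live camera
    trailer = cv2.VideoCapture("trailer.mp4")

    while True:
        ok_cam, frame = camera.read()
        ok_vid, clip = trailer.read()
        if not ok_cam:
            break
        if not ok_vid:                       # loop the trailer
            trailer.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp_cam, des_cam = orb.detectAndCompute(gray, None)
        matches = matcher.match(des_ref, des_cam) if des_cam is not None else []
        if len(matches) < 30:                # not enough evidence that the poster is in view
            cv2.imshow("AR poster", frame)
            if cv2.waitKey(1) == 27:
                break
            continue

        src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_cam[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue

        # Warp the trailer frame into the poster's position and blend it over the camera image.
        clip = cv2.resize(clip, (poster.shape[1], poster.shape[0]))
        warped = cv2.warpPerspective(clip, H, (frame.shape[1], frame.shape[0]))
        mask = cv2.warpPerspective(np.full(poster.shape, 255, np.uint8), H,
                                   (frame.shape[1], frame.shape[0]))
        frame[mask > 0] = warped[mask > 0]

        cv2.imshow("AR poster", frame)
        if cv2.waitKey(1) == 27:             # Esc to quit
            break

    camera.release()
    trailer.release()
    cv2.destroyAllWindows()

A mobile deployment would use an AR SDK's image-target tracking instead, but the recognition-plus-overlay pipeline is the same.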

Review of Remote Sensing Studies on Groundwater Resources (원격탐사의 지하수 수자원 적용 사례 고찰)

  • Lee, Jeongho
    • Korean Journal of Remote Sensing / v.33 no.5_3 / pp.855-866 / 2017
  • Several research cases using remote sensing methods to analyze changes in the storage and dynamics of groundwater aquifers are reviewed in this paper. The status of groundwater storage at a regional scale can be qualitatively inferred from geological features, surface water altimetry and topography, the distribution of vegetation, and the difference between precipitation and evapotranspiration. These qualitative indicators can be measured by geological lineament analysis, airborne magnetic survey, DEM analysis, LAI and NDVI calculation, and surface energy balance modeling. For direct quantification of groundwater storage and dynamics from satellite data, GRACE and InSAR have received remarkable attention. GRACE, composed of twin satellites carrying acceleration sensors, can detect global or regional microgravity changes and transform them into mass changes of water on and inside the Earth. Numerous studies of groundwater storage using GRACE data have been performed, with several merits: (1) no separate field data are required, (2) the auxiliary data needed to quantify groundwater can be obtained entirely from other satellite sensors, and (3) the algorithms for processing the measured data have been continuously improved by the designated data management centers. The limitations of GRACE for measuring groundwater storage are as follows: (1) in a small-scale area, quantification of groundwater mass change may be inaccurate owing to the detection limit of the acceleration sensors, and (2) results may be overestimated when sensor data are combined with field survey data. InSAR can quantify the dynamic characteristics of an aquifer by measuring vertical micro-displacements, using the linear proportional relation between groundwater head and vertical surface movement. However, its application is at present largely constrained to arid or semi-arid areas with simple land cover, and it is hard to apply where loss of coherence with the surface is anticipated. To regionally quantify the mass change and dynamics of Korea's groundwater resources, development of GRACE and InSAR data preprocessing algorithms optimized to the topography, geology, and natural conditions of Korea should be prioritized.
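
The linear relation the review cites between groundwater head and vertical surface movement can be illustrated as below, assuming the commonly used form delta_d ≈ S_k * delta_h with a skeletal storage coefficient S_k; the coefficient and displacement values are hypothetical, and in practice S_k would be calibrated against well observations.

    # Convert an InSAR-derived vertical displacement into an estimated head change,
    # under the assumed linear relation delta_d = S_k * delta_h.
    def head_change_from_displacement(delta_d_m, skeletal_storage=5e-3):
        # delta_d_m: vertical displacement in meters (negative = subsidence).
        # skeletal_storage: hypothetical skeletal storage coefficient (dimensionless).
        return delta_d_m / skeletal_storage

    # Example: 12 mm of subsidence observed between two SAR acquisitions.
    dh = head_change_from_displacement(-0.012)
    print(f"Estimated head change: {dh:.1f} m")   # about -2.4 m of head decline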

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, because of their notable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent: it calculates the gradient of an error function with respect to all the weights in the network, and the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images; these days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; a pooling layer simplifies the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through the layers.
This makes learning in the early layers extremely slow. The problem gets even worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
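
As an illustration of the convolutional ideas described above (local receptive fields, shared weights, pooling) and of backpropagation driving an optimizer, here is a minimal tf.keras sketch; the layer sizes are illustrative and not taken from the paper.

    # Minimal convolutional classifier sketch with tf.keras (illustrative sizes only).
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),                              # e.g. a grayscale image
        tf.keras.layers.Conv2D(32, kernel_size=5, activation="relu"),   # local receptive fields + shared weights
        tf.keras.layers.MaxPooling2D(pool_size=2),                      # pooling simplifies the conv output
        tf.keras.layers.Conv2D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),                # class scores
    ])

    # Backpropagation computes the gradient of the loss w.r.t. every weight and hands it
    # to the optimizer (here plain gradient descent) to update the weights.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()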

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha
    • KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities; however, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interaction between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer, and is therefore unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently, several attempts have been made to overcome this modelling difficulty by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of the players, still share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints, except for the assumption of asymmetrically distributed information. The underlying intuition behind the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm; if the monopolist has high costs, the rival will definitely enter the market because it can make positive profits. In this situation, if the incumbent firm unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's costs by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output level up to what would have been produced by a low-cost firm, in an effort to conceal its cost condition; this constitutes limit pricing. The same logic applies when there is a rival competitor in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided, so the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer and thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy.
This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring; moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that a government injunction against limit and predatory pricing should be applied with great care, evaluating each case on its own merits; hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
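
The intuition can be illustrated with a few lines of arithmetic under linear demand P = a - bQ and constant marginal costs; the parameter values below are hypothetical and are not the paper's Cournot model, but they show why a high-cost incumbent may sacrifice current profit to mimic the low-cost output.

    # Illustrative limit-pricing arithmetic under linear demand P = a - b*Q (hypothetical numbers).
    a, b = 100.0, 1.0            # demand intercept and slope
    c_low, c_high = 20.0, 40.0   # low-cost and high-cost marginal cost

    def monopoly_output(c):
        # Profit-maximizing monopoly quantity: q* = (a - c) / (2b).
        return (a - c) / (2 * b)

    def profit(q, c, q_rival=0.0):
        price = a - b * (q + q_rival)
        return (price - c) * q

    q_low = monopoly_output(c_low)     # 40: what a low-cost incumbent would produce
    q_high = monopoly_output(c_high)   # 30: what a high-cost incumbent would naively produce

    print(f"low-cost monopoly output : {q_low:.0f}")
    print(f"high-cost monopoly output: {q_high:.0f}")
    # Limit pricing: the high-cost incumbent gives up some current profit to look low-cost.
    print(f"high-cost profit at its own output    : {profit(q_high, c_high):.0f}")   # 900
    print(f"high-cost profit mimicking the low cost: {profit(q_low, c_high):.0f}")   # 800

The sacrifice (900 versus 800 here) can pay off if it deters entry and preserves monopoly profit in later periods; the same mimicry at the duopoly stage is what the paper labels predatory pricing.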


A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference / 2004.11a / pp.355-380 / 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as the test assets, I establish several key results. First, a number of industries, such as semiconductors, electronics, metals, and petroleum, lead the stock market by up to one month; in contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones; many economists recognize that investors are better characterized as only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets that they trade. Indeed, a large literature in psychology documents the extent to which even attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those they have information about. Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information gradually diffuses across the investing public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information that originates in one asset reaches investors in other markets only with a lag; that is, news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract the information from, the asset prices of markets that they do not participate in. These two assumptions taken together lead to cross-asset return predictability. The hypothesis appears very plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets; put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds.
Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets that they do participate in.
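
The lead-lag test behind these results can be sketched as a simple predictive regression: this month's market return on last month's industry return, controlling for the lagged market return. The data below are random placeholders, not the thirty-six Korean industry portfolios used in the paper.

    # Predictive lead-lag regression sketch (placeholder data, not the paper's sample).
    import numpy as np

    rng = np.random.default_rng(0)
    T = 240                                     # months
    industry = rng.normal(0.01, 0.05, T)        # placeholder industry portfolio returns
    market = 0.3 * np.roll(industry, 1) + rng.normal(0.005, 0.04, T)  # market reacts with a lag
    market[0] = rng.normal(0.005, 0.04)

    # y_t = market_t regressed on [1, industry_{t-1}, market_{t-1}]
    y = market[1:]
    X = np.column_stack([np.ones(T - 1), industry[:-1], market[:-1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"coefficient on lagged industry return: {beta[1]:.3f}")
    # A reliably positive coefficient is the signature of gradual information diffusion:
    # the industry return forecasts the market return one month ahead.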


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Beyond the direct stakeholders of a bankrupt company, such as managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid a total collapse in a single moment, as in the 'Lehman Brothers case' of the global financial crisis. The key variables used in corporate default prediction vary over time; comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, using Zmijewski's (1984) and Ohlson's (1980) models, that the importance of the predictive variables changes. However, past studies have used static models, and most of them do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series algorithm that reflects dynamic change. Against the background of the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent as time changes, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008); as a result, we obtain a model that shows a pattern similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in the validation step. Finally, each corporate default prediction model trained over the nine years is evaluated and compared using the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are then compared. Corporate data come with limitations: nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, ultimately, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still at an early stage, the deep learning algorithm is much faster than regression analysis in corporate default prediction modeling and also shows better predictive power. Through the Fourth Industrial Revolution, the current government and other governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study of deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
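
As a rough sketch of the kind of time-series classifier the study evaluates, the snippet below fits an LSTM over a firm's sequence of annual financial ratios and outputs a default probability; the shapes, layer sizes, and random data are placeholders, not the paper's variable bundles or tuning.

    # Minimal LSTM default-classifier sketch (placeholder data and sizes).
    import numpy as np
    import tensorflow as tf

    n_firms, n_years, n_ratios = 1000, 7, 12                 # e.g. 7 years of 12 financial ratios per firm
    X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")   # placeholder ratio sequences
    y = np.random.randint(0, 2, size=n_firms)                 # 1 = defaulted, 0 = survived (placeholder labels)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_years, n_ratios)),
        tf.keras.layers.LSTM(32),                             # summarizes the firm's multi-year trajectory
        tf.keras.layers.Dense(1, activation="sigmoid"),       # default probability
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))

In an actual replication, the three variable bundles would come from the discriminant-analysis, logit, and Lasso selection steps, and the year-based train/validation/test split described above would replace the random split used here.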