• Title/Summary/Keyword: argument model

167 search results

Korean Semantic Role Labeling Based on Suffix Structure Analysis and Machine Learning (접사 구조 분석과 기계 학습에 기반한 한국어 의미 역 결정)

  • Seok, Miran; Kim, Yu-Seop / KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.555-562 / 2016
  • Semantic Role Labeling (SRL) is the task of determining the semantic relations between a predicate and its arguments in a sentence. Korean semantic role labeling has faced difficulty because Korean's language structure differs from English, which makes it hard to apply the approaches developed so far; as a result, previously proposed methods have not shown satisfactory performance compared to English and Chinese. To address these problems, we focus on the analysis of suffix information, such as josa (case suffixes) and eomi (verbal endings). Korean is an agglutinative language, like Japanese, with a well-defined suffix structure within words, and agglutinative languages allow relatively free word order because of this developed suffix structure. Arguments consisting of a single morpheme are then labeled using statistics. In addition, machine learning algorithms such as Support Vector Machines (SVM) and Conditional Random Fields (CRF) are used to model the SRL problem for arguments that are not labeled in the suffix analysis phase. The proposed method is intended to reduce the range of argument instances to which machine learning approaches, which can produce uncertain and inaccurate role labels, must be applied. In experiments with 15,224 arguments, we obtain an F1-score of approximately 83.24%, an improvement of about 4.85 percentage points over the state-of-the-art Korean SRL research.
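
The two-stage idea described in the abstract (rule-based labeling from suffix statistics first, machine learning only for the remainder) can be sketched roughly as follows. The josa-to-role table, the feature dictionaries, and the use of scikit-learn's LinearSVC are illustrative assumptions, not the authors' actual resources or implementation; the paper also uses CRF, while only an SVM fallback is sketched here.

```python
# Minimal sketch of suffix-first SRL with an ML fallback (assumptions mine).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Hypothetical josa -> semantic role statistics (e.g., gathered from a corpus).
JOSA_ROLE_TABLE = {"가": "ARG0", "를": "ARG1", "에게": "ARG2"}

def rule_based_label(argument):
    """Return a role if the argument's josa decides it, else None."""
    return JOSA_ROLE_TABLE.get(argument.get("josa"))

def train_fallback_classifier(arguments, roles):
    """Train an SVM on arguments the suffix rules could not label."""
    vec = DictVectorizer()
    X = vec.fit_transform(arguments)   # lexical/suffix feature dicts
    clf = LinearSVC().fit(X, roles)
    return vec, clf

def label(argument, vec, clf):
    role = rule_based_label(argument)
    if role is not None:
        return role                                        # stage 1: suffix analysis
    return clf.predict(vec.transform([argument]))[0]       # stage 2: ML fallback
```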

The Model of Appraisal Method on Authentic Records (전자기록의 진본 평가 시스템 모형 연구)

  • Kim, Ik-Han / The Korean Journal of Archival Studies / no.14 / pp.91-117 / 2006
  • Electronic records need to be appraised for their authenticity as well as for their value. There have been various discussions about how the value of records should be appraised, but little argument about how the authenticity of electronic records should be appraised. This article therefore models specific authenticity appraisal methods and shows the stages at which each method should or may be applied. At the ingest stage, the essential measures are integrity verification immediately after record creation in the producing organization, quality and integrity verification of transferred records in the receiving organization, and an integrity check between the SIP and the AIP in the organization that receives and preserves the records. At the preservation stage, integrity checks between copies of the same AIP stored separately on different media, validation of whether records have been damaged, and recovery of damaged records are needed. At the various processing stages, suitability evaluation after changes to a record's management control metadata or classification, integrity checks after record migration, and periodic validation and integrity verification of DIPs are required. For these activities, appraisal methods including integrity verification, content consistency checks, suitability evaluation of record metadata, checks for the possibility of unauthorized updates, and physical status validation should be applied throughout the electronic records management process.
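
As a rough illustration of the integrity-verification steps the model calls for (e.g., comparing a record in the SIP with its counterpart in the AIP), a checksum-based fixity check might look like the sketch below. The file paths and the choice of SHA-256 are assumptions for illustration; real OAIS packages carry manifests covering many files.

```python
# Minimal sketch of checksum-based integrity (fixity) verification.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(sip_file: Path, aip_file: Path) -> bool:
    """True if the preserved copy still matches the submitted copy."""
    return sha256_of(sip_file) == sha256_of(aip_file)
```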

Is Backwards Causation Possible? (후향적인 인과성은 가능한가?)

  • Ahn, Gan-Hun / Journal of Korean Philosophical Society / v.105 / pp.269-290 / 2008
  • The purpose of this paper is to explore the possibility of backwards causation. The study considers four views. The first view was suggested by M. Dummett and others, who distinguished the observer's standpoint from the agent's: from the observer's standpoint backwards causation is impossible, whereas from the agent's it is possible. However, in a genuine sense, it is incorrect to argue for the impossibility of backwards causation from the observer's standpoint. The second view was supported by J. H. Schmidt, who analyzed the possibility of backwards causation in terms of macro- and micro-level analyses of causal events. According to micro-level analysis backwards causation is possible, but according to macro-level analysis it is impossible; usually the latter makes the former appear miraculous. Under macro-level analysis, backwards causation at first seems a miraculous phenomenon belonging to the micro level. The third view has to do with physical equations, and the fourth with physical phenomena. John Earman argued for backwards causation on the basis of the transformation of the Lorentz-Dirac equation into a second-order integro-differential equation in the field of electrodynamic acceleration; his argument was criticized for misunderstanding the relationship between the two equations. Phil Dowe, on the other hand, defended a version of Reichenbach's theory of the direction of causation founded on the asymmetry of causal forks. His view differed from Reichenbach's, however, because Dowe defended a backwards-causation model of Bell phenomena in quantum mechanics, whereas Reichenbach stressed the priority of the cause in the causal process. Subjectivism has recently been defended by H. Price under the label of perspectivism. According to Price, causal asymmetry is in a certain sense not in the world but is rather a product of our own asymmetric perspective on the world; he also appeals to causal nets, the symmetry of microphysics, and so on. As noted above, there are many kinds of arguments for backwards causation, but none of them has objectively replaced the mainstream view of the direction of the causal process. The mainstream view has usually been defended on pragmatic grounds: effects do not precede their causes, although causes cannot exist without their effects.

Exploring Preservice Teachers' Science PCK and the Role of Argumentation Structure as a Pedagogical Reasoning Tool (교수적 추론 도구로서 논증구조를 활용한 과학과 예비교사들의 가족유사성 PCK 특성 탐색)

  • Youngsun Kwak / Journal of the Korean Society of Earth Science Education / v.16 no.1 / pp.56-71 / 2023
  • The purpose of this study is to explore the role and effectiveness of argumentation structure, and the developmental characteristics of science PCK, among Earth science preservice teachers who used argumentation structure as a pedagogical reasoning tool. Since teachers demonstrate PCK in a series of pedagogical reasoning processes using argumentation structures, we explored the characteristics of the future-oriented family resemblance-PCK shown by preservice science teachers using argumentation structures. At the end of the semester, we conducted in-depth interviews with 15 Earth science preservice teachers who had experienced lesson design and teaching practice using the argumentation structure. Qualitative analysis, including a semantic network analysis, was conducted on the in-depth interviews to characterize the preservice teachers' family resemblance-PCK. Results show that preservice teachers organized their classes systematically by applying the argumentation structure and structured their lessons by differentiating argumentation elements from facts to conclusions. Regarding the components of the argumentation structure, preservice teachers had difficulty finding warrants, rebuttals, and qualifiers. The area of PCK most affected by the argumentation structure was science teaching practice: preservice teachers emphasized the selection of an instructional model suited to the lesson content, the use of various teaching methods and inquiry activities to present lesson content persuasively, and the development of data literacy and digital competency. The conclusion discusses the potential and usability of argumentation structure as a pedagogical reasoning tool, the possibility of developing the science inquiry and reasoning competencies of secondary school students who experience science classes using argumentation structure, and the need to develop a teacher education protocol that uses argumentation structure as a pedagogical reasoning tool.

In Search of "Excess Competition" (과당경쟁(過當競爭)과 정부규제(政府規制))

  • Nam, Il-chong; Kim, Jong-seok / KDI Journal of Economic Policy / v.13 no.4 / pp.31-57 / 1991
  • Korean firms of all sizes, from virtually every industry, have used and are using the term "excessive competition" to describe the state of their industry and to call for government intervention. Moreover, the Korean government has frequently responded to such calls in various ways favorable to the firms, such as controlling entry, curbing capacity investments, or allowing collusion. Despite such interventions' impact on the overall efficiency of the Korean economy, as well as on the distribution of wealth among diverse groups of economic agents, the term "excessive competition", the basis for the interventions, has so far escaped rigorous scrutiny. The objective of this paper is to clarify the notion of "excessive competition", and of the "over-investment" that usually accompanies it, and to examine the circumstances under which they might occur. We first survey the cases where the terms are most widely used and then examine those cases to determine whether competition is indeed excessive and, if so, what causes it. Our main concern is the case in which firms must make investment decisions involving large sunk costs while facing uncertain demand. To analyze this case, we develop a two-period model of capacity precommitment and the ensuing competition. In the first period, oligopolistic firms make capacity investments that are irreversible; demand is uncertain in period 1 and only its distribution is known, so firms must invest under uncertainty. In the second period, demand is realized, and the firms compete in quantities under the realized demand and their capacity constraints. In this setting, we find that there is no over-investment ex ante and no excessive competition ex post. Measured by the information available in period 1, the expected return on a firm's investment is non-negative, overall industry capacity does not exceed the socially optimal level, and competition in the second period yields an outcome that gives each operating firm a non-negative second-period profit. Thus, neither "excessive competition" nor "over-investment" is possible. This result will generally hold if there is no externality and if the industry is not a natural monopoly. We also extend this result by examining a model in which the government is an active participant in the game with a well-defined preference. Analysis of this model shows that over-investment arises if the government cannot, for socio-political reasons, credibly precommit itself to non-intervention when idle capacity occurs ex post. Firms then invest in capacity exceeding the socially optimal level because they correctly expect that the government will find it optimal to intervene once over-investment and the ensuing financial problems for the firms occur. Such planned over-investment and the ensuing government intervention are generic problems under the current system, and they are expected to recur in many industries in years to come, causing a significant loss of welfare in the long run. As a remedy, we recommend a government policy of non-intervention that creates and utilizes uncertainty. Based upon an argument essentially the same as that of Kreps and Wilson in the context of a chain-store game, we show that maintaining a consistent non-intervention policy will deter planned over-investment by firms in the long run. We believe that the results obtained in this paper have a direct bearing on public policies relating to many industries, including the petrochemical industry that is currently at the center of heated debate.
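
To make the ex-ante logic concrete, the following toy sketch (with made-up linear demand, two demand states, and a capacity-cost parameter, not the authors' specification) computes a symmetric equilibrium capacity by best-response iteration over a grid and shows that the expected return on the equilibrium capacity is non-negative, since choosing zero capacity is always an option.

```python
# Toy two-period capacity-precommitment model under demand uncertainty.
import numpy as np

STATES = [(60.0, 0.5), (100.0, 0.5)]   # (demand intercept a in P = a - Q, probability)
CAP_COST = 10.0                        # per-unit cost of capacity (sunk in period 1)
GRID = np.linspace(0, 40, 401)         # candidate capacity levels

def period2_profit(k_own, k_rival, a):
    """Capacity-constrained Cournot profit with zero marginal production cost."""
    q_own, q_rival = k_own, k_rival
    for _ in range(200):               # iterate best responses q_i = min(k_i, (a - q_j)/2)
        q_own_new = min(k_own, max(0.0, (a - q_rival) / 2))
        q_rival_new = min(k_rival, max(0.0, (a - q_own) / 2))
        q_own, q_rival = q_own_new, q_rival_new
    price = max(0.0, a - q_own - q_rival)
    return price * q_own

def expected_profit(k_own, k_rival):
    ev = sum(p * period2_profit(k_own, k_rival, a) for a, p in STATES)
    return ev - CAP_COST * k_own

# Symmetric equilibrium capacity via best-response iteration on the grid.
k_rival = 20.0
for _ in range(50):
    k_own = GRID[np.argmax([expected_profit(k, k_rival) for k in GRID])]
    k_rival = k_own

print(f"equilibrium capacity per firm ≈ {k_own:.1f}, "
      f"expected profit ≈ {expected_profit(k_own, k_rival):.1f} (non-negative ex ante)")
```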

Dynamics of Technology Adoption in Markets Exhibiting Network Effects

  • Hur, Won-Chang / Asia Pacific Journal of Information Systems / v.20 no.1 / pp.127-140 / 2010
  • The benefit that a consumer derives from the use of a good often depends on the number of other consumers purchasing the same good or other compatible items. This property, known as network externality, is significant in many IT-related industries. Over the past few decades, network externalities have been recognized in the context of physical networks such as the telephone and railroad industries. Today, as many products are provided as systems consisting of compatible components, appreciating network externality is becoming increasingly important. Network externalities have been extensively studied by economists seeking to explain new phenomena resulting from rapid advancements in ICT (Information and Communication Technology). As a result of these efforts, a new body of theories for the 'New Economy' has been proposed. The bottom-line theoretical argument of such theories is that technologies subject to network effects exhibit multiple equilibria and will finally lock into a monopoly, with one standard cornering the entire market. They emphasize that such "tippiness" is a typical characteristic of networked markets: multiple incompatible technologies rarely coexist, and the switch to a single, leading standard occurs suddenly. Moreover, it is argued that this standardization process is path dependent and the ultimate outcome is unpredictable. With incomplete information about other actors' preferences, there can be excess inertia, as consumers only moderately favor the change and hence are insufficiently motivated to start the bandwagon rolling, though they would get on it once it did start to roll. This startup problem can prevent the adoption of any standard at all, even one preferred by everyone. Conversely, excess momentum is another possible outcome, for example if a sponsoring firm uses low prices during the early periods of diffusion. The aim of this paper is to analyze the dynamics of the adoption process in markets exhibiting network effects by focusing on two factors: switching and agent heterogeneity. Switching is an important factor in analyzing the adoption process; an agent's switching invokes switching by other adopters, which brings about a positive feedback process that can significantly complicate the adoption process. Agent heterogeneity also plays an important role in shaping the early development of the adoption process, which in turn has a significant impact on its later development. The effects of these two factors are analyzed by developing an agent-based simulation model. ABM is a computer-based simulation methodology that can offer many advantages over traditional analytical approaches. The model is designed such that agents have diverse preferences regarding technology and are allowed to switch their previous choice. The simulation results show that adoption processes in a market exhibiting network effects are significantly affected by the distribution of agents and the occurrence of switching. In particular, both weak heterogeneity and strong network effects cause agents to start switching early, which expedites the emergence of 'lock-in.' When network effects are strong, agents are easily affected by changes in early market shares; this causes agents to switch earlier and in turn speeds up the market's tipping. The same effect is found in the case of highly homogeneous agents. When agents are highly homogeneous, the market starts to tip toward one technology rapidly, and its choice is not always consistent with the population's initial inclination. Increased volatility and faster lock-in increase the possibility that the market will reach an unexpected outcome. The primary contribution of this study is the elucidation of the role of the parameters characterizing the market in the development of the lock-in process, and the identification of conditions under which such unexpected outcomes occur.
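
A minimal agent-based sketch of the mechanism described above might look as follows. The utility specification, parameter values, and synchronous switching rule are my assumptions, not the paper's exact model, but they reproduce the qualitative contrast between rapid tipping under strong network effects with homogeneous agents and coexistence under weak effects with heterogeneous agents.

```python
# Toy ABM: agents choose between technologies A and B and may switch each step.
import numpy as np

def simulate(n_agents=1000, steps=50, network_strength=2.0,
             heterogeneity=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Positive values favor A, negative favor B; the spread controls heterogeneity.
    preference = rng.normal(0.0, heterogeneity, n_agents)
    choice = rng.integers(0, 2, n_agents)          # 1 = A, 0 = B (random start)
    shares = []
    for _ in range(steps):
        share_a = choice.mean()
        utility_a = preference + network_strength * share_a
        utility_b = -preference + network_strength * (1.0 - share_a)
        choice = (utility_a > utility_b).astype(int)   # agents switch freely
        shares.append(choice.mean())
    return shares

# Strong network effects + homogeneous agents -> rapid tipping ("lock-in");
# weak network effects + heterogeneous agents -> the technologies coexist.
print(simulate(network_strength=3.0, heterogeneity=0.2)[-1])
print(simulate(network_strength=0.5, heterogeneity=2.0)[-1])
```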

Dynamic Limit and Predatory Pricing Under Uncertainty (불확실성하(不確實性下)의 동태적(動態的) 진입제한(進入制限) 및 약탈가격(掠奪價格) 책정(策定))

  • Yoo, Yoon-ha / KDI Journal of Economic Policy / v.13 no.1 / pp.151-166 / 1991
  • In this paper, a simple game-theoretic entry deterrence model is developed that integrates both limit pricing and predatory pricing. While there have been extensive studies dealing with predation and limit pricing separately, no study so far has analyzed these closely related practices in a unified framework. Treating each practice as if it were an independent phenomenon is, of course, an analytical necessity to abstract from complex realities; however, welfare analysis based on such a model may give misleading policy implications. By analyzing limit and predatory pricing within a single framework, this paper attempts to shed some light on the effects of interactions between these two frequently cited tactics of entry deterrence. Another distinctive feature of the paper is that limit and predatory pricing emerge, in equilibrium, as rational, profit-maximizing strategies in the model. Until recently, the only conclusion from formal analyses of predatory pricing was that predation is unlikely to take place if every economic agent is assumed to be rational. This conclusion rests upon the argument that predation is costly; that is, it inflicts more losses upon the predator than upon the rival producer and is therefore unlikely to succeed in driving out the rival, who understands that the price cutting, if it ever takes place, must be temporary. Recently several attempts have been made to overcome this modelling difficulty, by Kreps and Wilson, Milgrom and Roberts, Benoit, Fudenberg and Tirole, and Roberts. With the exception of Roberts, however, these studies, though successful in preserving the rationality of players, share one serious weakness in that they resort to ad hoc, external constraints in order to generate profit-maximizing predation. The present paper uses a highly stylized model of Cournot duopoly and derives the equilibrium predatory strategy without invoking external constraints, except for the assumption of asymmetrically distributed information. The underlying intuition of the model can be summarized as follows. Imagine a firm that is considering entry into a monopolist's market but is uncertain about the incumbent firm's cost structure. If the monopolist has low costs, the rival would rather not enter, because it would be difficult to compete with an efficient, low-cost firm; if the monopolist has high costs, the rival will definitely enter, because it can make positive profits. In this situation, if the incumbent unwittingly produces its monopoly output, the entrant can infer the nature of the monopolist's cost by observing the monopolist's price. Knowing this, the high-cost monopolist increases its output up to what would have been produced by a low-cost firm, in an effort to conceal its cost condition. This constitutes limit pricing. The same logic applies when there is a rival competitor in the market: producing the high-cost duopoly output is self-revealing and thus to be avoided, so the firm chooses to produce the low-cost duopoly output, consequently inflicting losses on the entrant or rival producer and thus acting in a predatory manner. The policy implications of the analysis are rather mixed. Contrary to the widely accepted hypothesis that predation is, at best, a negative-sum game and thus a strategy unlikely to be played from the outset, this paper concludes that predation can be a real occurrence by showing that it can arise as an effective profit-maximizing strategy. This conclusion alone may imply that the government can play a role in increasing consumer welfare, say, by banning predation or limit pricing. However, the problem is that it is rather difficult to ascribe any welfare losses to these kinds of entry-deterring practices. This difficulty arises from the fact that if the same practices had been adopted by a low-cost firm, they could not be called entry-deterring. Moreover, the high-cost incumbent in the model is doing exactly what the low-cost firm would have done to keep the market to itself. All in all, this paper suggests that government injunctions against limit and predatory pricing should be applied with great care, evaluating each case on its own merits. Hasty generalization may work to the detriment, rather than the enhancement, of consumer welfare.
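
The intuition can be illustrated with a toy numerical example (parameter values are mine, not the paper's): under linear demand, a high-cost incumbent that mimics the quantity a low-cost firm would choose leaves a prospective or actual entrant with losses once a fixed entry cost is counted, whereas an honestly chosen high-cost quantity would invite profitable entry.

```python
# Toy limit/predatory pricing arithmetic with linear demand P = a - Q.
a = 100.0                      # demand intercept
c_low, c_high, c_entrant = 20.0, 40.0, 40.0
entry_cost = 200.0             # sunk cost the entrant must pay

def monopoly_output(c):
    return (a - c) / 2

def cournot_output(c_own, c_other):
    return (a - 2 * c_own + c_other) / 3

def entrant_profit(q_incumbent):
    """Entrant best-responds to the incumbent's quantity, then pays entry cost."""
    q_e = max(0.0, (a - c_entrant - q_incumbent) / 2)
    price = a - q_incumbent - q_e
    return (price - c_entrant) * q_e - entry_cost

# Limit pricing: the high-cost monopolist produces the low-cost monopoly output
# (40 instead of 30) so that its price does not reveal its cost.
print(monopoly_output(c_high), monopoly_output(c_low))

# Predation: if the incumbent produces the *low-cost* duopoly quantity, the
# entrant ends up with losses; against an honest high-cost output it would profit.
print(entrant_profit(cournot_output(c_low, c_entrant)))    # ≈ -22  -> stay out / exit
print(entrant_profit(cournot_output(c_high, c_entrant)))   # = +200 -> enter
```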

Bibliographic Study on 『ChungMinKongKeicho (忠愍公啓草)』 by YI Sun-sin (이순신의 『충민공계초(忠愍公啓草)』에 대한 서지적 고찰)

  • Ro, Seung-Suk / Korean Journal of Heritage: History & Science / v.49 no.2 / pp.4-19 / 2016
  • The Jangkei(狀啓), reports made to the Royal Court by Yi Sun-sin during the Japanese invasions of Korea, have been handed down under the names Jangcho(狀草), Keicho(啓草), Keibon(啓本) and others, depending on how they were copied out by third persons at the time and later. In particular, "YimjinJangcho(壬辰狀草)", which Yi drew up during his service as commander of the naval forces at Jeolla Jwasooyeong, is the best-known Jangkei. "ChungMinKongKeicho", recently relocated after having been lost, is a cultural property of national-treasure calibre as valuable as "YimjinJangcho" and should be treated by later generations as a model for Yi Sun-sin's other Jangkeis. Since it has not yet been confirmed whether it is an entirely new book related to Yi Sun-sin or a supplement to the lost Jangkei, this study sought to settle the question through a bibliographic examination. "Chungmin(忠愍)" was a title used after the death of Yi Sun-sin, and "ChungMinKongKeicho" was completed when the Jangkei was copied in 1662. Twelve books not found in YimjinJangcho are included in it, and these are also present in the Jangkei supplement that had so far been considered lost. What should especially be noted is that the forms and contents of the (11) photographs that the Japanese took of "ChungMinKongKeicho" in 1928 turned out to be completely identical to those of the original copy. The facts that the Korean History Compilation Committee added the 12 books to the Jangkei while referring to the book as "one Keicho(啓草) partially copied(抄寫) separately", and that Cho Sung-do categorized the 12 books as a supplement, are solid grounds for identifying the Jangkei supplement with "ChungMinKongKeicho". As for "ChungMooKongKeicho", since it consists of 62 books in total, it is not reasonable to regard it as the Jangkei supplement, which has the extra 12 books to itself. "ChungMooKongKeibon" in "ChungMooKongYusa" was written with a total of 16 books; only the Yidumun is clearly present in the body, and the three books in the latter part are the same as in the original copy of "ChungMooKongKeicho". "YimjinJangcho" by the Korean History Compilation Committee had so far been the only book in which the Yidumun was observed, but it is now assumed that "ChungMooKongKeibon" predates it. The counterargument to the view that "ChungMinKongKeicho" is the supplement to the Jangkei is based on Lee Eun-sang's comment, "one page of a log in the Jangkei copy supplement." Seol Ui-sik first introduced a photograph of the rough draft of "MoosulIlki", in drawing form, in "Nanjung Ilkicho by Yi Sun-sin" in 1953. Lee Eun-sang likewise added two pages of the handwritten Yilkicho in the Jangkeichobon supplement to "MoosulIlki", and the phrases "one page of a log written during the last 10 days, after the Jangkei copy supplement" and "supplement" were used a second time. Those views originated from the comment "one photograph of the rough draft of MoosulIlki", which Seol Ui-sik introduced without knowing its exact source. Lee Eun-sang wrote "one page of a log in the Jangkei copy supplement" because he mistook "ChungMooKongYusa" for a book related to the Jangkei. Since this is a mistaken claim that differs from the actual state of the original copy, it should, if corrected, read "one page of a log in ChungMooKongYusa." In the end, the counterargument rests on a mistake, for there has never been a Jangkei supplement containing a page of a log. All Jangkeis other than "YimjinJangcho" can be called Jangkei supplements, but the commonly so-called Jangkei supplement is distinguished from the others by the presence of the extra 12 books. For that reason, the argument that "ChungMinKongKeicho", with its 12 added books, is that Jangkei supplement should be considered the more reasonable one.

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul; Bae, Uk-Ho / Asia Pacific Journal of Information Systems / v.22 no.2 / pp.21-38 / 2012
  • Until recently, the successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify the disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages", that is, software programs developed by independent software vendors for sale to the organizations that use them, they are designed to meet the general needs of numerous organizations rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of the ERP system may not completely match a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of successful ERP implementation. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. To examine this contention, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the greater the mismatch, the more difficult successful ERP implementation is, and thus more attention needs to be drawn to mismatch as a major source of failure in ERP implementation. This study also found that the effects of customization, as a coping strategy for mismatch, are significant; in other words, utilizing the appropriate customization method can lead to successful implementation of ERP systems. This is somewhat interesting because it runs counter to the argument of some of the literature and of ERP vendors that minimal customization (or even none at all) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we do not attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems; neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context. These findings provide meaningful insights, since they could serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in it. They suggest that ERP developers may not want to include organizational (or business process) changes in the implementation process, since doing so could lead to failed implementation; indeed, this suggestion turned out to be true when we found that the application of process customization led to a higher possibility of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly, this finding again diverges from the views of vendors in the ERP industry, whose recommendation is to apply as many best practices as possible, thereby minimizing customization, and to rely on bolt-on development methods; they particularly advise against changing the source code and instead recommend, when necessary, programming additional software using the vendor's computer language. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects; moreover, programming additional software was found to be ineffective, suggesting a considerable difference in viewpoint and strategy toward ERP customization between ERP developers and vendors. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

Revisiting the trilemma of modern welfare states - Application of the fuzzy-set ideal type analysis - (복지국가 트릴레마 양상의 변화 - 퍼지셋 이상형 분석의 적용 -)

  • Shin, Dong-Myeon; Choi, Young Jun / 한국사회정책 / v.19 no.3 / pp.119-147 / 2012
  • This paper aims to explore whether the trilemma of welfare states remains a valid argument about the recent changes in welfare states. Based on a fuzzy-set ideal type analysis of data from seventeen OECD countries, it examines whether welfare states achieved three core policy objectives (income equality, employment growth, and fiscal discipline) in the service economy over the period 1981-2010. The evidence presented in this paper does not support the trilemma of the service economy, in which only two goals can be pursued successfully at one time, at the cost of the remaining goal. The trilemma has held only for the countries of the liberal welfare regime, where employment growth and fiscal discipline have been achieved at the cost of income equality. Conservative welfare-state regimes, however, have experienced deterioration in both income equality and fiscal restraint since the mid-1980s, and they appear to have diverged into various models. In the countries of the social democratic welfare regime, the goals of equality and employment have been achieved simultaneously, together with fiscal discipline, since the early 2000s; they are classified as the perfect model in this research. Southern European welfare states, including Greece and Italy, classified as 'the crisis model', have performed poorly in all three respects. On the evidence presented in this paper, the trilemma of welfare states in the service economy is not effective in explaining either the policy goals of welfare states or the outcomes of redistributive politics in the service economy.
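
For readers unfamiliar with the method, a fuzzy-set ideal type analysis of the kind applied here can be sketched as below; the country names and membership scores are invented for illustration, not the paper's calibrated data. Each country's membership in each of the 2^3 ideal types formed from the three policy goals is the minimum over the included sets, using 1 - m for excluded sets.

```python
# Minimal sketch of fuzzy-set ideal type analysis with three sets:
# E = income equality, J = employment growth, F = fiscal discipline.
from itertools import product

countries = {                      # hypothetical calibrated memberships (E, J, F)
    "Sweden": {"E": 0.9, "J": 0.8, "F": 0.7},
    "US":     {"E": 0.2, "J": 0.9, "F": 0.6},
    "Greece": {"E": 0.3, "J": 0.2, "F": 0.1},
}
SETS = ("E", "J", "F")

def ideal_type_memberships(m):
    """Membership of one country in all eight ideal types, e.g. 'E*J*~F'."""
    result = {}
    for signs in product((True, False), repeat=len(SETS)):
        label = "*".join(s if keep else f"~{s}" for s, keep in zip(SETS, signs))
        result[label] = min(m[s] if keep else 1 - m[s]
                            for s, keep in zip(SETS, signs))
    return result

for name, m in countries.items():
    types = ideal_type_memberships(m)
    best = max(types, key=types.get)
    print(f"{name}: closest ideal type {best} (membership {types[best]:.2f})")
```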