
A Study on Factors Influencing the Progress of Housing Construction Project by Regional Housing Association (지역주택조합의 주택건설사업 추진에 영향을 미치는 요인에 관한 연구)

  • Lee, Sangchul;Lee, Sangyoub
    • Korean Journal of Construction Engineering and Management
    • /
    • v.22 no.2
    • /
    • pp.72-79
    • /
    • 2021
  • This study explores the factors influencing the progress of housing construction projects by regional housing associations. To derive importance weights for 11 factors classified into 4 categories, AHP and Fuzzy methodologies were applied to survey responses from field experts and project participants. The findings indicate that the four categories rank in the order of land, business, legal entity, and copartner, and that the factors rank in the order of professionalism, location, transparency, purchasing cost, administrative supervision, landlord participation, liability for damages, and so on. Notably, the contractor, financial institution, developer, legal expert, and association consider professionalism, location, purchasing cost, and transparency the most important factors, respectively. This study aims to provide project participants with implications regarding the factors influencing the progress of housing construction projects.
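The AHP weighting step described in the abstract can be sketched briefly. The pairwise comparison matrix below is purely illustrative (three hypothetical factors), not taken from the study's survey data; the priority vector is approximated by the standard column-normalization method, with Saaty's consistency ratio as a sanity check.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three factors
# (e.g., professionalism, location, transparency); values are illustrative.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

# Approximate AHP priority vector: normalize each column, then average rows.
col_sums = A.sum(axis=0)
weights = (A / col_sums).mean(axis=1)

# Consistency check (Saaty): CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
CI = (lambda_max - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
CR = CI / RI

print(weights)  # priority weights, summing to 1
print(CR)       # should be below 0.1 for an acceptably consistent matrix
```

A CR below 0.1 indicates the expert's pairwise judgments are consistent enough for the weights to be used.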

A Study on Collective Self-esteem of Public Librarian Servant and Supporting Factors in their Work Environment (공무원사서의 집단자존감과 직무환경 지원 요인에 관한 연구)

  • Lee, Ja-Young;Hong, Hyun-Jin
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.32 no.1
    • /
    • pp.295-314
    • /
    • 2021
  • The purpose of this study is to identify the factors that affect public librarian servants' Collective Self-Esteem (CSE) within Korea's relation-oriented collectivist culture. A research model was constructed by hypothesizing relationships among collective self-esteem, work-environment factors, and public librarian servants' perception of their superior institution. Data were collected through a survey of public librarian servants working in public libraries nationwide. From April 3, 2020 to May 14, 2020, the survey covered public librarian servants in public libraries nationwide under the civil service system, offices of education, and the Ministry of Culture, Sports and Tourism, yielding responses from 301 public librarian servants at 559 institutions. The analysis shows that public librarian servants' perception of their superior institution influences the sub-factors of their collective self-esteem. This study is meaningful in that public librarian servants, who had been treated as individual entities in Library and Information Science research, are examined multi-dimensionally as members of various groups within Korea's relation-oriented collectivist culture. The research suggests that, as one of the subordinate groups within the vast civil service system, public librarian servants should find a way to improve their relatively low perceived social value.

A Study of Public Library Untact Service Operation Way Based on a User Perception Survey (이용자 인식조사를 기반으로 한 공공도서관 비대면 서비스 운영 방향에 관한 연구)

  • Yun, Dayoung;Noh, Younghee
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.32 no.4
    • /
    • pp.161-188
    • /
    • 2021
  • Libraries have made numerous modifications to their multifunctional spaces in response to the WHO's pandemic declaration. Most libraries throughout the globe, including national libraries, reported closures, and most established face-to-face services were transformed into untact (non-face-to-face) services. However, the breadth of these services varies by local autonomous entity and library, making it difficult for librarians to act with conviction and inconveniencing users. Accordingly, this study used a user perception survey to determine the extent of non-face-to-face services in libraries and to suggest a direction for operating such services under pandemic conditions. To that end, beginning February 12, 2020, an online survey of users of 37 public libraries was conducted for about three weeks, and the 117 responses were examined with the SPSS statistics tool through frequency analysis, regression analysis, correlation analysis, and reliability analysis. Based on the survey results, it was recommended that library services be re-established in the direction of non-face-to-face operation, that non-face-to-face library services be expanded, that user education be conducted, that non-face-to-face services be promoted, and that user opinions be collected.
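The reliability analysis mentioned in the abstract is typically Cronbach's alpha for survey scales. A minimal sketch of that computation, using made-up 5-point Likert responses rather than the study's data, might look like:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of individual item variances vs. variance of the total score.
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses: six respondents, four scale items.
responses = [
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [2, 3, 2, 2],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
]
alpha = cronbach_alpha(responses)
print(alpha)
```

An alpha above roughly 0.7 is conventionally taken to indicate acceptable internal consistency of the scale.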

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores for various resources unaffected by link directions. The weight of a property representing the mutual interaction between classes is assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights, which are then summed up to determine a weighted average. We can check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should have more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and that can easily be customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and to show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms.
While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
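For contrast with the mutual-interaction approach, the HITS baseline the abstract compares against can be sketched in a few lines of power iteration. The user-to-resource link matrix below is hypothetical (three users tagging three resources), not data from the paper.

```python
import numpy as np

# Hypothetical "user tagged resource" links: rows are users, columns resources.
A = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

# HITS power iteration: authorities are scored by the hubs pointing at them,
# hubs by the authorities they point at, mutually reinforcing each other.
hub = np.ones(A.shape[0])
auth = np.ones(A.shape[1])
for _ in range(100):
    auth = A.T @ hub
    auth /= np.linalg.norm(auth)
    hub = A @ auth
    hub /= np.linalg.norm(hub)

print(auth)  # resource authority scores
print(hub)   # user hub scores
```

In this toy graph, resource 0 (tagged by the two strongest hubs) ends up with the top authority score, which illustrates the "voting" notion the paper argues is too coarse for heterogeneous folksonomy entities.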

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than securing new customers, because retaining existing customers is far more economical: the acquisition cost of a new customer is known to be five to six times the retention cost of an existing one. Companies that effectively prevent customer churn and improve retention rates are known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Predicting customer churn, which had been conducted as a sub-research area of CRM, has recently become more important as a big-data-based performance marketing theme owing to the development of business machine learning technology. Until now, research on customer churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and games, which are highly competitive and urgently need to manage churn. These studies focused on improving the performance of the churn prediction model itself, for example by simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in terms of practical utilization because most studies treated the entire customer base as one group when developing a predictive model. As such, the main purpose of existing related research was to improve the performance of the predictive model itself, and there was relatively little research on improving the overall customer churn prediction process.
In fact, customers in a business exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their resulting churn rates differ, so it is unreasonable to treat the entire customer base as a single group. Therefore, to carry out effective customer churn prediction in heterogeneous industries, it is desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each customer group. Although this process can produce better predictions than a single model for the entire customer population, there is still room for improvement in that clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect a company's strategic intent, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) based on two-dimensional customer loyalty, assuming that successful customer churn management is better achieved through improvements in the overall process than through the performance of the model itself. CCP/2DL is a series of churn prediction steps that segment customers along two loyalty dimensions, quantitative and qualitative, conduct secondary grouping of the customer segments according to churn patterns, and then independently apply heterogeneous churn prediction models to each churn pattern group. Performance comparisons were performed against the most commonly applied general churn prediction process and a clustering-based churn prediction process to assess the relative excellence of the proposed process.
The general churn prediction process used in this study refers to predicting churn for the customer base as a single group with a machine learning model, using the most common churn prediction method, while the clustering-based churn prediction process first segments customers with clustering techniques and then implements a churn prediction model for each group. In an experiment conducted in cooperation with a global NGO, the proposed CCP/2DL showed better performance than the other methodologies for predicting churn. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other related performance marketing activities.
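The two-dimensional segmentation idea can be illustrated with a toy sketch. The loyalty scores and churn labels below are invented, and a churn-rate baseline stands in for each segment's model; a real CCP/2DL pipeline would train a full classifier per segment.

```python
from collections import defaultdict

# Hypothetical records: (quantitative_loyalty, qualitative_loyalty, churned).
customers = [
    (0.9, 0.8, 0), (0.8, 0.9, 0), (0.2, 0.3, 1),
    (0.1, 0.2, 1), (0.9, 0.2, 1), (0.2, 0.9, 0),
]

def segment(quant, qual):
    """2x2 loyalty grid: split each loyalty dimension at 0.5."""
    return ("high" if quant >= 0.5 else "low",
            "high" if qual >= 0.5 else "low")

# Group customers by segment, then fit a separate model per segment
# (here, the simplest possible model: the segment's observed churn rate).
groups = defaultdict(list)
for quant, qual, churned in customers:
    groups[segment(quant, qual)].append(churned)

churn_rate = {seg: sum(labels) / len(labels) for seg, labels in groups.items()}
print(churn_rate)
```

Even in this toy, the segments behave differently (the high/high segment never churns, the low/low segment always does), which is the heterogeneity argument for fitting per-segment models instead of one global model.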

A Study on the Strategy of IoT Industry Development in the 4th Industrial Revolution: Focusing on the direction of business model innovation (4차 산업혁명 시대의 사물인터넷 산업 발전전략에 관한 연구: 기업측면의 비즈니스 모델혁신 방향을 중심으로)

  • Joeng, Min Eui;Yu, Song-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.57-75
    • /
    • 2019
  • In this paper, we conducted a study focusing on the direction of business model innovation in the Internet of Things industry, the most actively industrialized among the core technologies of the 4th Industrial Revolution. Policy, economic, social, and technical issues were derived using PEST analysis of global trends. The paper also presents the outlooks for the Internet of Things industry published by ICT-related global research institutes such as Gartner and International Data Corporation. These institutes predicted that competition in network technologies will be an issue for the Industrial Internet of Things (IIoT) and the Internet of Things (IoT), based on infrastructure and platforms. The PEST analysis showed that developed countries are pushing policies to respond to the fourth industrial revolution through government-led cooperation with the private sector (businesses and research institutes), and that South Korea was likewise expanding related R&D budgets and establishing related policies. On the economic side, the growth rates of the related industries (based on aggregate market value) and the performance of individual companies were reviewed. The growth of industries related to the fourth industrial revolution in advanced countries was found to be faster than that of other industries, while in Korea the growth of the "technical hardware and equipment" and "communication service" sectors was relatively low among the industries related to the fourth industrial revolution. On the social side, enormous ripple effects are expected across society, largely due to changes in technology and industrial structure, the employment structure, and job volume. On the technical side, changes were taking place in each industry, represented by the health and medical and manufacturing sectors, which were rapidly changing as they merged with the technologies of the Fourth Industrial Revolution.
In this paper, various management methodologies for innovating an existing business model were reviewed in order to cope with the rapidly changing industrial environment brought about by the fourth industrial revolution. Four criteria were established for selecting a management model suited to the new business environment: applicability, agility, diversity, and connectivity. An AHP analysis of the expert survey results showed that the Business Model Canvas is best suited as a business model innovation methodology, with very high importance scores: 42.5 percent for applicability, 48.1 percent for agility, 47.6 percent for diversity, and 42.9 percent for connectivity. It was therefore selected as a model that can be applied in diverse ways according to the industrial ecology and paradigm shifts. The Business Model Canvas is a relatively recent management strategy tool that identifies the value of a business model through a nine-block approach, covering the four key areas of a business: customers, offer, infrastructure, and financial viability. The paper presents directions for expanding and applying the nine blocks from the perspective of an IoT (ICT) company, and discusses which Business Model Canvas variants can be applied in the ICT convergence industry. Based on the nine blocks, if they are adapted to suit the characteristics of the target company, various applications are possible, such as integrating or removing blocks (reducing the canvas to five or seven blocks) and segmenting blocks to fit company characteristics. Future research needs to develop customized business innovation methodologies for Internet of Things companies, or those performing Internet-based services.
In addition, in this study the Business Model Canvas was derived from expert opinion as a useful tool for innovation. To expand and demonstrate the research, further study is needed on presenting detailed implementation strategies, such as various model application cases and application models for actual companies.

Archival Appraisal of Public Records Regarding Urban Planning in Japanese Colonial Period (조선총독부 공문서의 기록학적 평가 -조선총독부 도시계획 관련 공문서군을 중심으로-)

  • Lee, Seung Il
    • The Korean Journal of Archival Studies
    • /
    • no.12
    • /
    • pp.179-235
    • /
    • 2005
  • This article attempts to evaluate the official documents created and issued by the Joseon Governor General office during the Japanese occupation period from a new perspective, based on the macro-appraisal approach developed by Canadian scholars and practitioners. The Canadian authorities have recently shown a tendency to evaluate the meaning and importance of a particular document by considering the historical situation and background conditions that gave birth to it as a more important factor than the quality and condition of the document itself. Such an approach requires archivists to decide whether to preserve a document based on the meaning, functions, and status of the entity that produced it, or on the meaning of the documentation practice itself, rather than on the actual document. With regard to evaluating the official documents of the Joseon Governor General office concerning the city plans devised by the office, this author established a total of four primary tasks crucial to determining whether a particular theme, event, or ideology should be selected and whether the documents involving it should be preserved as important sources on the Korean history of the Japanese occupation period. The four tasks are as follows. First, archivists should study current and past trends in historical research. Archivists, who are usually not in a position to have comprehensive access to historical details, must consult historians' studies, and the trends mirrored in them, when selecting important historical events and themes.
Second, archivists should determine the level of importance of the officials who worked inside the Joseon Governor General office, as they were the entities that produced the documents. It is natural to assume that the importance of a particular document was determined by the importance (in terms of official functions) of the official who authorized the document and ordered its release. Third, archivists should be made well aware of the inner structure and official functions of the Joseon Governor General office, so that they can produce more appropriate analyses. Fourth, in order to collect historically important documents that involved the Koreans (the Joseon people), archivists should analyze not only the functions of the Joseon Governor General office in general but also the areas of the Office's business in which Japanese officials and Koreans would have interacted with each other. Analyzing documents based only on their apparent levels of importance might lead archivists to miss documents that reflected the Koreans' situation or were related to the general interest of the Korean people. This kind of evaluation should provide the data required to appraise how well the Joseon Governor General office's city-planning function was documented back then, and how well those records are preserved today, through a comparative study of the office's own evaluations of its documentation and the current status of the documents in the custody of the National Archives. The task would also propose a specialized strategy for collecting data and documents, which is direly needed for establishing a well-designed comprehensive archive.
Finally, we should establish a plan regarding documents that were produced by the Joseon Governor General office but no longer survive, and devise a task model for the primary collecting work that would take place in the future.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the problem of data imbalance caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have recently been active studies predicting corporate default risk using machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine-learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine-learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine-learning-based models.
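The out-of-fold stacking idea described above can be sketched minimally. Synthetic data and least-squares sub-models stand in for the study's financial data and Random Forest/MLP/CNN sub-models; only the structure (K-fold out-of-fold forecasts feeding a meta-model) follows the abstract, which mentions dividing the training data into seven pieces.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # two synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic default label

def fit_sub_model(train_X, train_y, col):
    """Toy sub-model: least-squares scorer on a single feature column."""
    w = np.linalg.lstsq(train_X[:, [col]], train_y, rcond=None)[0]
    return lambda Z: (Z[:, [col]] @ w).ravel()

# Out-of-fold forecasts: each sample is scored by sub-models that never
# saw it during training, so the meta-model learns from honest forecasts.
K = 7
folds = np.array_split(np.arange(len(X)), K)
meta_X = np.zeros((len(X), 2))
for k in range(K):
    test_idx = folds[k]
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    for j in range(2):
        model = fit_sub_model(X[train_idx], y[train_idx], j)
        meta_X[test_idx, j] = model(X[test_idx])

# Meta-model: least-squares blend (with intercept) of the sub-model forecasts.
design = np.column_stack([np.ones(len(X)), meta_X])
w_meta = np.linalg.lstsq(design, y, rcond=None)[0]
blend = design @ w_meta
accuracy = ((blend > 0.5) == y).mean()
print(accuracy)
```

In a real pipeline the sub-models would be the trained Random Forest, MLP, and CNN, and the meta-model would typically be a regularized classifier rather than ordinary least squares.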

A Study on the Relationship of Learning, Innovation Capability and Innovation Outcome (학습, 혁신역량과 혁신성과 간의 관계에 관한 연구)

  • Kim, Kui-Won
    • Journal of Korea Technology Innovation Society
    • /
    • v.17 no.2
    • /
    • pp.380-420
    • /
    • 2014
  • Employees increasingly need to acquire sufficient expert or innovation capability to prepare for ever-growing uncertainties in their operating domains. Nevertheless, there has not been enough research on how the operational inputs to employees' innovation outcomes, innovation activities such as the acquisition, exercise, and promotion of innovation capability, and the resulting innovation outcomes interact with each other. This is believed to be because most current research on innovation focuses on the country, industry, and corporate levels rather than on an individual corporation's innovation inputs, innovation outcomes, and innovation activities themselves. This study therefore departs from the currently prevalent frames and views on innovation and focuses on the strategic policies required to enhance an organization's innovation capabilities, by quantitatively analyzing employees' innovation outcomes and the innovation activities most relevant to them. The research model offers both a linear and a structural model of the trio of learning, innovation capability, and innovation outcome, and the four relevant hypotheses are quantitatively tested and analyzed as follows: Hypothesis 1: different levels of innovation capability produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 2: different amounts of learning time produce different innovation capabilities (rejected, p-value = 0.199, 0.220 > 0.05). Hypothesis 3: different amounts of learning time produce different innovation outcomes (accepted, p-value = 0.000 < 0.05). Hypothesis 4: innovation capability acts as a significant mediating parameter in the relationship between learning time and innovation outcome (structural modeling test).
The structural model, after t-tests on Hypotheses 1 through 4, shows that irregular on-the-job training and e-learning directly affect the learning-time factor, while job experience level, employment period, and capability-level measurement directly affect the innovation-capability factor. This is further supported by the finding that the patent factor directly affects the innovation-capability factor rather than the learning-time factor. Based on the four hypotheses, this study proposes the following measures to maximize an organization's innovation outcome: first, frequent irregular on-the-job training based on an e-learning system; second, efficient innovation management of employment period, job skill levels, etc. through active sponsorship and energization of communities of practice (CoP) as a form of irregular learning; and third, an innovation outcome function $Y_i = f(e, i, s, t, w) + \varepsilon$ grounded in a sound system of capability-level measurement. This study regards the innovation outcome function as its most appropriate and important reference model.
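The innovation outcome function can be sketched as a simple linear instantiation. The variable interpretations (e-learning time, irregular on-the-job training time, skill level, employment period, capability measurement score) follow the abstract, but the weights and the linear form itself are illustrative assumptions, not values from the study:

```python
def innovation_outcome(e, i, s, t, w,
                       weights=(0.3, 0.25, 0.2, 0.15, 0.1),
                       noise=0.0):
    """Hypothetical linear form of Y_i = f(e, i, s, t, w) + epsilon.

    e: e-learning time, i: irregular on-the-job training time,
    s: job skill level, t: employment period,
    w: capability-level measurement score (all illustrative inputs).
    """
    return sum(c * x for c, x in zip(weights, (e, i, s, t, w))) + noise
```

With all inputs equal to 1 and the noise term at 0, the sketch simply returns the sum of the weights; in practice the weights would be estimated from data, e.g. by regression within the structural model.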

Topic Continuity in Korean Narrative (한국 설화문에서의 화제표현의 연속성)

  • Chong, Hi-Ja
    • Korean Journal of Cognitive Science
    • /
    • v.2 no.2
    • /
    • pp.405-428
    • /
    • 1990
  • Language has a social function: to communicate information. Linguists have paid increasing attention to the function of language since the 1960s, especially to the relationship of form, meaning, and function. This relationship can be grasped more clearly through discourse-based analysis than through sentence-based analysis, and much research has centered on the discourse-functional notion of topic. In the early 1970s the subject was defined as the grammaticalized topic, the topic being a discrete single constituent of the clause. In the late 1970s several linguists, including Givón, suggested that the topic was not an atomic, discrete entity and that a clause could have more than one topic. The purpose of the present study is, following Givón, to examine the grammatical coding devices of topic and to measure the relative topic continuity/discontinuity of participant arguments in Korean narratives, and thereby to shed some light on effective ways of communicating information. The grammatical coding devices analyzed are the following eight structures: zero anaphora, personal pronouns, demonstrative pronouns, names, noun phrases following demonstratives, noun phrases following possessives, definite noun phrases, and indefinite referentials. The narrative studied for the count was taken from the Korean CIA Chief's Testimony: Revolution and Idol by Hyung Wook Kim. It was chosen on the assumption that Kim's purpose was to tell a true story, which would not distort the natural use of language for literary effect. The measures taken in the analysis were 'lookback', 'persistence', and 'ambiguity'. The first of these, 'lookback', measures the size of the gap between the previous occurrence of a referent and its current occurrence in the clause. The measure of persistence, which reflects the speaker's topical intent, indicates the topic's importance in the discourse. The third measure is one of ambiguity.
This is necessary for assessing the disruptive effects that other topics within the five previous clauses may have on topic identification: the more other topics are present within the five previous clauses, the more difficult the correct identification of a topic becomes. The results of the present study show that the humanness of entities is the most powerful factor in topic continuity in narrative discourse. The semantic roles of human arguments in narrative discourse tend to be agents or experiencers; since agents and experiencers have high topicality in discourse, human entities readily become clausal or discoursal topics. The results also show that the grammatical devices signal varying degrees of topic continuity/discontinuity in continuous discourse: the more continuous a topic argument is, the less it is coded. For example, personal pronouns show the most continuity and indefinite referentials the least. The study strongly suggests that topic continuity/discontinuity is controlled not only by the grammatical devices available in the language but also by socio-cultural factors and the writer's intentions.
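The 'lookback' and 'persistence' measures described above can be sketched in code. The clause representation here (a list of sets of referents, one per clause) and the conventional cap of 20 clauses on lookback are assumptions modeled on common practice in the topic-continuity literature, not details taken from this study:

```python
def lookback(clauses, referent, current):
    """Number of clauses since the referent's previous mention.

    clauses: list of sets of referents, one set per clause.
    Returns a conventional maximum of 20 when no prior mention exists.
    """
    for gap in range(1, current + 1):
        if referent in clauses[current - gap]:
            return gap
    return 20

def persistence(clauses, referent, current):
    """Number of immediately following clauses in which the referent recurs,
    reflecting the topic's importance in the discourse."""
    count = 0
    for clause in clauses[current + 1:]:
        if referent not in clause:
            break
        count += 1
    return count
```

An ambiguity measure would similarly count the competing topics present in the five clauses preceding the current one; a low lookback score and a high persistence score together indicate a highly continuous topic.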