• Title/Abstract/Keyword: Content-based approach


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / Vol.24 No.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of conveying information in almost all fields, and many analysts are interested in it because the amount of text data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has been conducted on both the extraction approach, which selectively provides the main elements of a document, and the abstraction approach, which extracts elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as the summarization techniques themselves. Most existing studies on quality evaluation summarize documents manually, use the results as reference documents, and measure the similarity between the automatic summary and the reference document. Specifically, an automatic summary is produced from the full text through various techniques, and its quality is measured by comparison with the reference document, which serves as an ideal summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Because this method requires human intervention, it takes a great deal of time and cost, and the evaluation result may vary depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summaries without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. Under this method, the more frequently a term from the full text appears in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content into a small space while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome this limitation of previous work on summarization evaluation, this study proposes an automatic quality evaluation method based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content exists among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is omitted from the summary. We propose a method for automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews for each hotel were summarized, and the quality of the summaries was evaluated according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method for performing optimal summarization by changing the threshold of sentence similarity.
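The completeness-succinctness trade-off described in this abstract can be sketched in a few lines of Python. This is an illustrative reading, not the paper's exact formulas: the similarity threshold, the pairwise-duplication reading of succinctness, and the coverage reading of completeness are all assumptions made for the sketch.

```python
import itertools

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def succinctness(summary_vecs, threshold=0.8):
    """Fraction of summary sentence pairs that are NOT near-duplicates."""
    pairs = list(itertools.combinations(range(len(summary_vecs)), 2))
    if not pairs:
        return 1.0
    dup = sum(1 for i, j in pairs
              if cosine(summary_vecs[i], summary_vecs[j]) >= threshold)
    return 1.0 - dup / len(pairs)

def completeness(full_vecs, summary_vecs, threshold=0.8):
    """Fraction of full-text sentences covered by some summary sentence."""
    covered = sum(1 for f in full_vecs
                  if any(cosine(f, s) >= threshold for s in summary_vecs))
    return covered / len(full_vecs)

def f_score(c, s):
    """Harmonic mean integrating the two measures, as the abstract proposes."""
    return 2 * c * s / (c + s) if (c + s) else 0.0
```

Raising the similarity threshold makes both measures stricter; the optimization described in the abstract varies exactly this threshold.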

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems / Vol.21 No.4 / pp.17-35 / 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important because corporations can make strategic decisions on new markets based on the future demand the models estimate. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand while a market is in its early stage. Among them, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to produce reliable results. In the beginning of a new market, however, observations are insufficient for the models to estimate the market's future demand precisely. For this reason, demand patterns inferred from the most adjacent markets are often used as references instead. Reference markets can be those whose products are developed with the same categorical technologies: a market's demand may be expected to follow a pattern similar to that of a reference market when the adoption of a product is determined mainly by the technology behind it. However, this process does not always yield satisfactory results, because judging the similarity between markets depends on intuition and experience. There are two major drawbacks that human experts cannot handle effectively. One is the abundance of candidate reference markets to consider; the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Markets in the same category of an industrial hierarchy are natural candidates because they are usually based on similar technologies.
However, markets can fall into different categories even when they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Second, even domain experts cannot consistently calculate the similarity between markets using their own qualitative standards. This inconsistency leads to missing adjacent reference markets, which in turn leads to imprecise estimation of future demand. Even when no reference market is missed, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. This study therefore proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, the study uses the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and markets matching the characteristics of each cluster but missing from it are searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates, and finally the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: the clustering technique from data mining and content-based filtering from recommender systems. The proposed system, implemented with these techniques, can determine the most adjacent markets based on whether a user accepts candidate markets. Experiments involving five ICT experts were conducted to validate the usefulness of the system. The experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated the parameters of its growth curve model by intuition, and then with the system.
A comparison of the experimental results shows that the estimated parameters are closer to the actual ones when the experts use the system than when they guess without it.
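The Bass model named above has a closed-form adoption curve, and the system's market similarity is a plain Euclidean distance. A minimal sketch follows; the parameter values used in the usage example are hypothetical, and the actual system estimates p, q, and m from the selected reference markets rather than taking them as given.

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoption at time t under the Bass diffusion model:
    F(t) = m * (1 - e^{-(p+q)t}) / (1 + (q/p) * e^{-(p+q)t}),
    where p is the coefficient of innovation (innovators) and
    q the coefficient of imitation (imitators)."""
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

def bass_forecast(m, p, q, periods):
    """Per-period demand as successive differences of cumulative adoption."""
    cum = [bass_cumulative(t, m, p, q) for t in range(periods + 1)]
    return [cum[t] - cum[t - 1] for t in range(1, periods + 1)]

def market_distance(a, b):
    """Euclidean distance between two markets' feature vectors, the
    similarity measure the proposed system uses to find adjacent markets."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

With, say, m=1000 units, p=0.03, and q=0.38, `bass_forecast` produces the familiar rise-then-decline demand curve whose peak falls a few periods after launch.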

Dehydration of Foamed Fish (Sardine)-Starch Paste by Microwave Heating 1. Formulation and Processing Conditions (어육(정어리) 발포건조제품가공에 관한 연구 1. 원료·첨가물의 배합 및 가공조건)

  • LEE Kang-Ho;LEE Byeong-Ho;You Byeong-Jin;SONG Dong-Suck;SUH Jae-Soo;JEA YOi-Guan;RYU Hong-Soo
    • Korean Journal of Fisheries and Aquatic Sciences / Vol.15 No.4 / pp.283-290 / 1982
  • Sardine and mackerel, so-called dark-muscled fish, have been underutilized because of their bloody meat color, high fat content, and postmortem instability of protein. Recent efforts have been made to overcome these defects and develop new types of products, such as texturized protein concentrates and minced fish with the dark muscle eliminated. The approach of this study is based on the rapid dehydration of foamed fish-starch paste by dielectric heating. In the process, comminuted sardine meat was washed more than three times by soaking and decanting in chilled water and finally centrifuged. The meat was ground in a stone mortar with adequate amounts of salt, foaming agent, and other ingredients added to aid elasticity and foam stability. The ground meat paste was extruded in finger shape and heated in a microwave oven to give a foamed, expanded, porous solid structure by dehydration. The dielectric constant (ε′) and dielectric loss (ε″) of sardine meat paste were influenced by wavelength and moisture level: at 100 kHz and 15 MHz, values ranged over 2.25-9.86 and 2.22-4.18 for ε′, and 0.24-19.24 and 0.16-1.20 for ε″, respectively, at moisture levels of 4.2-13.8%. For the fish-starch paste formula, adding 20-30% starch (potato starch) to the weight of fish meat, 2-4% salt, and 5-10% soybean protein was adequate to yield a 4-5-fold expansion in volume when heated. Addition of egg yolk helped micronize the foam and improve crispness. For better foaming and dehydration, adding 0.2-0.5% sodium bicarbonate as a foaming agent gave a foam size of 0.5-0.7 mm and a foam density of 200-400 /cm², which produced good crispness. Heating time depended on the moisture level of the fish-starch paste. For a finger-shaped paste (1.0 cm D × 10 cm L), heating for 150-200 sec in a microwave oven (700 W, 2.45 GHz) was sufficient to generate foams, expand, and solidify the porous structure. When the moisture content was above 55%, browning and scorching deepened due to over-expansion and over-heating, whereas at lower moisture content the texture hardened because of insufficient expansion. In the quality evaluation, the chemical composition of the product with 30% starch and 3% salt added was 8.8% moisture, 2.4% lipid, 46.7% carbohydrate, 36.1% protein, and 6.0% ash. An eleven-member panel evaluated the fish-starch paste as acceptable in color, crispness, and taste, except for a trace of fishy odor that could be masked by the addition of spice extracts.


A Study on the Management of Manhwa Contents Records and Archives (만화기록 관리 방안 연구)

  • Kim, Seon Mi;Kim, Ik Han
    • The Korean Journal of Archival Studies / No.28 / pp.35-81 / 2011
  • Manhwa is a mass medium that exposes all faces of an era, such as politics, society, and culture, through methods like irony and parody. Since Manhwa records are primary cultural infrastructure, they can create high value-added industries by connecting with fancy goods, characters, games, movies, dramas, theme parks, and advertising. However, due to the lack of an active and systematic acquisition system, precious Manhwa manuscripts are being lost every year, while content that is hard to preserve, such as Manhwa in the form of electronic records, keeps increasing; countermeasures for managing Manhwa contents are therefore urgently needed. Based on these perceptions, this study examines the need for Manhwa records management and analyzes the characteristics and components of Manhwa records. At the same time, functions of the records management process reflecting those characteristics were extracted by analyzing various cases of overseas cartoon archives. The framework of the record-keeping regime was then segmented into acquisition, management, and service areas, and a general archiving strategy for managing Manhwa contents records was established and suggested. Acquired Manhwa contents records will secure the context among records, warrant their preservation, and provide diverse access points by reflecting multiple classifications and multi-level descriptive elements. Manhwa records that have completed intellectual arrangement will be preserved, after conservation, in an environment equipped with preservation facilities, or in digital format in the case of electronic records or when there is a risk of damage. Since the purpose of keeping Manhwa records is to use them, the information may be provided to diverse classes of users through exhibition, distribution, and the development of archival information content.
Because the term "Manhwa records" is still unfamiliar and almost no study has been conducted from the perspective of records management, this study is limited to presenting acquisition, management, and service strategies for Manhwa contents and suggesting simple examples. However, if Manhwa records management strategies can be introduced to Manhwa manuscript repositories through an archival approach, they will allow the systematic acquisition, preservation, and arrangement of Manhwa records and contribute greatly to forming a foundation for the management of Korean cultural contents.

Current Wheat Quality Criteria and Inspection Systems of Major Wheat Producing Countries (밀 품질평가 현황과 검사제도)

  • 이춘기;남중현;강문석;구본철;김재철;박광근;박문웅;김용호
    • KOREAN JOURNAL OF CROP SCIENCE / Vol.47 / pp.63-94 / 2002
  • To suggest an advanced scheme for assessing domestic wheat quality, this paper reviews the inspection systems of major wheat producing countries as well as the quality criteria used in wheat grading and classification. Most wheat producing countries adopt both class and grade classifications to provide an objective evaluation and an official certification of their wheat. Wheat classification has two main purposes. The first is to match the wheat with market requirements, maximizing market opportunities and returns to growers. The second is to ensure that payments to growers are made on the basis of the quality and condition of the grain delivered. Wheat classes have been assigned based on combinations of cultivation area, seed-coat color, and distinctive kernel and varietal characteristics. Most reputable wheat marketers employ a similar approach, whereby varieties of a particular type are grouped together, designated by seed-coat color, grain hardness, physical dough properties, and sometimes more precise specifications such as starch quality, all of which are genetically inherited characteristics. In simple terms, this classification is the categorization of a wheat variety into a commercial type or style of wheat recognizable for its end-use capabilities. All varieties registered in a class are required to have similar end-use performance so that shipments are consistent in processing quality, cargo to cargo and year to year. Grain inspectors have historically determined wheat classes according to visual kernel characteristics associated with traditional wheat varieties, and any new wheat variety must not conflict with the visual distinguishability rule used to separate wheats of different classes. Some varieties may possess characteristics of two or more classes.
Therefore, knowledge of distinct varietal characteristics is necessary in making class determinations. The grading system sets maximum tolerance levels for a range of characteristics that ensure functionality and freedom from deleterious factors. Tests for grading wheat cover factors such as plumpness, soundness, cleanliness, purity of type, and general condition. Plumpness is measured by test weight. Soundness is indicated by the absence or presence of musty, sour, or commercially objectionable foreign odors and by the percentage of damaged kernels present in the wheat. Cleanliness is measured by determining the presence of foreign material after dockage has been removed. Purity of class is measured by classifying the wheats in the test sample and by limits on admixtures of different classes of wheat. Moisture does not influence the numerical grade; however, it is determined on all shipments and reported on the official certificate. U.S. wheat is divided into eight classes based on color, kernel hardness, and varietal characteristics: Durum, Hard Red Spring, Hard Red Winter, Soft Red Winter, Hard White, Soft White, Unclassed, and Mixed. Among them, Hard Red Spring, Durum, and Soft White wheat are each further divided into three subclasses. Each class or subclass is divided into five U.S. numerical grades and U.S. Sample grade. Special grades are provided to emphasize special qualities or conditions affecting the value of wheat and are added to and made a part of the grade designation. Canadian wheat is divided into fourteen classes based on cultivation area, color, kernel hardness, and varietal characteristics. The classes have two to five numerical grades, a feed grade, and sample grades depending on class and grading tolerance. The Canadian grading system is based mainly on visual evaluation and works on the kernel visual distinguishability concept.
Australian wheat is classified by geographical and quality differentiation. The wheat grown in Australia is predominantly white grained, and there are commonly up to 20 different segregations of wheat in a given season. Each variety grown is assigned a category and a growing area. The state governments in Australia, in cooperation with the Australian Wheat Board (AWB), annually issue receival standards and dockage schedules that list grade specifications and tolerances for Australian wheat. AWB also manages "Golden Rewards," designed to provide pricing accuracy and market signals for Australia's grain growers: it presents continuous payment scales for protein content from 6 to 16% and screenings levels from 0 to 10% based on varietal classification, and the payment scales and prices can change with market movements.

Finding Influential Users in the SNS Using Interaction Concept : Focusing on the Blogosphere with Continuous Referencing Relationships (상호작용성에 의한 SNS 영향유저 선정에 관한 연구 : 연속적인 참조관계가 있는 블로고스피어를 중심으로)

  • Park, Hyunjung;Rho, Sangkyu
    • The Journal of Society for e-Business Studies / Vol.17 No.4 / pp.69-93 / 2012
  • Various influence-related relationships in Social Network Services (SNS), among users, among posts, and between users and posts, can be expressed as links. This research evaluates the influence of specific users or posts by analyzing the link structure of the relevant social network graphs in order to identify influential users. We applied the concept of mutual interaction, originally proposed for ranking semantic web resources, rather than the voting notion of PageRank or HITS, to the blogosphere, one of the earliest SNS. Through many experiments with network models in which the performance and validity of each alternative approach can be analyzed, we show the applicability and strengths of our approach. The weight-tuning process for the links of these network models enabled us to control experimental errors arising from link weight differences and to compare how easily each alternative can be implemented. An additional example of how to feed the content scores of commercial or spam posts into the graph-based method is also given on a small network model. As a starting point for studying the identification of influential users in SNS, this research is distinctive from previous work in the following respects. First, various influence-related properties that are deemed important but usually disregarded, such as scraping, commenting, subscribing to RSS feeds, and trusting friends, can be considered simultaneously. Second, the framework reflects the general phenomenon in which objects interacting with more influential objects increase their own influence. Third, regarding the extent to which a blogger causes other bloggers to act after him or her as the most important factor of influence, we treat sequential referencing relationships from a viewpoint different from that of PageRank or HITS (Hypertext Induced Topic Selection).
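A mutual-interaction ranking of the kind described, where linked objects reinforce each other rather than cast one-way votes, can be sketched as a damped power iteration. This is an illustrative simplification, not the paper's exact weighting scheme; the damping term is an assumption added to keep the iteration stable on bipartite-like graphs.

```python
def interaction_rank(nodes, links, iters=60, damping=0.5):
    """Rank nodes by iteratively propagating scores across weighted links.
    links: list of (u, v, weight). Unlike the one-way 'vote' of
    PageRank/HITS, each link lets both endpoints reinforce each other;
    `damping` retains part of the previous score for stability."""
    score = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: damping * score[n] for n in nodes}
        for u, v, w in links:
            new[u] += (1 - damping) * w * score[v]  # v reinforces u
            new[v] += (1 - damping) * w * score[u]  # and u reinforces v
        norm = sum(new.values()) or 1.0              # keep scores comparable
        score = {n: s / norm for n, s in new.items()}
    return score
```

On a tiny star graph where "a" interacts with both "b" and "c", the iteration concentrates the highest score on "a", matching the intuition that objects interacting with more partners gain influence.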

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / Vol.25 No.3 / pp.19-41 / 2019
  • In line with the rapidly increasing demand for text data analysis, research on and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on second-step applications such as document classification, clustering, and topic modeling. With the recognition that the text structuring process substantially influences the quality of the results, however, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, the traditional document embedding represented by doc2Vec generates a vector for each document using all the words included in the document.
This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so it is difficult to represent a complex document covering multiple subjects accurately with a single vector. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis techniques. Since keyword extraction is not the core subject of the proposed method, we describe the process as applied to documents with predefined keywords. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is affected by miscellaneous words as well as core words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector.
With the proposed multi-vector based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
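Steps (4) and (5) of the pipeline, clustering keyword vectors and emitting one vector per cluster, can be sketched as follows. The use of plain k-means and of cluster centroids as the per-subject vectors is an assumption for illustration; the paper's own clustering and vector-generation details may differ.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over lists of floats; returns the cluster centroids."""
    rnd = random.Random(seed)
    centroids = rnd.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # assign each keyword vector to its nearest centroid
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        # recompute each centroid as the mean of its cluster
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def multi_vector_embedding(keyword_vecs, k):
    """One vector per latent subject: cluster the document's keyword
    vectors and represent each subject by its cluster centroid."""
    return kmeans(keyword_vecs, min(k, len(keyword_vecs)))
```

A document whose keywords fall into two well-separated groups thus receives two vectors, one per subject, instead of a single averaged vector in which the subjects interfere.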

A Study on the Use of the Cruel Sori-geuk <Give Me Back My Leg> in Values Education (잔혹소리극 <내다리내놔>의 가치 교육적 활용에 대한 고찰)

  • Kim, Jeong Sun
    • (The) Research of the performance art and culture / No.32 / pp.595-628 / 2016
  • The cruel sori-geuk <Give Me Back My Leg> (내다리내놔), first presented in 2006 by the creative pansori group Badaksori, is a creative pansori work adapted from the legend of 'Deokttaegol'. The ghost story behind 'Deokttaegol', known by the line "Give me back my leg," was broadcast on KBS's Hometown of Legends and is regarded as a byword for Korean horror. In <Give Me Back My Leg>, sound, shadow play, and drumming unfold around the sentiment of cruelty that audiences have long encountered in such tales. Building on the emotion of horror, the work delivers in a short time both the dread familiar from childhood ghost stories and the value of filial piety contained in the story, which makes it well suited to values education. This shows that the genre of pansori, rooted in our daily lives, can move audiences emotionally and support efficient learning in values education. Pansori, with its sound, stories, and expressive gestures, can be a powerful means of education: depending on how a work is performed, it can deliver values efficiently and in various ways, cultivating sentiment along with instruction. Because pansori draws on familiar language, customs, and rhythms, learners of each age group can easily access many values and sentiments through materials suited to them, and their curiosity about and interest in the target content can be sustained. This applies not only to traditional pansori but also to newly composed works, the so-called creative pansori. Therefore, if creative pansori content, written in our familiar language and customs with vocabulary suited to each age group, is used in education, its educational effect is expected to be great.
Such an approach, with content based on age-appropriate vocabulary, allows lessons to be easily understood and smoothly acquired, providing a valuable educational opportunity. For this reason, it is important to take advantage of the educational value of creative pansori.

Path Analysis of Factors Limiting Crop Yield in Rice Paddy and Upland Corn Fields (벼와 옥수수 재배 포장에서 경로분석을 이용한 작물 수확량 제한요인 분석)

  • Chung S. O.;Sudduth K. A.;Chang Y. C.
    • Journal of Biosystems Engineering / Vol.30 No.1 / pp.45-55 / 2005
  • Knowledge of the relationship between crop yield and yield-limiting factors is essential for precision farming. Developing this knowledge is not easy, however, because the yield-limiting factors are interrelated and affect crop yield in different ways. In this study, data on grain yield and yield-limiting factors, including crop chlorophyll content, soil chemical properties, and topography, were collected for a small (0.3 ha) rice paddy field in Korea and a large (36 ha) upland corn field in the USA, and the relationships were investigated with path analysis. Using this approach, the effects of limiting factors on crop yield could be separated into direct effects and indirect effects acting through other factors. Path analysis provided more insight into these complex relationships than simple correlation or multiple linear regression analysis. Correlation analysis for the rice paddy field showed that EC, Ca, and SiO₂ had significant (P<0.1) correlations with rice yield, while pH, Ca, Mg, Na, SiO₂, and P₂O₅ had significant correlations with the SPAD chlorophyll reading. Path analysis provided additional information about the importance and contribution paths of soil variables to rice yield and growth. Ca had the highest direct effect on rice yield (0.52) and an indirect effect of -0.37 via Mg. The indirect effect of Mg through Ca (0.51) was larger in magnitude than its direct effect (-0.38). Path analysis also enabled more appropriate selection of the important factors limiting crop yield by considering cause-and-effect relationships among predictor and response variables. For example, although pH showed a positive correlation (r=0.35) with SPAD readings, the correlation was mainly due to indirect positive effects acting through Mg and SiO₂; pH itself not only had a negative direct effect but also negatively affected the indirect effects of other variables on SPAD readings.
For the large upland Missouri corn field, two topographic factors, elevation and slope, had significant (P<0.1) direct effects on yield and highly significant (P<0.01) correlations with the other limiting factors. Based on correlation analysis alone, P and K were the nutrients expected to increase corn yield in this field; with the help of path analysis, however, increases in Mg could also be expected to increase corn yield. In general, the path analysis results were consistent with published optimum nutrient ranges for rice and corn production. We conclude that path analysis can be a useful tool for investigating the interrelationships between crop yield and yield-limiting factors on a site-specific basis.
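The decomposition used in path analysis, where each predictor's correlation with the response splits into a direct effect (its path coefficient) and indirect effects routed through correlated predictors, can be sketched as follows. This is the standard textbook decomposition, not the paper's software; the coefficients in the usage example are hypothetical, though chosen to resemble the Ca/Mg magnitudes reported above.

```python
def decompose_effects(path_coefs, predictor_corr):
    """Decompose each predictor's correlation with the response into a
    direct effect and indirect effects through the other predictors:
        r_iy = p_i + sum over j != i of (r_ij * p_j)
    path_coefs: path coefficient p_i of each predictor on the response.
    predictor_corr: correlation matrix r_ij among the predictors."""
    n = len(path_coefs)
    out = []
    for i in range(n):
        indirect = {j: predictor_corr[i][j] * path_coefs[j]
                    for j in range(n) if j != i}
        out.append({
            "direct": path_coefs[i],          # p_i
            "indirect": indirect,             # r_ij * p_j for each j != i
            "total": path_coefs[i] + sum(indirect.values()),  # r_iy
        })
    return out
```

This makes visible the pattern the abstract highlights: a predictor with a small or negative direct effect can still show a sizable positive correlation with yield if it rides on a strongly correlated, high-coefficient neighbor.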

Effects of dietary enzyme cocktail on diarrhea and immune responses of weaned pigs

  • Kang, Joowon;Cho, Jeeyeon;Jang, Kibeom;Kim, Junsu;Kim, Sheena;Mun, Daye;Kim, Byeonghyeon;Kim, Younghwa;Park, Juncheol;Choe, Jeehwan;Song, Minho
    • Korean Journal of Agricultural Science / Vol.44 No.4 / pp.525-530 / 2017
  • Weaning is the most stressful event for nursery pigs because they are moved from familiar to unfamiliar environments. In addition, weaned pigs have immature digestive and immune systems, which makes them susceptible to diseases and makes absorbing nutrients from their diets difficult. A feed approach, such as dietary enzyme supplementation, can be considered a solution. This study investigated the effects of a dietary enzyme cocktail on diarrhea and immune responses of weaned pigs. A total of 36 weaned pigs (5.92 ± 0.48 kg BW; 28 d old) were randomly allotted to 2 dietary treatments (3 pigs/pen, 6 replicates/treatment) in a randomized complete block design. The dietary treatments were a typical diet based on corn and soybean meal (CON) and CON with 0.05% enzyme cocktail (Cocktail; a combination of xylanase, α-amylase, protease, β-glucanase, and pectinase). Pigs were fed their respective diets for 6 wk. Incidence of diarrhea, packed cell volume (PCV), white blood cell (WBC) count, and immunoglobulin content were measured. A significantly lower incidence of diarrhea (p < 0.05) was observed in the Cocktail group compared with the CON group. The Cocktail group also showed a lower PCV (p < 0.1) on d 3 after weaning than the CON group. However, no differences were observed in WBC counts or in immunoglobulin G, M, and A contents between the Cocktail and CON groups. Consequently, including an enzyme cocktail in diets for weaned pigs had a positive influence on gut health by reducing the incidence of diarrhea in the present study.