• Title/Summary/Keyword: Process systems engineering


Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News for queries separated into "subject-predicate" form and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive its confidence; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline model, the proposed system shows higher performance indices. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query; with this we developed a robust model that maintains high recall across the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various document types compared with the baseline model, whereas previous research performs poorly when extracting information from document types that differ from the training data. In addition, by predicting the suitability of documents and sentences for information extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, providing a way to maintain precision even in a real web environment. Information extraction for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents in which there is no correct answer; the policy of predicting the suitability of documents and sentences for extraction is meaningful in that it helps maintain extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis is performed improperly; an improved morpheme analyzer is needed to enhance extraction performance. Second, entity ambiguity: the information extraction system cannot distinguish identical names with different referents. If several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to disambiguate persons with the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News) by judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, but this is a costly manual activity; future research should evaluate the system on more queries and develop a Korean benchmark data set for information extraction from multi-source web documents so that results can be evaluated more objectively. (An illustrative sketch of such a tagger follows below.)
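The following is a minimal sketch of a predicate-conditioned BiLSTM-CRF sequence tagger in the spirit of the model described above, assuming PyTorch and the third-party pytorch-crf package; the layer sizes and the way the predicate feature is injected (concatenated to every token embedding) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: BiLSTM-CRF sequence tagger conditioned on a query predicate.
# Assumes: pip install torch pytorch-crf
import torch
import torch.nn as nn
from torchcrf import CRF


class PredicateBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, pred_vocab_size, num_tags,
                 emb_dim=100, pred_dim=20, hidden_dim=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.pred_emb = nn.Embedding(pred_vocab_size, pred_dim)
        self.lstm = nn.LSTM(emb_dim + pred_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens, predicate):
        # tokens: (batch, seq_len) token ids; predicate: (batch,) predicate ids.
        seq_len = tokens.size(1)
        pred = self.pred_emb(predicate).unsqueeze(1).expand(-1, seq_len, -1)
        x = torch.cat([self.tok_emb(tokens), pred], dim=-1)
        out, _ = self.lstm(x)
        return self.emit(out)

    def loss(self, tokens, predicate, tags, mask):
        # Negative log-likelihood under the CRF (mask is a bool padding mask).
        return -self.crf(self._emissions(tokens, predicate), tags, mask=mask)

    def decode(self, tokens, predicate, mask):
        # Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(self._emissions(tokens, predicate), mask=mask)
```

Training would minimize `loss(...)` over labeled sentences, and `decode(...)` would return the tag sequences from which answer spans are read off.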

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.177-198
    • /
    • 2018
  • Recently, as the nature of water resources shifts toward that of a public good, it has become necessary to acquire and utilize a framework for measuring the value of water resource technologies and managing their performance. To date, the evaluation of water technology has been carried out through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) ratios; however, no valuation model has yet been systematized for objectively assessing the economic value of technology-based businesses so that research outcomes can be diffused and fed back. K-water (a government-supported public company in Korea) therefore needs to establish a technology valuation framework suited to the technical characteristics of the water resources field it is responsible for and to verify it on an exemplary case. The K-water valuation approach applied in this study treats a technology as a public-interest good and can be used as a tool to measure and manage the value and achievements it contributes to society. By calculating the value that the subject technology contributes to society as a whole as a public resource, K-water can use the result as base information for publicizing the benefits achieved or the necessity of cost inputs, and can secure legitimacy for large-scale R&D expenditure given the characteristics of public technology. K-water, as a public corporation dealing with the public good of water resources, will thus be able to establish a commercialization strategy for business operation and prepare a basis for calculating the performance of R&D investment. In this study, K-water developed a web-based technology valuation model for public-interest water resource technologies, built on a technology evaluation system suited to the characteristics of technologies in the water resources field. In particular, by adopting the evaluation methodology of the National Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts based on the related benefit items, we propose the so-called 'K-water proprietary model', which combines a cost-benefit approach with free cash flow (FCF), build a pipeline into the K-water research performance management system, and verify a practical case with a technology related to desalination. We analyze the embedded design logic and evaluation process of the web-based valuation system reflecting the characteristics of water resources technology, along with the reference information and database (D/B) logic each model uses to calculate public-interest-based and profit-based technology values within the integrated technology management system. We also review the hybrid evaluation module, which quantifies qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the web-based evaluation; both are added to existing web-based technology valuation systems in other fields for calculating business value based on financial data. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technologies. Evaluation modules for the profit-oriented valuation model are designed around the profitability of the technology; for example, K-water's technology inventory includes a number of profit-oriented technologies such as water treatment membranes. The public-interest valuation model, on the other hand, is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public utility valuation model (i.e. the K-water-specific technology valuation model) presented in this study, we applied it to a practical case, performing a benefit-to-cost analysis of a water resource technology with a 20-year lifetime. In future work we will further verify the K-water public-utility valuation model for each business model, reflecting various business environment characteristics. (An illustrative sketch of the underlying cost-benefit arithmetic follows below.)
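As a rough illustration of the cost-benefit arithmetic such a valuation rests on, the sketch below discounts hypothetical annual benefit and cost streams over a 20-year lifetime and reports NPV and the B/C ratio; the figures and discount rate are placeholders, not K-water data or the K-water proprietary model itself.

```python
# Minimal sketch: discounted cost-benefit valuation over a 20-year lifetime.

def discounted(values, rate):
    """Present value of a yearly stream, years 1..n."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(values, start=1))

def public_value(benefits, costs, rate=0.045):
    pv_benefit = discounted(benefits, rate)
    pv_cost = discounted(costs, rate)
    return {
        "NPV": pv_benefit - pv_cost,
        "B/C": pv_benefit / pv_cost if pv_cost else float("inf"),
    }

# Illustrative 20-year streams (hypothetical units, e.g. KRW billions).
benefits = [12.0] * 20           # yearly social benefit attributed to the technology
costs = [30.0] + [2.5] * 19      # up-front R&D cost plus yearly operating cost
print(public_value(benefits, costs))
```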

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic concepts that sponsors confront when making keyword bidding decisions: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective that can be used in portfolio decision making. Previous research formulates CTR estimation models using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can serve as decision variables in the bidding system. The classical SSA model rests on the assumption that CPC is the decision variable and CTR the response variable, but it faces many hurdles in estimating CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates, along with practical management problems. Sponsors make keyword bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model is built on the assumption that Rank is the decision variable. Rank has been proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing an optimal portfolio in keyword bidding. The research proceeds as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens scenarios for CTR and CPC estimation, selects the best-fit model through goodness-of-fit (GOF) tests, formulates the optimization models, confirms spillover effects, and suggests a modified optimization model reflecting spillover together with strategic recommendations. Optimization models using these CTR/CPC estimation models are tested empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both test results confirm significant improvements, showing that the suggested SSA model is valid for constructing keyword portfolios using the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the portfolio because of the myopia of their low immediate profit. To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every respect, including CTR, CVR, and expected profit; however, generic keywords turn out to be the CTKs and have spillover potential that can increase consumer awareness and lead consumers to brand keywords, which is why generic keywords should be a focus of keyword bidding. The contributions of the thesis are to propose a novel SSA model with Rank as the decision variable, to propose managing the keyword portfolio by categories according to keyword characteristics, to propose Rank-based statistical modeling and management for constructing the keyword portfolio, to perform empirical tests, and to propose new strategic guidelines that focus on the CTK and a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models. (An illustrative sketch of rank-based portfolio selection follows below.)
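To make the Rank-as-decision-variable idea concrete, the sketch below assumes per-rank CTR and CPC estimates for a toy keyword set (in the paper these come from scenario screening and goodness-of-fit testing) and searches rank assignments that maximize expected clicks under a budget on expected spend; the numbers and the brute-force search are illustrative, not the paper's estimation or optimization models.

```python
# Minimal sketch: choose one Rank per keyword to maximize expected clicks
# subject to an expected-spend budget (hypothetical estimates and budget).
import itertools

# keyword -> {rank: (ctr, cpc)}; impressions per keyword assumed fixed.
estimates = {
    "brand_kw":   {1: (0.080, 900), 2: (0.055, 700), 3: (0.035, 550)},
    "generic_kw": {1: (0.030, 600), 2: (0.022, 450), 3: (0.015, 350)},
}
impressions = {"brand_kw": 10_000, "generic_kw": 40_000}
budget = 1_500_000  # expected spend cap (currency units, hypothetical)

def evaluate(assignment):
    clicks = spend = 0.0
    for kw, rank in assignment.items():
        ctr, cpc = estimates[kw][rank]
        kw_clicks = impressions[kw] * ctr
        clicks += kw_clicks
        spend += kw_clicks * cpc          # expected spend = clicks * CPC
    return clicks, spend

best = None
for ranks in itertools.product(*[estimates[kw] for kw in estimates]):
    assignment = dict(zip(estimates, ranks))
    clicks, spend = evaluate(assignment)
    if spend <= budget and (best is None or clicks > best[1]):
        best = (assignment, clicks, spend)

print("CTR-optimal portfolio:", best)
```

A CVR-oriented variant would simply replace the clicks objective with expected profit (clicks × CVR × margin), which is where the paper's myopia problem and the EOP correction enter.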

A Study on the Cutting Optimal Power Requirements of Fast Growing Trees by Circular Saw (원형톱에 의한 속성수 절단 적정 소요동력 산정에 관한 연구)

  • Choi, Yun Sung;Kim, Dae Hyun;Oh, Jae Heun
    • Journal of Korean Society of Forest Science
    • /
    • v.103 no.3
    • /
    • pp.402-407
    • /
    • 2014
  • In this study, Italian poplar (Populus euramericana) was selected as the test species for measuring cutting power at harvest. The experiment was controlled at three levels of feed rate (0.41, 1.25 and 2.5 m/s), three levels of sawing speed (800, 1,000 and 1,200 rpm), and five levels of root collar diameter (50, 70, 90, 110 and 130 mm). The harvested biomass after 3 years (root collar diameter 50 mm) was 10.5 tons, which falls short of the target biomass of 20~30 ton/ha. The biomass at diameters of 90 and 110 mm, which reached the target amount, was estimated at 23.5 and 32.5 ton/ha respectively. The experiment showed that 128.2 and 175.8 W are consumed when cutting at a feed rate of 0.41 m/s and the minimum sawing speed (800 rpm); with a working area of 0.3 ha/h, this corresponds to working capacities of 16.5 and 22.8 ton/h respectively. The power consumed at a feed rate of 1.25 m/s is estimated at 113.8 and 153.7 W, with working capacities of 23.5 and 32.5 ton/h for a working area of 1 ha/h. The power consumed at a feed rate of 2.5 m/s is estimated at 119.8 and 166.9 W, with working capacities of 47.0 and 65.5 ton/h for a working area of 2 ha/h. Therefore, the power source of the harvesting machine should be selected for feed rates of 1.25 and 2.50 m/s at a sawing speed of 800 rpm, as this can process the estimated target amount of biomass.

A Graph Layout Algorithm for Scale-free Network (척도 없는 네트워크를 위한 그래프 레이아웃 알고리즘)

  • Cho, Yong-Man;Kang, Tae-Won
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.5_6
    • /
    • pp.202-213
    • /
    • 2007
  • A network is an important model widely used in engineering as well as the natural and social sciences. To analyze networks easily, their features need to be laid out visually, and graph-layout research has advanced recently with the development of computing technology. Among network models, the scale-free network, which has attracted attention in recent years, is widely used for analyzing and understanding complicated situations in various fields. A scale-free network has two characteristic features: first, the number of links per node (degree) follows a power-law distribution; second, the network has hubs with many links. It is therefore important to represent hubs visually in a scale-free network, but existing graph-layout algorithms only represent clusters. In this paper we propose a graph-layout algorithm that presents scale-free networks effectively. In the proposed algorithm, the 'hubity' (hub + -ity) repulsive force between hubs is inversely proportional to their distance, and if the degree of a hub increases by a factor of ${\alpha}$, the hubity repulsive force between hubs increases by a factor of ${\alpha}^{\gamma}$, where ${\gamma}$ is the connection line index (degree exponent). The algorithm also has a counter that controls the force in proportion to the total number of nodes and links, so the hubity repulsive force is independent of the scale of the network. The proposed algorithm was compared experimentally with existing graph-layout algorithms. The experimental process is as follows: first, determine whether the network contains a hub by checking the connection line index; if the value of the connection line index is between 2 and 3, conclude that the network is scale-free and has hubs, and then apply the proposed algorithm. The results validate that the proposed graph-layout algorithm shows scale-free networks more effectively than existing cluster-centered algorithms such as Noack's. (An illustrative sketch of the layout idea follows below.)
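A minimal sketch of the idea, assuming NumPy and NetworkX: a force-directed layout whose pairwise repulsion decays with 1/distance and is amplified between high-degree hubs roughly as degree raised to the connection line index; the force constants and hub threshold are illustrative assumptions rather than the paper's exact update rule.

```python
# Minimal sketch: force-directed layout with extra "hubity" repulsion between hubs.
import numpy as np
import networkx as nx

def hub_layout(G, gamma=2.5, hub_deg=None, iters=200, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    pos = rng.random((len(nodes), 2))
    deg = np.array([G.degree(n) for n in nodes], dtype=float)
    if hub_deg is None:
        hub_deg = np.percentile(deg, 90)      # treat top-degree nodes as hubs
    idx = {n: i for i, n in enumerate(nodes)}
    for _ in range(iters):
        disp = np.zeros_like(pos)
        for i in range(len(nodes)):
            d = pos[i] - pos
            dist = np.linalg.norm(d, axis=1) + 1e-9
            k = np.ones(len(nodes))
            if deg[i] >= hub_deg:
                # hub-hub repulsion scaled roughly as (degree ratio) ** gamma
                k = np.where(deg >= hub_deg, (deg / hub_deg) ** gamma, k)
            # repulsion magnitude ~ k / distance, direction away from neighbors
            disp[i] += (d / dist[:, None] ** 2 * k[:, None]).sum(axis=0)
        for u, v in G.edges():                # spring attraction along edges
            d = pos[idx[u]] - pos[idx[v]]
            disp[idx[u]] -= d
            disp[idx[v]] += d
        pos += step * disp
    return {n: pos[idx[n]] for n in nodes}

# Example on a synthetic scale-free graph.
layout = hub_layout(nx.barabasi_albert_graph(50, 2))
```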

Evaluation of $^{14}C$ Behavior Characteristic in Reactor Coolant from Korean PWR NPP's (국내 경수로형 원자로 냉각재 중의 $^{14}C$ 거동 특성 평가)

  • Kang, Duk-Won;Yang, Yang-Hee;Park, Kyong-Rok
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.7 no.1
    • /
    • pp.1-7
    • /
    • 2009
  • This study focused on determining the chemical composition of $^{14}C$, in terms of both organic and inorganic $^{14}C$ content, in reactor coolant from three different PWR reactor types. The purpose was to characterize $^{14}C$ behavior so as to provide a basis for reliable estimation of environmental releases at domestic PWR sites. $^{14}C$ is among the most important nuclides in the inventory, since it is one of the main dose contributors in future release scenarios; the reasons are its high mobility in the environment, its biological availability, and its long half-life (5,730 yr). More recent studies, in which the organic $^{14}C$ species believed to form in the coolant under reducing conditions were investigated in more detail, show that the organic compounds are not limited to hydrocarbons and CO; possible organic compounds include formaldehyde, formic acid, acetic acid, etc. Under oxidizing conditions, oxidized carbon forms appear, mainly carbon dioxide and bicarbonate. Measurements of organic and inorganic $^{14}C$ in various water systems were also performed. The $^{14}C$ inventory in the reactor water was found to be 3.1 GBq/kg in the PWR, of which less than 10% was in inorganic form. Generally, the $^{14}C$ activity in the water was divided equally between the gas and water phases. Although organic $^{14}C$ compounds are the dominant species during reactor operation, the chemical forms of $^{14}C$ released from the plant stack show a different composition owing to operating conditions such as temperature, pH, volume control tank venting, and shutdown chemistry.


Network Planning on the Open Spaces in Geumho-dong, Seoul (서울 금호동 오픈스페이스 네트워크 계획)

  • Kang, Yon-Ju;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.5
    • /
    • pp.51-62
    • /
    • 2012
  • Geumho-dong, Seoul, a redeveloped residential area, is located in the foothills of Mt. Eungbong. The geographical undulation, the construction of a large apartment complex, and the partial implementation of the redevelopment project have caused severe physical and social disconnections in this area. To restore the functioning of the disconnected community, this study focuses on regenerating the open spaces as everyday places and forming a network system among them. The various types of open spaces are classified into point- or plane-type 'bases' and linear 'paths' to analyze the status of the network. More than half of the open spaces have a connecting distance of 500 m or more, and many areas are not even included in the service area of the open spaces. An analysis of connectivity and integration values using an axial map was carried out to identify weak linkages and to choose the sections where additional bases are required. In addition, a field investigation was conducted to diagnose problems and improve the quality of the bases and paths. A network plan for the open spaces in Geumho-dong was then established, ensuring the quality and quantity of bases and paths. The plan includes the construction of an additional major base in the central area and six secondary bases elsewhere, and proposes ways to improve the environment of underdeveloped secondary bases. In the neighborhood parks of the Mt. Daehyun area, a major path is added and the environment of the paths is improved in certain sections. As a result of the network plan, the connecting distances between bases are reduced significantly, the connectivity and integration values of the area increase, and the service areas of the open spaces cover the whole area properly. Although this study has limitations, such as the need for legal and institutional support and the difficulty of quantitative indexing, its significance lies in suggesting a more reasonable and practical plan for the overall network system by defining the complex types of open spaces simply and clearly and by examining their organic relationships quantitatively and qualitatively. (An illustrative sketch of the connectivity and integration measures follows below.)
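For readers unfamiliar with these measures, the sketch below (assuming NetworkX and a hypothetical axial graph) computes connectivity as the number of intersecting axial lines and a closeness-style proxy for integration; space-syntax tools differ in the exact normalization, so this is not the study's formula.

```python
# Minimal sketch: connectivity and an integration proxy on an axial graph.
import networkx as nx

def axial_measures(G):
    connectivity = dict(G.degree())        # axial lines each line intersects
    integration = {}
    for n in G.nodes():
        depths = nx.single_source_shortest_path_length(G, n)
        mean_depth = sum(depths.values()) / (len(G) - 1)
        # integration proxy: inverse of mean topological depth to all other lines
        integration[n] = 1.0 / mean_depth if mean_depth else 0.0
    return connectivity, integration

# Hypothetical axial graph: nodes are axial lines, edges are intersections.
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])
conn, integ = axial_measures(G)
print(conn, integ)
```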

Assessment of Adsorption Capacity of Mushroom Compost in AMD Treatment Systems (광산배수 자연정화시설 내 버섯퇴비의 중금속 흡착능력 평가)

  • Yong, Bo-Young;Cho, Dong-Wan;Jeong, Jin-Woong;Lim, Gil-Jae;Ji, Sang-Woo;Ahn, Joo-Sung;Song, Ho-Cheol
    • Economic and Environmental Geology
    • /
    • v.43 no.1
    • /
    • pp.13-20
    • /
    • 2010
  • Acid mine drainage (AMD) from abandoned mine sites typically has low pH and contains high levels of various heavy metals, degrading groundwater and surface water quality and the neighboring environment. This study investigated the removal of heavy metals in a biological treatment system, focusing mainly on removal by adsorption onto a substrate material. Bench-scale batch experiments were performed with mushroom compost to evaluate the adsorption characteristics of heavy metals leached from a mine tailing sample and the role of sulfate-reducing bacteria (SRB) in the overall removal process. In addition, adsorption experiments were performed using an artificial AMD sample containing $Cd^{2+}$, $Cu^{2+}$, $Pb^{2+}$ and $Zn^{2+}$ to assess the adsorption capacity of the mushroom compost. The results indicated that Mn leached from the mine tailing was not subject to microbial stabilization or adsorption onto the mushroom compost, whereas microbially mediated stabilization played an important role in the removal of Zn. Fe leaching increased significantly in the presence of microbes compared with autoclaved samples, which was attributed to dissolution of Fe minerals in the mine tailing in response to the depletion of $Fe^{3+}$ by iron-reducing bacteria. Measurements of oxidation-reduction potential (ORP) and pH indicated that the reactive mixture maintained reducing conditions and moderate pH during the reaction. The adsorption experiments with the artificial AMD sample showed a removal efficiency greater than 90% at pH 6, but a lower efficiency at pH 3.

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology
    • /
    • v.39 no.5
    • /
    • pp.603-616
    • /
    • 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now knew everything about particles and coatings and should move on to other problems. This was at the Dragon fuel performance information meeting in London in 1973: how wrong a genius can be! It took until 1978 for really good particles to be made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where an initial free heavy metal fraction of $9\times10^{-4}$ was typical for early AVR carbide fuel and $3\times10^{-4}$ was acceptable for oxide fuel in the THTR, today we insist on values more than an order of magnitude below this. Half a percent particle failure at end-of-irradiation, another old standard, is no longer acceptable either, even for the most severe accidents. While legislation and licensing have not changed, one reason we insist on these improvements is the preference for passive systems rather than the active controls of earlier times. With renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering designs, traditional and new materials, manufacturing technologies, quality control and quality assurance, irradiation and accident performance, modeling and performance prediction, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation- and accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with blocks for prismatic fuel elements and spheres for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness at theoretical density, and an outer pyrocarbon layer. Good performance has been demonstrated under both operational and accident conditions, i.e. to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards, and it is this wide-ranging demonstration experience that makes the particle superior. Recommendations for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents exceeding $1600^{\circ}C$ for a short period of time. This work should proceed at both the national and the international level.

The Analysis of the Visitors' Experiences in Yeonnam-dong before and after the Gyeongui Line Park Project - A Text Mining Approach - (경의선숲길 조성 전후의 연남동 방문자의 경험 분석 - 블로그 텍스트 분석을 중심으로 -)

  • Kim, Sae-Ryung;Choi, Yunwon;Yoon, Heeyeun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.47 no.4
    • /
    • pp.33-49
    • /
    • 2019
  • The purpose of this study was to investigate changes in the experiences of visitors to Yeonnam-dong over the period covering the development of a linear park, the Gyeongui Line Park. The study used text mining to analyze Naver Blog postings by visitors to Yeonnam-dong from June 2013 to May 2017, divided into four periods: June 2013 to May 2014, June 2014 to May 2015, June 2015 to May 2016, and June 2016 to May 2017. The search keywords were 'Yeonnam-dong', 'Gyeongui Line' and 'Yeontral Park', and the data were further refined and resampled. A semantic network analysis was conducted on the basis of word co-occurrences. The results were as follows. Throughout the entire period, the main experience of visitors to Yeonnam-dong was consistently 'food culture', but activities related to 'market', 'browsing', and 'buy' increased. Activities such as 'walk', 'play' and 'rest' in the park newly appeared after the construction of the park. Moreover, more diverse opinions about Yeonnam-dong were expressed on the blogs, and Yeonnam-dong began to be recognized as a place where a variety of activities can be enjoyed. Lastly, when visitors wrote about the theme 'food culture', the scope of the associated keywords expanded from simple ones such as 'eat', 'photograph' and 'chatting' to 'market', 'browsing', and 'walk'. The sub-themes appearing with the park also expanded to various topics with the emergence of the Gyeongui Line Book Street. This study analyzed the changes in visitors' experiences objectively with text mining, a quantitative methodology; due to the nature of text mining, however, subjective judgment was inevitably involved in the refining process. Further research is also required to assess the direct relationship between these changes and the park's construction. (An illustrative sketch of the co-occurrence network construction follows below.)
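A minimal sketch of the co-occurrence-based semantic network used in such an analysis, with naive whitespace tokenization standing in for the Korean morphological analysis (e.g. KoNLPy) that the actual study would require; the sample posts and frequency threshold are hypothetical.

```python
# Minimal sketch: build a weighted co-occurrence network from blog posts.
from collections import Counter
from itertools import combinations
import networkx as nx

posts = [
    "yeonnam-dong cafe dessert walk park",
    "gyeongui line park walk rest picnic",
    "yeonnam-dong restaurant market browsing",
]

pair_counts = Counter()
for post in posts:
    words = sorted(set(post.split()))            # unique words per post
    pair_counts.update(combinations(words, 2))   # all co-occurring pairs

G = nx.Graph()
for (w1, w2), freq in pair_counts.items():
    if freq >= 1:                                # keep pairs above a threshold
        G.add_edge(w1, w2, weight=freq)

# Central words in the semantic network (degree weighted by co-occurrence).
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1])[:5])
```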