• Title/Summary/Keyword: static studies


A Study on the Demand for Cultural Ecosystem Services in Urban Forests Using Topic Modeling (토픽모델링을 활용한 도시림의 문화서비스 수요 특성 분석)

  • Kim, Jee-Young; Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture, v.50 no.4, pp.37-52, 2022
  • The purpose of this study is to analyze the demand for cultural ecosystem services in urban forests, based on user perception and experienced value, using Naver blog posts and LDA topic modeling. Bukhansan National Park was selected as the case site for reviewing the feasibility of spatial assessment. Based on the topic modeling results from the blog posts, a review process was conducted considering each topic's relevance to Bukhansan National Park's cultural services and its suitability as a spatial assessment case, and an index for the spatial assessment of urban forests' cultural services was derived. Specifically, the 21 topics derived through topic analysis were interpreted, and 13 topics related to cultural ecosystem services were identified based on the MA (Millennium Ecosystem Assessment) classification system for ecosystem services. Of all documents reviewed, 72.7% contained data deemed useful for this study. The topics fell into seven types of cultural services, including "mountainous recreation activities" (23.7%), "indirect use value linked to tourism and convenience facilities" (12.4%), "inspirational activities" (11.2%), "seasonal recreation activities" (6.2%), and "nature appreciation and static recreation activities" (3.7%). Next, for the 13 cultural service topics derived from the Bukhansan National Park data, the possibility of spatially assessing the characteristics of cultural ecosystem services provided by urban forests was reviewed, and a total of 8 cultural service indicators were derived. The MA classification system, although widely used in previous studies, has the limitation of not reflecting the actual user demand of urban forests; this study is meaningful in that it categorizes cultural service indicators suited to domestic circumstances. The study is also significant in presenting a methodology for interpreting and deriving the demand for cultural services from a large volume of user perception and experience data.
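
As a rough illustration of the kind of pipeline this abstract describes, here is a minimal LDA sketch using gensim. The choice of 21 topics follows the abstract; the file name `posts.txt`, the assumption that posts are already morpheme- or noun-tokenized, and the filtering thresholds are illustrative assumptions, not the authors' settings.

```python
# Minimal LDA topic-modeling sketch (assumption: one pre-tokenized
# blog post per line in posts.txt, tokens separated by spaces).
from gensim import corpora
from gensim.models import LdaModel

with open("posts.txt", encoding="utf-8") as f:
    docs = [line.split() for line in f if line.strip()]

dictionary = corpora.Dictionary(docs)
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop rare/ubiquitous terms
corpus = [dictionary.doc2bow(doc) for doc in docs]

# The study interprets 21 topics; num_topics is taken from the abstract.
lda = LdaModel(corpus, num_topics=21, id2word=dictionary,
               passes=10, random_state=42)

# Print the top words of each topic for manual interpretation.
for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)
```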

A Study of Accelerator Investment Determinants Based on Business Model Innovation Framework (비즈니스 모델 혁신 프레임워크 기반의 액셀러레이터 투자결정요인 연구)

  • Jung, Mun-Su; Kim, Eun-Hee
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.17 no.2, pp.65-80, 2022
  • Despite the uncertainty and risk surrounding startups, accelerators play a special and increasingly significant role in the startup ecosystem by providing professional nurturing and investment. However, academic research on the investment determinants that profoundly affect accelerators' survival is lacking; the few empirical studies on the classification and importance of these factors lack a theoretical foundation. This study proposes a business model innovation framework, grounded in business model innovation theory, that reflects the nature and properties of the startups accelerators invest in, and derives 12 investment determinants from it. The framework defines the target, direction, and executing capability of startup innovation as the business model, strategy, and dynamic capability, respectively. The study then analyzes the investment determinants used by existing accelerators against this framework to verify the suitability and sufficiency of their composition. The analysis shows, first, that most items faithfully cover the static aspects of business model innovation, but factors for evaluating core activities and customer relationships are insufficient. Second, from the strategic point of view, factors are needed that encompass the definition and content of core resources, an internal strategic element. Third, from the dynamic point of view, many of the accelerators' investment determinants are concentrated at the lower levels of dynamic capability. This can be read as reflecting the characteristics of startups, which must develop a solution with few resources and a small team. In addition, because the roles of and interrelationships among the factors are unclear, the determinants are limited in how well they capture the direction and process by which startups dynamically innovate their business models. This study is clearly differentiated in that it provides a business model innovation framework, offers a theoretical basis for investment determinants by deriving accelerators' investment determinants from that framework, and lays the foundation for subsequent research. The proposed framework has significant implications in that it can contribute to the performance of startups, accelerators, and startup support organizations.

The Photography as Technological Aesthetics (데크놀로지 미학으로서의 사진)

  • Jin, Dong-Sun
    • Journal of Science of Art and Design, v.11, pp.221-249, 2007
  • Today, photography faces a crisis of identity and an ontological dilemma arising from the digital imaging process in new technological forms. To say that the traditional photographic medium, which has changed the way we view the world and ourselves, needs rethinking is perhaps an understatement: photography has transformed our essential understanding of reality. Photographic images are no longer regarded as automatic recordings, innocent evidence, or mirrors of reality. Rather, photography constructs the world for our entertainment, helping to create the comforting illusions by which we live. The recognition that photographs are constructions rather than neutral reflections of reality is the basis of their actual presence within the contemporary photographic world, and it comes as a shock. This thesis examines the problems of photographic identity and the ontological crisis brought about by digital photographic imagery, which allows reproduction in the era of electronic simulation. Photography loses its special aesthetic status, no longer serving as true information or exclusive evidence through traditional film and paper, which offered both technological accuracy and a medium-specific aesthetic. As a result, photography faces two crises: one of photographic ontology (the introduction of computerized digital images) and one of photographic epistemology (broader changes in ethics, knowledge, and culture). Taken together, these crises apparently threaten us with the death of photography, with the 'end' of photography and the culture it sustains. The thesis explores the dilemma of photography's ontology and epistemology, especially the automatic index and digital codes, in relation to the medium's origin, meaning, and identity as a technological medium. In particular, it focuses on the analog image's presence, rooted in the material world of nature, and the digital image's presence, rooted in the cultural situations of our society. It also examines how the main issues in the history of photography have centered on ontological arguments since the medium's discovery in 1839. Photography has never been a single, static technological form; rather, its nearly two centuries of development have been marked by numerous competing technological innovations and self-revolutions on both sides. The thesis analyzes recent accounts of photography through the medium's concept, meaning, and identity, comparing film-based and digital-based images from the perspectives of photographic ontology and epistemology. Its structure is fairly straightforward: it examines what appear to be two opposing views of photographic conditions and ontological situations, contrasting views that locate the value of photography in its fundamental characteristics as a medium. It also seeks a possible solution to the dilemma of photographic ontology by tracing the medium's origins from the early nineteenth century to the questions now raised about the different (analog/digital) meanings of photography. Finally, the thesis concludes that the photographic ontological crisis reflects a paradoxical dynamic structure left unresolved in the origins of the medium itself. Moreover, photography is not a single ontological identity, and it cannot be understood as having a static identity or singular status within the dynamic field of technologies, practices, and images.


Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.59-83, 2018
  • With the growing importance of sentiment analysis for grasping the needs of customers and the public, various deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English, the sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the model; here, word vectors are vector representations of words obtained by splitting sentences on space characters. There are several ways to derive word vectors. One is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data, which have been widely used in sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. Therefore, this study uses 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme, derived by applying an existing word vector mechanism to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors to improve classification accuracy? Is it appropriate to apply a typical word vector model, which relies primarily on word form, to Korean with its high homonym ratio? Does text preprocessing, such as correcting spelling or spacing errors, affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely the first encountered when applying deep learning models to Korean text, and summarize them in three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can deep learning achieve a satisfactory level of classification accuracy in Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test data, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, roughly corresponding to Google's news data. The six primary sets of morpheme vectors constructed in this study differ on three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics reviews, with low grammatical correctness. Second, they differ in degree of preprocessing: sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they differ in the form of input fed into the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of included morphemes, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme appear to have no definite influence on classification accuracy.
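
The following is a minimal sketch of the two stages the abstract describes: CBOW morpheme vectors (window 5, dimension 300, per the abstract) feeding a non-static CNN, i.e., an embedding layer initialized from the pretrained vectors but left trainable, in the style of Kim (2014). The toy reviews, labels, filter widths, and other hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# CBOW morpheme vectors + non-static CNN sentiment classifier (sketch).
import numpy as np
from gensim.models import Word2Vec
import tensorflow as tf

# Toy morpheme-split reviews with 0/1 sentiment labels (placeholders).
docs = [["배송", "빠르", "고", "좋", "아요"], ["향", "이", "별로", "예요"]]
labels = np.array([1, 0])

# CBOW (sg=0) morpheme vectors: window 5, 300 dimensions, per the abstract.
w2v = Word2Vec(docs, vector_size=300, window=5, min_count=1, sg=0)

vocab = {m: i + 1 for i, m in enumerate(w2v.wv.index_to_key)}  # index 0 = padding
max_len = 20
X = np.zeros((len(docs), max_len), dtype="int32")
for r, doc in enumerate(docs):
    for c, m in enumerate(doc[:max_len]):
        X[r, c] = vocab.get(m, 0)

# Embedding matrix copied from the pretrained morpheme vectors.
emb = np.zeros((len(vocab) + 1, 300), dtype="float32")
for m, i in vocab.items():
    emb[i] = w2v.wv[m]

# Non-static: the embedding starts from the CBOW vectors and stays trainable,
# so the morpheme vectors are fine-tuned during supervised training.
inp = tf.keras.Input(shape=(max_len,))
x = tf.keras.layers.Embedding(
    len(vocab) + 1, 300,
    embeddings_initializer=tf.keras.initializers.Constant(emb),
    trainable=True)(inp)
convs = []
for k in (3, 4, 5):  # parallel filter widths, as in Kim (2014)
    c = tf.keras.layers.Conv1D(100, k, activation="relu")(x)
    convs.append(tf.keras.layers.GlobalMaxPooling1D()(c))
x = tf.keras.layers.Concatenate()(convs)
x = tf.keras.layers.Dropout(0.5)(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)
```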

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae; Kang, Jungseok
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.1-32, 2018
  • Corporate defaults ripple through the local and national economy, affecting far more than the stakeholders of the bankrupt companies themselves, including managers, employees, creditors, and investors. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing diverse corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even afterwards, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to reflect diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables in corporate default vary over time: Deakin's (1972) reexamination of Beaver's (1967, 1968) and Altman's (1968) analyses shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for time-dependent bias with a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train time series deep learning models on data before the financial crisis (2000-2006). Parameter tuning of the existing models and of the deep learning time series algorithms is then conducted on validation data that includes the financial crisis period (2007-2008). The result is a model that shows patterns similar to the training data and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found in validation. Finally, the corporate default prediction models trained over the nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of a corporate default prediction model based on a deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis, the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy follows Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the problems of nonlinear variables, multicollinearity, and lack of data; the logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, eventually, to intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and is more effective in predictive power. While the current government and governments overseas are working hard, through the Fourth Industrial Revolution, to integrate such systems into the everyday life of their nations and societies, deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, this paper is intended to serve as comparative reference material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
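
To make the modeling setup concrete, here is a minimal sketch of an LSTM classifier over per-firm sequences of annual financial ratios, the kind of deep learning time series model the abstract compares against static approaches. The panel dimensions, stand-in random data, and hyperparameters are illustrative assumptions; in the study, firms would be split chronologically (train to 2006, validate 2007-2008, test 2009) rather than drawn at random.

```python
# LSTM over annual financial-ratio sequences for default prediction (sketch).
import numpy as np
import tensorflow as tf

n_firms, n_years, n_ratios = 500, 7, 12                    # assumed panel shape
X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")  # stand-in ratios
y = (np.random.rand(n_firms) < 0.05).astype("float32")            # default flag

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_years, n_ratios)),
    tf.keras.layers.LSTM(32),                 # summarizes the firm's history
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```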

A Study for Strategy of On-line Shopping Mall: Based on Customer Purchasing and Re-purchasing Pattern (시스템 다이내믹스 기법을 활용한 온라인 쇼핑몰의 전략에 관한 연구 : 소비자의 구매 및 재구매 행동을 중심으로)

  • Lee, Sang-Gun; Min, Suk-Ki; Kang, Min-Cheol
    • Asia Pacific Journal of Information Systems, v.18 no.3, pp.91-121, 2008
  • Electronic commerce, commonly known as e-commerce, has become a major business trend. The volume of trade conducted electronically has grown extraordinarily with the development of Internet technology. Most electronic commerce is conducted between businesses and customers; accordingly, e-commerce research has sought to identify customers' needs and behaviors through statistical methods. However, such statistical research, mostly questionnaire-based, is static: it cannot capture the dynamic relationship between initial purchasing and repurchasing. This study therefore proposes a dynamic research model for analyzing the causes of initial purchasing and repurchasing. The paper is based on system dynamics theory, using a simulation model with some restrictions grounded in TAM (Technology Acceptance Model), PAM, and TPB (Theory of Planned Behavior). It investigates not only customers' purchasing and repurchasing behavior over time but also their interactive effects on one another. The research model comprises six scenarios and three steps for analyzing customer behavior: the first step studies purchasing situations, the second studies repurchasing situations, and the third studies the relationship between initial purchasing and repurchasing. The six scenarios are designed to reveal customers' purchasing patterns under environmental changes, varying six factors: (1) the number of products; (2) the number of contents in the on-line shopping mall; (3) the presence or absence of multimedia files on the shopping mall web site; (4) the grade of the on-line community; (5) product quality; and (6) the customer's degree of confidence in the products. The first three variables are used to study purchasing behavior, and the others to study repurchasing behavior. Through the simulation, the paper presents several interrelated results about customer purchasing behavior. For example, active community activity is not a factor that directly increases purchasing, but it does increase the word-of-mouth effect, and the higher the product quality, the greater the word-of-mouth effect. The number of products and the number of contents on the web site have the same influence on buying behavior. The simulations not only display the result of each scenario but also show how the factors affect one another. Hence, an electronic commerce firm can craft a more realistic marketing strategy for consumer behavior through this dynamic simulation research, and the dynamic analysis can predict outcomes that support marketing decisions using time-line graphs. Consequently, this dynamic simulation analysis can be a useful research model for building a firm's competitive advantage. However, the simulation model needs further study; with respect to reality, it has some limitations, omitting several factors that affect buying behavior. The first missing factor is the customer's degree of brand recognition; the second is the degree of customer satisfaction; the third is the power of word of mouth in a specific region, since word of mouth significantly shapes a region's culture and even its people's buying behavior. The last missing factor is the user interface environment of the Internet or other on-line shopping tools. To obtain more realistic results, these factors would be essential to better research in future studies.
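
For readers unfamiliar with system dynamics, the following is a minimal stock-and-flow sketch in the spirit of the paper's model, not the authors' actual calibrated model: potential customers convert to first-time buyers, word of mouth from buyers feeds back into the conversion rate, and some buyers become repeat purchasers. All stocks, rates, and the time horizon are illustrative assumptions.

```python
# Stock-and-flow simulation via simple Euler integration (sketch).
dt, horizon = 1.0, 52                      # weekly steps over one year
potential, buyers, repeaters = 10_000.0, 0.0, 0.0
base_rate, wom_strength, repeat_rate = 0.005, 0.00002, 0.10  # assumed rates

history = []
for week in range(int(horizon / dt)):
    # Conversion grows with word of mouth from existing (re)purchasers.
    conversion = potential * (base_rate + wom_strength * (buyers + repeaters))
    repurchase = buyers * repeat_rate
    potential -= conversion * dt
    buyers += (conversion - repurchase) * dt
    repeaters += repurchase * dt
    history.append((week, potential, buyers, repeaters))

for week, p, b, r in history[::13]:        # quarterly snapshots of the stocks
    print(f"week {week:2d}: potential={p:7.0f} buyers={b:7.0f} repeaters={r:6.0f}")
```

Tracing the three stocks over time is what yields the time-line graphs the abstract mentions; varying `base_rate`, `wom_strength`, or the initial stocks corresponds to running the different scenarios.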

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul; Kwak, Hukeun; Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems, v.4 no.6, pp.185-196, 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service), compared to energy-unaware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle the current user requests are kept ON. Previous studies on energy-aware server clusters have put effort into reducing power consumption further or keeping QoS, but they do not consider energy efficiency well. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance relative to power consumption. The method repeats the following procedure for adjusting the power modes of the servers. First, according to the current load and traffic pattern, it classifies the current workload into a predetermined pattern type. Second, it searches a learning table to check whether learning has already been performed for that workload pattern type; if so, it uses the stored parameters, and otherwise it performs learning for the pattern type to find the best parameters in terms of energy efficiency and stores them. Third, it adjusts the server power modes with those parameters. We implemented the proposed method and ran experiments on a cluster of 16 servers using three kinds of load patterns. The experimental results show that the proposed method is more energy efficient than the existing methods: the number of good responses per unit of power consumed is 99.8%, 107.5%, and 141.8% of that of the existing static method, and 102.0%, 107.0%, and 106.8% of that of the existing prediction method, for the banking, real, and virtual load patterns, respectively.
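
The control loop in the abstract can be sketched as a classify-lookup-learn cycle. In this sketch the pattern classifier, the parameter search, and the threshold names are illustrative stubs, not the paper's actual classification scheme or learned parameters.

```python
# Learning-table control loop for power-mode adjustment (sketch).
import random

learning_table: dict[str, dict[str, float]] = {}

def classify_pattern(load: float, variance: float) -> str:
    # Placeholder classifier: bucket the workload by level and burstiness.
    level = "high" if load > 0.6 else "low"
    shape = "bursty" if variance > 0.2 else "steady"
    return f"{level}-{shape}"

def learn_parameters(pattern: str) -> dict[str, float]:
    # Placeholder for the autonomous search; in full form, candidate
    # parameters would be scored by (good responses / watts) and the
    # best-scoring set kept.
    return {"scale_up_threshold": random.uniform(0.5, 0.9),
            "scale_down_threshold": random.uniform(0.1, 0.4)}

def adjust_power_modes(load: float, variance: float) -> dict[str, float]:
    pattern = classify_pattern(load, variance)
    if pattern not in learning_table:       # first encounter: learn and store
        learning_table[pattern] = learn_parameters(pattern)
    params = learning_table[pattern]
    # ...switch servers ON/OFF against the thresholds in params here...
    return params

print(adjust_power_modes(load=0.7, variance=0.05))
```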

Stud and Puzzle-Strip Shear Connector for Composite Beam of UHPC Deck and Inverted-T Steel Girder (초고성능 콘크리트 바닥판과 역T형 강거더의 합성보를 위한 스터드 및 퍼즐스트립 전단연결재에 관한 연구)

  • Lee, Kyoung-Chan; Joh, Changbin; Choi, Eun-Suk; Kim, Jee-Sang
    • Journal of the Korea Concrete Institute, v.26 no.2, pp.151-157, 2014
  • Since recently developed Ultra-High-Performance Concrete (UHPC) provides very high strength, stiffness, and durability, many studies have examined applying UHPC to bridge decks. Owing to the high strength and stiffness of a UHPC bridge deck, the structural contribution of the top flange of a steel girder composited with a UHPC deck is much lower than with a conventional concrete deck. From this point of view, this study proposes an inverted-T-shaped steel girder composited with a UHPC deck. Such a girder requires a new type of shear connector, because conventional shear connectors are welded to the top flange. The study proposes three types of shear connectors and evaluates their ultimate strength via static push-out tests. The first is a stud shear connector welded directly to the web of the girder in the transverse direction; the second is a puzzle-strip shear connector developed by the European Commission; and the third combines the stud and puzzle-strip connectors. The experimental results showed that the ultimate strength of the transverse stud was 26% greater than that given in the AASHTO LRFD Bridge Design Specifications, but the splitting crack observed in the UHPC deck was severe enough that another measure needs to be developed to prevent it. The ultimate strength of the puzzle-strip specimen was 40% greater than that evaluated by the European Commission's equation. The specimens combining stud and puzzle-strip shear connectors provided less strength than the arithmetic sum of the two individual strengths, so there appears to be no advantage in combining transverse stud and puzzle-strip shear connectors.
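
For context on the code comparison, the nominal stud strength in the AASHTO LRFD specification takes the familiar form Qn = 0.5·Asc·√(f'c·Ec) ≤ Asc·Fu. The sketch below evaluates it in consistent SI units; the stud diameter and material values are illustrative assumptions, not the specimen properties reported in the paper.

```python
# Nominal shear-stud strength per the AASHTO LRFD formula (sketch, SI units).
import math

d_stud = 22.0      # stud diameter, mm (assumed)
f_ck = 120.0       # UHPC compressive strength, MPa (assumed)
E_c = 45_000.0     # concrete elastic modulus, MPa (assumed)
F_u = 400.0        # stud tensile strength, MPa (assumed)

A_sc = math.pi * d_stud**2 / 4.0                  # stud cross-section, mm^2
Q_n = min(0.5 * A_sc * math.sqrt(f_ck * E_c),     # concrete-governed branch
          A_sc * F_u)                             # steel-governed cap
print(f"Qn = {Q_n / 1000:.1f} kN per stud")
```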

Analyses of the Efficiency in Hospital Management (병원 단위비용 결정요인에 관한 연구)

  • Ro, Kong-Kyun; Lee, Seon
    • Korea Journal of Hospital Management, v.9 no.1, pp.66-94, 2004
  • The objective of this study is to examine how to maximize the efficiency of hospital management by minimizing the unit cost of hospital operation. To this end, the paper develops a profit maximization model based on the cost minimization dictum, using maximum likelihood estimation. The preliminary survey data are drawn from the annual statistics and analyses published by the Korea Health Industry Development Institute and the Korean Hospital Association. Maximum likelihood analyses are conducted on the cost (function) information of 36 hospitals selected by stratified random sampling according to hospital size and location (urban or rural). Although the sample is relatively small, the sampling method and the high response rate make the estimation power of the statistical analyses acceptable. The conceptual framework is adapted from models of hospital cost determinants used in previous studies. Under this framework, the study postulates that the unit cost of hospital operation is determined by size, scope of service, technology (the production function) as measured by capacity utilization, the labor-capital ratio and labor input-mix variables, and exogenous variables. The variables representing these cost determinants are selected by step-wise regression, so that only statistically significant variables are used in analyzing how they affect hospital unit cost. The results show that the adopted models of hospital cost determinants are well chosen: the overall determination (R²) of each model turned out to be significant, regardless of the variables used to represent the cost determinants. Specifically, the size and scope of service, however measured (the number of admissions per bed, the number of ambulatory visits per bed, adjusted inpatient days, or adjusted outpatients), reduce hospital unit costs as measured by cost per admission, per inpatient day, or per office visit, implying economies of scale in hospital operation. The technology used in operating a hospital affects the unit cost much as the static theory of the firm postulates: capacity utilization, represented by inpatient days per employee, turned out to have a statistically significant negative impact on the unit cost of hospital operation, while payroll expenses per inpatient have a positive effect. The input mix, represented by the ratio of doctors, nurses, or medical staff to general employees, supports the known thesis that specialized manpower costs more than general employees, and the labor-capital ratio, represented by employees per 100 beds, has the expected positive effect on cost. As for the exogenous variables, when represented by the percentage of urban population at the hospital's location, the regression shows that hospitals in urban areas have higher costs than those in rural areas. Finally, the case study of the sample hospitals offers administrators specific information on how their costs compare with those of other hospitals. For example, the administrator of a small hospital located in a city can compare the various costs of its operation with those of similar hospitals, and may thereby identify which cost determinants make the hospital's operating cost higher or lower than that of its peers.
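
As a rough illustration of the step-wise variable selection the abstract mentions, here is a minimal forward-selection sketch using OLS p-values with statsmodels. The file name, column names, and significance threshold are illustrative assumptions, not the study's actual variables or procedure.

```python
# Forward stepwise selection of cost determinants by OLS p-value (sketch).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hospital_costs.csv")            # assumed data layout
y = df["cost_per_inpatient_day"]
candidates = ["admissions_per_bed", "visits_per_bed",
              "inpatient_days_per_employee", "employees_per_100_beds",
              "urban_population_pct"]

selected: list[str] = []
while candidates:
    # Try adding each remaining candidate; keep the most significant one.
    pvals = {}
    for c in candidates:
        X = sm.add_constant(df[selected + [c]])
        pvals[c] = sm.OLS(y, X).fit().pvalues[c]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:                       # stop if nothing significant
        break
    selected.append(best)
    candidates.remove(best)

print("selected determinants:", selected)
```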


Evaluation of Chloride and Chemical Resistance of High Performance Mortar Mixed with Mineral Admixture (광물성 혼화재료를 혼입한 고성능 모르타르의 염해 및 화학저항성 평가)

  • Lee, Kyeo-Re; Han, Seung-Yeon; Choi, Sung-Yong; Yun, Kyong-Ku
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.5, pp.618-625, 2018
  • Over time, exposed concrete structures are affected by a range of environmental, chemical, and physical factors that penetrate the concrete and degrade its initial performance. Identifying and preventing further performance degradation caused by such deterioration has become increasingly important, and in recent years evaluating the target service life has attracted growing interest. Under repeated freeze-thaw action, parts of the concrete swell and shrink repeatedly; meanwhile, chloride ions present in seawater penetrate the concrete and accelerate deterioration by corroding the reinforcing bars in the structure. For this reason, coastal concrete structures exposed to freeze-thaw action are more prone to this type of deterioration than inland structures. The aim of this study was to develop a high performance mortar mixed with mineral admixtures to improve the durability of concrete structures near seawater, and experimental studies were carried out on the strength and durability of the mortar. Silica fume and metakaolin were each mixed at ratios of 3, 7, and 10%, and ultra-fine fly ash at 5, 10, 15, and 20%. The mortar specimens prepared with these admixtures were subjected to static strength tests at 1 and 28 days of age, and accelerated deterioration tests, namely the chloride ion penetration resistance test, the sulfuric acid resistance test, and the salt resistance test, were carried out at 28 days of age. The chloride diffusion coefficient was calculated from a series of rapid chloride penetration tests and used to estimate the service life against corrosion due to chloride ion penetration according to the KCI, ACI, and FIB codes. The mortar with 10% metakaolin had the longest service life, approximately 470 years according to the KCI code.
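
The service-life estimates the abstract describes rest on the error-function solution of Fick's second law, C(x,t) = Cs·(1 − erf(x/(2√(Dt)))), the relation underlying chloride-ingress models in codes such as those cited. The sketch below finds the time at which the chloride content at the rebar depth reaches a corrosion threshold; the cover depth, surface concentration, threshold, and diffusion coefficient are illustrative assumptions, not the paper's measured values.

```python
# Chloride-ingress service-life estimate from Fick's second law (sketch).
import math

D = 5e-13        # chloride diffusion coefficient, m^2/s (assumed)
cover = 0.05     # concrete cover depth, m (assumed)
C_s = 0.5        # surface chloride content, % by binder mass (assumed)
C_crit = 0.05    # corrosion-initiation threshold, % (assumed)

SECONDS_PER_YEAR = 365.25 * 24 * 3600
years = 1
while years < 2000:
    t = years * SECONDS_PER_YEAR
    # Chloride concentration at the rebar depth after t seconds.
    C = C_s * (1 - math.erf(cover / (2 * math.sqrt(D * t))))
    if C >= C_crit:          # threshold reached: corrosion initiation
        break
    years += 1
print(f"estimated corrosion initiation after ~{years} years")
```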