• Title/Summary/Keyword: inference model

Search Results: 1,171

Inference of the Conceptual Model of Wild Gardens - A Comparative Study of William Robinson and Gertrude Jekyll - (와일드 가든(Wild Garden)의 개념적 모형 유추 - 윌리암 로빈슨(William Robinson)과 거투르드 제킬(Gertrude Jekyll)의 비교 연구 -)

  • Park, Eun-Yeong;Yoon, Sang-Jun
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.31 no.4 / pp.62-69 / 2013
  • The origin of natural planting, which is drawing renewed attention as natural and environmental problems come to the fore, can be found in wild gardens. They were begun by William Robinson and concretely embodied by Gertrude Jekyll. Wild gardens are worth shedding new light on, as they served as a pathbreaker for ecological design and an important foundation for naturalism, both of which are among the most important topics in modern gardens. This study aimed to infer the conceptual model of wild gardens and identify their historical significance by comparatively analyzing Robinson's Gravetye Manor and Jekyll's Munstead Wood. The results are as follows. First, both designers drew their inspiration for spatial organization from the basic cottage garden and introduced informal forms. Second, in terms of materials, they had observed various climates on their journeys and could therefore use native plants, naturalized plants, and exotic species based on their understanding of plant hardiness; they also showed interest in woodland and forest plants. Third, in terms of design techniques, they investigated the colors and textures of individual plants and their relationships to produce a variety of views that resembled nature in microcosm. Fourth, in terms of maintenance, their basic orientation was minimum maintenance, allowing plants to live according to their nature.

Development of a surrogate model based on temperature for estimation of evapotranspiration and its use for drought index applicability assessment (증발산 산정을 위한 온도기반의 대체모형 개발 및 가뭄지수 적용성 평가)

  • Kim, Ho-Jun;Kim, Kyoungwook;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.54 no.11 / pp.969-983 / 2021
  • Evapotranspiration, one of the hydrometeorological components, is considered an important variable for water resource planning and management and is primarily used as input data for hydrological models such as water balance models. The FAO56 PM method is recommended as the standard approach for estimating reference evapotranspiration with relatively high accuracy. However, the FAO56 PM method is often difficult to apply because it requires a considerable number of hydrometeorological variables. For this reason, the Hargreaves equation has been widely adopted to estimate reference evapotranspiration. In this study, the parameters of the Hargreaves equation were calibrated with relatively long-term data within a Bayesian framework, and the model was validated with statistical indices (CC, RMSE, IoA). Monthly RMSE over the validation period ranged from 7.94 to 24.91 mm/month, and the results confirmed that accuracy was significantly improved compared with the original Hargreaves equation. Further, an evaporative demand drought index (EDDI) based on the evaporative demand (E0) was proposed. To confirm its effectiveness, the EDDI was evaluated for the recent drought events of 2014-2015 and 2018, along with precipitation and the SPI. For the Han River watershed in 2018, the weekly EDDI rose above 2, confirming that the EDDI detects the onset of drought caused by heatwaves more effectively. The EDDI can therefore be used as a drought index, particularly for monitoring heatwave-driven flash droughts, complementing the SPI.
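For context, a minimal sketch of the standard Hargreaves form is given below, assuming daily inputs and the textbook coefficients 0.0023, 17.8, and 0.5; the study calibrates such coefficients within a Bayesian framework, and the sample values here are illustrative, not taken from its dataset.

```python
import numpy as np

def hargreaves_et0(tmax, tmin, ra, c=0.0023, a=17.8, b=0.5):
    """Reference evapotranspiration (mm/day) from the Hargreaves equation.

    tmax, tmin : daily max/min air temperature (deg C)
    ra         : extraterrestrial radiation expressed as mm/day of evaporation
    c, a, b    : empirical coefficients; 0.0023, 17.8, 0.5 are the textbook values,
                 and the study calibrates such coefficients within a Bayesian framework.
    """
    tmean = (tmax + tmin) / 2.0
    return c * ra * (tmean + a) * np.maximum(tmax - tmin, 0.0) ** b

# Illustrative daily inputs (not from the paper's dataset)
tmax, tmin, ra = np.array([28.0]), np.array([17.0]), np.array([15.2])
print(hargreaves_et0(tmax, tmin, ra))  # roughly 4.7 mm/day for these inputs
```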

Evaluation of flood frequency analysis technique using measured actual discharge data (실측유량 자료를 활용한 홍수량 빈도해석 기법 평가)

  • Kim, Tae-Jeong;Kim, Jang-Gyeong;Song, Jae-Hyun;Kim, Jin-Guk;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.55 no.5 / pp.333-343 / 2022
  • For water resource management, design floods are calculated using flood frequency analysis techniques and rainfall-runoff models. Flood frequency analysis estimates the probabilistic design flood by directly analyzing observed discharge data and is theoretically regarded as the most accurate method. In practice, however, frequency analysis of measured discharge has been limited by data availability. In this study, design flood frequency analysis was performed using measured discharge data derived reliably from stage-discharge rating curves. The distribution parameters were estimated by Bayesian inference, and the uncertainty of the design flood at each frequency was quantified. The resulting design floods were found to be close to those calculated by a rainfall-runoff model driven by long-term rainfall data. It is concluded that long-term measured discharge data obtained through hydrological surveys enable hydrological analysis from various perspectives.
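As a minimal illustration of how a design flood is read from a fitted frequency distribution, the sketch below fits a Gumbel (EV1) distribution by maximum likelihood and evaluates quantiles at several return periods; the distribution choice, the maximum-likelihood fit (in place of the paper's Bayesian estimation), and the discharge series are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum discharge series (m^3/s); not from the study's gauging data
ams = np.array([410., 530., 620., 480., 700., 950., 560., 830., 640., 720.,
                590., 880., 510., 760., 670.])

# Fit a Gumbel (EV1) distribution by maximum likelihood as a stand-in for the
# Bayesian parameter estimation used in the paper.
loc, scale = stats.gumbel_r.fit(ams)

# Design flood = quantile at non-exceedance probability 1 - 1/T for return period T
for T in (20, 50, 100):
    q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-yr design flood: {q:.0f} m^3/s")
```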

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon;Ji-Won Ock;Min-Jeong Kim;Sa-Ra Hong;Sae-Rom Park;Il-Gu Lee
    • Convergence Security Journal / v.22 no.3 / pp.25-32 / 2022
  • Recently, the image processing industry has grown rapidly as deep learning-based technology has been introduced into image recognition and detection. With the development of deep learning, vulnerabilities of learning models to adversarial attacks continue to be reported. However, studies on countermeasures against poisoning attacks, which inject malicious data during training, remain insufficient. Conventional countermeasures against poisoning attacks have the limitation that the training data must be inspected each time in a separate detection and removal step. Therefore, in this paper, we propose a technique that reduces the attack success rate by applying modifications to the training and inference data, without a separate detection and removal process for the poisoned data. The one-shot kill poison attack, a clean-label poisoning attack proposed in previous work, was used as the attack model, and attack performance was evaluated for both a general attacker and an intelligent attacker according to the attack strategy. The experimental results show that, when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared with the conventional method.
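A minimal sketch of the general idea follows: lightweight augmentations are applied to inputs and predictions are averaged over several augmented views, so a single poisoned trigger pattern is less likely to dominate. The specific transforms, their magnitudes, and the `predict_with_augmentation` helper are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torchvision import transforms

# Illustrative augmentation pipeline applied to training and inference inputs;
# transform choices and magnitudes are assumptions, not the paper's settings.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

def predict_with_augmentation(model, pil_image, n_views=8):
    """Average predictions over several augmented views of the same input,
    so that a localized poison trigger is less likely to control the output."""
    model.eval()
    with torch.no_grad():
        views = torch.stack([augment(pil_image) for _ in range(n_views)])
        logits = model(views)                 # shape: (n_views, num_classes)
        return logits.softmax(dim=-1).mean(dim=0)
```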

Intentionality Judgement in the Criminal Case: The Role of Moral Character (형사사건에서의 고의성 판단: 도덕적 특성의 역할)

  • Choi, Seung-Hyuk;Hur, Taekyun
    • Korean Journal of Culture and Social Issue / v.26 no.1 / pp.25-45 / 2020
  • Intentionality judgment in criminal cases is a core part of fact finding, forming the root of judgments of guilt and of sentencing. However, a third party can never be certain about intentionality, because it reflects the subjective state of the agent. The mechanism behind intentionality judgment therefore needs to be properly understood by academia and the criminal justice system. Previous studies of intentionality judgment models have shown inconsistent results. Mental-state models propose the agent's foreseeability (belief) and desire at the time of the offence as the key factors in intentionality judgment; these factors are consistent with what is central to intentionality in criminal law. In contrast, the key factors in moral-evaluation models are the blameworthiness of the agent and the badness of the outcome, reflected in the consequences of the act. More recently, the deep-self concordance model has emerged, suggesting that the important factors in intentionality judgment are not mental states or moral evaluations but the individual's deep self. However, these models are limited in that they do not consider important features of criminal cases: the consequence of the case is inevitably negative, and the actor, as a party facing legal punishment, rarely expresses his or her mental state at the time of the act. Therefore, based on existing intentionality judgment studies and the characteristics of criminal cases, this study suggests that inferences about what kind of person the agent originally was play a key role in judging intentionality in criminal cases. This is the moral-character model. Furthermore, this study discusses what the media and criminal justice institutions should keep in mind and directions for future research.

Comparison between REML and Bayesian via Gibbs Sampling Algorithm with a Mixed Animal Model to Estimate Genetic Parameters for Carcass Traits in Hanwoo(Korean Native Cattle) (한우의 도체형질 유전모수 추정을 위한 REML과 Bayesian via Gibbs Sampling 방법의 비교 연구)

  • Roh, S.H.;Kim, B.W.;Kim, H.S.;Min, H.S.;Yoon, H.B.;Lee, D.H.;Jeon, J.T.;Lee, J.G.
    • Journal of Animal Science and Technology / v.46 no.5 / pp.719-728 / 2004
  • The aims of this study were to estimate genetic parameters for carcass traits of Hanwoo (Korean Native Cattle) and to compare two statistical algorithms for estimating them. Data obtained from 1,526 steers at the Hanwoo Improvement Center and the Hanwoo Improvement Complex Area from 1996 to 2001 were used for the analyses. The carcass traits considered were carcass weight, dressing percent, eye muscle area, backfat thickness, and marbling score. Genetic parameters estimated by the EM-REML algorithm were compared with those obtained by Bayesian inference via Gibbs sampling to examine their statistical properties. The heritabilities of the carcass traits estimated by REML were 0.28, 0.25, 0.35, 0.39, and 0.51, respectively, and those estimated by Gibbs sampling were 0.29, 0.25, 0.40, 0.42, and 0.54, respectively. These estimates were not significantly different, even though the heritabilities from Gibbs sampling were higher than those from REML. Since the estimates from the two methods did not differ significantly in this study, it is inferred that both methods can be applied efficiently to the analysis of carcass traits in cattle. However, further studies are needed to define an optimal statistical method for handling large-scale performance data.
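To make the Gibbs sampling side concrete, here is a minimal sketch of a Gibbs sampler for variance components in a simple one-way random-effects model with vague inverse-gamma priors; the model, hyperparameters, and simulated data are assumptions for illustration only, whereas the paper fits a full mixed animal model to the Hanwoo records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced data: q groups with n records each (not the Hanwoo data).
q, n = 50, 20
true_s2a, true_s2e = 0.3, 0.7
a_true = rng.normal(0.0, np.sqrt(true_s2a), q)
y = 5.0 + a_true[:, None] + rng.normal(0.0, np.sqrt(true_s2e), (q, n))

n_iter, burn_in = 5000, 1000
mu, a = y.mean(), np.zeros(q)
s2a, s2e = 1.0, 1.0
alpha0, beta0 = 0.001, 0.001        # vague inverse-gamma hyperparameters (assumed)
samples = []

for it in range(n_iter):
    # overall mean | rest (flat prior)
    mu = rng.normal((y - a[:, None]).mean(), np.sqrt(s2e / y.size))
    # random effects a_i | rest
    var_a = 1.0 / (n / s2e + 1.0 / s2a)
    mean_a = var_a * (y - mu).sum(axis=1) / s2e
    a = rng.normal(mean_a, np.sqrt(var_a))
    # variance components | rest (inverse-gamma full conditionals)
    s2a = 1.0 / rng.gamma(alpha0 + q / 2.0, 1.0 / (beta0 + 0.5 * np.sum(a ** 2)))
    resid = y - mu - a[:, None]
    s2e = 1.0 / rng.gamma(alpha0 + y.size / 2.0, 1.0 / (beta0 + 0.5 * np.sum(resid ** 2)))
    if it >= burn_in:
        samples.append(s2a / (s2a + s2e))   # heritability-like variance ratio

print("posterior mean of variance ratio:", np.mean(samples))
```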

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted either by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly guessed from an emotional word such as 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use 'aspect' for convenience. Note that ATSC analyzes the sentiment towards given aspect terms and so deals only with explicit aspects, whereas ACSC treats implicit aspects as well as explicit ones. This study seeks answers to the following issues, ignored in previous studies applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To answer these questions, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category is irrelevant to performance in the QA type. Although there may be some differences depending on the characteristics of the dataset, when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing the ACSC models used in this study could be applied similarly to other tasks such as ATSC.
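As a rough illustration of the QA-type sentence-pair input discussed above, the sketch below encodes a review together with an auxiliary aspect question for a BERT classifier using Hugging Face Transformers. The checkpoint name, the question template, and the three-way label set are assumptions; note also that this minimal head classifies from the [CLS] vector only, whereas the paper additionally reflects the aspect category token's output vector.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Checkpoint and label count are illustrative assumptions, not the paper's setup.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

review = "The restaurant is expensive but the food is really fantastic."
aspect_question = "what do you think of the price?"   # QA-type auxiliary sentence

# Encodes the pair as: [CLS] review [SEP] aspect question [SEP]
inputs = tokenizer(review, aspect_question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, 3): e.g. negative / neutral / positive
print(logits.softmax(dim=-1))
```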

A Future Study Agenda Applying Service Research Framework (서비스 연구 프레임워크 관점에서의 향후 연구과제)

  • Lee, JeungSun;Ahn, Jinho;Kim, Hyunsoo
    • Journal of Service Research and Studies / v.7 no.1 / pp.83-96 / 2017
  • The importance of service science is emphasized in the modern economy, and the value and necessity of service research are still increasing. Since the service research framework was proposed, services have been studied from various perspectives and incorporated into this single framework, and a direction and a new baseline for service research have been established. However, the modern economic and social environment, which could be described as a new era, has changed drastically with the Fourth Industrial Revolution, and more systematic research on services has become necessary. Therefore, this study analyzed the field of service research within the existing framework and suggested how studying the 'what' of services could broaden the horizon of service research. To do this, we analyzed recent service research trends by theme, identified the shortcomings of previous studies on services, and suggested directions and research themes for future work. Based on this study, we developed a general approach to the creation of new models from the viewpoint of service science, covering areas such as service innovation, service inference, service solutions, and service design leverage. In addition, service research and business models need to be extended to the utilization of service technology. This approach could help form the basis of future service development and help innovative companies use social media to create new value. The results of this study could contribute to deepening and expanding service research.

Designing fuzzy systems for optimal parameters of TMDs to reduce seismic response of tall buildings

  • Ramezani, Meysam;Bathaei, Akbar;Zahrai, Seyed Mehdi
    • Smart Structures and Systems / v.20 no.1 / pp.61-74 / 2017
  • One of the most reliable and simplest tools for structural vibration control in civil engineering is the Tuned Mass Damper (TMD). Provided that the frequency and damping parameters of these dampers are tuned appropriately, they can reduce the vibrations of the structure through the inertia forces they generate as they vibrate. Many different methods have been proposed to obtain the optimal parameters of a TMD: older approaches offer formulas based on simplified models and loadings, while newer procedures require the structure to be modeled completely. In this paper, exploiting the nonlinear decision-making of fuzzy systems and their ability to cope with various kinds of uncertainty, a method is proposed that takes advantage of both old and new approaches: a fuzzy system is designed to be practical to use and to reduce uncertainties related to models and applied loads. Designing the fuzzy system requires data on structures and the optimum TMD parameters corresponding to them; this information is obtained by modeling MDOF systems with various numbers of stories subjected to far-field and near-field earthquakes. The fuzzy systems are designed by three methods: a look-up table, grid partitioning of the data space, and clustering. Rule weights of the Mamdani fuzzy system based on the look-up table are then optimized with a genetic algorithm, and rule weights of the Sugeno fuzzy systems designed by grid partitioning and clustering are optimized with ANFIS (Adaptive Neuro-Fuzzy Inference System). Comparing these methods shows that the fuzzy system based on data clustering predicts the optimal TMD parameters most efficiently: the average errors in estimating the frequency and damping ratio are close to zero, and the standard deviations of the frequency and damping-ratio errors decrease by 78% and 4.1%, respectively, compared with the look-up table method, and by 2.2% and 1.8% compared with the grid-partitioning method. Finally, TMD parameters estimated for a 15-degree-of-freedom structure with the designed fuzzy system are compared with parameters obtained from the genetic algorithm and from empirical relations. Relative to the empirical relations, the fuzzy system reduces the roof maximum displacement and its RMS ratio by up to 1.9% and 2% under far-field earthquakes and by 0.4% and 2.2% under near-field earthquakes, respectively.
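For reference, the "old approach" empirical relations mentioned above can be illustrated with the classical Den Hartog tuning formulas for a TMD attached to an undamped SDOF system under harmonic excitation; this is a generic textbook sketch, not the specific empirical relations or fuzzy model used in the paper.

```python
import numpy as np

def den_hartog_tmd(mass_ratio):
    """Classical Den Hartog tuning for a TMD on an undamped SDOF system
    under harmonic excitation (an example of the 'old approach' formulas)."""
    mu = np.asarray(mass_ratio, dtype=float)
    freq_ratio = 1.0 / (1.0 + mu)                          # f_opt = omega_tmd / omega_structure
    damping_ratio = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio, damping_ratio

# Example: a 2% mass ratio, an assumed value for illustration
f_opt, zeta_opt = den_hartog_tmd(0.02)
print(f"f_opt = {f_opt:.3f}, zeta_opt = {zeta_opt:.3f}")   # about 0.980 and 0.084
```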

Development of an Artificial Neural Expert System for Rational Determination of Lateral Earth Pressure Coefficient (합리적인 측압계수 결정을 위한 인공신경 전문가 시스템의 개발)

  • 문상호;문현구
    • Journal of the Korean Geotechnical Society / v.15 no.1 / pp.99-112 / 1999
  • Using 92 values of the lateral earth pressure coefficient (K) measured in Korea, the trend of K with depth is analyzed and compared with the range of K defined by Hoek and Brown. Horizontal stress is generally larger than vertical stress in Korea: about 84% of the K values are above 1. In this study, elasto-plastic theory is applied to analyze the variation of K values, and the results are compared with those of numerical analysis. This reveals that erosion, sedimentation, and weathering of the earth's crust are important factors in determining K: surface erosion, large lateral pressure, and good rock mass increase K, while sedimentation decreases it. The study enables the effects of geological processes on K to be analyzed, especially at the shallow depths where underground excavation takes place. A neural network expert system using a multi-layer back-propagation algorithm is developed to predict K values. The neural network model has a correlation coefficient above 0.996 when compared with the measured data. Comparison with 9 measured data points not included in the back-propagation training shows an average inference error of 20% and a correlation coefficient above 0.95. The expert system developed in this study can be used for reliable determination of K values.
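A minimal sketch of the kind of back-propagation regression network involved, using scikit-learn; the input features, the tiny synthetic dataset, and the network size are purely illustrative assumptions, standing in for the 92 measured K values and the inputs used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: [depth (m), rock-mass rating]; target K = sigma_h / sigma_v.
# The real study used 92 measured K values from Korea with its own input features.
X = np.array([[50, 60], [120, 70], [200, 55], [350, 65], [500, 75],
              [80, 40], [150, 80], [300, 50], [450, 68], [600, 72]], dtype=float)
y = np.array([2.8, 1.9, 1.6, 1.3, 1.1, 2.2, 1.8, 1.4, 1.2, 1.0])

# A small multilayer perceptron trained by backpropagation, standing in for the
# paper's multi-layer back-propagation expert system.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(np.array([[250.0, 62.0]])))   # predicted K for an unseen case
```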
