• Title/Summary/Keyword: 집계함수 (aggregate function)


Valuation of Willingness to Pay for Forest Fire Prevention (산불 예방(豫防)을 위한 지불의사금액(支拂意思金額) 평가(評價))

  • Kim, Seong Il; Hong, Sung Kwon; Kim, Jae Jun; Kim, Tong Il
    • Journal of Korean Society of Forest Science / v.90 no.4 / pp.573-581 / 2001
  • The purposes of this study are to estimate the mean willingness to pay (WTP) for preventing forest fires using the contingent valuation method (CVM) and to identify the variables affecting WTP. A forest fire prevention fund was used as the payment vehicle to elicit respondents' WTP. A total of 500 adults residing in the Seoul metropolitan area were selected by two-stage cluster sampling and surveyed through face-to-face interviews. The scenario was designed to meet the requirements of the double-bounded dichotomous choice CVM. More than half of the respondents (64.6%) were willing to pay into the fund. The mean WTP was ₩4,532, so the total WTP for the population was ₩34,165,758,000. Calibration of a Weibull proportional hazards model showed that education level, environmental conservation intention, and a negative perception of the effects of forest fires were the independent variables most strongly influencing WTP.
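
The aggregate figure above follows directly from the per-person estimate; as a minimal arithmetic sketch using only the numbers reported in the abstract, the implied size of the paying population can be backed out as follows.

```python
# Back out the implied population from the WTP figures reported in the abstract above.
mean_wtp_krw = 4_532                   # mean willingness to pay per person (KRW)
total_wtp_krw = 34_165_758_000         # reported aggregate WTP for the population (KRW)

implied_population = total_wtp_krw / mean_wtp_krw
print(f"Implied population: {implied_population:,.0f} people")   # roughly 7.5 million
```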


Research on supporting the group by clause reflecting XML data characteristics in XQuery (XQuery에서의 XML 데이터 특성을 고려한 group by 지원을 위한 질의 표현 기법에 대한 연구)

  • Lee Min-Soo; Cho Hye-Young; Oh Jung-Sun; Kim Yun-Mi; Song Soo-Kyung
    • The KIPS Transactions: Part D / v.13D no.4 s.107 / pp.501-512 / 2006
  • XML is the most popular platform-independent data format used for communication between loosely coupled heterogeneous systems such as B2B applications or workflow systems. The powerful query language XQuery was developed to support diverse needs for querying XML documents. XQuery is designed to assemble results from diverse data sources into a uniquely structured query result, and it has therefore become the standard XML query language. Although the latest XQuery supports powerful search features, including iteration, its grouping mechanism is primitive and makes query expressions difficult and complex. This work therefore focuses on supporting a group by clause in the query expression to process XQuery grouping. We suggest that this is a more efficient way to express grouping for restructuring and aggregation functions over XML data. We propose an XQuery EBNF that includes the group by clause and implement an XQuery processing system with grouping functions based on the eXist Native XML Database.
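
The paper extends XQuery itself, but the grouping-plus-aggregation pattern it targets can be sketched outside XQuery; the Python snippet below (standard library only, with a hypothetical orders document) shows the kind of restructuring and aggregation a group by clause is meant to express.

```python
# Illustration only: group XML elements by an attribute, then aggregate per group.
# The <orders> document and its attributes are hypothetical.
import xml.etree.ElementTree as ET
from collections import defaultdict

xml_doc = """
<orders>
  <order region="east" amount="120"/>
  <order region="west" amount="80"/>
  <order region="east" amount="40"/>
</orders>
"""

totals = defaultdict(float)
for order in ET.fromstring(xml_doc).findall("order"):
    totals[order.get("region")] += float(order.get("amount"))   # group by region, sum amounts

for region, total in sorted(totals.items()):
    print(region, total)   # east 160.0, west 80.0
```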

Development of Auto-calibration System for Micro-Simulation Model using Aggregated Data (Case Study of Urban Express) (집계자료를 이용한 미시적 시뮬레이션 모형의 자동정산체계 개발 (도시고속도로사례))

  • Lee, Ho-Sang; Lee, Tae-Gyeong; Ma, Guk-Jun; Kim, Yeong-Chan; Won, Je-Mu
    • Journal of Korean Society of Transportation / v.29 no.1 / pp.113-123 / 2011
  • The application of micro-simulation models has extended further with improvements in computer performance and the development of more sophisticated models. For a micro-simulation model to accurately replicate field traffic conditions, model calibration is crucial. In Korea, studies on the calibration of micro-simulation models have been scarce, whereas many studies on the calibration of macro-simulation models have continued. This paper presents an auto-calibration of parameter values in a micro-simulation model (VISSIM) using a genetic algorithm. The RMSE (root mean square error) of collected versus simulated volumes on the urban expressway is used as the MOP (measure of performance), and the objective of the optimization is to minimize the RMSE. In a case study of an urban expressway (Nae-bu circular), the RMSE with the optimized parameter values decreased by 60.4% (19.3 → 7.6) compared with the default parameter values, showing that the proposed auto-calibration system is very effective.
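
A minimal sketch of the calibration loop described above, assuming a hypothetical run_simulation() stands in for a VISSIM run; a simple mutate-and-keep-best search is used here as a stand-in for the paper's genetic algorithm.

```python
# RMSE-based auto-calibration sketch. run_simulation() is a hypothetical placeholder
# for launching VISSIM with candidate driving-behaviour parameters.
import math
import random

def rmse(observed, simulated):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

def run_simulation(params):
    # Placeholder: a real system would run VISSIM and return simulated link volumes.
    return [p * 1000 for p in params]

observed_volumes = [1850, 1720, 1940]                      # illustrative detector counts
best = [random.uniform(1.0, 2.5) for _ in range(3)]
best_score = rmse(observed_volumes, run_simulation(best))

for _ in range(500):                                       # stand-in for GA generations
    candidate = [max(0.1, p + random.gauss(0, 0.05)) for p in best]
    score = rmse(observed_volumes, run_simulation(candidate))
    if score < best_score:
        best, best_score = candidate, score

print(round(best_score, 2), [round(p, 3) for p in best])
```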

Development of a Fatigue Damage Model of Wideband Process using an Artificial Neural Network (인공 신경망을 이용한 광대역 과정의 피로 손상 모델 개발)

  • Kim, Hosoung; Ahn, In-Gyu; Kim, Yooil
    • Journal of the Society of Naval Architects of Korea / v.52 no.1 / pp.88-95 / 2015
  • For frequency-domain spectral fatigue analysis, the probability density function (PDF) of the stress range needs to be estimated from the stress spectrum alone, which is a frequency-domain representation of the response. The probability distribution of the stress range of a narrow-band spectrum is known to follow the Rayleigh distribution; however, the PDF of a wide-band spectrum is difficult to define clearly because of the complicated shape of the spectrum. In this paper, efforts were made to establish the link between the PDF of the stress range and the structural response of a wide-band Gaussian random process. An artificial neural network, one of the most powerful system identification methods, was used to identify the multivariate functional relationship between idealized wide-band spectra and the resulting PDFs. To achieve this, each spectrum was idealized as a superposition of two triangles with arbitrary location, height, and width, intended to cover wide-band spectra, and the PDFs were represented by a linear combination of equally spaced Gaussian basis functions. To train the network under supervision, a variety of wide-band spectra were assumed, the converged PDF of the stress range was derived for each using the rainflow counting method, and these data sets were fed into a three-layer perceptron model. The resulting nonlinear least squares problem was solved using the Levenberg-Marquardt algorithm with a regularization term. The trained network was shown to reproduce the PDF of an arbitrary two-triangle wide-band spectrum with great success.
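
The PDF representation used as the network output can be sketched briefly; assuming NumPy, the snippet below fits a linear combination of equally spaced Gaussian basis functions to a target density (a Rayleigh density is used here only as an illustrative target).

```python
# Represent a stress-range PDF as a weighted sum of equally spaced Gaussian basis functions.
import numpy as np

def gaussian_basis(x, centers, width):
    # One column per basis function, evaluated at every x.
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

x = np.linspace(0.0, 10.0, 200)            # stress-range axis (illustrative units)
centers = np.linspace(0.0, 10.0, 12)       # equally spaced basis centres
width = centers[1] - centers[0]

target_pdf = (x / 4.0) * np.exp(-x**2 / 8.0)               # Rayleigh density as a stand-in target
B = gaussian_basis(x, centers, width)
weights, *_ = np.linalg.lstsq(B, target_pdf, rcond=None)   # least-squares fit of the weights

print(float(np.max(np.abs(B @ weights - target_pdf))))     # small residual => adequate basis fit
```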

Extracting week key issues and analyzing differences from realtime search keywords of portal sites (포털사이트 실시간 검색키워드의 주간 핵심 이슈 선정 및 차이 분석)

  • Chong, Min-Yeong
    • Journal of Digital Convergence / v.14 no.12 / pp.237-243 / 2016
  • Because the realtime search keywords of portal sites are ranked in descending order of the instantaneous growth rate of search volume, they readily reveal issues whose interest surges over a short time. However, they have limitations: different portals produce different results, and issues are not shown over a longer period. Extracting key issues from the full set of realtime search keywords over a given period, summarizing them, and analyzing their differences therefore provides a basis for understanding issues more practically and for maintaining consistency. This paper analyzes differences in weekly key issues extracted from a week-by-week analysis of the realtime search keywords provided by two major portal sites. The experiments show statistically significant differences between the portals, both in the group means of the realtime search keywords (independent t-test) and in their survival functions (survival analysis).
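
The between-portal comparison rests on an independent two-sample t-test; a minimal sketch, assuming SciPy and made-up per-keyword values, is shown below (the survival-analysis part would be handled analogously with a survival library).

```python
# Independent t-test on a per-keyword measure from two portals; the numbers are fabricated
# purely for illustration (e.g. hours a keyword stayed in the realtime ranking).
from scipy import stats

portal_a = [3.0, 5.5, 2.0, 7.0, 4.5, 6.0]
portal_b = [1.5, 2.0, 3.5, 2.5, 1.0, 4.0]

t_stat, p_value = stats.ttest_ind(portal_a, portal_b, equal_var=False)   # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # small p suggests the portal means differ
```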

Efficient Processing of Multiple Group-by Queries in MapReduce for Big Data Analysis (맵리듀스에서 빅데이터 분석을 위한 다중 Group-by 질의의 효율적인 처리 기법)

  • Park, Eunju; Park, Sojeong; Oh, Sohyun; Choi, Hyejin; Lee, Ki Yong; Shim, Junho
    • KIISE Transactions on Computing Practices / v.21 no.5 / pp.387-392 / 2015
  • MapReduce is a framework used to process large data sets in parallel on a large cluster. A group-by query is a query that partitions the input data into groups based on the values of the specified attributes, and then evaluates the value of the specified aggregate function for each group. In this paper, we propose an efficient method for processing multiple group-by queries using MapReduce. Instead of computing each group-by query independently, the proposed method computes multiple group-by queries in stages with one or more MapReduce jobs in order to reduce the total execution cost. We compared the performance of this method with the performance of a less sophisticated method that computes each group-by query independently. This comparison showed that the proposed method offers better performance in terms of execution time.
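
The sharing idea can be illustrated compactly outside MapReduce; the Python sketch below evaluates several group-by aggregations in a single pass by tagging each partial result with its grouping set (the paper's staged multi-job plan is more involved, so treat this only as a conceptual illustration).

```python
# One pass over the data serving three group-by queries at once.
from collections import defaultdict

rows = [
    {"region": "east", "product": "a", "sales": 10},
    {"region": "east", "product": "b", "sales": 5},
    {"region": "west", "product": "a", "sales": 7},
]
grouping_sets = [("region",), ("product",), ("region", "product")]

def map_phase(row):
    # Emit one (grouping set, group key) pair per query so all queries share the pass.
    for gs in grouping_sets:
        yield (gs, tuple(row[col] for col in gs)), row["sales"]

totals = defaultdict(int)           # "reduce": sum sales per (grouping set, group key)
for row in rows:
    for key, value in map_phase(row):
        totals[key] += value

for (gs, key), total in sorted(totals.items()):
    print(gs, key, total)
```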

Prediction of damages induced by Snow using Multiple-linear regression and Artificial Neural Network model (다중선형회귀 및 인공신경망 모형을 이용한 대설피해에 따른 피해액 예측에 관한 연구)

  • Kwon, Soon Ho; Lee, Eui Hoon; Chung, Gunhui; Kim, Joong Hoon
    • Proceedings of the Korea Water Resources Association Conference / 2017.05a / pp.20-20 / 2017
  • Natural disasters causing casualties and property damage have been increasing worldwide under the influence of climate change, and the scale of the resulting damage keeps growing. In Korea, damage from natural disasters over the 20 years from 1994 to 2013 was tallied at 12.3 trillion KRW; rainfall and typhoons accounted for about 85% of this and heavy snow for about 13%, so although most damage is caused by rainfall and typhoons, snow damage is far from negligible. Accordingly, research on predicting heavy-snow damage based on reliable data is needed for accurate forecasting. In this study, a total of 11 input variables for predicting snow-damage costs were selected from snow-depth and meteorological observations at 63 weather stations in Korea together with socio-economic data, and the stations were divided into three regions according to the area of the city each station belongs to. Principal component analysis reduced the selected input variables to four principal components, and artificial neural network and multiple linear regression models were built to analyze the prediction error for each region. The adjusted coefficient of determination was 22.8%-48.2% for the artificial neural network model and 9.2%-39.7% for the multiple linear regression model, so the artificial neural network model gave somewhat better results than the multiple regression model for predicting snow damage with the selected inputs. With supplementary data and further refinement of the models, a more accurate snow-damage prediction function is expected to be developed.
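
The modelling pipeline described above (PCA down to four components, then a regression model scored by adjusted R²) can be sketched briefly; the snippet below assumes scikit-learn is available and uses random placeholder data in place of the study's 11 observed variables.

```python
# PCA-then-regression sketch with adjusted R^2; data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 11))                 # 11 candidate predictors (snow depth, weather, socio-economic)
y = 2.0 * X[:, 0] + rng.normal(size=120)       # placeholder damage-cost target

Z = PCA(n_components=4).fit_transform(X)       # reduce to four principal components
model = LinearRegression().fit(Z, y)

n, k = Z.shape
r2 = model.score(Z, y)
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"adjusted R^2 = {adjusted_r2:.3f}")
```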


Analysis of Highway Traffic Indices Using Internet Search Data (검색 트래픽 정보를 활용한 고속도로 교통지표 분석 연구)

  • Ryu, Ingon; Lee, Jaeyoung; Park, Gyeong Chul; Choi, Keechoo; Hwang, Jun-Mun
    • Journal of Korean Society of Transportation / v.33 no.1 / pp.14-28 / 2015
  • A large body of research has used internet search data since the mid-2000s; for example, Google Inc. developed a service that predicts influenza patterns from search data. The main objective of this study is to test the hypothesis that highway traffic indices follow internet search patterns. To this end, models predicting the number of vehicles entering the expressway and the space-mean speed were developed and their goodness-of-fit assessed. The results revealed several findings. First, Google search traffic was a good predictor for the TCS entering-volume model at sites with frequent commute trips, and it was negatively correlated with TCS entering volume. Second, Naver search traffic was used for the TCS entering-volume model at sites with many recreational trips, and it was positively correlated with TCS entering volume. Third, VDS speed showed a negative relationship with search traffic on the time-series diagram. Lastly, the transfer function noise time-series model showed better goodness-of-fit than the other time-series models. "Big Data" from internet searches is expected to be widely applicable in the transportation field if the sources of search traffic, time lags, and aggregation units are explored in follow-up studies.
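
The core relationship being tested, a lagged association between a portal search index and TCS entering volume, can be sketched as follows (assuming NumPy; both series are synthetic, with the volume series built to lag the search index by two steps).

```python
# Lagged correlation between a search-traffic index and entering traffic volume.
import numpy as np

rng = np.random.default_rng(1)
search_index = rng.normal(size=100).cumsum()                          # synthetic portal search index
traffic_volume = np.roll(search_index, 2) + rng.normal(0, 0.5, 100)   # volume lags the index by 2 steps

def lagged_corr(x, y, lag):
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

for lag in range(5):
    print(lag, round(lagged_corr(search_index, traffic_volume, lag), 3))  # peaks near lag 2
```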

Calculation of Greenhouse Gas and Air Pollutant Emission on Inter-regional Road Network Using ITS Information (지능형교통체계(ITS) 정보를 이용한 지역 간 도로의 온실가스 및 대기오염물질 배출량 산정)

  • Wu, Seung Kook; Kim, Youngkook; Park, Sangjo
    • Journal of Korean Society of Transportation / v.31 no.3 / pp.55-64 / 2013
  • Conventionally, greenhouse gas (GHG) emissions in the transport sector have been estimated from fuel consumption (the Tier 1 method). However, GHG emissions on road networks cannot practically be estimated with the Tier 1 method, because it is not practical to monitor fuel consumption on a road segment. Nor can air pollutant emissions on a road be estimated efficiently by the Tier 1 method, owing to the diverse characteristics of vehicles such as travel speed, vehicle type, model year, and fuel type. Given these conditions, the goal of this study is to propose a Tier 3 level methodology for calculating CO2 and NOx emissions on inter-regional roads using information from ITS infrastructure. The methodology can avoid the under-estimation caused by the concavity of emission factor curves, because the ITS speed and volume information is aggregated over short time intervals. The proposed methodology was applied to four road segments as a case study. The results show that managing heavy vehicles' speeds is important for controlling CO2 and NOx emissions on road networks.
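
The aggregation-interval point can be illustrated with a small numerical sketch: when the emission factor curve bends upward (concave up in speed), applying it to an averaged speed under-estimates emissions relative to applying it interval by interval. The emission factor formula below is purely illustrative, not taken from the paper.

```python
# Why short aggregation intervals matter for speed-based (Tier 3) emission estimates.
def emission_factor(speed_kmh):
    # Illustrative g/km curve: high at crawl speeds, minimum mid-range, rising again.
    return 1000.0 / speed_kmh + 0.05 * speed_kmh

interval_speeds = [30.0, 90.0]   # two short intervals on one segment: congested, then free-flow

per_interval = sum(emission_factor(v) for v in interval_speeds) / len(interval_speeds)
from_mean_speed = emission_factor(sum(interval_speeds) / len(interval_speeds))

print(round(per_interval, 1))      # ~25.2 g/km using interval-level speeds
print(round(from_mean_speed, 1))   # ~19.7 g/km using the averaged speed -> under-estimate
```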

A Short Composting Method by the Single Phase Composter for the Production of Oyster Mushroom (느타리버섯 배지 제조기를 이용한 배지의 제조 연구)

  • Lee, Ho-Yong; Shin, Chang-Yup; Lee, Young-Keun; Chang, Hwa-Hyoung; Min, Bong-Hee
    • The Korean Journal of Mycology / v.27 no.1 s.88 / pp.10-14 / 1999
  • A single-phase composter was constructed by modifying a conventional sawdust mixer for the cultivation of the oyster mushroom Pleurotus ostreatus. The machine was designed on the basis of a 3-phase-1 system controlling the prewetting, pasteurization, and fermentation processes. When composting 200 kg of straw and cotton waste in the machine, prewetting took 20 minutes and pasteurization took two hours at 65°C. Post-fermentation by aerothermophiles was completed by holding the compost at 45-50°C for 48 hours, which is 24 hours shorter than the conventional method. During the high-temperature post-fermentation, forced aeration and/or vigorous mixing played a major role in improving spawn quality. Mycelial growth of the oyster mushroom was excellent in cultures that combined 3 parts surface inoculation with 7 parts mechanical mixing.
