• Title/Summary/Keyword: Standard estimating


A Study on the Improvement of Guideline in Digital Forest Type Map (수치임상도 작업매뉴얼의 개선방안에 관한 연구)

  • PARK, Jeong-Mook;DO, Mi-Ryung;SIM, Woo-Dam;LEE, Jung-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.1
    • /
    • pp.168-182
    • /
    • 2019
  • The objectives of this study were to examine the production processes and methods of the "Forest Type Map Actualization Production (Database (DB) Construction Work Manual)" (hereafter, Work Manual), to identify issues associated with those processes and methods, and to suggest solutions by applying evaluation items to the 1:5,000 digital forest type map. The evaluation items applied to the forest type map were divided into zoning and attributes, and issues in the Work Manual's production processes and methods were derived by analyzing stand structure characteristics and fragmentation by administrative district. South Korea is divided into five divisions: one designated as an area changed naturally and the other four as areas changed artificially. The naturally changed area has been updated every five years, and the artificially changed areas annually. The fragmentation of South Korea was analyzed to examine the consistency of the DB established for each region. The results showed that, nationwide, the number of patches increased and the mean patch size decreased; consequently, the degree of fragmentation and the complexity of shapes increased. The degree of fragmentation and the complexity of shapes decreased in 4 of the 17 regions (metropolitan cities and provinces), indicating spatial variation. The "Forest Classification" defines the minimum area of a zoning as 0.1 ha. This study examined the minimum-area criterion by estimating the divided objects (polygon units) in the forest type map and found that approximately 26% of objects were smaller than the minimum zoning area. These results imply that it is necessary to establish the definition and regeneration interval of "Areas Changed Artificially and Areas Changed Naturally" and to improve the standard for the minimum zoning area. Among the Work Manual attributes, the "Species Change" item classifies terrain features into 52 types, 43 of which belong to stocked land. This study examined distribution ratios by extracting species information from the forest type map and found that each of 23 species (approximately 53% of all species) occupied less than 0.1% of forested land; the top-ranking species included pine and the "other species" category. Although undergrowth on unstocked forest land is classified in the terrain feature system, its definition and classification criteria are not established in the "Forest Classification" item. Therefore, the terrain feature system needs to be reorganized and the definition of undergrowth established.
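
A minimal sketch of the fragmentation and minimum-area checks described above, assuming polygon areas in hectares have already been extracted from the forest type map; the function and the sample values are illustrative, not the study's code.

```python
# Fragmentation indicators and the 0.1 ha minimum-zoning check, assuming
# polygon (patch) areas in hectares extracted from the 1:5,000 map.

MIN_ZONING_AREA_HA = 0.1  # minimum area of a zoning per the Work Manual

def fragmentation_summary(patch_areas_ha):
    """Return patch count, mean patch size, and share below the minimum area."""
    n_patches = len(patch_areas_ha)
    mean_patch_size = sum(patch_areas_ha) / n_patches
    undersized = [a for a in patch_areas_ha if a < MIN_ZONING_AREA_HA]
    return {
        "n_patches": n_patches,
        "mean_patch_size_ha": round(mean_patch_size, 3),
        "pct_below_minimum": round(100 * len(undersized) / n_patches, 1),
    }

# Hypothetical polygon areas for one administrative district.
areas = [0.05, 0.08, 0.25, 1.40, 0.09, 3.20, 0.60, 0.07]
print(fragmentation_summary(areas))
# More patches with a smaller mean size indicates higher fragmentation;
# the study found ~26% of objects fell below the 0.1 ha minimum.
```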

A Study of the Current State of the Garden and Restoration Proposal for the Original Garden of Yi Cheon-bo's Historic House in Gapyeong (가평 이천보(李天輔) 고가(古家)의 정원 현황과 원형 복원을 위한 제안)

  • Rho, Jaehyun;Choi, Seunghee;Jang, Hyeyoung
    • Korean Journal of Heritage: History & Science
    • /
    • v.53 no.4
    • /
    • pp.118-135
    • /
    • 2020
  • It is not common in Korea for a garden to remain intact in its structure and function as well as its form. Yi Cheon-bo's Historic House (Gyeonggi-do Cultural Heritage Item No. 55), located in Sang-myeon, Gapyeong-gun, Gyeonggi-do, is considered a very valuable example of garden heritage, even though its family history, location, and remaining buildings and natural cultural assets are not fully intact. This study explored the possibility of restoring the house and its gardens by highlighting their high value through research into the typical layout of private households in northern Gyeonggi Province and Gapyeong County, comparative review of aerial photographs from 1954, and interviews with those involved. The results of the study are as follows: the presence of Banggye-dongmun and Bansukam in the Banggyecheon area, where the garden's setting is well preserved, was examined across the landscape of the outer garden, while the location of the house, its feng shui character, and its viewing axis were considered. The appearance of the lost main house was inferred from the arrangement and shape of the surviving Sarangchae and Haengrangchae in the original garden, the asymmetry of the Sarangchae Numaru, and the hapgak shape on the side of the roof. In addition, the three tablets (pyeonaek) of Sanggodang (尙古堂), Bangyejeongsa (磻溪精舍), and Okgyeongsanbang (玉聲山房) were used to infer the landscape, use, and symbolism of the men's quarters. A survey was also conducted on the trees that exist or once existed on the grounds of the old house. Incidentally, it was confirmed that the signboard and cultural property records of the Yeonha-ri juniper (Gyeonggi-do Monument No. 61) fall well short of the required standard, and that the juniper trees remaining in front of the Haengrangchae should be re-evaluated after further investigation. Meanwhile, estimating the original form as a way of restoring the lost women's quarters and shrine to complete the garden suggests that the main house was placed in a '口' (closed-square) or open-'口' layout on the right (north) side of the men's quarters. Synthesizing these results, a restoration alternative for Yi Cheon-bo's Historic House was suggested.

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.177-190
    • /
    • 2021
  • With the development of fourth industrial revolution technologies, efforts are being made to improve areas that humans cannot handle by utilizing artificial intelligence techniques such as machine learning. Make-to-order companies want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they have difficulty doing so because total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because production varies from order to order with customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. The measured total production times of existing orders also differ from one another, so they cannot be used as standard times. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple management indicators (e.g., 60 days of total production time for raw materials, 90 days for steel plates, etc.). Work instructions issued too early on the basis of intuition or such indicators cause congestion, which degrades productivity, while instructions issued too late increase production costs or cause missed delivery dates due to emergency processing. Missing a deadline results in late-delivery penalties or adversely affects sales and collections. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating an order-production system, using order, production, and process performance data as learning material. We compared and analyzed the OLS, GLM Gamma, Extra Trees, and Random Forest algorithms to find the best algorithm for estimating total production time, and present the results.
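
As a rough illustration of the algorithm comparison the abstract describes, the sketch below fits OLS, a Gamma GLM, Extra Trees, and Random Forest on synthetic order data and compares mean absolute error. The features, target, and choice of metric are assumptions for illustration, not the paper's actual setup.

```python
# Compare OLS, GLM Gamma, Extra Trees, and Random Forest for estimating
# total production time (days) from order features; data are synthetic.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 5))                             # stand-in order features
y = 30 + 60 * X[:, 0] + 20 * X[:, 1] + rng.gamma(2.0, 3.0, 500)  # days, positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
# OLS and Gamma GLM (log link keeps predictions positive).
ols = sm.OLS(y_tr, sm.add_constant(X_tr)).fit()
results["OLS"] = mean_absolute_error(y_te, ols.predict(sm.add_constant(X_te)))
glm = sm.GLM(y_tr, sm.add_constant(X_tr),
             family=sm.families.Gamma(sm.families.links.Log())).fit()
results["GLM Gamma"] = mean_absolute_error(y_te, glm.predict(sm.add_constant(X_te)))
# Tree ensembles.
for name, model in [("Extra Trees", ExtraTreesRegressor(random_state=0)),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    results[name] = mean_absolute_error(y_te, model.fit(X_tr, y_tr).predict(X_te))

print(results)  # lower MAE = better total-production-time estimates
```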

Applying Nonlinear Mixed-effects Models to Taper Equations: A Case Study of Pinus densiflora in Gangwon Province, Republic of Korea (비선형 혼합효과 모형의 수간곡선 적용: 강원지방 소나무를 대상으로)

  • Shin, Joong-Hoon;Han, Hee;Ko, Chi-Ung;Kang, Jin-Taek;Kim, Young-Hwan
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.1
    • /
    • pp.136-149
    • /
    • 2022
  • In this study, the performance of a nonlinear mixed-effects (NLME) model used to estimate the stem taper of Pinus densiflora in Gangwon Province was compared with that of a nonlinear fixed-effects (NLFE) model using several performance measures. For the diameters of whole tree stems, the NLME model improved on the performance of the NLFE model by 26.4%, 42.9%, 43.1%, and 0.9% in terms of BIAS, MAB, RMSE, and FI, respectively. For the cross-section areas of whole tree stems, the NLME model improved on the performance of the NLFE model by 67.7%, 44.7%, 45.8%, and 1.0% in terms of BIAS, MAB, RMSE, and FI, respectively. Based on the analysis of 12 relative height classes of tree stems, stem taper estimation performance was also reasonably improved by the NLME model, which showed better MAB, RMSE, and FI at every relative height class compared with those of the NLFE model. In some classes, the NLFE model had better BIAS than the NLME model (stem diameter: 0.05, 0.2, 0.3, and 0.8; stem cross-section area: 0.05, 0.3, 0.5, 0.6, and 1.0). However, the NLME model enhanced the performance of stem diameter and cross-section area estimations at the lowest stem part (0.2 m from the ground). Improvements for stem diameter in terms of BIAS, MAB, RMSE, and FI were 84.2%, 69.8%, 68.7%, and 3.1%, respectively. For stem cross-section areas, the improvements in BIAS, MAB, RMSE, and FI were 98.5%, 70.1%, 68.7%, and 3.1%, respectively. The cross-section area at 0.2 m from the ground occupied 22.7% of total cross-section area. Improvements in estimation of cross-section area at the lowest stem part indicate that stem volume estimation performance could also be enhanced. Although NLME models are more difficult to fit than NLFE models, the use of NLME models as a standard method for estimating the parameters of stem taper equations should be considered.
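
A short sketch of the four performance measures used above. The definitions follow common usage in taper-equation studies (BIAS as mean error, MAB as mean absolute bias, RMSE, and FI as a fit index analogous to R²) and should be verified against the paper itself.

```python
# BIAS, MAB, RMSE, and FI computed from observed vs. predicted stem values.
import numpy as np

def performance(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = obs - pred
    bias = resid.mean()                           # BIAS: mean error
    mab = np.abs(resid).mean()                    # MAB: mean absolute bias
    rmse = np.sqrt((resid ** 2).mean())           # RMSE
    fi = 1 - (resid ** 2).sum() / ((obs - obs.mean()) ** 2).sum()  # fit index
    return {"BIAS": bias, "MAB": mab, "RMSE": rmse, "FI": fi}

# Hypothetical stem diameters (cm): observed vs. model predictions.
obs = [32.1, 28.4, 22.0, 15.3, 8.9]
pred = [31.5, 28.9, 21.2, 15.8, 9.4]
print(performance(obs, pred))
```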

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study proposes deep neural network models for estimating air temperature from Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The air temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as those for cold and heat waves. Many studies have estimated air temperature from the land surface temperature (LST) retrieved from satellites, because air temperature is strongly related to LST. However, the LST algorithm, a Level 2 product of GK-2A, works only for clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates air temperature from L1B data, which are radiometrically and geometrically calibrated from raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. In-situ air temperature observations from 95 stations totaled 2,496,634 records; 42.1% of them could be paired with LST and 98.4% with L1B. Data from 2020 and 2021 were used for training and 2022 data for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers. Using the 16 L1B bands, the DNN achieved an RMSE of 2.22℃, outperforming the baseline model's RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃, suggesting that the DNN can overcome cloud effects. However, the model showed different characteristics in seasonal and hourly analyses, with low coefficients of determination and high standard deviations in summer and winter, so solar information needs to be added as an input to build a more general DNN model.
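
A minimal sketch of the DNN architecture described above: 16 GK-2A L1B channels in, four hidden fully connected layers, and one air-temperature output. The hidden-layer width, activation, and training details are assumptions; the abstract specifies only the 16-channel input and the four hidden fully connected layers.

```python
# Four-hidden-layer fully connected regressor from 16 L1B bands to 1.5 m
# air temperature; widths and ReLU activation are illustrative choices.
import torch
import torch.nn as nn

class AirTempDNN(nn.Module):
    def __init__(self, n_channels=16, hidden=64):
        super().__init__()
        layers, width = [], n_channels
        for _ in range(4):                     # four hidden FC layers
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, 1))     # air temperature output
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = AirTempDNN()
x = torch.randn(8, 16)                         # batch of 8 pixels, 16 bands
print(model(x).shape)                          # torch.Size([8, 1])
# Training would minimize MSE against in-situ observations, then report RMSE.
```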

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.464-471
    • /
    • 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using StyleGAN Encoder. The facial expression generation process first creates a face image with StyleGAN Encoder and then changes the expression by applying a boundary, learned with an SVM, to the latent vector. However, when the boundary for a smiling expression is learned, age distortion occurs: the smile boundary learned by the SVM includes wrinkles caused by the change of expression as learning elements, so age characteristics are learned along with the expression. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to that correlation coefficient. To confirm the effectiveness of the proposed method, experiments were conducted on the FFHQ dataset, a publicly available standard face dataset, measuring FID scores. For smile images, the FID score between the ground truth and images generated by the proposed method improved by about 0.46 over the existing method, and the FID score between images generated by StyleGAN Encoder and smile images generated by the proposed method improved by about 1.031. For non-smile images, the FID score between the ground truth and images generated by the proposed method improved by about 2.25 over the existing method, and the FID score between images generated by StyleGAN Encoder and non-smile images generated by the proposed method improved by about 1.908. Furthermore, estimating the age of each generated facial expression image and measuring the MSE between the estimated age and that of the image generated with StyleGAN Encoder, the proposed method improved on the existing method by about 1.5 on average for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.
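
One plausible reading of the boundary adjustment above is sketched below: remove the age-correlated component from the smile direction in proportion to the correlation coefficient between the two boundaries. The exact formula is not given in the abstract, so the subtraction form, vector dimensions, and names are assumptions.

```python
# De-age a smile boundary by subtracting the age direction scaled by the
# correlation (cosine) between the smile and age boundaries. Illustrative only.
import numpy as np

def adjust_boundary(smile_boundary, age_boundary):
    """Damp the age-entangled component of the smile direction."""
    s = smile_boundary / np.linalg.norm(smile_boundary)
    a = age_boundary / np.linalg.norm(age_boundary)
    corr = float(np.dot(s, a))              # correlation of the two directions
    adjusted = s - corr * a                 # remove the age-correlated part
    return adjusted / np.linalg.norm(adjusted)

rng = np.random.default_rng(0)
smile_b, age_b = rng.normal(size=512), rng.normal(size=512)  # stand-in boundaries
edit = adjust_boundary(smile_b, age_b)

w = rng.normal(size=512)                    # latent code from StyleGAN Encoder
w_smile = w + 3.0 * edit                    # move along the de-aged smile direction
```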

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although there have been cases of evaluating the value of specific companies or projects, concentrated in the developed countries of North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have been gaining traction only gradually. There do exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as KIBO's 'KTRS' and the Korea Invention Promotion Association's 'SMART 3.1'. However, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is now spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, treating the contribution of the subject technology to the business value created as the royalty rate. We examine how the models and their supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications such as the International Patent Classification (IPC) or Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them so that the calculated technology value is highly reliable and objective. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data, or if the estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is expected to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the various valuation models and primary variables explained in this paper, the STAR-Value system is intended to be used more systematically and in a data-driven way through an optimal model selection guideline module, an intelligent technology value range reasoning module, and a similar-company-based market share prediction module, among others.
In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can validate the theoretical underpinnings of the technology valuation field and apply them in practice, and it is expected to be utilized in various fields of technology commercialization.
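
A minimal sketch of the two valuation models named above, income-approach DCF and relief-from-royalty. All figures, the tax rate, and the helper names are hypothetical placeholders, not STAR-Value system outputs or parameters.

```python
# Present-value arithmetic behind the DCF and relief-from-royalty methods.

def dcf_value(cash_flows, wacc):
    """Present value of future free cash flows at discount rate `wacc`."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(sales_forecast, royalty_rate, wacc, tax_rate=0.22):
    """PV of the after-tax royalties the owner is 'relieved' from paying."""
    royalties = [s * royalty_rate * (1 - tax_rate) for s in sales_forecast]
    return dcf_value(royalties, wacc)

sales = [1_000, 1_200, 1_300, 1_250, 1_100]   # projected sales over the tech life
print(round(dcf_value([120, 150, 170, 160, 140], wacc=0.12), 1))
print(round(relief_from_royalty(sales, royalty_rate=0.03, wacc=0.12), 1))
```

In practice, the discount rate would come from the WACC lookup and the royalty rate from the subject technology's contribution to business value, as the abstract describes.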

Characteristics of Manure and Estimation of Nutrient and Pollutant of Holstein Dairy Cattle (홀스타인 젖소 분뇨의 특성과 비료성분 및 오염물질 부하량 추정)

  • Choi, D.Y.;Choi, H.L.;Kwag, J.H.;Kim, J.H.;Choi, H.C.;Kwon, D.J.;Kang, H.S.;Yang, C.B.;Ahn, H.K.
    • Journal of Animal Science and Technology
    • /
    • v.49 no.1
    • /
    • pp.137-146
    • /
    • 2007
  • This study was conducted to determine the fertilizer nutrient and pollutant production of Holstein dairy cattle by characterizing their manure. The moisture content was 83.9% for feces and 95.1% for urine. The pH of feces and urine was in the ranges 7.0~7.4 and 7.5~7.8, respectively. The average BOD5, COD, SS, T-N, and T-P concentrations of the feces were 18,294, 52,765, 102,889, 2,575, and 457 mg/L, respectively. Compared with feces, urine showed lower levels of BOD5 (5,455 mg/L), COD (8,089 mg/L), SS (593 mg/L), and T-P (13 mg/L), while T-N (3,401 mg/L) was higher. The total daily pollutant loads per cow were: BOD5 924.1 g (milking cow), 538.8 g (dry cow), 284.4 g (heifer); COD 2,336.5 g, 1,651.8 g, 734.1 g; SS 4,210.1 g, 2,417.1 g, 1,629.1 g; T-N 194.8 g, 96.4 g, 58.3 g; and T-P 24.0 g, 10.2 g, 6.1 g, respectively. The calculated amounts of pollutants produced by a 450 kg dairy cow in one year were 181.3 kg of BOD5, 492.5 kg of COD, 899.9 kg of SS, 36.0 kg of T-N, and 4.1 kg of T-P. The estimated yearly pollutant production from all 497,261 head of dairy cattle in Korea is 90,149 tons of BOD5, 244,890 tons of COD, 447,491 tons of SS, 17,898 tons of T-N, and 2,008 tons of T-P. The fertilizer nutrient concentrations of feces were 0.26% N, 0.1% P2O5, and 0.14% K2O; urine contained 0.34% N, 0.003% P2O5, and 0.31% K2O. The total daily fertilizer nutrients produced per cow were: N 197.4 g (milking cow), 97.4 g (dry cow), 57.9 g (heifer); P2O5 54.2 g, 22.2 g, 14.2 g; and K2O 110.8 g, 80.4 g, 39.5 g, respectively. The estimated yearly fertilizer nutrient production of a 450 kg dairy animal is 36.2 kg of N, 8.8 kg of P2O5, and 24.6 kg of K2O, and that from all dairy cattle in Korea is 18,000 tons of N, 4,397 tons of P2O5, and 12,206 tons of K2O. Dairy manure contains trace minerals useful for crops, such as CaO and MgO, at levels similar to commercial compost sold in the domestic market. Concentrations of harmful trace minerals such as As, Cd, Hg, Pb, Cr, Cu, Ni, and Zn met the Korean compost standard regulations, with some of these minerals undetected.
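
A sketch of the scale-up arithmetic used above: daily per-head loads (g) scale to yearly per-head loads (kg) and then to national totals (tons). The 496.7 g/day BOD5 figure below is back-computed from the reported 181.3 kg/year for a 450 kg cow; it is an illustration of the arithmetic, not a number stated in the abstract.

```python
# Daily g/head -> yearly kg/head -> national tons, using the study's head count.

HEAD_COUNT = 497_261                    # dairy cattle in Korea (from the study)

def yearly_kg_per_head(daily_g):
    return daily_g * 365 / 1_000        # g/day -> kg/year

def national_tons(daily_g):
    return yearly_kg_per_head(daily_g) * HEAD_COUNT / 1_000  # kg -> tons

# 496.7 g BOD5/day reproduces the reported 181.3 kg/year per 450 kg cow,
# and scales to ~90,150 tons nationally vs. the reported 90,149 tons.
print(round(yearly_kg_per_head(496.7), 1))   # ~181.3
print(round(national_tons(496.7)))           # ~90,151
```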

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH model. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun applying artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally low forecasting accuracy. We then propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but the simulation results are still meaningful since the Korea Exchange introduced volatility futures contracts in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the test period: profitable trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, versus 51.8% to 59.7% for SVR-based models. MLE-based symmetric S-GARCH shows a +150.2% return while SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version.
The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider trading costs, including brokerage commissions and slippage. The IVTS trading performance is also unrealistic in that historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can give better information to stock market investors.
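
The IVTS entry rules quoted above translate directly into code: go long volatility when the forecast rises, go short when it falls, and otherwise hold the current position. The sketch below assumes a list of one-day-ahead volatility forecasts; in the paper these would come from the MLE- or SVR-estimated GARCH models, and the series here is illustrative.

```python
# Map tomorrow's volatility forecasts to +1 (buy), -1 (sell), or hold.

def ivts_signals(vol_forecasts):
    """IVTS entry rules: buy on forecast increase, sell on decrease, else hold."""
    signals, position = [], 0
    for today, tomorrow in zip(vol_forecasts, vol_forecasts[1:]):
        if tomorrow > today:
            position = 1          # forecast increase -> buy volatility
        elif tomorrow < today:
            position = -1         # forecast decrease -> sell volatility
        # unchanged direction -> hold the existing position
        signals.append(position)
    return signals

forecasts = [18.2, 18.9, 18.9, 17.5, 16.8, 17.3]   # e.g. GARCH output (% vol)
print(ivts_signals(forecasts))                      # [1, 1, -1, -1, 1]
```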

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In the traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over or underestimate. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results by using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
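
A minimal sketch of the two key mechanisms in this abstract: a gravity-model trip table built from zonal productions, attractions, and friction factors, and a SELINK link adjustment factor (ground count divided by assigned volume) applied back to the trips using the selected link. The gravity-model form is the standard production-balanced one, and all numbers are illustrative, not WisDOT data.

```python
# Gravity-model trip distribution plus a SELINK-style link adjustment factor.
import numpy as np

def gravity_trips(productions, attractions, friction):
    """T_ij = P_i * (A_j * F_ij) / sum_k(A_k * F_ik), balanced on productions."""
    weights = attractions * friction                      # A_j * F_ij per zone pair
    return productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

P = np.array([1200.0, 800.0, 500.0])                      # zonal truck productions
A = np.array([900.0, 700.0, 900.0])                       # zonal attractions
F = np.array([[0.2, 1.0, 0.6],                            # friction factors from the
              [1.0, 0.2, 0.8],                            # calibrated curves (by
              [0.6, 0.8, 0.2]])                           # zone-pair impedance)
T = gravity_trips(P, A, F)

# SELINK step: trips from zone 0 to zone 2 use the selected link; scale them
# so the assigned volume matches the ground count observed on that link.
assigned, ground_count = T[0, 2], 520.0
adj = ground_count / assigned                             # link adjustment factor
T[0, 2] *= adj
print(round(adj, 3), T.round(1))
```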
