• Title/Abstract/Keywords: Weight classification system

Search results: 188 items (processing time: 0.023 s)

해저 동력케이블 보호를 위한 대상 선박 선정에 관한 연구 (A Study on the Selection of Target Ship for the Protection of Submarine Power Cable)

  • 이윤석;김승연;유용웅;윤귀호
    • 해양환경안전학회지 / Vol. 24, No. 6 / pp. 662-669 / 2018
  • Owing to rising electricity consumption on Jeju and other island regions and the development of offshore wind farms, new installations of submarine power cables are under review. To protect a power cable laid on the seabed, the burial depth must be derived from the characteristics of ship anchoring, anchor dragging, and fishing operations. However, Korea still has no design criteria for the size of the target ship to be considered when protecting submarine power cables. This study analyzed the domestic design criteria for protecting submarine pipelines, which are similar to submarine power cables, developed a risk matrix model based on a classification of emergency anchoring types reflecting the cable installation environment, and determined the size of the ship to be protected against from the cumulative size distribution of vessels transiting the area. The protection criteria link environmental conditions at the installation site, such as water depth and tidal currents, with marine accident conditions such as anchoring and anchor dragging, and the ships' operating environment was divided into harbor limits, coastal waters, and offshore waters to examine the specific target ship size. The adequacy and usefulness of the target ship size determination were verified by applying it to the planned No. 3 submarine power cable between Wando and Jeju. These target ship selection criteria for protecting submarine power cables and pipelines are expected to be used in selecting anchor weights for burial depth design as well as in developing physical protection systems for submarine cables.
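The abstract above sizes the protection target from the cumulative size distribution of transiting vessels. A minimal sketch of that idea, with invented tonnages and an assumed 90% coverage level (the paper's actual cumulative threshold is not stated here):

```python
# Hypothetical sketch: pick the target ship size as the smallest gross
# tonnage that covers a given cumulative fraction of transiting ships.
# The tonnage list and the 90% coverage level are assumed values.

def target_ship_size(tonnages, coverage=0.90):
    """Smallest gross tonnage covering `coverage` of transiting ships."""
    ranked = sorted(tonnages)
    k = max(1, round(coverage * len(ranked)))  # number of ships to cover
    return ranked[k - 1]

transits = [500, 800, 1200, 3000, 5000, 7000, 9000, 12000, 20000, 50000]
print(target_ship_size(transits, 0.90))  # 20000
```

A design anchor weight (and hence burial depth) would then be chosen for the anchor carried by a ship of this size.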

하천환경 자연도의 평가지표 및 기준 연구 - 생물적 특성을 중심으로 (A study on indicator & criteria for assessment of river environmental naturalness -focused on biological characteristics)

  • 전승훈
    • 한국수자원학회논문집 / Vol. 52, No. spc2 / pp. 765-776 / 2019
  • As part of building a standardized Korean river environment assessment system that can provide legal and institutional guidelines for the entire river restoration process and diagnose and evaluate project outcomes, this study established an assessment framework of indicators and criteria for four biological taxa that represent the riparian and aquatic environments of river ecosystems: vegetation, birds, benthic macroinvertebrates, and fish. Specifically, for vegetation, the structural characteristics of river plant communities are evaluated quantitatively by combining three indices: vegetation diversity, vegetation complexity, and vegetation naturalness. For benthic macroinvertebrates, fish, and birds, quantitative biotic-index evaluation methods suited to the characteristics of Korean rivers were proposed, grounded in the scientific basis of established methods, by assigning rating grades to the biological data. In addition, for evaluating the biological component of river environmental naturalness, a composite biotic index and rating scheme applying weights to the four taxa was proposed; application to test rivers showed that the taxa reflected the characteristics of the river environment relatively consistently.
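The weighted composite biotic index described above can be sketched as a weighted mean of per-taxon scores mapped to a rating grade. The weights and grade cut-offs below are illustrative assumptions, not the paper's values:

```python
# Hypothetical sketch of a weighted composite biotic index over four
# taxa. Scores are on a 0-100 scale; weights and A-E cut-offs are
# assumed for illustration, not taken from the study.

def composite_index(scores, weights):
    """Weighted mean of per-taxon scores; weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(scores[t] * w for t, w in weights.items()) / total_w

def grade(index):
    """Map a 0-100 composite index to an A-E rating (assumed cut-offs)."""
    for cutoff, g in [(80, "A"), (60, "B"), (40, "C"), (20, "D")]:
        if index >= cutoff:
            return g
    return "E"

scores = {"vegetation": 72, "birds": 65, "benthos": 58, "fish": 61}
weights = {"vegetation": 0.3, "birds": 0.2, "benthos": 0.25, "fish": 0.25}
idx = composite_index(scores, weights)
print(round(idx, 1), grade(idx))
```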

Functional Aspects of the Obesity Paradox in Patients with Severe Coronavirus Disease-2019: A Retrospective, Multicenter Study

  • Jeongsu Kim;Jin Ho Jang;Kipoong Kim;Sunghoon Park;Su Hwan Lee;Onyu Park;Tae Hwa Kim;Hye Ju Yeo;Woo Hyun Cho
    • Tuberculosis and Respiratory Diseases / Vol. 87, No. 2 / pp. 176-184 / 2024
  • Background: Results of studies investigating the association between body mass index (BMI) and mortality in patients with coronavirus disease-2019 (COVID-19) have been conflicting. Methods: This multicenter, retrospective observational study, conducted between January 2020 and August 2021, evaluated the impact of obesity on outcomes in patients with severe COVID-19 in a Korean national cohort. A total of 1,114 patients were enrolled from 22 tertiary referral hospitals or university-affiliated hospitals, of whom 1,099 were included in the analysis, excluding 15 with unavailable height and weight information. The effect(s) of BMI on patients with severe COVID-19 were analyzed. Results: According to the World Health Organization BMI classification, 59 patients were underweight, 541 were normal, 389 were overweight, and 110 were obese. The overall 28-day mortality rate was 15.3%, and there was no significant difference according to BMI. Univariate Cox analysis revealed that BMI was associated with 28-day mortality (hazard ratio, 0.96; p=0.045), but not in the multivariate analysis. Additionally, patients were divided into two groups based on BMI ≥25 kg/m2 and underwent propensity score matching analysis, in which the two groups exhibited no significant difference in mortality at 28 days. The median (interquartile range) clinical frailty scale score at discharge was higher in nonobese patients (3 [3 to 5] vs. 4 [3 to 6], p<0.001). The proportion of frail patients at discharge was significantly higher in the nonobese group (28.1% vs. 46.8%, p<0.001). Conclusion: The obesity paradox was not evident in this cohort of patients with severe COVID-19. However, functional outcomes at discharge were better in the obese group.
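The WHO BMI categories used in the study, plus the BMI ≥25 kg/m² split used for the propensity-matched comparison, can be sketched as follows (standard WHO cut-offs of 18.5, 25, and 30 kg/m² are assumed):

```python
# Sketch of the WHO BMI classification referenced in the abstract
# (underweight <18.5, normal 18.5-24.9, overweight 25-29.9, obese >=30)
# and the BMI >= 25 kg/m^2 grouping used for propensity score matching.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def who_category(bmi_value):
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

b = bmi(80.0, 1.70)                # ~27.7 kg/m^2
print(who_category(b), b >= 25.0)  # overweight True
```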

임도성토사면(林道盛土斜面)의 토질역학적(土質力學的) 특성(特性)과 안정해석(安定解析) (Soil Mechanical Properties and Stability Analysis on Fill Slope of Forest Road)

  • 지병윤;오재헌;차두송
    • 한국산림과학회지 / Vol. 89, No. 2 / pp. 275-284 / 2000
  • This study examined the soil-mechanical properties and slope stability of fill slopes on forest roads in metamorphic- and igneous-rock areas where slope failures had occurred due to heavy rainfall. The results were as follows. 1) Under the Unified Soil Classification System, in both the igneous and metamorphic areas, soil slopes were classified as SW, weathered-rock slopes as SP, and soft-rock slopes as GP; soil slopes containing boulders were classified as SP in the igneous area and GW in the metamorphic area. 2) In the igneous area, the dry density of the fill-slope soil was 1.34–1.59 g/cm³, the specific gravity 2.57–2.61, and the void ratio 0.66–0.93; in the metamorphic area, the dry density was 1.35–1.51 g/cm³, the specific gravity 2.67–2.77, and the void ratio 0.78–1.01. 3) Tests of the strength parameters gave an internal friction angle of 29.51°–41.82° and a cohesion of 0.03–0.38 kg/cm² in the igneous area, and an internal friction angle of 21.43°–41.43° and a cohesion of 0.05–0.44 kg/cm² in the metamorphic area. 4) Slope stability analysis showed safety factors below 1 for weathered-rock slopes in the igneous area and for soil and weathered-rock slopes in the metamorphic area, indicating the highest risk of failure.
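A safety factor below 1, as reported above, means driving shear stress exceeds shear strength. A minimal sketch using the standard infinite-slope formula (not necessarily the paper's analysis method) with strength parameters near the reported lower bounds; the slope angle, depth, and unit weight are assumed values:

```python
# Illustrative infinite-slope factor of safety. Cohesion and friction
# angle are near the abstract's reported lower bounds (0.03 kg/cm^2 is
# about 2.9 kPa); geometry and unit weight below are assumed.
import math

def factor_of_safety(c_kpa, phi_deg, gamma_knm3, depth_m, slope_deg):
    """FS = (c + gamma*z*cos^2(b)*tan(phi)) / (gamma*z*sin(b)*cos(b))."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = gamma_knm3 * depth_m * math.cos(b) ** 2 * math.tan(phi)
    driving = gamma_knm3 * depth_m * math.sin(b) * math.cos(b)
    return (c_kpa + resisting) / driving

fs = factor_of_safety(c_kpa=2.9, phi_deg=29.5, gamma_knm3=18.0,
                      depth_m=2.0, slope_deg=35.0)
print(fs < 1.0)  # a steep fill with weak soil can drop below FS = 1
```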


오염지하수의 확산방지를 위한 대체 혼합차수재의 적용에 관한 연구 (A Feasibility Study on the Deep Soil Mixing Barrier to Control Contaminated Groundwater)

  • 김윤희;임동희;이재영
    • 한국지하수토양환경학회지:지하수토양환경 / Vol. 6, No. 3 / pp. 53-59 / 2001
  • Various methods exist for remediating uncontrolled landfills; among them, vertical cutoff walls of the deep soil mixing type are widely installed, placing very low-permeability material in the ground to contain the waste and contaminated groundwater and to block the inflow of outside groundwater. The barrier material commonly used in deep soil mixing in Korea is a special cement-based solidifying agent, and the required dosage varies with site soil conditions, since the legal installation criterion requires a permeability coefficient of 1.0×10⁻⁷ cm/sec or less. In this study, for a site soil classified as SW-SC under the Unified Soil Classification System, the appropriate solidifier dosage and optimum water content for forming a mixed cutoff wall were determined, and fly ash and lime were selected as materials that could amend the solidifier and were added at suitable mixing ratios to improve the performance of the mixed barrier. Permeability testing showed that an appropriate solidifier mixing ratio was 13%, with a solidifier-to-water ratio of 1:1.5 found suitable for workability. Based on this mix, evaluation of the strength and permeability of barriers amended with fly ash and lime showed that replacing about 20–40% of the solidifying agent (cement) with the amendment (fly ash:lime = 70:30) gave lower permeability than using the solidifier alone. In the heavy-metal fixation evaluation, the amended barrier performed comparably to cement-only mixes, and in the heavy-metal leaching tests for environmental risk assessment, leachate concentrations were below regulatory limits.
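The screening step described above reduces to comparing each candidate mix's measured permeability against the 1.0×10⁻⁷ cm/sec legal criterion. A small sketch with invented placeholder measurements (not the study's data):

```python
# Hedged sketch: screening candidate barrier mixes against the legal
# permeability criterion of 1.0e-7 cm/sec cited in the abstract.
# The mix names and coefficients below are invented placeholders.

CRITERION_CM_PER_SEC = 1.0e-7

mixes = {
    "cement 13%": 8.5e-8,
    "cement + (fly ash:lime = 70:30) at 30%": 3.2e-8,
    "cement 9%": 4.0e-7,
}

# keep only the mixes meeting the criterion
passing = {name: k for name, k in mixes.items() if k <= CRITERION_CM_PER_SEC}
print(sorted(passing))
```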


키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법 (A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model)

  • 조원진;노상규;윤지영;박진수
    • Asia pacific journal of information systems / Vol. 21, No. 1 / pp. 103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful for them. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors in an effort to guide the users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask the authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from the use of keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming, requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given set of vocabulary, and the aim is to match them to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. 
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. On the other hand, in the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques. Thus, keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems such as Extractor and Kea were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document and, as a result, keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to the experiment results of Turney, about 64% to 90% of keywords assigned by the authors can be found in the full text of an article. Inversely, it also means that 10% to 36% of the keywords assigned by the authors do not appear in the article, and these cannot be generated through keyword extraction algorithms. Our preliminary experiment result also shows that 37% of keywords assigned by the authors are not included in the full text. This is the reason why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, which is a conventional information retrieval model that represents documents and queries by vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. 
The keyword assignment process of IVSM is as follows: (1) calculating the vector length of each keyword set based on each keyword weight; (2) preprocessing and parsing a target document that does not have keywords; (3) calculating the vector length of the target document based on the term frequency; (4) measuring the cosine similarity between each keyword set and the target document; and (5) generating keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: IVSM system for Web-based community service and stand-alone IVSM system. Firstly, the IVSM system is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers, and, indeed, it has been tested through a number of academic papers including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precisions of IVSM applied to Web-based community service and academic journals were 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. Also, IVSM shows comparable performance to Extractor that is a representative system of keyword extraction approach developed by Turney. As electronic documents increase, we expect that IVSM proposed in this paper can be applied to many electronic documents in Web-based community and digital library.
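The five-step assignment process listed above can be sketched compactly: build term-weight vectors for each keyword set, parse the target document into term frequencies, score each keyword set by cosine similarity, and return the best-scoring one. The keyword sets and document below are toy data, not from the paper:

```python
# Minimal sketch of the IVSM-style assignment steps: (1)/(3) compute
# vector lengths from keyword weights and term frequencies, (2) parse
# the target document, (4) cosine similarity per keyword set,
# (5) emit the highest-similarity keywords. Toy data throughout.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))  # vector length of doc
    nb = math.sqrt(sum(v * v for v in b.values()))  # vector length of set
    return dot / (na * nb) if na and nb else 0.0

keyword_sets = {                       # keyword -> term weights
    "logistics": {"port": 2, "shipping": 3, "cargo": 1},
    "retrieval": {"keyword": 3, "document": 2, "index": 1},
}

doc = "keyword generation assigns a keyword to each document"
tf = Counter(doc.split())              # term frequencies of target doc

scores = {k: cosine(tf, vec) for k, vec in keyword_sets.items()}
best = max(scores, key=scores.get)     # keyword with highest similarity
print(best)  # retrieval
```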

공공주택 실적공사비 분석을 통한 공사비 리스크에 관한 연구 (A Study on the Construction Cost Risk through Analyzing the Actual Cost of Public Apartment)

  • 윤우성;고성석
    • 한국건설관리학회논문집 / Vol. 12, No. 6 / pp. 65-78 / 2011
  • Given the complex, medium-to-long-term nature of construction projects, accurate estimation, verification, and settlement of construction costs from project planning through completion are critical, and review and judgment of cost-related risk factors are emphasized at every stage from planning through detailed design and quantity takeoff. In practice, however, when screening bid and award amounts and preparing working budgets for public apartment projects, cost-overrun factors are not reviewed and addressed against actual cost data, and cost risks are not properly managed. This study therefore compared itemized actual costs by detailed work type against the initial working budget, using the completion settlement documents of 40 public apartment sites completed between 2004 and 2010, and quantitatively analyzed irregular cost risk factors from the deviations and ranges observed in actual-cost variation. The analysis derived cost risk factors and results by year, gross floor area, region, contract amount, and sale/rental type. The year-by-year results showed cost risks driven by policy and business-cycle changes, and cost risk items were derived from regional and floor-area project characteristics. The analysis by contract amount revealed problems with the lowest-bid award system, and the largest shares of cost-overrun risk arose in subcontracting costs and material costs. Within direct construction costs, subcontracting risk was highest for roofing and tile work, and material-cost risk was highest for rebar and cement. These results can be used as reference data for analyzing cost-risk work types and management factors, by classification scheme, when preparing working budgets for future public apartment projects.

한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발 (Development of Statewide Truck Traffic Forecasting Method by Using Limited O-D Survey Data)

  • 박만배
    • 대한교통학회:학술대회논문집 / 27th Annual Conference (1995) / pp. 101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis by using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis by using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. 
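The SELINK adjustment described above reduces to a ratio applied to the trip ends of every trip using the selected link. A minimal sketch with illustrative numbers (not values from the study):

```python
# Sketch of the SELINK link adjustment factor: the ratio of a selected
# link's ground count to its total assigned volume, applied to the
# productions of every zone whose trips use that link. Numbers are
# illustrative, not from the research.

def link_adjustment_factor(ground_count, assigned_volume):
    return ground_count / assigned_volume

def adjust_trip_ends(zone_productions, zones_using_link, factor):
    """Scale productions for every zone whose trips use the selected link."""
    return {z: p * factor if z in zones_using_link else p
            for z, p in zone_productions.items()}

factor = link_adjustment_factor(ground_count=1200, assigned_volume=1500)
prods = adjust_trip_ends({"A": 400.0, "B": 250.0, "C": 300.0},
                         zones_using_link={"A", "C"}, factor=factor)
print(factor, prods)
```

In the full process this adjustment is iterated (up to the three adjustments noted above) and the gravity model is re-run after each pass.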
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT that is forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results by using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
