• Title/Summary/Keyword: predicting method


M-mode Ultrasound Assessment of Diaphragmatic Excursions in Chronic Obstructive Pulmonary Disease : Relation to Pulmonary Function Test and Mouth Pressure (만성폐쇄성 폐질환 환자에서 M-mode 초음파로 측정한 횡격막 운동)

  • Lim, Sung-Chul;Jang, Il-Gweon;Park, Hyeong-Kwan;Hwang, Jun-Hwa;Kang, Yu-Ho;Kim, Young-Chul;Park, Kyung-Ok
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.4
    • /
    • pp.736-745
    • /
    • 1998
  • Background: Respiratory muscle interaction is profoundly affected by a number of pathologic conditions. Hyperinflation may be particularly severe in chronic obstructive pulmonary disease (COPD) patients, in whom the functional residual capacity (FRC) often exceeds the predicted total lung capacity (TLC). Hyperinflation reduces the effectiveness of the diaphragm as a pressure generator and reduces the diaphragmatic contribution to chest wall motion. Ultrasonography has recently been shown to be a sensitive and reproducible method of assessing diaphragmatic excursion. This study was performed to evaluate how diaphragmatic excursion measured by ultrasonography differs between normal subjects and COPD patients. Methods: We measured diaphragmatic excursions with ultrasonography in 28 healthy subjects (16 medical students, 12 age-matched controls) and 17 COPD patients. Ultrasonographic measurements were performed during tidal breathing and during maximal respiratory efforts approximating vital capacity breathing, using an Aloka KEC-620 with a 3.5 MHz transducer. Measurements were taken in the supine posture. The ultrasonographic probe was positioned transversely in the midclavicular line below the right subcostal margin. After the right hemidiaphragm was detected in B-mode, the ultrasound beam was positioned so that it was approximately parallel to the movement of the middle or posterior third of the right diaphragm. Recordings in M-mode at this position were made throughout the test. Diaphragmatic excursion on the M-mode tracing was calculated as the average excursion over three respiratory cycles. Pulmonary function tests (SensorMedics 2800) and maximal inspiratory (PImax) and expiratory mouth pressures (PEmax; Vitalopower KH-101, Chest) were measured in the seated posture. Results: During tidal breathing, diaphragmatic excursions were $1.5{\pm}0.5cm$, $1.7{\pm}0.5cm$ and $1.5{\pm}0.6cm$ in the medical students, the age-matched control group and the COPD patients, respectively. Diaphragmatic excursions during maximal respiratory efforts were significantly decreased in COPD patients ($3.7{\pm}1.3cm$) compared with the medical students and the age-matched control group ($6.7{\pm}1.3cm$ and $5.8{\pm}1.2cm$, respectively; p<0.05). During maximal respiratory efforts in control subjects, diaphragmatic excursions were correlated with $FEV_1$, $FEV_1$/FVC, PEF, PIF, and height. In COPD patients, diaphragmatic excursions during maximal respiratory efforts were correlated with PEmax (maximal expiratory pressure), age, and %FVC. In multiple regression analysis, the combination of PEmax and age was an independent predictor of diaphragmatic excursion during maximal respiratory efforts in COPD patients. Conclusion: COPD subjects had smaller diaphragmatic excursions during maximal respiratory efforts than control subjects. During maximal respiratory efforts in COPD patients, diaphragmatic excursions were well correlated with PEmax. These results suggest that diaphragmatic excursion during maximal respiratory efforts in COPD patients may be valuable for predicting pulmonary function.
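
To make the multiple regression step concrete, the sketch below fits maximal diaphragmatic excursion as a linear function of PEmax and age by ordinary least squares. It is only an illustration of the kind of model described in the abstract, not the authors' analysis code, and every number in it is invented.

```python
# Illustrative only: excursion ~ b0 + b1*PEmax + b2*age, fit by least squares.
import numpy as np

pemax = np.array([45.0, 60.0, 72.0, 55.0, 80.0, 38.0])   # cmH2O (hypothetical)
age = np.array([68.0, 71.0, 64.0, 75.0, 62.0, 70.0])     # years (hypothetical)
excursion = np.array([3.1, 3.8, 4.5, 3.3, 5.0, 2.6])     # cm (hypothetical)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(pemax), pemax, age])
coef, *_ = np.linalg.lstsq(X, excursion, rcond=None)
b0, b1, b2 = coef

predicted = X @ coef
r2 = 1 - ((excursion - predicted) ** 2).sum() / ((excursion - excursion.mean()) ** 2).sum()
print(f"excursion ~ {b0:.2f} + {b1:.3f}*PEmax + {b2:.3f}*age  (R^2 = {r2:.2f})")
```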


Prognostic Value of TNM Staging in Small Cell Lung Cancer (소세포폐암의 TNM 병기에 따른 예후)

  • Park, Jae-Yong;Kim, Kwan-Young;Chae, Sang-Cheol;Kim, Jeong-Seok;Kim, Kwon-Yeop;Park, Ki-Su;Cha, Seung-Ik;Kim, Chang-Ho;Kam, Sin;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.2
    • /
    • pp.322-332
    • /
    • 1998
  • Background: Accurate staging is important to determine treatment modalities and to predict prognosis for patients with lung cancer. The simple two-stage system of the Veterans Administration Lung Cancer Study Group has been used for staging of small cell lung cancer (SCLC) because treatment usually consists of chemotherapy with or without radiotherapy. However, this system does not accurately segregate patients into homogeneous prognostic groups. Therefore, a variety of new staging systems have been proposed as more intensive treatments, including intensive radiotherapy or surgery, enter clinical trials. We evaluated the prognostic importance of TNM staging, which has the advantage of providing a uniform, detailed classification of tumor spread, in patients with SCLC. Methods: The medical records of 166 patients diagnosed with SCLC between January 1989 and December 1996 were reviewed retrospectively. The influence of TNM stage on survival was analyzed in the 147 of these 166 patients who had complete TNM staging data. Results: Three patients were classified in stage I/II, 15 in stage IIIa, 78 in stage IIIb and 48 in stage IV. Survival rates at 1 and 2 years for these patients were as follows: stage I/II, 75% and 37.5%; stage IIIa, 46.7% and 25.0%; stage IIIb, 34.3% and 11.3%; and stage IV, 2.6% and 0%. The 2-year survival rates for the 84 patients who received chemotherapy (more than 2 cycles) with or without radiotherapy were as follows: stage I/II, 37.5%; stage IIIa, 31.3%; stage IIIb, 13.5%; and stage IV, 0%. Overall outcome differed significantly according to TNM stage, whether or not patients received treatment. However, there was no significant difference between stage IIIa and stage IIIb, though median survival and the 2-year survival rate were higher in stage IIIa than in stage IIIb. Conclusion: These results suggest that the TNM staging system may be helpful for predicting the prognosis of patients with SCLC.
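
Stage-specific 1- and 2-year survival rates like those reported above are conventionally derived with a Kaplan-Meier product-limit estimator. The sketch below shows that computation on invented follow-up data; it is not the study's analysis code and the numbers do not come from the paper.

```python
# Illustrative Kaplan-Meier estimate of survival at a given horizon.
import numpy as np

def km_survival_at(times_months, events, horizon_months):
    """Kaplan-Meier S(t) at `horizon_months`; events=1 death, 0 censored."""
    order = np.argsort(times_months)
    t, e = np.asarray(times_months)[order], np.asarray(events)[order]
    s = 1.0
    at_risk = len(t)
    for ti, ei in zip(t, e):
        if ti > horizon_months:
            break
        if ei == 1:                      # a death at ti reduces the survival curve
            s *= (at_risk - 1) / at_risk
        at_risk -= 1                     # censored subjects just leave the risk set
    return s

# Hypothetical cohort: follow-up in months, 1 = died, 0 = censored
followup = [3, 7, 9, 12, 14, 18, 22, 26, 30, 36]
died = [1, 1, 1, 1, 1, 0, 1, 1, 0, 0]
print("1-year survival:", round(km_survival_at(followup, died, 12), 3))
print("2-year survival:", round(km_survival_at(followup, died, 24), 3))
```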


Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs. However, measuring television ratings has been given little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set for the experiment. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This result implies that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation of the program or their disappointment over not being able to watch it. The features that are highly correlated before the broadcast are different from those after the broadcast. This shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance, despite carrying a negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
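
As a rough illustration of the correlation analysis described above, the sketch below counts how often candidate words appear in tweets before and after each broadcast and correlates each count series with the episode ratings. The data frame, word list, and ratings are hypothetical placeholders, not the paper's data or code.

```python
# Illustrative only: correlate per-episode word counts with ratings,
# separately for tweets posted before and after the broadcast.
import numpy as np
import pandas as pd

tweets = pd.DataFrame({
    "episode": [1, 1, 1, 2, 2, 3, 3, 3],
    "phase": ["before", "after", "after", "before", "after", "before", "before", "after"],
    "text": ["기대된다", "재밌다 최고", "최고", "기대된다 본방사수", "재밌다",
             "본방사수", "기대된다", "아쉽다"],
})
ratings = pd.Series({1: 8.2, 2: 9.1, 3: 7.5}, name="rating")   # per-episode ratings (%)
candidate_words = ["기대된다", "재밌다", "최고", "본방사수", "아쉽다"]

for phase in ["before", "after"]:
    sub = tweets[tweets["phase"] == phase]
    for word in candidate_words:
        counts = (sub["text"].str.contains(word).astype(int)
                  .groupby(sub["episode"]).sum()
                  .reindex(ratings.index, fill_value=0))
        # Skip constant series (correlation undefined)
        r = np.corrcoef(counts, ratings)[0, 1] if counts.nunique() > 1 else float("nan")
        print(f"{phase:>6}  {word}: r = {r:+.2f}")
```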

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology to extract answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology is divided into the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news sources for "subject-predicate" separated queries and classify the proper documents. 2) Determine whether each sentence is suitable for extracting information and derive a confidence score. 3) Based on the predicate feature, extract the information in the proper sentence and derive the overall confidence of the information extraction result. In order to evaluate the performance of the information extraction system, we selected 400 queries from the artificial intelligence speaker of SK Telecom. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even across the various types of unstructured documents collected from multiple sources. The problem of information extraction for knowledge base extension must take into account the heterogeneous characteristics of source-specific document types. The proposed methodology proved to extract information effectively from various types of unstructured documents compared to the baseline model. Previous research has the limitation that performance is poor when extracting information from document types that differ from the training data. In addition, this study can prevent unnecessary information extraction attempts on documents that do not include the answer information, through the process of predicting the suitability of documents and sentences for information extraction before the extraction step. It is meaningful that we provide a method by which precision can be maintained even in an actual web environment. The information extraction problem for knowledge base expansion is characterized by the fact that, because it targets unstructured documents on the real web, there is no guarantee that a document includes the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies have the limitation that they show a low level of precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the suitability of documents and sentences for information extraction is meaningful in that it contributes to maintaining extraction performance even in a real web environment. The limitations of this study and future research directions are as follows. First, there is a problem related to data preprocessing. In this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and the information extraction result can be degraded when the morphological analysis is not performed properly. To enhance the performance of information extraction, it is necessary to develop an advanced morpheme analyzer. Second, there is the problem of entity ambiguity. The information extraction system of this study cannot distinguish between different entities that share the same name.
If several people with the same name appear in the news, the system may not extract information about the intended query. In future research, it is necessary to take measures to identify the person with the same name. Third, there is the problem of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system. We developed the evaluation data set using 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether a correct answer is included or not. To ensure the external validity of the study, it is desirable to use more queries to assess the performance of the system. This is a costly activity that must be done manually. Future research needs to evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction systems over queries against multi-source web documents, to build an environment in which the results can be evaluated more objectively.
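
A minimal sketch of the kind of bi-directional LSTM-CRF tagger described above follows, with a 0/1 predicate flag concatenated to each token embedding. It assumes PyTorch and the third-party pytorch-crf package; the sizes, tag set, and toy inputs are illustrative, and this is not the authors' implementation.

```python
# Illustrative BiLSTM-CRF sequence tagger with a query-predicate feature.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class PredicateBiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # +1 input feature: a 0/1 flag marking tokens that match the query predicate
        self.lstm = nn.LSTM(emb_dim + 1, hidden, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, token_ids, predicate_flag, tags=None, mask=None):
        x = torch.cat([self.embed(token_ids), predicate_flag.unsqueeze(-1).float()], dim=-1)
        scores = self.emissions(self.lstm(x)[0])
        if tags is not None:                       # training: negative log-likelihood
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)  # inference: best tag sequence

# Toy usage with BIO-style answer tags (O=0, B-ANS=1, I-ANS=2)
model = PredicateBiLSTMCRF(vocab_size=1000, num_tags=3)
tokens = torch.randint(1, 1000, (2, 7))            # batch of 2 sentences, 7 tokens each
pred_flag = torch.zeros(2, 7, dtype=torch.long); pred_flag[:, 2] = 1
loss = model(tokens, pred_flag, tags=torch.zeros(2, 7, dtype=torch.long),
             mask=torch.ones(2, 7, dtype=torch.bool))
print(float(loss), model(tokens, pred_flag))
```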

Clinical Application of Serum CEA, SCC, Cyfra21-1, and TPA in Lung Cancer (폐암환자에서 혈청 CEA, SCC, Cyfra21-1, TPA-M 측정의 의의)

  • Lee, Jun-Ho;Kim, Kyung-Chan;Lee, Sang-Jun;Lee, Jong-Kook;Jo, Sung-Jae;Kwon, Kun-Young;Han, Sung-Beom;Jeon, Young-June
    • Tuberculosis and Respiratory Diseases
    • /
    • v.44 no.4
    • /
    • pp.785-795
    • /
    • 1997
  • Background: Tumor markers have been used for diagnosis, for predicting the extent of disease, for monitoring recurrence after therapy, and for predicting prognosis. But the utility of markers in lung cancer has been limited by low sensitivity and specificity. TPA-M is a recently developed marker using combined monoclonal antibodies against cytokeratins 8, 18, and 19. This study was conducted to evaluate the efficacy of the new tumor marker TPA-M by comparing it with the established markers SCC, CEA, and Cyfra21-1 in lung cancer. Method: An immunoradiometric assay of serum CEA, SCC, Cyfra21-1, and TPA-M was performed in 49 pathologically confirmed lung cancer patients who visited Keimyung University Hospital from April 1996 to August 1996, and in 29 patients with benign lung diseases. Commercially available kits were used for this study: Ab bead CEA (Eiken) for CEA, SCC RIA BEAD (DAINABOT) for SCC, CA21-1 (TFB) for Cyfra21-1, and TPA-M (DAIICHI) for TPA-M. Results: The mean serum values of the lung cancer group and the control group were $10.05{\pm}38.39{\mu}g/L$ and $1.59{\pm}0.94{\mu}g/L$ for CEA, $3.04{\pm}5.79{\mu}g/L$ and $1.58{\pm}2.85{\mu}g/L$ for SCC, $8.27{\pm}11.96{\mu}g/L$ and $1.77{\pm}2.72{\mu}g/L$ for Cyfra21-1, and $132.02{\pm}209.35\;U/L$ and $45.86{\pm}75.86\;U/L$ for TPA-M, respectively. Serum values of Cyfra21-1 and TPA-M in the lung cancer group were higher than in the control group (p<0.05). Using the cutoff values recommended by the manufacturers, that is, $2.5{\mu}g/L$ for CEA, $3.0{\mu}g/L$ for Cyfra21-1, 70.0 U/L for TPA-M, and $2.0{\mu}g/L$ for SCC, the sensitivity and specificity for lung cancer were 33.3% and 78.6% for CEA, 50.0% and 89.7% for Cyfra21-1, 52.3% and 89.7% for TPA-M, and 23.8% and 89.3% for SCC. The sensitivity and specificity for non-small cell lung cancer were 36.1% and 78.1% for CEA, 50.1% and 89.7% for Cyfra21-1, 53.1% and 89.7% for TPA-M, and 33.8% and 89.3% for SCC. The sensitivity and specificity for small cell lung cancer were 25.0% and 78.5% for CEA, 50.0% and 89.6% for Cyfra21-1, 50.0% and 89.6% for TPA-M, and 0% and 89.2% for SCC. The cutoff values according to ROC (receiver operating characteristic) curves were $1.25{\mu}g/L$ for CEA, $1.5{\mu}g/L$ for Cyfra21-1, 35 U/L for TPA-M, and $0.6{\mu}g/L$ for SCC. With these cutoff values, the sensitivity, specificity, accuracy and kappa index of Cyfra21-1 and TPA-M were better than those of CEA and SCC. Only SCC was related with statistical significance to TNM stage when dividing patients into operable stages (TNM stage I to IIIA) and inoperable stages (IIIB and IV) (p<0.05). But no tumor marker showed a significant correlation with tumor size (p>0.05). Conclusion: Serum TPA-M and Cyfra21-1 show higher sensitivity and specificity than CEA and SCC in pathologically confirmed lung cancer overall and in non-small cell lung cancer. SCC has higher specificity in non-small cell lung cancer, and the level of serum SCC is significantly related to TNM staging.
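
The ROC-based cutoff selection reported above can be illustrated with a short sketch: pick the threshold that maximizes the Youden index and report sensitivity and specificity at that cutoff. The marker values and labels below are invented; this is not the study's analysis code.

```python
# Illustrative only: choose a tumor-marker cutoff from an ROC curve and
# report sensitivity/specificity at that cutoff.
import numpy as np
from sklearn.metrics import roc_curve

# 1 = lung cancer, 0 = benign lung disease (hypothetical labels and values)
y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
marker = np.array([9.5, 4.2, 1.8, 6.7, 3.1, 1.2, 0.9, 2.0, 1.5, 0.8])

fpr, tpr, thresholds = roc_curve(y, marker)
best = np.argmax(tpr - fpr)                 # Youden index: maximize sens + spec - 1
cutoff = thresholds[best]

pred = marker >= cutoff
sensitivity = (pred & (y == 1)).sum() / (y == 1).sum()
specificity = (~pred & (y == 0)).sum() / (y == 0).sum()
print(f"cutoff={cutoff:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```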


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation technology for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures of IT facilities are irregular because of their interdependence, and it is difficult to identify their cause. Previous studies predicting failure in data centers predicted failure by looking at a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, user errors, and the like. Since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: they cause failures in other servers or are triggered by failures propagating from other servers. In other words, while existing studies analyzed failures under the assumption that a single server does not affect other servers, this study analyzes failures under the assumption that servers affect one another. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures that occur for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, any failure occurring in other equipment within 5 minutes of that time is defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, five devices that frequently fail simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the contribution of each server to a complex failure is different. This algorithm increases prediction accuracy by giving greater weight to servers with a larger impact on the failure. The study began with defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data were analyzed under a single-server assumption and a multiple-server assumption and the results were compared. The second experiment improved the prediction accuracy for complex server failures by optimizing a threshold for each server. In the first experiment, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. Overall, this study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effects of each server differ, played a role in improving the analysis. In addition, by applying a different threshold for each server, the prediction accuracy could be improved further. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
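
A minimal sketch of the modeling idea, under stated assumptions, follows: each server's resource time series is encoded with an LSTM, an attention layer weights the servers by their contribution, and a per-server failure probability is thresholded with a server-specific cutoff. Layer sizes, the feature count, and the threshold values are illustrative; this is not the authors' implementation.

```python
# Illustrative only: per-server LSTM encoders + attention over servers,
# with per-server decision thresholds (all sizes and values are assumptions).
import torch
import torch.nn as nn

class ServerAttentionFailureModel(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn_score = nn.Linear(hidden, 1)        # one attention score per server
        self.classifier = nn.Linear(2 * hidden, 1)    # per-server failure logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) -- resource metrics per server
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        server_repr = h[-1].view(b, s, -1)                            # (batch, servers, hidden)
        weights = torch.softmax(self.attn_score(server_repr), dim=1)  # attention over servers
        context = (weights * server_repr).sum(dim=1, keepdim=True)    # shared failure context
        joint = torch.cat([server_repr, context.expand_as(server_repr)], dim=-1)
        probs = torch.sigmoid(self.classifier(joint)).squeeze(-1)     # (batch, servers)
        return probs, weights.squeeze(-1)

# Toy usage: 4 samples, 5 servers, 30 time steps, 8 resource metrics per server
model = ServerAttentionFailureModel(n_features=8)
probs, attn = model(torch.randn(4, 5, 30, 8))
thresholds = torch.tensor([0.5, 0.4, 0.6, 0.5, 0.45])  # per-server thresholds (illustrative)
print(probs > thresholds)                              # predicted failure per server
```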

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not reveal any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
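
Two computations at the core of the method, the gravity-model trip table with friction factors and the SELINK link adjustment factor, can be sketched as follows. The zone data, the exponential friction-factor form, and the selected-link definition are illustrative assumptions, not WisDOT data or the study's code.

```python
# Illustrative only: (1) doubly-constrained gravity model T_ij = P_i * A_j * F(d_ij),
# (2) SELINK-style link adjustment factor = ground count / assigned volume.
import numpy as np

productions = np.array([1200.0, 800.0, 500.0])    # truck trip productions by zone
attractions = np.array([900.0, 700.0, 900.0])     # truck trip attractions by zone
distance = np.array([[ 5.0, 40.0, 90.0],
                     [40.0, 10.0, 60.0],
                     [90.0, 60.0, 15.0]])          # zone-to-zone distances (miles)

def gravity_trip_table(P, A, d, beta=0.05, n_iter=20):
    F = np.exp(-beta * d)                          # friction factor curve (assumed form)
    T = np.outer(P, A) * F
    for _ in range(n_iter):                        # balance rows to P and columns to A
        T *= (P / T.sum(axis=1))[:, None]
        T *= (A / T.sum(axis=0))[None, :]
    return T

T = gravity_trip_table(productions, attractions, distance)

# SELINK adjustment: suppose trips on the selected link are those between zones 0 and 2
assigned_on_link = T[0, 2] + T[2, 0]
ground_count = 1.15 * assigned_on_link             # pretend the count is 15% higher
adj = ground_count / assigned_on_link
productions[[0, 2]] *= adj                         # scale P and A of the zones using the link
attractions[[0, 2]] *= adj
print(T.round(1), f"link adjustment factor = {adj:.2f}", sep="\n")
```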
