• Title/Summary/Keyword: Functional classification systems


The Design of Polynomial Network Pattern Classifier based on Fuzzy Inference Mechanism and Its Optimization (퍼지 추론 메커니즘에 기반 한 다항식 네트워크 패턴 분류기의 설계와 이의 최적화)

  • Kim, Gil-Sung;Park, Byoung-Jun;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.970-976 / 2007
  • In this study, a Polynomial Network Pattern Classifier (PNC) based on a fuzzy inference mechanism is designed, and its parameters, such as the learning rate, momentum coefficient, and fuzzification coefficient, are optimized by means of Particle Swarm Optimization. The proposed PNC employs a partition function created by Fuzzy C-means (FCM) clustering as the activation function in the hidden layer, and polynomial weights between the hidden layer and the output layer. Using polynomial weights helps to improve upon the linear classification characteristic of basic neural network classifiers. From the viewpoint of linguistic analysis, the proposed classifier is expressed as a collection of "If-then" fuzzy rules. Namely, the network architecture is constructed from three functional modules: a condition part, a conclusion part, and an inference part. The condition part relates to the partitioning of the input space using FCM clustering. In the conclusion part, a polynomial function carries out the representation of a partitioned local space. Lastly, the output of the network is obtained by fuzzy inference in the inference part. The proposed PNC generates a nonlinear discriminant function in the output space and, owing to its polynomial-based fuzzy inference, achieves better pattern classification performance.
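As an illustration of the condition part described above, the following is a minimal sketch (not the authors' implementation) of the FCM partition function that assigns fuzzy membership grades to an input relative to cluster prototypes; the prototype array `centers` and the fuzzification coefficient `m` are assumed inputs.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-12):
    """Fuzzy C-means partition function: membership of sample x in each cluster.

    x: (d,) input vector; centers: (c, d) cluster prototypes from FCM;
    m: fuzzification coefficient (m > 1), one of the parameters the paper
    tunes via Particle Swarm Optimization.
    """
    d = np.linalg.norm(centers - x, axis=1) + eps        # distances to prototypes
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)                       # memberships sum to 1

# Example: activation levels of three hidden nodes for one input
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
u = fcm_memberships(np.array([0.2, 0.8]), centers)
print(u, u.sum())  # grades in [0, 1], summing to 1
```

In a network of the kind the abstract describes, these membership grades would serve as hidden-layer activations that weight the per-cluster polynomial outputs during fuzzy inference.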

Development of Staffing Levels for Nursing Personnel to Provide Inpatients with Integrated Nursing Care (간호·간병통합서비스 제공을 위한 간호인력 배치기준 개발)

  • Cho, Sung-Hyun;Song, Kyung Ja;Park, Ihn Sook;Kim, Yeon Hee;Kim, Mi Soon;Gong, Da Hyun;You, Sun Ju;Ju, Young-Su
    • Journal of Korean Academy of Nursing Administration / v.23 no.2 / pp.211-222 / 2017
  • Purpose: To develop staffing levels for nursing personnel (registered nurses and nursing assistants) to provide inpatients with integrated nursing care that includes, in addition to professional nursing care, the personal care previously provided by patients' families or private caregivers. Methods: A time-and-motion study was conducted to observe nursing care activities and the time spent by nursing personnel, families, and private caregivers in 10 medical-surgical units. The Korean Patient Classification System-1 (KPCS-1) was used in the nurse manager survey conducted to measure staffing levels and patient needs for nursing care. Results: Current nurse-to-patient ratios from the time-and-motion study and the survey were 1:10 and 1:11, respectively. Time spent in direct patient care by nursing personnel and by family/private caregivers was 51 and 130 minutes per day, respectively. Direct nursing care hours correlated with KPCS-1 scores. The nursing personnel-to-patient ratio required to provide integrated inpatient care ranged from 1:3.9 to 1:6.1 in tertiary hospitals and from 1:4.4 to 1:6.0 in general hospitals. The functional nursing care delivery system had been implemented in 38.5% of the nursing units. Conclusion: Findings indicate that appropriate nurse staffing and efficient nursing care delivery systems are required to provide integrated inpatient nursing care. A back-of-the-envelope reading of these time figures is sketched below.
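As illustrative arithmetic only (not the study's KPCS-1 based staffing method): if integrated care means nursing personnel absorb the direct-care time previously supplied by families and private caregivers, a rough ratio follows from the minutes reported above; the per-staff direct-care capacity used here is a hypothetical value.

```python
# Illustrative arithmetic only; not the study's KPCS-1 based staffing method.
nurse_min = 51        # direct care by nursing personnel (min per patient per day)
caregiver_min = 130   # direct care by families/private caregivers (min per patient per day)
need = nurse_min + caregiver_min  # 181 min/patient/day if staff absorb caregiver work

staff_direct_min = 960  # hypothetical direct-care minutes one staffing slot supplies per day
patients_per_staff = staff_direct_min / need
print(f"1:{patients_per_staff:.1f}")  # ~1:5.3, within the 1:3.9-1:6.1 range reported
```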

Development and Evaluation of an 'Activity and Rest' Integrated Course (혼합학습형태의 『활동과휴식』 통합교과목 개발 및 적용)

  • Oh, Eui Gum;Hwang, Seon Young;Lee, Jae Eun;Song, Eun Kyeung;Kim, Min Jeong
    • Korean Journal of Adult Nursing / v.19 no.4 / pp.624-633 / 2007
  • Purpose: This study was conducted to develop an integrated undergraduate course including PBL based on a blended learning strategy, and to evaluate learners' responses. Methods: The learning contents of the cardiovascular, respiratory, and musculoskeletal systems and the nursing diagnoses of the 'activity and rest' domain (NANDA Classification II, 2005) were analyzed. Six clinical scenarios with clients at different stages of the life cycle were developed for PBL. Classical lectures and group presentations with on-line self-learning were implemented in addition to PBL. The developed course was delivered to 84 junior nursing students at a university for 7 weeks, 5 hours per day, two days per week. Students were asked to complete structured questionnaires covering problem-solving, critical thinking, and nursing diagnosis differentiation abilities. Results: Learners' evaluations were positive for problem-solving skills and for the ability to differentiate nursing diagnoses relevant to the 'activity and rest' functional health pattern. Conclusion: Development and implementation of integrated courses based on a blended learning method should be continued to enhance students' thinking and self-directed learning abilities. Supporting strategies for individual learners, such as individual on-line feedback and consideration of individual learning outcomes, should be added for successful blended learning.


THE ROLE OF SATELLITE REMOTE SENSING TO DETECT AND ASSESS THE DAMAGE OF TSUNAMI DISASTER

  • Siripong, Absornsuda
    • Proceedings of the KSRS Conference / v.2 / pp.827-830 / 2006
  • The tsunami from the magnitude 9.3 megathrust earthquake of 26 December 2004 is the largest tsunami the world has known in over forty years. It destructively attacked 13 countries around the Indian Ocean, with at least 230,000 fatalities, 2,089,883 people displaced, and 1.5 million people who lost their livelihoods. The ratio of women and children killed to men was 3 to 1. The total damage cost US$ 10.73 billion and rebuilding cost US$ 10.375 billion. The tsunami's death toll could have been drastically reduced if the warning had been disseminated quickly and effectively to the coastal dwellers along the Indian Ocean rim. With a warning system in the Indian Ocean similar to that operating in the Pacific Ocean since 1965, it would have been possible to warn, evacuate, and save countless lives. The best tribute we can pay to all who perished or suffered in this disaster is to heed its powerful lessons. UNESCO/IOC have put tremendous effort into better disaster preparedness, functional early warning systems, and realistic arrangements to cope with tsunami disasters. They organized the ICG/IOTWS (Indian Ocean Tsunami Warning System), the third meeting of which was held in Bali, Indonesia, from 31 July to 4 August 2006. A US$ 53 million interim warning system using tidal gauges and undersea sensors is nearing completion in the Indian Ocean with the assistance of the IOC. Tsunami warning depends strictly on early detection of a tsunami (wave) perturbation in the ocean itself; it does not and cannot depend on seismological information alone. In the case of the 26 December 2004 tsunami, when the NOAA/PMEL DART (Deep-ocean Assessment and Reporting of Tsunami) system had not yet been deployed, the initial sea surface perturbation input for the MOST (Method Of Splitting Tsunami) model came from the tsunamigenic-earthquake source model. It was the first time that satellite altimeters could detect the signal of a tsunami wave, in the Bay of Bengal, and this was used to validate the output of the MOST model in the deep ocean. In the case of Thailand, the inundation part of the MOST model was run for the Sumatra 2004 event for inundation mapping purposes. Medium- and high-resolution satellite data were used to assess the degree of damage from the Indian Ocean tsunami of 2004 with NDVI classification in 6 provinces on the Andaman sea coast of Thailand. With tide-gauge station data, run-up surveys, bathymetry and coastal topography data, and land-use classification from satellite imagery, this information can be used for coastal zone management, evacuation planning, and construction codes.
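For context on the NDVI-based damage assessment mentioned above, below is a minimal sketch (not the author's actual processing chain) of computing NDVI from red and near-infrared bands and flagging vegetation loss by differencing pre- and post-event scenes; the band arrays and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative pre- and post-event scenes (stand-ins for co-registered
# satellite bands over the affected coastline).
pre_nir, pre_red = np.random.rand(2, 100, 100)
post_nir, post_red = np.random.rand(2, 100, 100)

# A drop in NDVI between scenes is a crude proxy for vegetation loss;
# the -0.2 threshold is an arbitrary illustrative choice.
delta = ndvi(post_nir, post_red) - ndvi(pre_nir, pre_red)
damaged = delta < -0.2
print(f"flagged pixels: {damaged.mean():.1%}")
```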


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle, minimizing an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used for efficient multi-class computation to reduce computation time, but they may deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on the misclassified observations through iterations: observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble's performance is poor, and in this way it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take account of the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
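Below is a minimal sketch of the evaluation protocol described above (three repetitions of 10-fold cross-validation) together with a geometric mean-based accuracy, assuming scikit-learn is available. It illustrates the metrics being compared, not the paper's MGM-Boost algorithm itself.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls; collapses to 0 if any class is missed."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls)) ** (1.0 / len(classes))

def repeated_cv(clf, X, y, n_repeats=3, n_folds=10):
    """3 x 10-fold CV as in the paper's setup; returns per-fold (acc, g-mean)."""
    scores = []
    for seed in range(n_repeats):
        cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train, test in cv.split(X, y):
            model = clf.fit(X[train], y[train])
            pred = model.predict(X[test])
            scores.append((np.mean(pred == y[test]),
                           geometric_mean_accuracy(y[test], pred)))
    return np.array(scores)  # 30 rows: arithmetic and geometric mean accuracy

# e.g. compare repeated_cv(SVC(), X, y) vs repeated_cv(AdaBoostClassifier(), X, y)
```

Comparing the column means of the two result arrays mirrors the 30-experiment comparison reported in the abstract; a paired t-test over the 30 folds would mirror its significance test.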

Factors Affecting the Implementation Success of Data Warehousing Systems (데이터 웨어하우징의 구현성공과 시스템성공 결정요인)

  • Kim, Byeong-Gon;Park, Sun-Chang;Kim, Jong-Ok
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2007.05a / pp.234-245 / 2007
  • Empirical studies on the implementation of data warehousing systems (DWS) are lacking, while many studies exist on the implementation of IS in general. This study examines the factors affecting the implementation success of DWS, based on an empirical analysis of a sample of 112 responses from DWS practitioners. The results suggest several implications for researchers and practitioners. First, when support from top management is strong, implementation success of DWS in organizational aspects is more likely: users are more encouraged to use DWS, and organizational resistance is better coped with, increasing the possibility of implementation success. Resource support increases the implementation success of DWS in project aspects, while it is not significantly related to implementation success in organizational aspects. Support in the form of funds, human resources, and other efforts enhances the possibility of successful project implementation, in that the project does not exceed its time and resource budgets and meets its functional requirements; the effect of resource support, however, is not significantly related to organizational success. User involvement in systems implementation affects the implementation success of DWS in both organizational and project aspects: success is significantly related to users' commitment to the project and their proactive involvement in the implementation tasks. The observation of competitors' behaviors, which could conceivably increase data quality, does not affect the implementation success of DWS. This indicates that data quality attributes such as consistency and accuracy are not ensured through understanding competitors' behaviors, which therefore does not affect data integration or the successful implementation of DWS projects. Prototyping for the DWS implementation positively affects implementation success, indicating that a better understanding of requirements and communication among project members increase success. Developing prototypes for DWS helps ensure the acquisition of accurate and integrated data, flexible processing of data, and adaptation to new organizational conditions. The extent of consulting activities in DWS projects increases the implementation success of DWS in project aspects: continuous support for consulting activities and technology transfer enhances adherence to the project schedule, prevents budget overruns, and ensures the implementation of the intended system functions, ultimately leading to successful DWS projects. The research hypothesis that the capability of project teams affects the implementation success of DWS is rejected: the technical abilities and human relationship skills of team members do not by themselves affect successful implementation. The quality of the source systems that provide data to the DWS affects implementation success in technical aspects; standardization of data definitions and commitment to technical standards increase the possibility of overcoming the technical problems of DWS. Further, the development technology of DWS affects implementation success: the hardware, software, implementation methodology, and implementation tools contribute to effective integration and classification of data in various forms. In addition, implementation success in organizational and project aspects increases the data quality and system quality of DWS, while implementation success in technical aspects does not. Data and system quality increase the effective processing of individual tasks and reduce decision-making times and efforts, enhancing the perceived benefits of DWS.


Exploratory Case Study for Key Successful Factors of Product Service System (Product-Service System(PSS) 성공과 실패요인에 관한 탐색적 사례 연구)

  • Park, A-Rum;Jin, Dong-Su;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.255-277 / 2011
  • A Product-Service System (PSS), an integrated combination of product and service, provides new value to customers and makes companies sustainable as well. The objective of this paper is to draw the Critical Success Factors (CSF) of PSS through a multiple case study. First, we review the various concepts and types in the PSS and platform business literature currently available on this topic. Second, after investigating various cases with the characteristics of PSS and platform business, we select four cases: Apple's iPod, Amazon's Kindle, Microsoft's Zune, and Sony's e-book reader. The four cases are then categorized as successful or failed according to the case selection criteria and PSS classification. We consider two methodologies for case selection: 'Strategies for the Selection of Samples and Cases' proposed by Bent (2006) and the seven case selection procedures proposed by Jason and John (2008). For case selection, 'stratified sample and paradigmatic cases' is adopted as one of several sampling options. We then use the seven case selection procedures, 'typical', 'diverse', 'extreme', 'deviant', 'influential', 'most-similar', and 'most-different', of which only three, 'diverse', 'most-similar', and 'most-different', are applied for the case selection. For PSS classification, the eight PSS types suggested by Tukker (2004) are utilized: 'product related', 'advice and consultancy', 'product lease', 'product renting/sharing', 'product pooling', 'activity management', 'pay per service unit', and 'functional result'. We categorize the four selected cases as a product-oriented group because the cases not only sell a product but also offer the service needed during the use phase of the product. We then analyze the four cases using the cross-case pattern that Eisenhardt (1991) suggested. Eisenhardt (1991) argued that three processes are required to avoid reaching premature or even false conclusions. The first step includes selecting categories of dimensions and finding within-group similarities coupled with intergroup differences. In the second process, pairs of cases are selected and listed; this step forces researchers to find the subtle similarities and differences between cases. The third process is to divide the data by data source. The result of the cross-case pattern indicates that the similarities of the iPod and Kindle as successful cases are a convenient user interface, a successful platform strategy, and rich contents. The difference between the successful cases is that, whereas the iPod has been recognized as a culture code, the Kindle has implemented a low price as its main strategy. Meanwhile, the similarity of the Zune and the PRS series as failed cases is a lack of sufficient applications and contents. The difference between the failed cases is that, whereas the Zune adopted an undifferentiated strategy, the PRS series pursued a high-price strategy. From the analysis of the cases, we generate three hypotheses: a successful PSS requires a convenient user interface; a successful PSS requires a reciprocal (win/win) business model; and a successful PSS requires sufficient quantities of applications and contents. To verify the hypotheses, we use a cross-matching (or pattern matching) methodology, which matches the three keywords of the hypotheses (user interface, reciprocal business model, contents) to previous papers related to PSS, digital contents, and Information Systems (IS). Finally, this paper suggests three implications from the analyzed results. A successful PSS needs to provide differentiated value for customers, such as a convenient user interface, e.g., the simple design of iTunes (iPod) and the free connection to the Kindle Store. A successful PSS also requires a mutually beneficial business model, as Apple and Amazon implement policies that provide reasonable profit sharing for third parties. And a successful PSS requires sufficient quantities of applications and contents.

A Study on the Health Insurance Management System; With Emphasis on the Management Operating Cost (의료보험 관리체계에 대한 연구 - 관리비용을 중심으로 -)

  • 남광성
    • Korean Journal of Health Education and Promotion / v.6 no.2 / pp.23-39 / 1989
  • There has been considerable discussion and debate surrounding the management model of the health insurance management system and the management operating cost, and it is a well-known fact that there have always been dissenting opinions on the issue. The management operating cost varies according to the scale of the management organization and the characteristics of the insured population of the insurance carrier. Therefore, it is necessary to examine and compare the management operating cost under simulated management models developed to cover those eligible for the health insurance scheme in this country. Since the management operating cost can vary according to the management model, four alternative models were established, based on a critical evaluation of existing theories as well as on survey results and simulation attempts. The first is the Unique Insurance Carrier Model (I): designed to cover all of the people nationwide, with no classification of insurance qualifications and finances from the source of contributions of the insured. The second is the Management Model of the Large-scale District Insurance Carrier (II): the country would be divided into 21 large districts, each having its own insurance carrier covering the people in that district, with no classification of insurance qualifications and finances, as in Model I. The third is the Management Model of Insurance Carriers Divided by Area and Classified by Occupation in Large-scale (III): the self-employed in the 21 districts would be served as in Model II, while employees and their dependents would be served by separate large-scale insurance carriers covering areas similar to the districts for the self-employed, so that insurance qualifications and finances would be classified by carrier. The last is the Management Model of the Multi-insurance Carrier (IV), based on the Si/Gun/Gu (city/county/district) areas, covering the local self-employed with more than 150 additional insurance carriers covering employees and their dependents. The manpower necessary to serve all of the people under each of the four models was calculated through simulation trials; the Management Model of the Large-scale District Insurance Carrier requires the most manpower among the four alternatives. The unit management operating costs per insured individual and covered person fall into several levels according to the characteristics of the insurance recipients. The levels derived from regression analysis reveal that the larger the scale of the insurance carrier, in the number of those insured and covered, the more the unit management operating cost decreases, significantly; moreover, the quadratic functional form fitted to the data shows a significant U-shape. The management operating costs derived from the simulated calculation, on the basis of the average salary and related cost per staff member of the Health Insurance Societies for Occupational Labourers and the Korean Medical Insurance Corporation for Official Servants and Private School Teachers in the 1987 fiscal year, show that the Multi-insurance Carrier Model has the highest management operating cost, while the least expensive is the Unique Insurance Carrier Model, followed by the Model of Insurance Carriers Divided by Area and Classified by Occupation in Large-scale, and the Large-scale District Insurance Carrier Model, in that order. Therefore, it is feasible to select the Unique Insurance Carrier Model among the four alternatives from the viewpoint of the management operating cost and in the sense of flexibility in promoting the productivity of manpower in the human services field. However, the choice of the management model for health insurance systems and its application should be examined further using operations research analysis in such areas as administrative efficiency and factors related to computing costs.
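A minimal sketch of the kind of quadratic (U-shaped) cost-curve fit mentioned above, assuming numpy is available; the data arrays are illustrative stand-ins, not the study's 1987 figures.

```python
import numpy as np

# Illustrative stand-ins: carrier scale (number of covered persons) and
# unit management operating cost per covered person.
scale = np.array([5e3, 2e4, 8e4, 3e5, 1e6, 4e6])
unit_cost = np.array([9.1, 6.0, 4.2, 3.1, 2.9, 3.4])  # falls with scale, rises again at the extreme

# Fit unit_cost = a*log(scale)^2 + b*log(scale) + c; a > 0 indicates a U-shape,
# and the vertex gives the cost-minimizing carrier scale.
x = np.log(scale)
a, b, c = np.polyfit(x, unit_cost, deg=2)
optimal_scale = np.exp(-b / (2 * a))
print(f"a={a:.3f} (U-shaped if > 0), cost-minimizing scale ~ {optimal_scale:,.0f} persons")
```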


A Destructive Method in the Connection of the Algorithm and Design in the Digital media - Centered on the Rapid Prototyping Systems of Product Design - (디지털미디어 환경(環境)에서 디자인 특성(特性)에 관한 연구(硏究) - 실내제품(室內製品) 디자인을 중심으로 -)

  • Kim Seok-Hwa
    • Journal of Science of Art and Design / v.5 / pp.87-129 / 2003
  • The purpose of this thesis is to propose a new concept of design for the 21st century, on the basis of a study of the general signification of the structures and signs of industrial product design, by examining the difference between modern and post-modern design, which is expected to lead users to different design practices and interpretations. The starting point of this study is the different styles and patterns of 'Gestalt' in the post-modern design of the late 20th century compared with modern design, the determining factor in industrial product design. That is to say, unlike the functional and rational styles of modern product design, the late 20th century is based upon a pluralism characterized by complexity, syntheticness, and decorativeness. So far, most previous studies on design seem to have excluded visual aspects and usability, focusing only on effective communication of design phenomena. These partial studies, blinded by phenomenal aspects, have failed to discover the principle of a fundamental system. However, design varies according to the times, and the transformation of design is reflected in Design Pragnanz to constitute a new text of design. Therefore, it can be argued that Design Pragnanz serves as an essential factor under the influence of the significance of the text. In this thesis, therefore, I analyze 20th-century product design in the light of Gestalt theory and Design Pragnanz, which have functioned as principles of past design. For this study, I attempted to discover the fundamental elements of modern and post-modern designs, and to examine the formal structure of product design, users' aesthetic preferences, and its semantics, from an integrative viewpoint. Also, with reference to the history and theory of design, my emphasis is more on fundamental visual phenomena than on structural analysis or the process of visualization in product design, in order to examine the formal properties of modern and post-modern designs. In Chapter 1, 'Issues and Background of the Study', I investigated Gestalt theory and Design Pragnanz, on the premise of a formal distinction between modern and post-modern designs. These theories are founded upon the discussion of the visual perception of Gestalt in Germany in the 1910s, in pursuit of a principle of perception centered on the visual perception of human beings. In Chapter 2, I dealt with the functionalism of modern design, as preparation for the further study of the product design of the late 20th century. First of all, in Chapter 2-1, I examined the tendency of modern design focused on functionalism, exemplified by the famous statement 'Form follows function'. Excluding all unessential elements in design, for example decoration, this tendency attained the position of the International Style based on the spirit of the Bauhaus, universality and regularity, in search of geometric order, standardization, and rationalization. In Chapter 2-2, I investigated the anthropological viewpoint that modern design came to represent culture in a symbolic way, encompassing overall aspects of society (politics, economics, and ethics), and its criticism of functionalist design, namely that aesthetic value is lost in exchange for excessive simplicity of style. Moreover, I examined pluralist phenomena in post-modern design such as kitsch, eclecticism, reactionism, hi-tech, and digital design, breaking away from the functionalist purism of modern design.
In Chapter 3, I analyzed Gestalt Pragnanz in design in a practical way, against the background of design trends. To begin with, I selected mass product design among 20th-century products as the target of analysis, highlighting representative styles in each product category. For this analysis, I adopted the theory of J. M. Lehnhardt, who graded in percentages the aesthetic and semantic levels of Pragnanz in design expression, and that of J. K. Grutter, who expressed it in the formula M = O : C. I also employed eight units of dichotomies, according to G. D. Birkhoff's aesthetic criteria, for the purpose of a scientific classification of the degree of order and complexity in design, and I analyzed the phenomenal aspects of design form represented in each unit. For Chapter 4, I conducted a questionnaire on the semiological phenomena of Design Pragnanz with 28 pairs of antonymous adjectives, based upon the research in the previous chapter, and then analyzed the process of signification of Design Pragnanz founded on this research. Furthermore, the interpretation of the analysis served as an explanation of preference, through a systematic analysis of Gestalt and Design Pragnanz in the product design of the late 20th century. In Chapter 5, I determined the position of Design Pragnanz by integrating the analyses of Gestalt and Pragnanz in modern and post-modern designs. In this process, I revealed the differences of each Design Pragnanz in formal respects, in order to suggest a vision of the future that will provide systemic and structural stimulation to current design.
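For reference on the formula cited above: the M = O : C notation echoes Birkhoff's aesthetic measure, conventionally written as the ratio of order to complexity. A standard statement (not the thesis's own derivation) is:

```latex
% Birkhoff's aesthetic measure: aesthetic value M grows with perceived
% order O and shrinks with complexity C.
M = \frac{O}{C}
```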


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement; consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base; given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves per trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available; for this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and, if necessary, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (the ground count) to the total assigned volume; this factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected link based analyses are conducted using both 16 and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by the 32 selected links is 107% of total trip productions; more importantly, SELINK adjustment factors can be computed for all of the zones. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route-specific volume analysis, area-specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis, using four screenlines with 28 check points, is used to evaluate the adequacy of the overall model: the total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run, using 32 and 16 selected links, is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively, implying that the SELINK analysis results are reasonable for all sections of the state. Functional class and route-specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH, 37 check points), the US highways (USH, 50 check points), and the State highways (STH, 67 check points) is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest, a pattern consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area-specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area (26 check points), the West area (36 check points), the East area (29 check points), and the South area (64 check points) is compared to the actual ground count totals. The four areas show similar results, with no specific patterns in the LV/GC ratio by area. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions / total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model; additional variables, including zonal employment data (office and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors around 3.0. No obvious explanation for the frequency distribution could be found, but the revised SELINK adjustments appear overall to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed figure; the forecast is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners in understanding the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied, with ISH 90 & 94 and USH 41 used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
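A minimal sketch of the SELINK-style adjustment arithmetic and the %RMSE measure described above, under the assumption that %RMSE is the root-mean-square error expressed as a percentage of the mean ground count (the abstract does not spell out its exact formula):

```python
import numpy as np

def link_adjustment_factor(ground_count, assigned_volume):
    """SELINK-style factor: actual link volume (ground count) over total
    assigned volume; applied to the productions/attractions of every zone
    whose trips use the selected link."""
    return ground_count / assigned_volume

def pct_rmse(assigned, counts):
    """%RMSE of assigned volumes vs. ground counts, relative to the mean count
    (assumed formulation)."""
    assigned, counts = np.asarray(assigned, float), np.asarray(counts, float)
    return 100.0 * np.sqrt(np.mean((assigned - counts) ** 2)) / counts.mean()

# Illustrative check-point volumes for one screenline
counts = np.array([1200.0, 950.0, 430.0, 2100.0])
assigned = np.array([1100.0, 1010.0, 520.0, 1890.0])
print(f"LV/GC = {assigned.sum() / counts.sum():.3f}, "
      f"%RMSE = {pct_rmse(assigned, counts):.1f}%")
```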
