• Title/Summary/Keyword: Similar Systems


Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.49-65
    • /
    • 2014
  • Word of Mouth (WOM) is a behavior consumers use to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average degree centrality of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average betweenness centrality of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average closeness centrality of reviewers in online product communities is positively related to the total revenue of movies. To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis are the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, and community data from IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo are collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities (degree, betweenness, and closeness) for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis shows that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the averages of betweenness centralities from the initial community networks impact the final movie revenues, whereas the averages of degree centralities and closeness centralities do not influence final movie performance. Based on the regression results, hypotheses 1, 2, and 4 are accepted, and hypotheses 3 and 5 are rejected.
This study tries to link the network structure of e-WOM in online product communities with product performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance: the betweenness centralities of the reviewer community are critical for predicting movie performance, while degree centralities and closeness centralities do not influence it. As future research, similar analyses of other product categories such as electronic goods and online content are required to generalize the results.
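For readers who want to reproduce this kind of analysis, the sketch below illustrates the general workflow (per-movie reply networks, averaged centralities, and one regression per centrality measure). It is not the authors' code: it uses NetworkX and statsmodels rather than NodeXL, and the column names (movie_id, reviewer, responder, wom_volume, wom_valence, revenue) are assumptions for illustration.

```python
# Hypothetical sketch of the centrality-plus-WOM regression described above.
# Not the authors' code: column names and libraries are assumptions.
import networkx as nx
import pandas as pd
import statsmodels.api as sm

def movie_centrality_averages(replies: pd.DataFrame) -> pd.DataFrame:
    """Build one reply network per movie and average three centralities."""
    rows = []
    for movie_id, grp in replies.groupby("movie_id"):
        G = nx.Graph()
        G.add_edges_from(zip(grp["reviewer"], grp["responder"]))
        rows.append({
            "movie_id": movie_id,
            "avg_degree": pd.Series(nx.degree_centrality(G)).mean(),
            "avg_betweenness": pd.Series(nx.betweenness_centrality(G)).mean(),
            "avg_closeness": pd.Series(nx.closeness_centrality(G)).mean(),
        })
    return pd.DataFrame(rows)

def regress_revenue(features: pd.DataFrame, centrality_col: str):
    """One regression per centrality measure, since the three are highly correlated."""
    X = sm.add_constant(features[["wom_volume", "wom_valence", centrality_col]])
    return sm.OLS(features["revenue"], X).fit()
```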

Influence of Fertilizer Type on Physiological Responses during Vegetative Growth in 'Seolhyang' Strawberry (생리적 반응이 다른 비료 종류가 '설향' 딸기의 영양생장에 미치는 영향)

  • Lee, Hee Su;Jang, Hyun Ho;Choi, Jong Myung;Kim, Dae Young
    • Horticultural Science & Technology
    • /
    • v.33 no.1
    • /
    • pp.39-46
    • /
    • 2015
  • The objective of this research was to investigate the influence of the composition and concentration of fertilizer solutions on the vegetative growth and nutrient uptake of 'Seolhyang' strawberry. To achieve this, solutions of acid fertilizer (AF), neutral fertilizer (NF), and basic fertilizer (BF) were prepared at concentrations of 100 or 200 mg·L⁻¹ based on N and applied during the 100 days after transplanting. The changes in chemical properties of the soil solution were analysed every two weeks, and crop growth measurements as well as tissue analyses for mineral contents were conducted 100 days after fertilization. Growth was highest in the treatments with BF, followed by those with NF and AF. The heaviest fresh and dry weights among treatments were 151.3 and 37.8 g, respectively, with BF 200 mg·L⁻¹. In terms of tissue nutrient contents, the highest N, P, and Na contents, of 3.08, 0.54, and 0.10%, respectively, were observed in the treatment with NF 200 mg·L⁻¹. The highest K content was 2.83%, in the treatment with AF 200 mg·L⁻¹, while the highest Ca and Mg contents were 0.98 and 0.42%, respectively, in BF 100 mg·L⁻¹. The AF treatments had higher tissue Fe, Mn, Zn, and Cu contents than NF or BF when fertilizer concentrations were equal. During the 100 days after fertilization, the highest and lowest pH in the soil solution of the root media among all treatments were 6.67 in BF 100 mg·L⁻¹ and 4.69 in AF 200 mg·L⁻¹, respectively. The highest and lowest ECs were 5.132 dS·m⁻¹ in BF 200 mg·L⁻¹ and 1.448 dS·m⁻¹ in BF 100 mg·L⁻¹, respectively. For the concentrations of macronutrients in the soil solution of the root media, the AF 200 mg·L⁻¹ treatment gave the highest NH₄ concentrations, followed by NF 200 mg·L⁻¹ and AF 100 mg·L⁻¹. The K concentrations rose gradually after day 42 in all treatments. When fertilizer concentrations were equal, the highest Ca and Mg concentrations were observed in AF, followed by NF and BF, until day 84 of fertilization. The BF treatments produced the highest NO₃ concentrations, followed by NF and AF. The trends in the change of PO₄ concentration were similar in all treatments. The SO₄ concentrations were higher in treatments with AF than in those with NF or BF until day 70 of fertilization. These results indicate that the composition of the fertilizer solution should be modified to contain more alkali nutrients when 'Seolhyang' strawberry is cultivated in inert media and nutri-culture systems.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming more important as ever more content is generated. In this flood of information, efforts are being made to better reflect user intent in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing human-labeled text data becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike prior work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. This study thus has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be readily applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% of reports as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports, a hit ratio that is meaningfully high despite several research constraints. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; notably, the fact that model performance is especially poor for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used to semantically match new text information with the related stocks.
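As a rough illustration of the scoring step described above, the sketch below implements a generic neural tensor network scorer in the form popularized by Socher et al. It is not the authors' implementation: the slice count, dimensions, and the exact pairing of inputs (here two entity vectors) are assumptions, and in the study one such scorer would be trained per stock.

```python
# Hypothetical per-stock neural tensor network (NTN) scorer; a sketch only.
# Entity vectors are assumed to be one-hot encodings of the top-100 entities.
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    """Scores how strongly a pair of entity vectors relates to one stock."""
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear slices
        self.V = nn.Linear(2 * dim, k)                           # linear term
        self.u = nn.Linear(k, 1, bias=False)                     # output weights

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # e1, e2: (batch, dim); bilinear term e1^T W_k e2 for each slice k
        bilinear = torch.einsum("bd,kde,be->bk", e1, self.W, e2)
        hidden = torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=-1)))
        return self.u(hidden).squeeze(-1)

# One scorer per stock: a new entity is assigned to the stock whose scorer
# returns the highest score (names and usage here are illustrative only).
```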

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapsed at once. The key variables used in corporate default prediction vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise examined the changing importance of predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that remains consistent over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and shows excellent predictive power. After that, each bankruptcy prediction model is retrained by integrating the training and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model, trained over nine years, is evaluated and compared using the test data (2009). In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multicollinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in predictive power. In the context of the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into everyday life, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who start studies combining financial data with deep learning time series algorithms.
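A minimal sketch of the kind of deep learning time series classifier discussed above is shown below, assuming an LSTM over yearly sequences of financial ratios. It is not the authors' model: the feature count, hidden size, and training split are placeholders for illustration.

```python
# Hypothetical LSTM default classifier over yearly financial-ratio sequences,
# in the spirit of the model described above; not the authors' code.
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, years, n_features), e.g. 7 years of financial ratios
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)  # default probability

model = DefaultLSTM(n_features=20)
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training would iterate over (sequence, default_label) pairs split by period,
# e.g. fit on 2000-2006, tune on 2007-2008, test on 2009, as in the study.
```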

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming increasingly important. System monitoring data are multidimensional time series data, and when we deal with such data we face the difficulty of considering both the characteristics of multidimensional data and the characteristics of time series data. When dealing with multidimensional data, correlations between variables should be considered, but existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. In addition, time series data are usually preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field in which statistical methods and regression analysis were used in the early days; currently, there are active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values. Anomaly detection using regression analysis has the disadvantage that performance drops when the model is not solid or when the data contain noise or outliers, and it is restricted by the need for training data free of noise or outliers. An autoencoder based on artificial neural networks is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, in anomaly detection it is still limited in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly because of the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to mitigate the limitations of local outlier identification in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition to learn periodicity. The CMAE model proposed in this paper was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance for 41 variables was confirmed for the proposed model and the comparison models. Reconstruction performance differs by variable; the Memory, Disk, and Network modalities are reconstructed well, with small loss values, in all three autoencoder models.
The Process modality showed no significant difference across the three models, and the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, and UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The model's accuracy also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra preprocessing procedures, and the resulting dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
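The sketch below illustrates one way a conditional multimodal autoencoder of the kind described above could be structured: per-modality encoders and decoders sharing a bottleneck, with a time condition appended to the inputs. It is a sketch under assumptions (modal names, layer widths, and the time encoding are not from the paper), not the authors' implementation.

```python
# Hypothetical conditional multimodal autoencoder (CMAE) for system-monitoring
# metrics; modal sizes, time encoding, and widths are illustrative assumptions.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims: dict, cond_dim: int, bottleneck: int = 8):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d + cond_dim, 16), nn.ReLU())
            for name, d in modal_dims.items()
        })
        total = 16 * len(modal_dims)
        self.bottleneck = nn.Linear(total, bottleneck)      # shared bottleneck
        self.expand = nn.Linear(bottleneck + cond_dim, total)
        self.decoders = nn.ModuleDict({
            name: nn.Linear(16, d) for name, d in modal_dims.items()
        })
        self.names = list(modal_dims)

    def forward(self, modals: dict, cond: torch.Tensor) -> dict:
        # Each modality is encoded with the time condition appended.
        encoded = [self.encoders[n](torch.cat([modals[n], cond], -1)) for n in self.names]
        z = self.bottleneck(torch.cat(encoded, -1))
        h = self.expand(torch.cat([z, cond], -1)).chunk(len(self.names), dim=-1)
        return {n: self.decoders[n](h_i) for n, h_i in zip(self.names, h)}

# Anomaly score: per-sample reconstruction error summed over modalities;
# samples whose error exceeds a validation-set threshold are flagged.
```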

A Study on the Characteristics of Enterprise R&D Capabilities Using Data Mining (데이터마이닝을 활용한 기업 R&D역량 특성에 관한 탐색 연구)

  • Kim, Sang-Gook;Lim, Jung-Sun;Park, Wan
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.1-21
    • /
    • 2021
  • As the global business environment changes, uncertainties in technology development and market needs increase, and competition among companies intensifies, interest in and demand for the R&D activities of individual companies are increasing. To cope with these environmental changes, R&D companies are strengthening R&D investment as one means of enhancing the qualitative competitiveness of R&D while also paying more attention to facility investment. As a result, facility and R&D investments inevitably become a burden that R&D companies must bear against future uncertainty, and the management strategy of increasing R&D investment as a means of enhancing R&D capability remains highly uncertain in terms of corporate performance. In this study, the structural factors that influence the R&D capabilities of companies are explored in terms of technology management capability, R&D capability, and corporate classification attributes by utilizing data mining techniques, and the characteristics these individual factors present according to the level of R&D capability are analyzed. This study also presents cluster analysis and experimental results based on evidence data for all domestic R&D companies, and it is expected to provide important implications for corporate management strategies to enhance the R&D capabilities of individual companies. For the three viewpoints, 7, 2, and 4 detailed evaluation indexes, respectively, were composed to quantitatively measure individual levels in the corresponding areas. For technology management capability and R&D capability, the sub-item evaluation indexes used by current domestic technology evaluation agencies were referenced, and the final detailed evaluation indexes were newly constructed in consideration of whether data could be obtained quantitatively. For corporate classification attributes, the most basic corporate classification profile information is considered. In particular, in order to grasp the homogeneity of the R&D competency level, a comprehensive score for each company was computed using the detailed evaluation indicators of technology management capability and R&D capability, and the competency level was classified into five grades and compared with the cluster analysis results. To interpret the comparison between the derived clusters and the competency grades, clusters with high and low R&D competency levels were identified, and the characteristics of the detailed evaluation indicators were then analyzed within each such cluster. Through this approach, two clusters with high R&D competency and one cluster with a low level of R&D competency were identified, while the remaining two clusters were broadly similar to each other. As a result, in this study, individual characteristics according to the detailed evaluation indexes were analyzed for the two clusters with a high competency level and the one cluster with a low competency level. One implication of the results is that the faster the replacement cycle of professional managers who can effectively respond to changes in technology and market demand, the more likely they are to contribute to enhancing R&D capabilities.
In the case of a non-incorporated (private) company, it is necessary to increase the intensity of R&D input by enhancing R&D personnel's sense of belonging to the company through incorporation, and to clarify responsibility and authority through team-level organization. Since technology commercialization achievements and technology certifications occur both where they contribute to capability improvement and where they do not, they were confirmed to be of limited use as management-level factors for enhancing R&D capability. Lastly, experience with utility model filings was identified as a factor with an important influence on R&D capability, confirming the need to provide incentives that encourage utility model filings in order to enhance R&D capability. As such, the results of this study are expected to provide important implications for corporate management strategies to enhance individual companies' R&D capabilities.
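As an illustration of the cluster-versus-grade comparison described above, the sketch below clusters companies on their detailed evaluation indexes and cross-tabulates the clusters against five competency grades. The paper does not specify the exact clustering algorithm, so k-means is used here as a stand-in, and the column names and grading scheme are assumptions.

```python
# Hypothetical sketch of clustering companies on evaluation indexes and
# comparing clusters with five competency grades; not the authors' method.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_and_grade(companies: pd.DataFrame, index_cols: list, n_clusters: int = 5):
    """Cluster companies on detailed evaluation indexes and cross-tabulate
    the clusters against five grades derived from a comprehensive score."""
    X = StandardScaler().fit_transform(companies[index_cols])
    companies = companies.copy()
    companies["cluster"] = KMeans(n_clusters=n_clusters, n_init=10,
                                  random_state=0).fit_predict(X)
    # Comprehensive score = sum of capability indexes; five equal-frequency grades.
    companies["grade"] = pd.qcut(companies[index_cols].sum(axis=1),
                                 q=5, labels=["E", "D", "C", "B", "A"])
    return pd.crosstab(companies["cluster"], companies["grade"])
```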

Identification of Sorption Characteristics of Cesium for the Improved Coal Mine Drainage Treated Sludge (CMDS) by the Addition of Na and S (석탄광산배수처리슬러지에 Na와 S를 첨가하여 개량한 흡착제의 세슘 흡착 특성 규명)

  • Soyoung Jeon;Danu Kim;Jeonghyeon Byeon;Daehyun Shin;Minjune Yang;Minhee Lee
    • Economic and Environmental Geology
    • /
    • v.56 no.2
    • /
    • pp.125-138
    • /
    • 2023
  • Most previous cesium (Cs) sorbents have limitations when treating large-scale water systems with low Cs concentrations and high ionic strength. In this study, a new eco-friendly Cs sorbent with high Cs removal efficiency was developed by improving coal mine drainage treated sludge (hereafter 'CMDS') through the addition of Na and S. The sludge produced through the treatment process for mine drainage originating from an abandoned coal mine was used as the primary material for developing the new Cs sorbent because of its high Ca and Fe contents. The CMDS was improved by adding Na and S during the heat treatment process (the developed sorbent is hereafter called 'Na-S-CMDS'). Laboratory experiments and sorption model studies were performed to evaluate the Cs sorption capacity and to understand the Cs sorption mechanisms of the Na-S-CMDS. The physicochemical and mineralogical properties of the Na-S-CMDS were also investigated through various analyses, such as XRF, XRD, SEM/EDS, and XPS. In batch sorption experiments, the Na-S-CMDS showed a fast sorption rate (equilibrium within a few hours) and a very high Cs removal efficiency (> 90.0%) even at low Cs concentrations in solution (< 0.5 mg/L). The experimental results were well fitted by the Langmuir isotherm model, suggesting mostly monolayer sorption of Cs on the Na-S-CMDS. Kinetic model studies showed that Cs sorption on the Na-S-CMDS followed the pseudo-second-order model curve, indicating that a more complicated chemical sorption process occurs rather than simple physical adsorption. XRF and XRD analyses of the Na-S-CMDS after Cs sorption showed that the Na content clearly decreased and that the erdite (NaFeS2·2(H2O)) disappeared, suggesting that active ion exchange between Na+ and Cs+ occurred on the Na-S-CMDS during the Cs sorption process. XPS analysis revealed a strong interaction between Cs and S in the Na-S-CMDS, indicating that the high Cs sorption capacity results from binding between Cs and S (or S-complexes). The results of this study support that the Na-S-CMDS has outstanding potential for removing Cs from radioactively contaminated water systems such as seawater and groundwater, which have high ionic strength but low Cs concentrations.
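For reference, the two sorption models named above have standard textbook forms; the equations below use common symbols (q_e, q_max, K_L, C_e, q_t, k_2), not notation taken from the paper.

```latex
% Standard forms of the two sorption models referenced above (textbook
% notation, not the paper's own symbols).
% Langmuir isotherm: q_e = sorbed amount at equilibrium, C_e = equilibrium
% concentration, q_max = monolayer capacity, K_L = Langmuir constant.
\[
q_e \;=\; \frac{q_{\max}\, K_L\, C_e}{1 + K_L\, C_e}
\]
% Pseudo-second-order kinetics (linearized form): q_t = sorbed amount at
% time t, k_2 = rate constant.
\[
\frac{t}{q_t} \;=\; \frac{1}{k_2\, q_e^{2}} \;+\; \frac{t}{q_e}
\]
```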

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments, that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run, using 32 and 16 selected links, is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
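As a rough illustration of the gravity-model trip distribution underlying the methodology above, the sketch below distributes productions to attractions through a friction factor curve with simple iterative balancing. It is not WisDOT's or the author's code: the friction factor form and the balancing loop are generic assumptions.

```python
# Hypothetical gravity-model trip distribution with friction factors;
# productions P, attractions A, and the friction function are assumptions.
import numpy as np

def gravity_model(P: np.ndarray, A: np.ndarray, cost: np.ndarray,
                  friction, n_iter: int = 20) -> np.ndarray:
    """Distribute truck trips: T_ij proportional to P_i * A_j * F(c_ij),
    with attractions iteratively corrected toward their targets."""
    F = friction(cost)                      # friction factor matrix F_ij
    A_adj = A.astype(float).copy()
    for _ in range(n_iter):
        T = P[:, None] * (A_adj[None, :] * F)
        T = T / np.maximum(T.sum(axis=1, keepdims=True), 1e-9)
        T *= P[:, None]                     # rows now sum to the productions
        A_adj *= A / np.maximum(T.sum(axis=0), 1e-9)  # correct attraction totals
    return T

# Example friction factor curve (negative exponential, a common assumption):
# trips = gravity_model(P, A, cost_matrix, friction=lambda c: np.exp(-0.1 * c))
```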


On the Bibliographies of Chinese Historical Books - Classifying and cataloguing system of six historical bibliographies - (중국의 사지서목에 대하여 -육사예문$\cdot$경적지의 분류 및 편목체재 비교를 중심으로-)

  • Kang Soon-Ae
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.24
    • /
    • pp.289-332
    • /
    • 1993
  • In China, the six bibliographies in the official historical books are regarded as the most important among the systematically edited bibliographies. These bibliographies are useful for studying the origin and development of the classical sciences, for bibliographic research on Chinese classics, and for bibliographic judgements on genuine books, titles, authors, and volumes. They can also be referred to in research into the engraving, correcting, and existence of ancient books; therefore, these bibliographies can be applied to estimating the level of scientific and cultural development. These bibliographies have not yet been studied in Korea. This thesis focuses on the background of their appearance, their classification norms, the organizing systems of their catalogues, and comparison of their differences. 1. The editing and compiling of Chilyak (칠약) by Liu Chin (유흠) and of the official histories played an important role in the appearance of historical books' bibliographies. Chilyak has been lost. However, its classification and compiling system for classical books can be traced through Hansoyemunji (한서예문지), whose basic system is similar to Chilyak's. It classified books according to their scholarly characteristics. If a few books had no category of their own, they were grouped into circles parallel to the books' characteristics. Books classified under the same scholarly characteristic were again divided into scholarly schools or structures. It also arranged books of the same kind chronologically, and books with duplicate subjects were classified under each of their duplicate subjects. 2. Ssu-ma Chon's (사마천) The Historical Records (Saki, 사기) and Pan Ku's (반고) The History of the Former Han Dynasty (Hanso, 한서) also influenced the appearance of historical books' bibliographies. Covering overall history, Saki was structured in five parts: the basic annals (본기), the chronological tables (표), the documents (서), the hereditary houses (세가), and biographies (열전). The basic annals dealt with kings' and courts' affairs chronologically. The chronological tables were records of the annals. The documents described the overall social and cultural systems. The hereditary houses recorded the courts' meritorious officials and public figures. The biographies presented exemplars of seventy people selected by their social status. Pan Ku's (반고) The History of the Former Han Dynasty (한서) deserves to be called the prototype for the official histories after the appearance of Saki (사기, The Historical Records). Although modelled on Saki, it set up its own cataloguing system, organized in four parts: the basic annals (본기), the chronological tables (표), treatises (지), and biographies (열전). The documents were converted into treatises (지) in the Hanso (한서), and the hereditary houses and biographies were merged. For the first time, a treatise, the Yemunji, could function as a historical bibliography. 3. There were six historical bibliographies: Hansoyemunji (한서예문지), Susokyongjeokji (수서경적지), Kudangsokyongjeokji (구당서경적지), Shindangsoyemunji (신당서예문지), Songsayemunji (송사예문지), and Myongsayemunji (명사예문지). 1) Modelled on Liu Chin's Chilyak except for Chipryak (집략), Hansoyemunji divided books and documents by their characteristics into six parts: Yukrye (육예), Cheja (제자), Shibu (시부), Pyongsoh (병서), Susul (수술), and Pangki (방기). Under the six parts, there were thirty-eight orders in Hansoyemunji.
For its own classification, Hansoyemunji applied Chilyak's theory of classification, in which books or documents were managed according to the characteristics of the sciences, the differences of schools, and the organization of sentences. However, overlapping subjects were deleted and unified into one, and books included under an unsuitable subject were corrected and moved to another. The Hansoyemunji consisted of a main preface (Taesoh 대서), minor prefaces (Sosoh 소서), and a general preface (Chongso 총서). It also recorded the introduction of books and documents, the origin of the sciences, the outline of subjects, and the establishment of orders. The books classified by subject carried title, author, and volume count, and were rearranged by title and chronological publication year. Sometimes the author was the first access point for cataloguing the books. If footnotes were necessary, detailed notes were added. The volume number, written consecutively after order and subject, clarified the quantity of books. 2) Referring to the classification system by seven norms (칠분법) and the classification system by four norms (사분법), Susokyongjeokji (수서경적지) accomplished classification by four norms. In fact, its classification largely imitated Wanhyosoh's (완효서) Chilrok (칠록). Susokyongjeokji's system of classification consisted of four parts, Kyung (경), Sa (사), Cha (자), and Chip (집), divided into 40 orders. Its appendix was again divided into two parts, Buddhism and Taoism, with fifteen orders under them. In total, Susokyongjeokji was made up of six parts and fifty-five orders. In comparison with Hansoyemunji (한서예문지), it clearly showed the conception of Kyung, Sa, Cha, and Chip. It is especially noteworthy that history, which Hansoyemunji had placed under Chunchu (춘추), was separated and moved to Sabu (사부). However, Chabu (자부) placed many disparate subjects such as Cheja (제자), Kiye (기예), Sulsu (술수), and Sosol (소설) within the same boundary, an error with insufficient theoretical basis. Another demerit of Susokyongjeokji was that it dealt with Taoist and Buddhist scriptures in the appendix because they were considered quasi-religions. Its compilation of bibliographical facts consisted of a main preface (Taesoh 대서), minor prefaces (Sosoh 소서), a general preface (Chongsoh 총서), and a postscript (Husoh 후서). Its bibliographical facts mainly focused on the titles. It recorded authors' birth dates and positions. It noted the loss or existence of books after the total number of books, which revealed the total of lost books in the Sui Dynasty. 3) Modelled on Kokumsorok (고분서록) and Naewaekyongrok (내외경록), Kudangsokyongjeokji (구당서경적지) had four parts and forty-five orders. It is regarded as having played an important role in establishing the basic frame of classification by four norms in the history of classification theory. However, it also had its own limits. The editing and compiling of its orders did not progress; its orders by and large imitated Susokyongjeokji. In its system of organizing the catalogue, with the minor prefaces and general preface deleted, Kudangsokyongjeokji listed books by title after each order, which sometimes caused confusion because of unclear boundaries between orders. 4) Shindangsoyemunji (신당서예문지), adding 28,469 books to Kudangsokyongjeokji, recorded 82,384 books, which were divided into four parts and forty-four orders. In comparison with Kudangsokyongjeokji, Shindangsoyemunji corrected unclear order norms.
It merged four orders with analogous norms (for instance, Kohun 고훈 and Sohakryu 소학류) and separated four orders with different norms (for example, Hyokyong 효경 and Noneuhryu 논어류, Chamwi 참위 and Kyonghaeryu 경해류, Pyonryon 편년 and Wisaryu 위사류). Recording kings' behaviors and speeches (Kikochuryu 기거주류) in the historical part introduced the concept of a specialized category. For the first time, the Chipbu (집부) part established an order for historical and literary books and documents (Munsaryu 문사류). Its editing and compiling were more simplified than Kudangsokyongjeokji's. The introduction was written in the first part of the bibliography, and appendages other than bibliographic items such as subject, author, title, volume number, and totals were omitted. 5) Songsayemunji (송사예문지) was edited on the basis of combining Puksong (북송) and Namsong (남송), relying on Sabukuksayemunji (사부국사예문지). In general, Songsayemunji lost many bibliographical facts about many books; books were duplicated and wrongly classified because of errors in its annalistic editing, and these defects appeared particularly in the Namsong portion. Songsayemunji did not include the books published after King Youngchong (영종). Its system of classification was better controlled. Chamwiryu (참위류) in the Kyongbu (경부) part was omitted. In the history part (Sabu 사부), records of kings' behaviors and speeches were further merged into the annals, and historical draft documents (Sachoryu 사초류) were separately arranged. In the Chabu (자부) part, Myongdangkyongmaekryu (명당경맥류) and Euisulryu (의술류) were combined, and Ohangryu (오행류) was separated from Shikuryu (시구류). In the Chipbu (집부) part, historical and literary books (Munsaryu 문사류) were independently arranged. Some orders were renamed: from Wisa (위사) to Paesa (패사), Chapsa (잡사) to Pyolsa (열사), Chapchonki (잡전기) to Chonki (전기), and Ryusoh (류서) to Ryusa (류서). The introduction had only a main preface. The books of each subject were catalogued by title, volume number, and author, and arranged mainly by author. Annotations were written consecutively after the title and volume number. An afternote revealed the number of books not treated. The differences from Shindangsoyemunji (신당서예문지) were that the concepts and boundaries of orders became clearer and that the number of books was written after the main subject. 6) Modelled on Chonkyongdangsomok (경당서목), Myongsayemunji (명사예문지) was compiled on the basis of books and documents published in the Ming Dynasty. In its classification system, Myongsayemunji partly merged and partly separated some orders, and it also deleted and renamed some orders. Where necessary, order norms were combined, particularly in the Sabu (사부) and Chabu (자부) parts, but this merging of order norms lacked sufficient theoretical grounding. For example, such demerits were seen where annalistic histories were combined with official histories that were compiled and edited differently from them. In the Chabu (자부) part, another confusion arose in that the thoughts of Pubga (법가), Meongga (명가), Mukga (묵가), and Chonghweongka (종횡가) were classified under Chapka (잡가). Scriptures of Taoism and Buddhism were separated from each other. Some orders were deleted, such as Mokrokryu (목록류) and Paesaryu (패사류) in the history part (Sabu 사부) and Chosaryu (초사류) in the Chipbu (집부) part, and some orders in each part were renamed.
Imitating the compiling system of Songsayemunji (송사예문지) while noting its differences, Myongsayemunji (명사예문지) recorded reviews of and changes to the books by author. The number of books not treated did not appear in the total, and the total following the main subject was also deleted.


A Comparative Study of Domestic and International regulation on Mixed-fleet Flying of Flight crew (운항승무원의 항공기 2개 형식 운항관련 국내외 기준 비교 연구)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.403-425
    • /
    • 2015
  • The Chicago Convention and its Annexes have become the basis of aviation safety regulations for every contracting State. Generally, a State's aviation safety regulations refer to the Standards and Recommended Practices (SARPs) provided in the Annexes of the Chicago Convention. In order to properly reflect international aviation safety regulations, constant study of the aviation field is of paramount importance. This paper is intended to identify the main differences between Korean and foreign regulations and to suggest a few amendment proposals on mixed-fleet flying (operation of two or more aircraft types) by flight crew. Compared with these regulations, the Korean regulations and their implementation have some insufficiencies, so I suggest some amendment proposals to the Korean regulations concerning mixed-fleet flying, in which flight crew operate aircraft of different types. Basically, an operator shall not assign a pilot-in-command or a co-pilot to operate at the flight controls of a type of airplane during take-off and landing unless that pilot has operated the flight controls during at least three take-offs and landings within the preceding 90 days on the same type of airplane or in a flight simulator. Also, flight crew members must be familiarized with the significant differences in equipment and/or procedures between concurrently operated types. An operator shall ensure that piloting technique and the ability to execute emergency procedures are checked in such a way as to demonstrate the pilot's competence on each type or variant of a type of airplane, and proficiency checks shall be performed periodically. When an operator schedules flight crew on different types of airplanes with similar characteristics in terms of operating procedures, systems and handling, the State shall decide whether the requirements for each type of airplane can be combined. In conclusion, it is necessary for flight crew members to remain concurrently qualified to operate multiple types. The operator shall have a program that includes, as a minimum, required differences training between types and qualification to maintain currency on each type. If the operator utilizes flight crew members to concurrently operate aircraft of different types, the operator shall have qualification processes approved or accepted by the State; if applicable, the qualification curriculum as defined in the operator's Advanced Qualification Program could be applied. The differences among aircraft types are decreasing, and common standards can increasingly be applied to them, because manufacturers have improved aircraft functions and performance around common basic systems when developing new aircraft for standard flight procedures and flight safety. It is also becoming more necessary for flight crews to operate multiple aircraft types because of diverse aviation businesses and the growth of the leisure business. Nevertheless, in terms of flight crew training and qualification programs, Korea has no regulations that treat new aircraft types differently according to their level of difference. In addition, operators cannot choose different programs based on those levels, because there are no provisions restricting or limiting, and no specific standards for, the operation of two or more aircraft types for flight safety.
Therefore, the aviation authority should introduce a Flight Standardization Board and/or Operational Evaluation Board in order to analyze the differences among aircraft types. In addition, the aviation authority should improve the standard flight evaluation and qualification system across different aircraft types so that reasonable training and qualification can be applied to flight crews efficiently. For all the issues mentioned above, I have studied the ICAO SARPs and some States' regulations concerning the operation of aircraft of different types (mixed-fleet flying), and I suggest some proposals on different aircraft type operation as an example of comprehensive problem solving. I hope that this paper will 1) help improve understanding of the international issue, 2) help improve Korean aviation regulations, and 3) help ensure compliance with international standards and contribute to the promotion of aviation safety.