The objective of this study was to develop an in vitro assessment of the sperm fertilizing capacity of bulls and to investigate the factors influencing sperm function and characteristics of frozen-thawed bovine spermatozoa. In vitro fertilization (IVF), evaluation of motility and normal morphology, the hypoosmotic swelling test (HOST), Ca-ionophore-induced acrosome reaction, luminol- and lucigenin-dependent chemiluminescence for the measurement of reactive oxygen species (ROS), measurement of malondialdehyde formation for the analysis of lipid peroxidation (LPO), and evaluation of DNA fragmentation by TdT-mediated dUTP nick end labelling (TUNEL) with flow cytometry were performed on frozen-thawed bovine spermatozoa. Correlations between the rates of fertilization and blastocyst formation after IVF and the values of the respective assays were investigated. 1. The IVF rate and blastocyst formation rate averaged 64.4% and 34.3% for spermatozoa from the high-fertility bull group and 18.5% and 6.2% for spermatozoa from the low-fertility bull group, respectively; these differed significantly between the two bull groups. Sperm motility and the percentage of acrosome reaction averaged 79.0% and 66.2% for spermatozoa from the high-fertility bull group and 40.7% and 22.9% for spermatozoa from the low-fertility bull group, respectively; these did not differ significantly between the two bull groups. 2. Luminol-dependent chemiluminescence, LPO, and DNA fragmentation averaged 6.4, 2.0 nmol, and 2.6% for spermatozoa from the high-fertility bull group and 6.5, 3.1 nmol, and 7.4% for spermatozoa from the low-fertility bull group, respectively; these differed significantly between the two bull groups. There was no significant difference in lucigenin-dependent chemiluminescence between the two bull groups. 3. Fertilization rate was positively correlated with motility and the rate of Ca-ionophore-induced acrosome reaction, but negatively correlated with the frequency of luminol-dependent chemiluminescence, the rate of LPO, and the percentage of sperm with DNA fragmentation. There was no correlation between fertilization rate and the percentage of swollen spermatozoa, normal morphology, or the frequency of lucigenin-dependent chemiluminescence. 4. Blastocyst formation rate was positively correlated with the rate of Ca-ionophore-induced acrosome reaction, but negatively correlated with the frequency of luminol-dependent chemiluminescence, the rate of LPO, and the percentage of sperm with DNA fragmentation. There was no correlation between blastocyst formation rate and motility, the percentage of swollen spermatozoa, normal morphology, or the frequency of lucigenin-dependent chemiluminescence. In conclusion, these data suggest that ROS significantly impact semen quality. The assays of this study may provide a basis for improving in vitro assessment of sperm fertilizing capacity.
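To illustrate the correlation step described in this abstract, the following is a minimal Python sketch, not the authors' analysis code: it computes Pearson correlations between bull-level assay values and the IVF rate. The per-bull numbers are hypothetical placeholders loosely seeded from the group means quoted above, not study data.

```python
# Minimal sketch of the kind of correlation analysis described above.
# The bull-level values below are hypothetical placeholders, not study data.
from scipy.stats import pearsonr

bulls = {
    "fertilization_rate": [64.4, 61.0, 58.2, 20.1, 18.5, 16.9],   # % per bull (hypothetical)
    "motility":           [79.0, 76.5, 74.0, 45.0, 40.7, 38.0],   # %
    "acrosome_reaction":  [66.2, 63.0, 60.1, 25.0, 22.9, 20.5],   # % after Ca-ionophore
    "luminol_ROS":        [6.4, 6.3, 6.6, 6.5, 6.7, 6.4],         # chemiluminescence (a.u.)
    "lpo_mda":            [2.0, 2.1, 2.2, 3.0, 3.1, 3.3],         # nmol malondialdehyde
    "dna_fragmentation":  [2.6, 2.8, 3.0, 7.0, 7.4, 7.9],         # % TUNEL-positive
}

for assay in ("motility", "acrosome_reaction", "luminol_ROS", "lpo_mda", "dna_fragmentation"):
    r, p = pearsonr(bulls[assay], bulls["fertilization_rate"])
    print(f"{assay:20s} r = {r:+.2f}  p = {p:.3f}")
```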
Nursing today has as one of its objectives the solving of problems related to human needs arising from the demands of a rapidly changing society. This nursing objective, I believe, can be attained by the appropriate application of scientific principles in the giving of comprehensive nursing care. Comprehensive nursing care may be defined as nursing care which meets all of the patient's needs. The needs of patients are said to fall into five broad categories: physical needs, psychological needs, environmental needs, socio-economic needs, and teaching needs. Most people who become ill have adjustment problems related to their new situation. Because patient teaching is one of the most important functions of professional nursing, the success of this teaching may be used as a gauge for evaluating comprehensive nursing care. This represents a challenge for the future. A questionnaire consisting of 67 items was distributed to 200 professional nurses working in direct patient care at Yonsei University Medical Center in Seoul, Korea. 160 (80.0%) nurses of the total sample returned completed questionnaires. 81 (50.6%) nurses were graduates of 3-year diploma courses and 79 (49.4%) were graduates of 4-year collegiate nursing schools in Korea. 141 (88.1%) nurses had under 5 years of clinical experience in a medical center, while 19 (11.9%) had more than 5 years of clinical experience. Three hypotheses were tested: 1. "Nurses have high levels of concepts and knowledge toward patient teaching" - this was examined using the mean average. 2. "Nurses graduating from collegiate programs and diploma school programs of nursing show differences in concepts and knowledge toward patient teaching" - this was examined using the mean average, although the results showed little difference between the two groups. 3. "Nurses having different amounts of clinical experience show differences in concepts and knowledge toward patient teaching" - this was examined using the T-test. Conclusions of this study are as follows. Before explaining the results, the questionnaire will be described. The questionnaire contained 67 questions divided into 9 sections: concept, content, time, prior preparation, method, purpose, condition, evaluation, and recommendations for patient teaching. 1. The nurse's concept of patient teaching: Most of the nurses had high levels of concepts and knowledge toward patient teaching. Though nursing service was task-centered at the turn of the century, the emphasis today is put on patient-centered nursing; yet some of the nurses (39.4%) are still task-centered. After patient teaching, only a few of the nurses (14.4%) identified this as "formal teaching." It seems, therefore, that patient teaching is often done unconsciously. Accordingly, it would be desirable to have correct concepts and knowledge of teaching taught in schools of nursing. 2.
Contents of patient teaching: Most nurses (97.5%) had good information about the content of patient teaching. They teach their patients during admission about their diseases, tests, and treatments, and before discharge they give instruction about simple nursing care, personal hygiene, special diets, rest and sleep, elimination, etc. 3. Time of patient teaching: Teaching can be accomplished even if there is no time set aside specifically for it; a large part of the nurse's teaching can be done while she is giving nursing care. If she believes she has to wait for time free from other activities, she may miss many teaching opportunities. Generally, however, the proper time for patient teaching is in the midmorning or midafternoon, since one and a half to two hours are required. Nurses meet their patients in all stages of health, and often the patient is in a condition in which learning is impossible: pain, mental confusion, debilitation, loss of sensory perception, fear, and anxiety may each preclude the possibility of successful teaching. 4. Prior preparation for patient teaching: The teaching aids nurses use are charts (53.1%), periodicals (23.8%), and books (7.0%). Some of the respondents (28.1%) reported that they had had good preparation for the teaching which they were doing, others (27.5%) reported adequate preparation, and others (43.8%) reported that their preparation for teaching was inadequate. If nurses have advance preparation for formal teaching and are aware of their objectives in teaching patients, they can teach effectively. 5. Method of patient teaching: The methods of individual patient teaching the nurses in this study used were conversation (55.6%) and individual discussion (19.2%), and the methods of group patient teaching they used were demonstration (42.3%) and lecture (26.2%). They should also be prepared to use pamphlets and simple audio-visual aids for their teaching. 6. Purposes of patient teaching: The purpose of patient teaching is to help the patient recover completely, but the majority of the respondents (40.6%) do not know this. So it is necessary for them to understand correctly the purpose of patient teaching and nursing care. 7. Condition of patient teaching: The majority of respondents (75.0%) reported some trouble in teaching uncooperative patients. It would seem that the nurse's teaching would be improved if, in her preparation, she were given a better understanding of the patient and of communication skills. The majority of respondents in the total group felt that teaching is their responsibility and that they should teach the patient's family as well as the patient. The place for teaching is most often at the patient's bedside (95.6%), but the conference room (3.1%) is also used. It is important that privacy be provided in learning situations which involve personal matters. 8. Evaluation of patient teaching: The majority of respondents (76.3%) felt that teaching is a highly systematic and organized function requiring special preparation in a college or university; they also have the idea that teaching is a continuous and ever-present activity of all people throughout their lives. The suggestion mentioned most frequently for improving preparation was a course in patient teaching included in the basic nursing program. 9. Recommendations: 1) It is recommended that in clinical nursing, patient teaching be emphasized. 2) It is recommended that in in-service education the concepts and purposes of patient teaching be renewed for all nurses.
In addition to this new knowledge, methods and materials which can be applied to patient teaching should also be provided. 3) It is recommended that, in group patient teaching, team teaching be attempted.
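As a small illustration of the statistics named in this abstract (group means and a T-test comparing nurses with different amounts of clinical experience), here is a minimal sketch with entirely hypothetical scores; it is not the study's data or analysis code.

```python
# Minimal sketch of the two statistics mentioned above (group means and a T-test),
# using hypothetical knowledge scores for nurses with under vs. over 5 years of
# clinical experience; these numbers are placeholders, not survey data.
from scipy import stats

under_5_years = [78, 82, 75, 80, 85, 79, 74, 81]   # hypothetical knowledge scores
over_5_years = [84, 88, 83, 86, 90]

print("mean (<5 yrs):", sum(under_5_years) / len(under_5_years))
print("mean (>5 yrs):", sum(over_5_years) / len(over_5_years))

t_stat, p_value = stats.ttest_ind(under_5_years, over_5_years, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```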
Previous research has presupposed that the evaluations of consumers who receive a recovery after experiencing a product failure will be better than the evaluations of consumers who do not receive any recovery. The major purposes of this article are to examine the impact of product defect failures, rather than service failures, and to explore the effects of recovery on post-recovery product attitudes. First, this article deals with the occurrence of severe and non-severe failures, and the corresponding recovery, for tangible products rather than intangible services. Contrary to intangible services, purchase and usage are separable for tangible products. This difference means that executing a recovery strategy for a tangible product is often not possible right after consumers discover the product failure. Consumers may think about the background and causes of the unpleasant event during the time gap between product failure and recovery, and this deliberation may dilute the positive effects of recovery efforts. The recovery strategies provided to consumers experiencing product failures can be classified into three types. A recovery strategy can be implemented by providing consumers with a new product replacing the old defective product, a complimentary product for free, a discount at the time of the failure incident, or a coupon that can be used on the next visit; this strategy is defined as "a rewarding effort." Meanwhile, a product failure may arise in exchange for some benefit. The product provider can then offer a detailed explanation that the defect is hard to avoid because it is closely related to a specific advantage of the product; this strategy may be called "a strengthening effort." Another possible strategy is to recover negative attitudes toward one's own brand by giving prominence to the disadvantages of a competing brand rather than the advantages of one's own brand; this strategy is referred to as "a weakening effort." This paper emphasizes that, in order to confirm its effectiveness, a recovery strategy should be compared with doing nothing in response to the product failure. So the three types of recovery efforts are discussed in comparison with the situation involving no recovery effort. The strengthening strategy claims that the product failure is closely related to another advantage of the product, and expects this two-sidedness to ease consumers' complaints. The weakening strategy emphasizes the non-aversiveness of the product failure, even if consumers choose another competing brand. The two strategies can be effective in restoring attitudes to their original state by providing plausible motives to accept the condition of product failure or by informing consumers of the firm's non-responsibility in the failure case. However, the two may be less effective than the rewarding strategy, since only the rewarding strategy directly addresses consumers' rehabilitation needs. In particular, the relative effectiveness of the strengthening effort and the weakening effort may differ with the severity of the product failure. A consumer who perceives a highly severe failure is likely to attach importance to the property which caused the failure. This implies that the strengthening effort would be less effective under the condition of high failure severity. Meanwhile, the failing property is not diagnostic information under the condition of low failure severity. Consumers would not pay attention to non-diagnostic information and are not likely to change their attitudes based on it.
This implies that the strengthening effort would be more effective under the condition of low failure severity. A 2 (product failure severity: high or low) x 4 (recovery strategy: rewarding, strengthening, weakening, or doing nothing) between-subjects design was employed. The particular levels of product failure severity and the types of recovery strategies were determined after a series of expert interviews. The dependent variable was product attitude after the recovery effort was provided. Subjects were 284 consumers who had experience using cosmetics. Subjects were first given a product failure scenario and were asked to rate the comprehensibility of the failure scenario, the probability of raising complaints about the failure, and the subjective severity of the failure. After a recovery scenario was presented, its comprehensibility and overall evaluation were measured. The subjects assigned to the no-recovery condition were instead exposed to a short news article on the cosmetics industry. Next, subjects answered filler questions: 42 items on need for cognitive closure and 16 items on need to evaluate. On the succeeding page, each subject's product attitude was measured on a five-item, six-point scale, and repurchase intention on a three-item, six-point scale. After the demographic variables of age and sex were asked, ten items measuring the subject's objective knowledge were administered. The results showed that subjects formed more favorable evaluations after receiving rewarding efforts than after receiving either strengthening or weakening efforts. This is consistent with Hoffman, Kelley, and Rotalsky (1995) in that a tangible recovery can be more effective than intangible efforts. Strengthening and weakening efforts were also effective compared with no recovery effort, so we found that, in general, any recovery increased product attitudes. The results suggest that a recovery strategy such as a strengthening or weakening effort, although it does not contain a specific reward, may still have an effect on consumers experiencing severe dissatisfaction and strong complaints. Meanwhile, strengthening and weakening efforts did not increase product attitudes under the condition of low failure severity, as expected. We can conclude that only a physical recovery effort may be recognized favorably, as a firm's willingness to correct its fault, by consumers experiencing low involvement. The results of the present experiment are explained in terms of attribution theory. This article has the limitation that it utilized fictitious scenarios; future research should test the realistic effect of recovery for actual consumers. Recovery involves a direct, firsthand experience for existing users and does not apply to non-users, so the experience of receiving recovery efforts may be relatively more salient and accessible for existing users than for non-users, and a recovery effort might be more likely to improve product attitude for existing users than for non-users. Also, the present experiment did not include consumers who had no experience with the products or who did not perceive the occurrence of product failure. For such non-users and unaware consumers, recovery efforts might lead to decreased product attitudes and purchase intentions, because the recovery attempt may give them an opportunity to notice the product failure.
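The 2 x 4 between-subjects design described above would typically be analyzed with a two-way ANOVA on post-recovery attitude. The sketch below only illustrates that kind of analysis on randomly generated data; the cell size, variable names, and model specification are assumptions for illustration, not the authors' procedure.

```python
# Minimal sketch of a 2 (severity) x 4 (recovery strategy) between-subjects ANOVA
# on post-recovery product attitude. The data frame here is randomly generated,
# purely to illustrate the design; it is not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 35                      # roughly 284 subjects spread over 8 cells
severity = np.repeat(["high", "low"], 4 * n_per_cell)
strategy = np.tile(np.repeat(["rewarding", "strengthening", "weakening", "none"], n_per_cell), 2)
attitude = rng.normal(loc=4.0, scale=1.0, size=severity.size)   # averaged six-point items

df = pd.DataFrame({"severity": severity, "strategy": strategy, "attitude": attitude})
model = ols("attitude ~ C(severity) * C(strategy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```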
Objectives: The objective of this study was to analyze bone mineral density in women with low back pain. Methods: The data were collected from women who visited the Physical Examination Center of a Catholic university hospital located in Daegu. Questionnaires were completed by 50 women during the period from July 20, 2000 to January 12, 2001. The sample was divided into three groups (the normal group of 16 cases, the osteopenia group of 12 cases, and the osteoporosis group of 22 cases). Bone mineral density (BMD) of the lumbar spine was measured using energy absorptiometry. Results: The bone mineral density of the lumbar spine decreased with aging. The bone mineral density of the lumbar spine decreased as serum calcium, phosphorus, and alkaline phosphatase increased. The mean bone mineral density of the lumbar spine of healthy women aged 50~59 was 0.87g/
To train personnel to meet the requirements of the industrial field, the introduction of the National Qualification Framework (hereinafter referred to as NQF), based on National Competency Standards (hereinafter referred to as NCS), was determined in 2001 under the Office for Government Policy Coordination. For landscape architecture in the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for 3 years starting in 2009. In particular, as the 'realization of a competence-based society, not one based on educational background' was adopted as one of the major projects of the Park Geun-Hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a detailed method for realizing this. However, because the NCS developed by the nation specifies ideal job-performing abilities, it has the weakness of not being able to reflect actual operational problems: differences in student levels between universities, problems of securing equipment and professors, and constraints in the number of current curricula. For a soft landing into a practical curriculum, a process of clearly analyzing the gap between the current curriculum and the NCS must come first. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: for each NCS ability unit, the ability unit elements and performance standards are rated on a 1-to-5 Likert scale to record and analyze the level of coincidence with, or discrepancy from, the department's existing curriculum. Thus, by measuring the level of coincidence and the gap between the current university curriculum and the NCS, universities wishing to operate NCS in the future can secure a basic tool to verify the applicability of NCS and the effectiveness of further development and operation. The advantages of reorganizing the curriculum through gap analysis are, first, that a quantitative index of the NCS adoption rate for each department can be provided in connection with government financial support projects, and, second, that an objective standard is provided on what is insufficient or sufficient when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivision, the insufficient ability units and ability unit elements can be extracted, and at the same time the supplementary matters for each ability unit element in each existing subject can be extracted. This provides direction for detailed class programs and for opening basic subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply NCS standards at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification, and universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the prospects of the relevant industry and the relationship between faculty resources within the university and local industry in order to clearly select the NCS subdivision to be applied. Afterwards, gap analysis must be used in the NCS-based curriculum reorganization to establish the direction of the reorganization more objectively and rationally, in order to participate efficiently in the process evaluation type qualification system.
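A minimal sketch of the gap-analysis bookkeeping described above, assuming hypothetical NCS ability unit elements and 1-to-5 Likert coverage ratings; the element names, ratings, and the threshold for flagging a supplement are illustrative only.

```python
# Minimal sketch of an NCS gap analysis, assuming hypothetical ability unit
# elements rated on a 1-5 Likert scale for how well the existing curriculum
# covers them (5 = fully covered). Names and ratings are illustrative only.
coverage_ratings = {
    "Landscape planning - site analysis": 4,
    "Landscape planning - concept development": 3,
    "Landscape design - planting design": 5,
    "Landscape construction - cost estimation": 2,
    "Landscape management - maintenance planning": 1,
}

TARGET = 5  # full coincidence with the NCS performance standard

for element, rating in sorted(coverage_ratings.items(), key=lambda kv: kv[1]):
    gap = TARGET - rating
    status = "supplement needed" if gap >= 2 else "adequate"
    print(f"{element:45s} rating={rating}  gap={gap}  -> {status}")
```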
Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming more important as content generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora for different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes harder to produce human-labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, problem definition for automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. In order to overcome the limits described above and to improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the performance of the approach. Different from other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, a practical and simple automatic knowledge extraction method is presented. Second, the possibility of performance evaluation is presented through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items based on frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, accounting for about 55% of the total, are designated as the training set, and the remaining 45% of reports are designated as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities based on appearance frequency are selected and vectorized using one-hot encoding. After that, using a neural tensor network, the same number of score functions as stocks is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show markedly lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of entities, which are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only a named entity recognition tool and applied to the neural tensor network, without learning a field-specific corpus or word vectors. From the empirical test, we confirm the effectiveness of the presented model as described above. However, there are also some limits and points to complement. Most notably, the phenomenon that model performance is especially poor for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of semantically matching new text information with the related stocks.
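For readers unfamiliar with the scoring machinery, the following is a minimal sketch of a neural tensor network style score function in the spirit of Socher et al. (2013); the dimensions, random parameters, and the per-stock usage pattern are assumptions for illustration, not the trained model from this study.

```python
# Minimal sketch of a neural tensor network (NTN)-style score function:
# score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b).
# In the study, one such function would be trained per stock, and a new entity is
# assigned to the stock whose function yields the highest score. All parameters
# below are random placeholders, not trained weights.
import numpy as np

rng = np.random.default_rng(42)
d, k = 100, 4                       # entity vector size (top-100 one-hot), tensor slices

W = rng.normal(size=(k, d, d))      # bilinear tensor
V = rng.normal(size=(k, 2 * d))     # standard layer weights
b = rng.normal(size=k)              # bias
u = rng.normal(size=k)              # output weights

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# Example: score a new one-hot encoded entity against a stock's anchor entity.
entity_new = np.eye(d)[7]
entity_stock = np.eye(d)[0]
print(ntn_score(entity_new, entity_stock))
```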
Nowadays, customer satisfaction has become one of companies' major objectives, and indices measuring and communicating customer satisfaction have become generally accepted in business practice. The major issues of the CSI (customer satisfaction index) are three questions: (a) what level of customer satisfaction is tolerable, (b) whether customer satisfaction and company performance have a positive causal relationship, and (c) what to do to improve customer satisfaction. Among these, the second issue has recently attracted academic research from several perspectives, and it is the issue addressed in this study. Many researchers, including Anderson, have regarded customer satisfaction as a core competency, like brand equity and customer equity. They seek to verify the following causal chain, based on the process model of marketing performance: customer satisfaction → market performance (market share, sales growth rate) → financial performance (operating margin, profitability) → corporate value performance (stock price, credit ratings). On the other hand, Insoo Jeon and Aeju Jeong (2009) tested this sequential causality based on the process model with domestic data. Following the rejection of several hypotheses, they suggested the balance model of marketing performance as an alternative. The objective of this study, based on the existing process model, is to examine the causal relationship between customer satisfaction and corporate value performance. Anderson and Mansi (2009) demonstrated the relationship between the ACSI (American Customer Satisfaction Index) and credit ratings using 2,574 samples from 1994 to 2004, on the assumption that credit ratings can be an indicator of corporate value performance. A similar study (Sangwoon Yoon, 2010) was conducted with Korean data, but it did not confirm the relationship between the KCSI (Korean CSI) and credit ratings, unlike the results of Anderson and Mansi (2009). These studies are summarized in Table 1. The two studies analyzing the relationship between customer satisfaction and credit ratings did not yield consistent results, so in this study we test these conflicting results with a research model that takes Korean credit ratings into account. To test the hypothesis, we propose the following research model. Two important features of this model are the inclusion of variables important in the existing Korean credit rating system and of government support. To control their influence on credit ratings, we included three important variables of the Korean credit rating system, plus government support in the case of financial institutions including banks. The three variables ROA, ER, and TA are chosen from among the many financial indicators because they are the variables used most frequently in previous studies. The results of the research model are relatively favorable: R2, F-value, and p-value are .631, 233.15, and .000, respectively. Thus, the explanatory power of the research model as a whole is good and the model is statistically significant. The regression coefficient of KCSI is .096 (positive), with a t-value of 2.220 and a p-value of .0135. As a result, we can say the hypothesis is supported. Meanwhile, all the other explanatory variables, including ROA, ER, log(TA), and GS_DV, are identified as significant, and each variable has a positive (+) relationship with CRS.
In particular, the t-value of log(TA) is 23.557, and log(TA) as an explanatory variable of corporate credit ratings shows a very high level of statistical significance. Considering the interrelationship between financial indicators such as ROA and ER, whose formulas include total assets, a multicollinearity problem could be expected. However, indicators such as VIF and tolerance limits, which show whether multicollinearity exists, indicate that there is no statistically significant multicollinearity among the explanatory variables. KCSI, the main subject of this study, is statistically significant even though its standardized regression coefficient (.055) and t-value (2.220) are at a relatively low level among the explanatory variables. Considering that the other explanatory variables were chosen from many indicators in previous studies on the basis of their explanatory power, KCSI is validated as one of the significant explanatory variables for the credit rating score, and this result can provide new insight into the determinants of credit ratings. However, KCSI has a relatively smaller impact than the main financial indicators such as log(TA) and ER. Therefore, KCSI is one of the determinants of credit ratings, but it does not have an overriding influence. In addition, this study found that customer satisfaction had a more meaningful impact for corporations of small asset size than for those of large asset size, and for service companies than for manufacturers. These findings are consistent with Anderson and Mansi (2009) but different from Sangwoon Yoon (2010). Although the research model of this study differs somewhat from that of Anderson and Mansi (2009), we can conclude that customer satisfaction has a significant influence on a company's credit ratings in both Korea and the United States. Until now there have been only a few studies on the relationship between customer satisfaction and various measures of business performance, some of which supported the relationship and some of which did not. The contribution of this study is that credit ratings are applied as a corporate value performance measure in addition to stock price. This is important because credit ratings determine the cost of debt, yet they have so far received little attention in marketing research. Based on this study, we can say that customer satisfaction is at least partially related to all indicators of corporate business performance. A practical implication for customer satisfaction departments is that active investment in customer satisfaction is warranted, because it also contributes to higher credit ratings and other business performance. A suggestion for credit evaluators is that they need to design a new credit rating model which reflects qualitative customer satisfaction as well as existing variables like ROA, ER, and TA.
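The regression and multicollinearity check described above can be illustrated with the following sketch; the simulated data frame, coefficient values, and variable construction are hypothetical and stand in for the study's actual data set.

```python
# Minimal sketch of the kind of regression described above: credit rating score (CRS)
# regressed on KCSI, ROA, ER, log(TA), and a government-support dummy (GS_DV),
# with VIF as a multicollinearity check. The data frame is randomly generated
# for illustration; it is not the study's data set.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "KCSI": rng.normal(70, 5, n),
    "ROA": rng.normal(0.05, 0.02, n),
    "ER": rng.normal(0.5, 0.1, n),          # equity ratio
    "log_TA": rng.normal(27, 1.5, n),       # log of total assets
    "GS_DV": rng.integers(0, 2, n),         # government-support dummy
})
df["CRS"] = (0.1 * df["KCSI"] + 20 * df["ROA"] + 5 * df["ER"]
             + 2 * df["log_TA"] + 1.5 * df["GS_DV"] + rng.normal(0, 2, n))

X = sm.add_constant(df[["KCSI", "ROA", "ER", "log_TA", "GS_DV"]])
print(sm.OLS(df["CRS"], X).fit().summary())
print([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])])  # VIF check
```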
This project was a service-cum-research effort with a quasi-experimental study design to examine the health benefits of an integrated Family Planning (FP)/Maternal & Child Health (MCH) service approach that provides crucial factors missing in the present on-going programs. The specific objectives were: 1) To test the effectiveness of trained nurse/midwives (MW) assigned as change agents in the Health Sub-Center (HSC) in bringing about changes in eight FP/MCH indicators, namely (i) FP/MCH contacts between field workers and their clients, (ii) the use of effective FP methods, (iii) the inter-birth interval and/or open interval, (iv) prenatal care by medically qualified personnel, (v) medically supervised deliveries, (vi) the rate of induced abortion, (vii) maternal and infant morbidity, and (viii) perinatal and infant mortality. 2) To measure the integrative linkage (contacts) between the MW and HSC workers and between the HSC and clients. 3) To examine the organizational or administrative factors influencing integrative linkage between health workers. Study design: The above objectives called for a quasi-experimental design setting up a study area and a control area, with and without a midwife. An active intervention program (an FP/MCH minimum 'package' program) was conducted for a 2-year period from June 1982 to July 1984 in Seosan County, and 'before and after' surveys were conducted to measure the change. Service input: This study was undertaken by Soonchunhyang University in collaboration with WHO. After a baseline survey in 1981, trained nurse/midwives were introduced into two health sub-centers in a rural setting (Seosan County) for a 2-year period from 1982 to 1984. A major service input was the establishment of midwifery services in the existing health delivery system, with emphasis on the nurse/midwife's role as the link between health workers (nurse aides) and village health workers, and on the referral of at-risk patients to the private physician (OBGY specialist). An evaluation survey was made in August 1984 to assess the effectiveness of this alternative integrated approach in the study areas in comparison with the control area, which had normal government services. Method of evaluation: a. In this study, the primary objective was to examine to what extent the FP/MCH package program brought about changes in the pre-determined eight indicators (outcome and impact measures), and this relationship was analyzed first. b. Nevertheless, this project did not automatically accept the assumption that if two or more activities were integrated, the results would automatically be better than in a non-integrated or categorical program. There is a need to assess the 'integration process' itself within the package program. The process of integration was measured in terms of interactive linkages, or the quantity and quality of contacts between workers and clients and among workers. Integrative linkages were hypothesized to be influenced by organizational factors at the HSC clinic level, including HSC goals, structure, authority, leadership style, resources, and personal characteristics of HSC staff. The extent or degree of integration, as measured by the intensity of integrative linkages, was in turn presumed to influence programme performance.
Thus, as indicated diagrammatically below, organizational factors constituted the independent variables, integration the intervening variable, and programme performance with respect to family planning and health services the dependent variable. Concerning organizational factors, however, due to the limited number of HSCs (2 in the study area and 3 in the control area), they were studied by participant observation by an anthropologist who was independent of the project. In this observation, we examined whether the assumed integration process actually occurred or not and, if not, what the constraints were in producing an effective integration process. Summary of findings: A) Program effects and impact. 1. Effects on FP use: During this 2-year action period, FP acceptance increased from 58% in 1981 to 78% in 1984 in both the study and control areas. This increase in both areas was mainly due to the new family planning campaign driven by the Government during the same period, so there was no increment in the FP acceptance rate attributable to the additional input of the MW to the on-going FP program. In the study area, however, qualitative aspects of FP improved somewhat, with a better continuation rate of IUDs and pills and more use of effective contraceptive methods than in the control area. 2. Effects on the use of MCH services: Between the study and control areas there was a significant difference in maternal and child health care. For example, the coverage of prenatal care increased from 53% for the 1981 birth cohort to 75% for the 1984 birth cohort in the study area; in the control area, the same coverage increased from 41% (1981) to 65% (1984). It is noteworthy that almost two thirds of the recent birth cohort received prenatal care even in the control area, indicating a growing demand for MCH care as the family size norm becomes smaller. 3. There was a substantive increase in delivery care by medical professionals in the study area, with an annual increase rate of 10% due to the midwives' input in the study areas. The project had about a two times greater effect on postnatal care (68% vs. 33%) and on delivery care (45.2% vs. 26.1%). 4. The study area had better reproductive efficiency (wanted pregnancies with FP practice and healthy live births surviving to one year of age) than the control area, especially among women under 30 (14.1% vs. 9.6%). The proportion of women who preferred the first trimester for their first prenatal care rose significantly in the study area as compared to the control area (24% vs. 13%). B) Effects on interactive linkage. 1. This project contributed several useful steps in the direction of service integration, namely: i) the health workers became familiar with procedures for working together (especially with a midwife) in carrying out their FP/MCH work, and ii) the health workers came to appreciate the usefulness of family health records (statistical integration) in identifying targets in their own work and in caring for family health. 2. On the other hand, because of a lack of the required organizational factors, complete linkage was not obtained as the project intended. i) In regard to the government health workers' home-visiting activities, there was not much difference between the study and control areas, though the MW did more home visiting than the government health workers.
ii) In assessing the service performance of the MW and health workers, the midwives balanced their workload between 40% FP, 40% MCH, and 20% other activities (mainly immunization). However,
The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
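As background for the calibration discussion above, the following is a minimal sketch of a doubly-constrained gravity model with an exponential friction factor curve; the zone system, impedances, and the choice of an exponential functional form are illustrative assumptions, not the WisDOT model.

```python
# Minimal sketch of a doubly-constrained gravity model trip distribution with a
# friction factor curve, illustrating the kind of calibration discussed above.
# The zone productions, attractions, and travel times are small hypothetical values.
import numpy as np

productions = np.array([100.0, 200.0, 150.0])       # truck trips produced per zone
attractions = np.array([120.0, 180.0, 150.0])       # truck trips attracted per zone
travel_time = np.array([[5.0, 20.0, 35.0],
                        [20.0, 5.0, 15.0],
                        [35.0, 15.0, 5.0]])          # zone-to-zone impedance (minutes)

def friction(t, beta=0.1):
    """Exponential friction factor curve; beta is adjusted during calibration."""
    return np.exp(-beta * t)

F = friction(travel_time)
trips = np.outer(productions, attractions) * F
# Iterative proportional fitting so row/column sums match productions/attractions.
for _ in range(50):
    trips *= (productions / trips.sum(axis=1))[:, None]
    trips *= (attractions / trips.sum(axis=0))[None, :]

print(np.round(trips, 1))   # balanced zone-to-zone truck trip table
```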
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and, if necessary, recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
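The SELINK adjustment arithmetic described above (link adjustment factor = ground count / assigned volume, applied to the zones whose trips use the selected link) can be sketched as follows; the volumes and zone lists are hypothetical.

```python
# Minimal sketch of a SELINK-style link adjustment. The adjustment factor is the
# ratio of the ground count on a selected link to the total volume assigned to it,
# and the factor is applied to the productions (and attractions) of the zones whose
# trips use that link. Values and zone lists are hypothetical.
ground_count = 4800.0          # observed truck volume on the selected link
assigned_volume = 6000.0       # model-assigned truck volume on the same link
adjustment = ground_count / assigned_volume   # 0.8 -> assigned volume too high

productions = {"zone_1": 100.0, "zone_2": 250.0, "zone_3": 90.0}
zones_using_link = ["zone_1", "zone_3"]       # origins/destinations of trips on the link

for z in zones_using_link:
    productions[z] *= adjustment

print(f"adjustment factor = {adjustment:.3f}")
print(productions)

# A percent RMSE of assigned vs. counted volumes, as used in the screenline checks,
# could be computed as: %RMSE = 100 * sqrt(mean((assigned - count)**2)) / mean(count).
```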
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of these factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
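As a compact restatement of the revised trip rates and adjustment factors just described (the notation is introduced here for clarity and is not from the source), with $P_z^{\text{adj}}$ and $A_z^{\text{adj}}$ the SELINK-adjusted zonal productions and attractions and $\text{Pop}_z$ the zonal population:

$$r_P = \frac{\sum_z P_z^{\text{adj}}}{\sum_z \text{Pop}_z}, \qquad r_A = \frac{\sum_z A_z^{\text{adj}}}{\sum_z \text{Pop}_z}, \qquad f_z^{P} = \frac{P_z^{\text{adj}}}{r_P\,\text{Pop}_z}$$

where $r_P$ and $r_A$ are the revised production and attraction trip rates and $f_z^{P}$ is a revised zonal production adjustment factor, reflecting only the increase or decrease that the SELINK adjustments cause relative to the revised trip-rate estimate for zone $z$.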
Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.