• Title/Summary/Keyword: Objective Prediction


Development of Measuring Technique for Milk Composition by Using Visible-Near Infrared Spectroscopy (가시광선-근적외선 분광법을 이용한 유성분 측정 기술 개발)

  • Choi, Chang-Hyun;Yun, Hyun-Woong;Kim, Yong-Joo
    • Food Science and Preservation
    • /
    • v.19 no.1
    • /
    • pp.95-103
    • /
    • 2012
  • The objective of this study was to develop models for predicting the properties (fat, protein, solids-not-fat (SNF), lactose, and milk urea nitrogen (MUN)) of unhomogenized milk using visible and near-infrared (NIR) spectroscopy. A total of 180 milk samples were collected from dairy farms. To determine the optimal measurement temperature, the milk samples were kept at three temperature levels (5°C, 20°C, and 40°C). A spectrophotometer was used to measure the reflectance spectra of the samples. Multiple linear regression (MLR) models with stepwise variable selection were developed to select the optimal wavelengths. Preprocessing methods were applied to minimize spectroscopic noise, and partial least squares (PLS) models were developed to predict the milk properties of the unhomogenized milk. The PLS results showed a good correlation between the predicted and measured milk properties for the samples at 40°C over 400~2,500 nm. The optimal wavelength range for fat and protein was 1,600~1,800 nm, where normalization improved the prediction performance; SNF and lactose were best predicted at 1,600~1,900 nm, and MUN at 600~800 nm. The best preprocessing methods for SNF, lactose, and MUN were smoothing, MSC, and the second derivative, respectively. The correlation coefficients between the predicted and measured fat, protein, SNF, lactose, and MUN were 0.98, 0.90, 0.82, 0.75, and 0.61, respectively. These results indicate that the models can be used to assess milk quality.
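
A minimal sketch of the calibration workflow this abstract describes (normalization preprocessing followed by PLS regression and a correlation check). The arrays, component count, and train/validation split are illustrative assumptions, not the authors' data or settings.

```python
# Hypothetical sketch: PLS calibration of a milk property from NIR reflectance
# spectra. Shapes, latent-variable count, and preprocessing are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((180, 1050))   # 180 samples x reflectance over ~400-2,500 nm (placeholder)
y = rng.random(180)           # reference fat content (%) from lab analysis (placeholder)

# Normalization (one preprocessing option mentioned above for fat/protein):
# scale each spectrum to zero mean and unit variance.
Xn = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

X_cal, X_val, y_cal, y_val = train_test_split(Xn, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=10)  # component count is an assumption
pls.fit(X_cal, y_cal)
y_hat = np.asarray(pls.predict(X_val)).ravel()

r = np.corrcoef(y_val, y_hat)[0, 1]   # correlation coefficient, as reported above
print(f"validation r = {r:.2f}")
```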

Roles of Perceived Use Control consisting of Perceived Ease of Use and Perceived Controllability in IT acceptance (정보기술 수용에서 사용용이성과 통제가능성을 하위 차원으로 하는 지각된 사용통제의 역할)

  • Lee, Woong-Kyu
    • Asia Pacific Journal of Information Systems
    • /
    • v.18 no.2
    • /
    • pp.1-14
    • /
    • 2008
  • According to the technology acceptance model (TAM), one of the most important research models for explaining IT users' behavior, the intention to use IT is determined by its usefulness and ease of use. However, while TAM has been considered a very good model for predicting intention, it does not explain the performance of using IT. Many people are not confident in their performance of using IT until they can control it at will, even when they consider it useful and easy to use. In other words, in addition to usefulness and ease of use as in TAM, controllability should also be a factor determining IT acceptance. Controllability and ease of use are especially closely related: each describes one side of control over the performance of using IT, the construct known in social psychology as perceived behavioral control (PBC). The objective of this study is to identify the relationship between ease of use and controllability, and to analyze the effects of these two beliefs on performance and intention in using IT. For this purpose, we review the issues related to PBC in information systems research as well as in social psychology. Based on this review, we suggest a research model that includes the relationship between control and performance in using IT, and validate it empirically. Since PBC was introduced as a variable explaining volitional control over actions in the theory of planned behavior (TPB), there has been confusion about its concept despite its important role in predicting many kinds of actions. Some studies define PBC as self-efficacy, an actor's perception of the difficulty or ease of an action, while others define it as controllability. However, this confusion does not imply a conceptual contradiction but rather a double-faced feature of PBC, since the performance of actions is related to both self-efficacy and controllability. In other words, the two concepts are distinct yet correlated, and PBC should therefore be considered a composite concept consisting of self-efficacy and controllability. Use of IT has also been an important area of prediction by PBC. Most such studies either compare the predictive power of TAM and TPB or modify TAM by including PBC as another belief alongside usefulness and ease of use. Interestingly, unlike other applications in social psychology, this confusion about the concept of PBC is hard to find in studies of IT use: in most of them, controllability is adopted as PBC, since the concept of self-efficacy is already included explicitly in ease of use. Based on these discussions, we suggest perceived use control (PUC), defined as the perception of control over the performance of using IT and composed of controllability and ease of use as sub-concepts. We suggest a research model of IT acceptance that includes the relationships of PUC with attitude and with the performance of using IT. For the empirical test of the research model, two user groups were surveyed with questionnaires. The first group consists of freshmen taking a basic course on Microsoft Excel, and the second group consists of senior students taking a course on analyzing management information with Excel. Most measurements are adapted from instruments validated in other studies, while performance is the real midterm score in each class.
As a result, the four hypotheses related to PUC are statistically supported at very low significance levels. The main contribution of this study is the suggestion of PUC through a theoretical review of PBC. Specifically, a hierarchical model of PUC is derived from rigorous studies of the relationship between self-efficacy and controllability from the PBC perspective in social psychology. The relationship between PUC and performance is another main contribution.
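
As a rough illustration of the kind of statistical test behind such hypotheses, the sketch below regresses performance on the two PUC sub-dimensions with ordinary least squares. Variable names and data are hypothetical, and the authors' actual analysis may well have used a structural equation approach instead.

```python
# Hypothetical sketch: testing whether perceived ease of use and perceived
# controllability predict performance (midterm score). All data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
ease = rng.normal(4.0, 1.0, n)       # 7-point Likert scores (placeholder)
control = rng.normal(4.5, 1.0, n)
score = 50 + 4 * ease + 3 * control + rng.normal(0, 8, n)  # midterm score

X = sm.add_constant(np.column_stack([ease, control]))
model = sm.OLS(score, X).fit()
print(model.summary())               # t-tests on coefficients ~ hypothesis tests
```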

Development of Near-Infrared Reflectance Spectroscopy (NIRS) Model for Amylose and Crude Protein Contents Analysis in Rice Germplasm (근적외선 분광광도계를 이용한 벼 유전자원 아밀로스 및 단백질 함량분석을 위한 모델개발)

  • Oh, Sejong;Lee, Myung Chul;Choi, Yu Mi;Lee, Sukyeung;Oh, Myeongwon;Ali, Asjad;Chae, Byungsoo;Hyun, Do Yoon
    • Korean Journal of Plant Resources
    • /
    • v.30 no.1
    • /
    • pp.38-49
    • /
    • 2017
  • The objective of this research was to develop near-infrared reflectance spectroscopy (NIRS) models for analyzing the amylose and crude protein contents of large collections of rice germplasm. A total of 511 accessions of rice germplasm were obtained from the National Agrobiodiversity Center to build the calibration equations. The accessions were measured by NIRS as both brown and milled brown rice, which were additionally assayed for amylose and crude protein contents by the iodine and Kjeldahl methods, respectively. The amylose and protein contents of milled brown rice ranged over 6.15-32.25% and 4.72-14.81%, respectively. The coefficient of determination (R²), standard error of calibration (SEC), and slope for brown rice were 0.906, 1.741, and 0.995 for amylose and 0.941, 0.276, and 1.011 for protein, whereas for milled brown rice they were 0.956, 1.159, and 1.001 for amylose and 0.982, 0.164, and 1.003 for protein. Validation of the NIRS equations showed a high coefficient of determination in prediction for both amylose (0.962) and protein (0.986), and a low standard error of prediction (SEP) for amylose (2.349) and protein (0.415). These results suggest that the NIRS equation models can be practically applied to determine amylose and crude protein contents in large collections of rice germplasm.
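
A brief sketch of the statistics quoted above (R², SEC/SEP, slope). The degrees-of-freedom convention, regression direction for the slope, and the data values are assumptions for illustration only.

```python
# Hypothetical sketch: NIRS calibration statistics from reference vs. predicted values.
import numpy as np

def calibration_stats(y_ref, y_pred, n_factors=0):
    """R^2, standard error (SEC-style when n_factors > 0, SEP-style otherwise),
    and slope of the reference-vs-predicted regression line (direction is a
    convention, assumed here)."""
    resid = y_ref - y_pred
    r2 = np.corrcoef(y_ref, y_pred)[0, 1] ** 2
    se = np.sqrt(np.sum(resid**2) / (len(y_ref) - n_factors - 1))
    slope = np.polyfit(y_pred, y_ref, 1)[0]
    return r2, se, slope

amylose_ref = np.array([18.2, 22.5, 7.1, 30.4, 25.0])    # iodine-method values (%), placeholder
amylose_nirs = np.array([17.8, 23.1, 8.0, 29.5, 24.2])   # NIRS-predicted values (%), placeholder
print(calibration_stats(amylose_ref, amylose_nirs))
```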

THE EFFECT OF THE REPEATABILITY FILE IN THE NIRS FATTY ACIDS ANALYSIS OF ANIMAL FATS

  • Perez Marin, M.D.;De Pedro, E.;Garcia Olmo, J.;Garrido Varo, A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.4107-4107
    • /
    • 2001
  • Previous work has shown the viability of NIRS technology for predicting fatty acids in Iberian pig fat. Although the resulting equations showed high precision, important fluctuations were detected in the predictions of new samples, growing with the time elapsed between calibration development and NIRS analysis. This makes the use of NIRS calibrations in routine analysis difficult. Moreover, the problem appears only in products like fat, whose spectra show very well-defined absorption peaks at some wavelengths; this causes a high sensitivity to small changes in the instrument, which are not detected by normal checks. To avoid these inconveniences, the WinISI 1.04 software provides a mathematical algorithm that creates a "repeatability file". This file is used during calibration development to minimize the sources of variation that can affect NIRS predictions. The objective of the present work is to evaluate the use of a repeatability file in the quantitative NIRS analysis of Iberian pig fat. A total of 188 samples of Iberian pig fat, produced by COVAP, were used. NIR data were recorded using a FOSS NIRSystems 6500 spectrophotometer equipped with a spinning module. Samples were analysed by folded transmission, using two sample cells of 0.1 mm pathlength with a gold surface. High-accuracy calibration equations were obtained, without and with the repeatability file, for the content of six fatty acids: myristic (SECV_without = 0.07%, r²_without = 0.76; SECV_with = 0.08%, r²_with = 0.65), palmitic (SECV_without = 0.28%, r²_without = 0.97; SECV_with = 0.24%, r²_with = 0.98), palmitoleic (SECV_without = 0.08%, r²_without = 0.94; SECV_with = 0.09%, r²_with = 0.92), stearic (SECV_without = 0.27%, r²_without = 0.97; SECV_with = 0.29%, r²_with = 0.96), oleic (SECV_without = 0.20%, r²_without = 0.99; SECV_with = 0.20%, r²_with = 0.99) and linoleic (SECV_without = 0.16%, r²_without = 0.98; SECV_with = 0.16%, r²_with = 0.98). The repeatability file proved very effective as a tool for reducing the sources of variation that can disturb prediction accuracy. Although the differences in the calibration results are negligible, the effect of the repeatability file is appreciated mainly when predicting new samples that were not in the calibration set and whose spectra were recorded long after the equation development. In this case, the bias values of the fatty acid predictions were lower when the repeatability file was used: myristic (bias_without = -0.05, bias_with = -0.04), palmitic (bias_without = -0.42, bias_with = -0.11), palmitoleic (bias_without = -0.03, bias_with = 0.03), stearic (bias_without = 0.47, bias_with = 0.28), oleic (bias_without = 0.14, bias_with = -0.04) and linoleic (bias_without = 0.25, bias_with = -0.20).
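
A small sketch of the bias comparison reported above, computing the mean signed prediction error for one fatty acid under the two calibrations. The sign convention and all numbers are placeholders, not the paper's data.

```python
# Hypothetical sketch: monitoring calibration drift via prediction bias,
# with and without a repeatability file. Values are invented for illustration.
import numpy as np

def bias(y_ref, y_pred):
    """Mean signed difference between predicted and reference values
    (sign convention assumed; some texts use ref - pred)."""
    return float(np.mean(y_pred - y_ref))

palmitic_ref = np.array([22.1, 23.4, 21.8, 22.9])  # reference (% of total fatty acids)
pred_without = palmitic_ref - 0.42 + np.random.default_rng(2).normal(0, 0.1, 4)
pred_with = palmitic_ref - 0.11 + np.random.default_rng(3).normal(0, 0.1, 4)

print(f"bias without repeatability file: {bias(palmitic_ref, pred_without):+.2f}")
print(f"bias with repeatability file:    {bias(palmitic_ref, pred_with):+.2f}")
```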


DEVELOPMENT OF SAFETY-BASED LEVEL-OF-SERVICE CRITERIA FOR ISOLATED SIGNALIZED INTERSECTIONS (독립신호 교차로에서의 교통안전을 위한 서비스수준 결정방법의 개발)

  • Ha, Tae-Jun
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.3-32
    • /
    • 1995
  • The Highway Capacity Manual specifies procedures for evaluating intersection performance in terms of delay per vehicle. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections based on the relative hazard of alternative intersection designs and signal timing plans. Conflict opportunity models were developed for those crossing, diverging, and stopping maneuvers which are associated with left-turn and rear-end accidents. Safety-based level-of-service criteria were then developed based on the distribution of conflict opportunities computed from the developed models. A case study evaluation of the level of service analysis methodology revealed that the developed safety-based criteria were not as sensitive to changes in prevailing traffic, roadway, and signal timing conditions as the traditional delay-based measure. However, the methodology did permit a quantitative assessment of the trade-off between delay reduction and safety improvement. The Highway Capacity Manual (HCM) specifies procedures for evaluating intersection performance in terms of a wide variety of prevailing conditions such as traffic composition, intersection geometry, traffic volumes, and signal timing (1). At the present time, however, performance is only measured in terms of delay per vehicle. This is a parameter which is widely accepted as a meaningful and useful indicator of the efficiency with which an intersection is serving traffic needs. What is lacking in the current methodology is a comparable quantitative procedure for assessing the safety-based level of service provided to motorists. For example, it is well-known that the change from permissive to protected left-turn phasing can reduce left-turn accident frequency. However, the HCM only permits a quantitative assessment of the impact of this alternative phasing arrangement on vehicle delay. It is left to the engineer or planner to subjectively judge the level of safety benefits, and to evaluate the trade-off between the efficiency and safety consequences of the alternative phasing plans. Numerous examples of other geometric design and signal timing improvements could also be given. At present, the principal methods available to the practitioner for evaluating the relative safety at signalized intersections are: a) the application of engineering judgement, b) accident analyses, and c) traffic conflicts analysis. Reliance on engineering judgement has obvious limitations, especially when placed in the context of the elaborate HCM procedures for calculating delay. Accident analyses generally require some type of before-after comparison, either for the case study intersection or for a large set of similar intersections. In either situation, there are problems associated with compensating for regression-to-the-mean phenomena (2), as well as obtaining an adequate sample size. Research has also pointed to potential bias caused by the way in which exposure to accidents is measured (3, 4). Because of the problems associated with traditional accident analyses, some have promoted the use of the traffic conflicts technique (5). However, this procedure also has shortcomings in that it requires extensive field data collection and trained observers to identify the different types of conflicts occurring in the field.
The objective of the research described herein was to develop a computational procedure for evaluating the safety-based level of service of signalized intersections that would be compatible and consistent with that presently found in the HCM for evaluating efficiency-based level of service as measured by delay per vehicle (6). The intent was not to develop a new set of accident prediction models, but to design a methodology to quantitatively predict the relative hazard of alternative intersection designs and signal timing plans.
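
The paper derives its criteria from the distribution of conflict opportunities produced by its crossing, diverging, and stopping models. The toy sketch below only illustrates the overall shape of such a procedure: compute a conflict-opportunity measure, then map it onto letter grades the way the HCM maps delay. The functional form and the thresholds are invented for illustration and are not the paper's models.

```python
# Hypothetical sketch: a safety-based level-of-service grading scheme.
def conflict_opportunities(left_turns_per_hr: float, opposing_vol_per_hr: float,
                           cycles_per_hr: float) -> float:
    """Toy crossing-maneuver measure: scales with the product of conflicting
    flows per signal cycle. The functional form is an assumption."""
    return left_turns_per_hr * opposing_vol_per_hr / (cycles_per_hr * 3600.0)

def safety_los(co_rate: float) -> str:
    """Map a conflict-opportunity rate to a letter grade (invented thresholds)."""
    for limit, grade in [(0.5, "A"), (1.0, "B"), (2.0, "C"), (4.0, "D"), (8.0, "E")]:
        if co_rate <= limit:
            return grade
    return "F"

rate = conflict_opportunities(120, 800, 60)
print(f"conflict-opportunity rate = {rate:.2f}, safety LOS = {safety_los(rate)}")
```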


Predicting Oxygen Uptake for Men with Moderate to Severe Chronic Obstructive Pulmonary Disease (COPD환자에서 6분 보행검사를 이용한 최대산소섭취량 예측)

  • Kim, Changhwan;Park, Yong Bum;Mo, Eun Kyung;Choi, Eun Hee;Nam, Hee Seung;Lee, Sung-Soon;Yoo, Young Won;Yang, Yun Jun;Moon, Joung Wha;Kim, Dong Soon;Lee, Hyang Yi;Jin, Young-Soo;Lee, Hye Young;Chun, Eun Mi
    • Tuberculosis and Respiratory Diseases
    • /
    • v.64 no.6
    • /
    • pp.433-438
    • /
    • 2008
  • Background: Measurement of the maximum oxygen uptake in patients with chronic obstructive pulmonary disease (COPD) has been used to determine exercise intensity and to estimate the response to treatment during pulmonary rehabilitation. However, cardiopulmonary exercise testing is not widely available in Korea. The 6-minute walk test (6MWT) is a simple method of measuring a patient's exercise capacity; it provides highly reliable data and, under a standardized protocol, reflects fluctuations in exercise capacity relatively well. The prime objective of the present study was to develop a regression equation for estimating the peak oxygen uptake (VO₂) of men with moderate to very severe COPD from the results of a 6MWT. Methods: A total of 33 male patients with moderate to very severe COPD agreed to participate in this study. Pulmonary function testing, cardiopulmonary exercise testing, and a 6MWT were performed at their first visits. An index of work (6Mwork = 6-minute walk distance [6MWD] × body weight) was calculated for each patient. The variables closely related to the peak VO₂ were identified through correlation analysis, and with these variables an equation to predict the peak VO₂ was generated by multiple linear regression. Results: The peak VO₂ averaged 1,015±392 ml/min, and the mean 6MWD was 516±195 meters. The 6Mwork (r = 0.597) was better correlated with the peak VO₂ than the 6MWD (r = 0.415). The other variables highly correlated with the peak VO₂ were the FEV₁ (r = 0.742), DLco (r = 0.734), and FVC (r = 0.679). The derived prediction equation was VO₂ (ml/min) = (274.306 × FEV₁) + (36.242 × DLco) + (0.007 × 6Mwork) − 84.867. Conclusion: When measurement of the peak VO₂ is not possible, we consider the 6MWT a simple alternative. Of course, a much larger trial is necessary to validate our prediction equation.
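
The reported regression equation can be applied directly; a small sketch follows. The input units are assumed from the abstract (FEV₁ in liters, DLco in ml/min/mmHg, 6Mwork = 6MWD in meters × body weight in kg) and the example inputs are placeholders.

```python
# The prediction equation reported above, implemented directly.
def predict_peak_vo2(fev1: float, dlco: float, walk_distance_m: float,
                     body_weight_kg: float) -> float:
    """Peak oxygen uptake (ml/min) for men with moderate to very severe COPD,
    per the regression equation in the abstract. Units are assumptions."""
    six_m_work = walk_distance_m * body_weight_kg
    return 274.306 * fev1 + 36.242 * dlco + 0.007 * six_m_work - 84.867

# Example with placeholder inputs: FEV1 = 1.2 L, DLco = 12, 6MWD = 516 m, 65 kg
print(round(predict_peak_vo2(1.2, 12, 516, 65)))  # ~914 ml/min
```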

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. In this flood of information, attempts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful and promising, because new information is generated constantly and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information continues to emerge, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of the knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it offers a practical and simple automatic knowledge extraction method. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG Electronics, KiaMtr, and Mando, show markedly lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity recognition tool and fed to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the model's especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
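
A minimal sketch of the scoring step just described: one NTN-style score function per stock applied to a one-hot entity vector, with the argmax giving the predicted stock. The single-argument bilinear form, dimensions, and random parameters are illustrative assumptions (the original NTN scores entity pairs, and training is omitted here).

```python
# Hypothetical sketch: per-stock neural-tensor-network scoring of an entity.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_stocks = 100, 4, 30            # entity dim (top-100 one-hot), tensor slices, stocks

class StockNTN:
    def __init__(self):
        self.W = rng.normal(0, 0.1, (k, d, d))   # bilinear tensor slices
        self.V = rng.normal(0, 0.1, (k, d))      # linear term
        self.b = np.zeros(k)                     # bias
        self.u = rng.normal(0, 0.1, k)           # output weights

    def score(self, e: np.ndarray) -> float:
        bilinear = np.einsum("i,kij,j->k", e, self.W, e)   # e^T W[slice] e
        return float(self.u @ np.tanh(bilinear + self.V @ e + self.b))

models = [StockNTN() for _ in range(n_stocks)]   # one (untrained) function per stock

entity = np.zeros(d)
entity[17] = 1.0                                 # one-hot vector for a new entity
scores = [m.score(entity) for m in models]
print("predicted stock index:", int(np.argmax(scores)))
```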

The differences of STO between before and after presurgical orthodontics in skeletal Class III malocclusions (골격성 III급 부정교합자에서 술 전 교정치료 전과 후의 수술계획의 차이)

  • Lee, Eun-Ju;Son, Woo-Sung;Park, Soo-Byung;Kim, Seong-Sik
    • The Korean Journal of Orthodontics
    • /
    • v.38 no.3
    • /
    • pp.175-186
    • /
    • 2008
  • Objective: To evaluate the discrepancies between the initial surgical treatment objective (STO) and the final STO in Class III malocclusions, and to find which factors are related to the discrepancies. Methods: Twenty patients were selected for the extraction group and 20 patients for the non-extraction group. They were diagnosed as skeletal Class III and received presurgical orthodontic treatment and mandibular set-back surgery at Pusan National University Hospital. The lateral cephalograms were analyzed for the initial STO (T1s) at pretreatment and the final STO (T2s) after presurgical orthodontic treatment, and the landmarks were specified by their coordinates on the X and Y axes. Results: In the extraction group, the differences in hard tissue points (T1s-T2s) were statistically significant for the X coordinates of the upper central incisor edge, upper first molar mesial end surface, lower central incisor apex, and lower first molar mesial end surface and mesio-buccal cusp, and for the Y coordinates of the upper central incisor edge, upper central incisor apex, and upper first molar mesio-buccal cusp. In the non-extraction group, the differences were statistically significant for the X coordinates of the upper central incisor edge, lower central incisor apex, and lower first molar mesial end surface, and for the Y coordinate of the lower central incisor apex. In the extraction group, the upper arch length discrepancy (UALD) had a statistically significant effect on the maxillary incisor and first molar estimations. Lower arch length discrepancy and IMPA had statistically significant effects on the mandibular incisor estimation in both groups. Conclusions: The discrepancies between the initial and final STO, and the factors contributing to the accuracy of the initial STO, must be considered in the treatment planning of Class III surgical patients to increase the accuracy of prediction.
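
A small sketch of the kind of landmark comparison reported above: a paired test of one landmark coordinate between the initial (T1s) and final (T2s) STO. The coordinate values are placeholders, not study data, and the paired t-test is an assumed choice of test.

```python
# Hypothetical sketch: paired comparison of a cephalometric landmark coordinate
# between initial STO (T1s) and final STO (T2s). Values are invented.
import numpy as np
from scipy import stats

t1s_x = np.array([71.2, 69.8, 73.1, 70.4, 72.0])  # upper incisor edge, X (mm)
t2s_x = np.array([69.9, 68.5, 71.8, 69.7, 70.6])

t, p = stats.ttest_rel(t1s_x, t2s_x)
print(f"mean T1s-T2s difference = {np.mean(t1s_x - t2s_x):.2f} mm, p = {p:.3f}")
```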

Throughfall, Stemflow and Interception Loss at Pinus taeda and Pinus densiflora stands (테다소나무림과 소나무림에서의 수관통과우량(樹冠通過雨量), 수간유하우량(樹幹流下雨量) 및 차단손실우량(遮斷損失雨量))

  • Min, Hong-Jin;Woo, Bo-Myeong
    • Journal of Korean Society of Forest Science
    • /
    • v.84 no.4
    • /
    • pp.502-516
    • /
    • 1995
  • The objective of this study was to estimate throughfall, stemflow, interception loss, and net rainfall, and to understand the factors affecting the interception process, at a Pinus taeda stand and a Pinus densiflora stand in the Research Forests of Seoul National University, located in Choosan, Kwangyang, Chollanamdo. 1. The gross rainfall during the period of field observation was 3,107.6 mm (average 1,035.9 mm/year). Most daily rainfall was under 30 mm: 90% in 1992, 81% in 1993, and 88% in 1994. 2. Throughfall, stemflow, interception loss, and net rainfall were each expressed as a function of gross rainfall. The overall throughfall collected during the period of field observation, out of the total rainfall of 3,107.6 mm, was 2,432.5 mm (78.3%) at the Pinus taeda stand and 2,699.6 mm (86.9%) at the Pinus densiflora stand. The canopy storage capacity, determined from the prediction equation relating gross rainfall and throughfall, was 1.1 mm at the Pinus taeda stand and 1.3 mm at the Pinus densiflora stand. 3. The sums of stemflow over the total measured rainfall at the Pinus taeda and Pinus densiflora stands were 227.3 mm (7.3%) and 62.7 mm (2.0%), respectively. The minimum rainfall causing stemflow was estimated as 7.2 mm at the Pinus taeda stand and 1.9 mm at the Pinus densiflora stand. 4. Interception loss accounted for 447.8 mm (14.4%) at the Pinus taeda stand and 345.3 mm (11.1%) at the Pinus densiflora stand. 5. Net rainfall was 2,659.8 mm (85.6%) at the Pinus taeda stand and 2,762.3 mm (88.9%) at the Pinus densiflora stand. 6. The rates of throughfall and stemflow increased with increasing gross rainfall, although the amounts of throughfall and stemflow became constant above 30 mm at the Pinus taeda stand and 50 mm at the Pinus densiflora stand. The rate of interception loss decreased with increasing gross rainfall, although the amount of interception loss became constant above 50 mm at both stands.
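
A small sketch of how a canopy storage capacity can be read off the gross rainfall-throughfall regression mentioned above, under the common assumption that the gross rainfall needed before throughfall begins (the x-intercept of the fitted line) approximates the storage capacity. The event values are placeholders, not the study's measurements.

```python
# Hypothetical sketch: estimating canopy storage capacity from the
# throughfall = a * gross_rainfall + b regression over storm events.
import numpy as np

gross = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 80.0])       # gross rainfall (mm)
throughfall = np.array([3.0, 8.4, 14.9, 26.8, 38.9, 62.6])  # throughfall (mm)

a, b = np.polyfit(gross, throughfall, 1)                    # slope, intercept
print(f"throughfall = {a:.2f} * gross {b:+.2f}")
print(f"estimated canopy storage capacity ~ {-b / a:.1f} mm")  # x-intercept
```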


Relationship between Steady Flow and Dynamic Rheological Properties for Viscoelastic Polymer Solutions - Examination of the Cox-Merz Rule Using a Nonlinear Strain Measure - (점탄성 고분자 용액의 정상유동특성과 동적 유변학적 성질의 상관관계 -비선헝 스트레인 척도를 사용한 Cox-Merz 법칙의 검증-)

  • Song, Ki-Won;Kim, Dae-Sung;Chang, Gap-Sik
    • The Korean Journal of Rheology
    • /
    • v.10 no.4
    • /
    • pp.234-246
    • /
    • 1998
  • The objective of this study is to investigate the correlation between the steady shear flow (nonlinear) and dynamic viscoelastic (linear) properties of concentrated polymer solutions. Using an Advanced Rheometric Expansion System (ARES) and a Rheometrics Fluids Spectrometer (RFS II), the steady shear viscosity and the dynamic viscoelastic properties of concentrated poly(ethylene oxide) (PEO), polyisobutylene (PIB), and polyacrylamide (PAAm) solutions were measured over a wide range of shear rates and angular frequencies. The validity of several previously proposed relationships was checked against the measured data. In addition, the effect of solution concentration on the applicability of the Cox-Merz rule was examined by comparing the steady flow viscosity with the magnitude of the complex viscosity. Finally, the applicability of the Cox-Merz rule was discussed theoretically by introducing a nonlinear strain measure. The main results can be summarized as follows: (1) Among the previously proposed relationships considered in this study, the Cox-Merz rule, which states the equivalence of the steady flow viscosity and the magnitude of the complex viscosity, has the best validity. (2) For polymer solutions of relatively low concentration, the steady flow viscosity is higher than the complex viscosity; this relation between the two viscosities is reversed for highly concentrated solutions. (3) The nonlinear strain measure decreases with increasing strain magnitude after reaching a maximum in the small-strain range; this behavior differs from the theoretical prediction of a damped oscillatory shape. (4) The applicability of the Cox-Merz rule is influenced by the β value, the slope of the nonlinear strain measure (i.e., the degree of nonlinearity) at large shear deformations: the rule applies better as the β value becomes smaller.
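
A short sketch of the Cox-Merz check described above: compare the steady shear viscosity η(γ̇) with the magnitude of the complex viscosity |η*(ω)| at matched γ̇ = ω. Here a Carreau curve stands in for both measured data sets, purely as an assumption for illustration.

```python
# Hypothetical sketch: quantifying Cox-Merz agreement between steady and
# dynamic viscosity curves. The Carreau model replaces real measurements.
import numpy as np

def carreau(rate, eta0=50.0, lam=2.0, n=0.45):
    """Carreau viscosity model (Pa.s), used here as synthetic 'data'."""
    return eta0 * (1.0 + (lam * rate) ** 2) ** ((n - 1.0) / 2.0)

rates = np.logspace(-2, 2, 9)          # shear rate (1/s) matched to frequency (rad/s)
eta_steady = carreau(rates)            # steady flow viscosity
eta_complex = carreau(rates) * 1.05    # pretend dynamic data, offset 5%

deviation = np.abs(eta_steady - eta_complex) / eta_steady
print(f"max Cox-Merz deviation: {deviation.max():.1%}")
```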
