• Title/Summary/Keyword: Recommendation Model


Assessment of Water Control Model for Tomato and Paprika in the Greenhouse Using the Penman-Monteith Model (Penman-Monteith을 이용한 토마토와 파프리카의 증발산 모델 평가)

  • Somnuek, Siriluk;Hong, Youngsin;Kim, Minyoung;Lee, Sanggyu;Baek, Jeonghyun;Kwak, Kangsu;Lee, Hyondong;Lee, Jaesu
    • Journal of Bio-Environment Control
    • /
    • v.29 no.3
    • /
    • pp.209-218
    • /
    • 2020
  • This paper investigated the actual crop evapotranspiration (ETc) of tomato and paprika planted in test beds in the greenhouse. Crop water requirement (CWR) is the amount of water required to compensate for the ETc loss from the crop. The main objectives of the study were to assess whether the actual crop watering (ACW) was adequate for the CWR of tomato and paprika, and what amount of ACW should be irrigated to each crop. ETc was estimated using the Penman-Monteith model (P-M) for each crop. ACW was calculated as the difference between the amount of nutrient supply water and the amount of nutrient drainage water. The ACW and CWR of each crop were determined, compared, and assessed. Results indicated that CWR-tomato was around 100 to 1,200 ml/day, while CWR-paprika ranged from 100 to 500 ml/day. Comparison of the ACW and CWR of each crop found that the difference between ACW and CWR fluctuated with days after planting (DAP). However, the differences could be divided into two phases: first, the ACW of each crop was less than the CWR in the initial phase (up to 60 DAP), by around 500 ml/day and 91 ml/day, respectively; then, the ACW of each crop was greater than the CWR after 60 DAP until the end of cultivation, by approximately 400 ml/day for tomato and 178 ml/day for paprika. ETc assessment is necessary to correctly quantify crop irrigation water needs, and it provides an accurate short-term estimate of CWR in the greenhouse for optimal irrigation scheduling. Thus, reducing the ACW of tomato and paprika in the greenhouse is recommended. The ACW applied should range from 100 to 1,200 ml/day for tomato and from 100 to 500 ml/day for paprika, depending on DAP.
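The abstract does not reproduce the model itself, but the FAO-56 form of the Penman-Monteith equation is the standard way ETc is estimated in studies like this one: a reference evapotranspiration ET0 is computed from daily weather variables and scaled by a crop coefficient Kc. A minimal sketch follows; the function names and the Kc scaling step are assumptions on my part, not taken from the paper:

```python
def fao56_et0(delta, Rn, G, gamma, T, u2, es, ea):
    """Reference evapotranspiration ET0 (mm/day), FAO-56 Penman-Monteith form.

    delta : slope of the saturation vapour-pressure curve (kPa/degC)
    Rn, G : net radiation and soil heat flux (MJ/m^2/day)
    gamma : psychrometric constant (kPa/degC)
    T     : mean daily air temperature (degC)
    u2    : wind speed at 2 m height (m/s)
    es, ea: saturation / actual vapour pressure (kPa)
    """
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

def crop_et(et0, kc):
    """Crop evapotranspiration ETc = Kc * ET0 (mm/day); Kc is crop-specific."""
    return kc * et0
```

With typical mid-latitude daily values this gives a reference rate near 4 mm/day; converting that to a per-plant CWR in ml/day, as reported in the abstract, would additionally require the ground area allotted to each plant.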

The Impact of Conflict and Influence Strategies Between Local Korean-Products-Selling Retailers and Wholesalers on Performance in Chinese Electronics Distribution Channels: On Moderating Effects of Relational Quality (중국 가전유통경로에서 한국제품 현지 판매업체와 도매업체간 갈등 및 영향전략이 성과에 미치는 영향: 관계 질의 조절효과)

  • Chun, Dal-Young;Kwon, Joo-Hyung;Lee, Guo-Ming
    • Journal of Distribution Research
    • /
    • v.16 no.3
    • /
    • pp.1-32
    • /
    • 2011
  • I. Introduction: In the Chinese electronics industry, local wholesalers are still dominant, but power is rapidly shifting from wholesalers to retailers because foreign big retailers and local mass merchandisers have recently been growing fast. During such a transitional period, conflicts among channel members emerge as important issues. For example, when wholesalers who hold more power exercise influence strategies to maintain their status, conflicts among manufacturer, wholesaler, and retailer will be intensified. Korean electronics companies in China need differentiated channel strategies that deal with wholesalers and retailers simultaneously in order to sell more Korean products in competition with foreign firms. For example, Korean electronics firms should utilize 'guanxi', or relational quality, to form long-term relationships with wholesalers, rather than relying on power with its attendant conflict. The major purpose of this study is to investigate the impact of conflict, dependency, and influence strategies between local Korean-products-selling retailers and wholesalers on performance in Chinese electronics distribution channels. In particular, this paper proposes effective distribution strategies for Korean electronics companies in China by analyzing the moderating effects of 'guanxi'. II. Literature Review and Hypotheses: The specific purposes of this study are as follows. First, causes of conflicts between local Korean-products-selling retailers and wholesalers are examined from the perspectives of goal incongruence and role ambiguity, and the effects of these causes on the conflicts perceived by local retailers are then identified. Second, the effects of local retailers' dependency upon wholesalers on their perceived conflicts are investigated. Third, the effects on conflicts perceived by local retailers of non-coercive influence strategies exercised by wholesalers, such as information exchange and recommendation, and of coercive strategies, such as threats and legalistic pleas, are explored.
Fourth, the effects of the level of conflict perceived by local retailers on their financial performance and satisfaction are verified. Fifth, the moderating effects of relational quality, that is, 'guanxi' between wholesalers and retailers, are analyzed on the impact of wholesalers' influence strategies on retailers' performance. Finally, the moderating effects of relational quality are examined on the relationship between conflict and performance. To accomplish the above-mentioned research objectives, Figure 1 and the following research hypotheses are proposed and verified. III. Measurement and Data Analysis: To verify the proposed research model and hypotheses, data were collected from 97 retailers selling Korean electronic products located around the Central and Southern regions of China. Covariance analysis and moderated regression analysis were employed to validate the hypotheses. IV. Conclusion: The following results were drawn using structural equation modeling and hierarchical moderated regression. First, goal incongruence perceived by local retailers significantly affected conflict, but role ambiguity did not. Second, consistent with conflict spiral theory, the level of conflict decreased as retailers' dependency on wholesalers increased. Third, non-coercive influence strategies implemented by wholesalers, such as information exchange and recommendation, had significant effects on retailers' performance, such as sales and satisfaction, without raising conflict. On the other hand, coercive influence strategies such as threats and legalistic pleas had insignificant effects on performance, in spite of increasing the level of conflict. Fourth, 'guanxi', namely relational quality between local retailers and wholesalers, showed unique effects on performance. In the case of non-coercive influence strategies, 'guanxi' did not play the role of a moderator. Rather, relational quality and non-coercive influence strategies can serve as independent variables to enhance performance.
On the other hand, when 'guanxi' was well built through mutual trust and commitment, relational quality as a moderator can function positively to improve performance even when hostile, coercive influence strategies are implemented. Fifth, 'guanxi' significantly moderated the effects of conflict on performance. Even if conflict arises, local retailers who form solid relational quality can increase performance by dealing with dysfunctional conflict synergistically, compared with low-'guanxi' retailers. In conclusion, this study verified the importance of relational quality via 'guanxi' between local retailers and wholesalers in the Chinese electronics industry, because relational quality could offset the adverse effects of coercive influence strategies and conflict on performance.
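The hierarchical moderated regression used in the analysis above can be illustrated with synthetic data: enter the main effects first, then add the strategy × guanxi interaction and check the R² increase. Everything here (variable names, coefficients, data) is fabricated for illustration only, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 97  # same sample size as the study's 97 retailers
coercive = rng.normal(size=n)   # coercive influence strategy (hypothetical scale)
guanxi = rng.normal(size=n)     # relational quality, the moderator
# simulated moderation: guanxi weakens the negative effect of coercive strategies
perf = (1.0 - 0.5 * coercive + 0.3 * guanxi
        + 0.4 * coercive * guanxi + rng.normal(scale=0.1, size=n))

# hierarchical moderated regression: interaction term enters only in step 2
X1 = np.column_stack([np.ones(n), coercive, guanxi])
X2 = np.column_stack([X1, coercive * guanxi])
b1, res1, *_ = np.linalg.lstsq(X1, perf, rcond=None)
b2, res2, *_ = np.linalg.lstsq(X2, perf, rcond=None)

r2 = lambda res: 1.0 - res[0] / (n * perf.var())
print(f"R^2 step1={r2(res1):.3f}, step2={r2(res2):.3f}")
```

The jump in R² from step 1 to step 2, together with a significant interaction coefficient (here it should be recovered close to the true value of 0.4), is the signature of a moderating effect.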


Service Quality, Customer Satisfaction and Customer Loyalty of Mobile Communication Industry in China (중국이동통신산업중적복무질량(中国移动通信产业中的服务质量), 고객만의도화고객충성도(顾客满意度和顾客忠诚度))

  • Zhang, Ruijin;Li, Xiangyang;Zhang, Yunchang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.269-277
    • /
    • 2010
  • Previous studies have shown that the most important factor affecting customer loyalty in the service industry is service quality. However, on the subject of whether service quality has a direct or indirect effect on customer loyalty, scholars' views apparently vary. Some studies suggest that service quality has a direct and fundamental influence on customer loyalty (Bai and Liu, 2002). However, others have shown that service quality not only directly affects customer loyalty but also has an indirect impact on it by influencing customer satisfaction and perceived value (Cronin, Brady, and Hult, 2000). Currently, there are few domestic articles that specifically address the relationship between service quality and customer loyalty in the mobile communication industry. Moreover, research has studied customer loyalty as a whole variable, rather than breaking it down further into multiple dimensions. Based on this analysis, this paper summarizes previous study results, establishes an effect mechanism model among service quality, customer satisfaction, and customer loyalty in the mobile communication industry, and presents a statistical test of the model assumptions using customer survey data from Heilongjiang Mobile Company. It provides theoretical guidance for mobile service management based on the discussion of the hypothesis test results. For data collection, the sample comprised mobile users in Harbin city, and the survey was conducted by random sampling. Out of a total of 300 questionnaires, 276 (92.9%) were recovered. After excluding invalid questionnaires, 249 remained, for an effective rate of 82.6 percent for the study. Cronbach's ${\alpha}$ coefficient was adopted to assess scale reliability, and validity testing was conducted on the questionnaire from three aspects: content validity, construct validity, and convergent validity. The study tested for goodness of fit mainly with the absolute and relative fit indexes.
From the hypothesis testing results, four assumptions were not supported overall. The final effect relationships among service quality, customer satisfaction, and customer loyalty are demonstrated in Figure 2. On the whole, the service quality of the communication industry not only has a direct positive significant effect on customer loyalty, it also has an indirect positive significant effect on customer loyalty through customer satisfaction; the mechanism and extent of the effect on customer loyalty differ, being influenced by each dimension of service quality. This study used questionnaires from the existing literature at home and abroad and tested them in empirical research, with all questions adapted to seven-point Likert scales. With the SERVQUAL scale of Parasuraman, Zeithaml, and Berry (1988), or PZB, as a reference point, service quality was divided into five dimensions (tangibility, reliability, responsiveness, assurance, and empathy) and the questions were simplified down to nineteen. The measurement of customer satisfaction was based mainly on Fornell (1992) and Wang and Han (2003), ending up with four questions. Three indicators (price tolerance, first choice, and complaint reaction) were used to measure attitudinal loyalty, while repurchase intention, recommendation, and reputation measured behavioral loyalty. The collection and collation of literature data produced a model of the relationship among service quality, customer satisfaction, and customer loyalty in mobile communications, and China Mobile in the city of Harbin in Heilongjiang province was used for conducting an empirical test of the model and obtaining some useful conclusions. First, service quality in mobile communication is formed by the five factors mentioned earlier: tangibility, reliability, responsiveness, assurance, and empathy.
On the basis of PZB SERVQUAL, the study designed a measurement scale of service quality for the mobile communications industry and obtained these five factors through exploratory factor analysis. The factors fit basically with the five elements, supporting the concept of five elements of service quality for the mobile communications industry. Second, service quality in mobile communications has both direct and indirect positive effects on attitudinal loyalty, with the indirect effect being produced through the intermediary variable, customer satisfaction. There are also both direct and indirect positive effects on behavioral loyalty, with the indirect effect produced through two intermediary variables: customer satisfaction and attitudinal loyalty. This shows that better service quality and higher customer satisfaction make customers' attitudes toward service providers more positive and make it easier for them to show loyalty to those providers. In addition, the effect mechanism of each dimension of service quality on each dimension of customer loyalty is different. Third, customer satisfaction plays a significant intermediary role between service quality and attitudinal and behavioral loyalty, indicating that improving service quality can boost customer satisfaction and make it easier for satisfied customers to become loyal customers. Moreover, attitudinal loyalty plays a significant intermediary role between service quality and behavioral loyalty, indicating that only attitudinally and behaviorally loyal customers are truly loyal customers. The research conclusions have some implications for Chinese telecom operators and others seeking to upgrade their service quality. Two limitations of the study are also mentioned. First, all data were collected in the Heilongjiang area, so there might be a common method bias that skews the results.
Second, the discussion addresses the relationship between service quality and customer loyalty, setting customer satisfaction as a mediator, but does not consider other factors, such as customer value and consumer features. This research will be continued in the future.
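The scale-reliability step mentioned in the abstract has a simple closed form: Cronbach's α = k/(k−1) · (1 − Σσᵢ² / σ²_total), where σᵢ² are the item variances and σ²_total is the variance of the summed score. A NumPy sketch (the data layout is an assumption, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly correlated items yield α = 1; values around 0.7 or above are conventionally taken as acceptable reliability for a scale.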

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with the comparable TLFs produced by the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
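The gravity-model step described above distributes each zone's productions across destinations in proportion to attractions weighted by friction factors, and calibration compares the resulting trip-length frequency (TLF) against the OD-survey TLF. A production-constrained sketch, with array shapes and bin handling that are my assumptions rather than WisDOT's actual implementation:

```python
import numpy as np

def gravity_trips(P, A, F):
    """Production-constrained gravity model.

    T[i, j] = P[i] * A[j] * F[i, j] / sum_k A[k] * F[i, k]
    P: zonal productions, A: zonal attractions, F: friction factors per zone pair.
    """
    W = A[None, :] * F                      # attractions weighted by friction factors
    return P[:, None] * W / W.sum(axis=1, keepdims=True)

def trip_length_frequency(T, dist, bins):
    """Share of trips falling in each trip-length bin (the TLF)."""
    idx = np.digitize(dist.ravel(), bins)
    counts = np.bincount(idx, weights=T.ravel(), minlength=len(bins) + 1)
    return counts / counts.sum()
```

Calibration then iteratively adjusts the friction factor curve F until the model TLF matches the observed OD TLF, which is the "micro-scale" procedure the abstract describes.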
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
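The SELINK adjustment and the %RMSE evaluation described above reduce to a few simple formulas: the link adjustment factor is the ground count divided by the assigned volume, it is applied to every zone whose trips use the selected link, and model fit is reported as RMSE scaled by the mean ground count. A sketch in which the function and variable names are mine, not WisDOT's:

```python
import numpy as np

def link_adjustment_factor(ground_count, assigned_volume):
    """SELINK link adjustment factor: observed over assigned volume on a selected link."""
    return ground_count / assigned_volume

def pct_rmse(assigned, counts):
    """%RMSE of assigned link volumes against ground counts."""
    assigned, counts = np.asarray(assigned, float), np.asarray(counts, float)
    rmse = np.sqrt(np.mean((assigned - counts) ** 2))
    return 100.0 * rmse / counts.mean()

def adjust_zones(productions, zone_uses_link, factor):
    """One SELINK step: scale every zone whose trips traverse the selected link."""
    p = np.asarray(productions, float).copy()
    p[zone_uses_link] *= factor
    return p
```

For example, an LV/GC ratio of 0.958 on a selected link would scale the productions and attractions of all contributing zones down slightly; repeating this over the 32 selected links, for up to the three adjustment rounds the abstract reports, converges the assigned volumes toward the ground counts.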


Glass Dissolution Rates From MCC-1 and Flow-Through Tests

  • Jeong, Seung-Young
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2004.06a
    • /
    • pp.257-258
    • /
    • 2004
  • The dose from radionuclides released from high-level radioactive waste (HLW) glasses as they corrode must be taken into account when assessing the performance of a disposal system. In the performance assessment (PA) calculations conducted for the proposed Yucca Mountain, Nevada, disposal system, the release of radionuclides is conservatively assumed to occur at the same rate the glass matrix dissolves. A simple model was developed to calculate the glass dissolution rate of HLW glasses in these PA calculations [1]. For the PA calculations that were conducted for Site Recommendation, it was necessary to identify ranges of parameter values that bounded the dissolution rates of the wide range of HLW glass compositions that will be disposed. The values and ranges of the model parameters for the pH and temperature dependencies were extracted from the results of SPFT, static leach tests, and Soxhlet tests available in the literature. Static leach tests were conducted with a range of glass compositions to measure values for the glass composition parameter. The glass dissolution rate depends on temperature, pH, and the compositions of the glass and solution. The dissolution rate is calculated using Eq. 1: $\mathrm{rate} = k_{0} \cdot 10^{\eta\,\mathrm{pH}} \cdot e^{-E_{a}/RT} \cdot (1 - Q/K) + k_{long}$, where $k_{0}$, $\eta$, and $E_{a}$ are the parameters for glass composition, pH dependence, and temperature dependence, respectively, and R is the gas constant. The term $(1 - Q/K)$ is the affinity term, where Q is the ion activity product of the solution and K is the pseudo-equilibrium constant for the glass.
Values of the parameters $k_0$, $\eta$, and $E_a$ are determined under test conditions where the value of Q is maintained near zero, so that the value of the affinity term remains near 1. The dissolution rate under conditions in which the value of the affinity term is near 1 is referred to as the forward rate. This is the highest dissolution rate that can occur at a particular pH and temperature. The value of the parameter K is determined from experiments in which the value of the ion activity product approaches the value of K. This results in a decrease in the value of the affinity term and the dissolution rate. The highly dilute solutions required to measure the forward rate and extract values for $k_0$, $\eta$, and $E_a$ can be maintained by conducting dynamic tests in which the test solution is removed from the reaction cell and replaced with fresh solution. In the single-pass flow-through (SPFT) test method, this is done by continuously pumping the test solution through the reaction cell. Alternatively, static tests can be conducted with sufficient solution volume that the solution concentrations of dissolved glass components do not increase significantly during the test. Both the SPFT and static tests can be conducted for a wide range of pH values and temperatures. Both static and SPFT tests have shortcomings. The SPFT test requires analysis of several solutions (typically 6-10) at each of several flow rates to determine the glass dissolution rate at each pH and temperature. As will be shown, the rate measured in an SPFT test depends on the solution flow rate. The solutions in static tests will eventually become concentrated enough to affect the dissolution rate. In both the SPFT and static test methods, a compromise is required between the need to minimize the effects of dissolved components on the dissolution rate and the need to attain solution concentrations that are high enough to analyze.
In this paper, we compare the results of static leach tests and SPFT tests conducted with a simple 5-component glass to confirm the equivalence of SPFT tests and static tests conducted with pH buffer solutions. Tests were conducted over the range of pH values that are most relevant for waste glass dissolution in a disposal system. The glass and temperature used in the tests were selected to allow direct comparison with SPFT tests conducted previously. The ability to measure parameter values with more than one test method, and an understanding of how the rate measured in each test is affected by various test parameters, provides added confidence in the measured values. The dissolution rate of a simple 5-component glass was measured at pH values of 6.2, 8.3, and 9.6 and $70^{\circ}C$ using static tests and single-pass flow-through (SPFT) tests. Similar rates were measured with the two methods. However, the measured rates are about 10X higher than the rates measured previously for a glass having the same composition using an SPFT test method. Differences are attributed to effects of the solution flow rate on the glass dissolution rate and to how the specific surface area of crushed glass is estimated. This comparison indicates the need to standardize the SPFT test procedure.
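Eq. 1 above is straightforward to evaluate numerically. The sketch below implements it directly; the parameter values are placeholders, since the abstract reports no numeric $k_0$, $\eta$, or $E_a$:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def dissolution_rate(k0, eta, pH, Ea, T, Q, K, k_long=0.0):
    """Eq. 1: rate = k0 * 10^(eta*pH) * exp(-Ea/(R*T)) * (1 - Q/K) + k_long.

    k0, eta, Ea: glass-composition, pH-dependence, and temperature-dependence
    parameters; T in kelvin. (1 - Q/K) is the affinity term, with Q the ion
    activity product and K the pseudo-equilibrium constant for the glass.
    """
    affinity = 1.0 - Q / K
    return k0 * 10.0 ** (eta * pH) * math.exp(-Ea / (R * T)) * affinity + k_long

# forward rate: dilute solution, Q ~ 0, so the affinity term is ~ 1
# (pH 9.6 and 70 degC = 343.15 K match the test conditions in the abstract;
#  k0, eta, and Ea are placeholder values)
forward = dissolution_rate(k0=1e7, eta=0.4, pH=9.6, Ea=80e3, T=343.15, Q=0.0, K=1e-3)
```

As the text explains, driving Q toward K collapses the affinity term and hence the rate toward $k_{long}$, which is why SPFT tests keep the solution dilute when extracting the forward-rate parameters.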


Development and Preliminary Test of a Prototype Program to Recommend Nitrogen Topdressing Rate Using Color Digital Camera Image Analysis at Panicle Initiation Stage of Rice (디지털 카메라 칼라영상 분석을 이용한 벼 질소 수비량 추천 원시 프로그램의 개발과 예비 적용성 검토)

  • Chi, Jeong-Hyun;Lee, Jae-Hong;Choi, Byoung-Rourl;Han, Sang-Wook;Kim, Soon-Jae;Park, Kyeong-Yeol;Lee, Kyu-Jong;Lee, Byun-Woo
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.55 no.4
    • /
    • pp.312-318
    • /
    • 2010
  • This study was carried out to develop and test a prototype program that recommends the nitrogen topdressing rate using color digital camera images taken of a rice field at the panicle initiation stage (PIS). This program comprises four models to estimate shoot N content (PNup) by color digital image analysis, shoot N accumulation from PIS to maturity (PHNup), yield, and protein content of rice. The models were formulated using a data set from N rate experiments in 2008. PNup was found to be estimated by a non-linear regression model using canopy cover and normalized green values calculated from color digital image analysis as predictor variables. PHNup could be predicted by a quadratic regression model from PNup and the N fertilization rate at panicle initiation stage with an $R^2$ of 0.923. Yield and protein content of rice could also be predicted by quadratic regression models using PNup and PHNup as predictor variables with $R^2$ of 0.859 and 0.804, respectively. The performance of the program integrating the above models to recommend the N topdressing rate at PIS was field-tested in 2009. The N topdressing rate prescribed by the program for the target protein content of 6.0% was lower by about 30% compared to the fixed rate of 30% that is conventionally recommended as the split application rate of N fertilizer at PIS, while rice yield in the plots top-dressed with the prescribed N rate was not different from that in the plots top-dressed with the fixed N rate of 30%, and the protein content of the rice was similar or a little lower as well. Coefficients of variation in rice yield and quality parameters were also reduced substantially by the prescribed N topdressing. These results indicate that N rate recommendation using the analysis of color digital camera images is a promising approach to precise management of N fertilization.
However, for the universal and practical application the component models of the program are needed to be improved so as to be applicable to the diverse edaphic and climatic condition.
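The quadratic-regression step the abstract describes (predicting PHNup from PNup and the N rate at PIS) can be sketched with ordinary least squares on a quadratic response surface. This is a minimal illustration only: the coefficients and data below are synthetic, not the study's fitted model.

```python
import numpy as np

# Hypothetical sketch of fitting a quadratic regression of shoot N
# accumulation after panicle initiation (PHNup) on shoot N content at
# PIS (PNup) and the N topdressing rate. All numbers are invented.
rng = np.random.default_rng(0)
pnup = rng.uniform(4.0, 12.0, 50)     # synthetic PNup values
n_rate = rng.uniform(0.0, 9.0, 50)    # synthetic N rates at PIS
phnup = 1.0 + 0.8 * pnup + 0.6 * n_rate - 0.02 * pnup**2 \
        + rng.normal(0.0, 0.1, 50)    # synthetic response with noise

# Design matrix for a full quadratic surface in the two predictors.
X = np.column_stack([np.ones_like(pnup), pnup, n_rate,
                     pnup**2, n_rate**2, pnup * n_rate])
beta, *_ = np.linalg.lstsq(X, phnup, rcond=None)

# Coefficient of determination of the fitted surface.
pred = X @ beta
r2 = 1.0 - np.sum((phnup - pred) ** 2) / np.sum((phnup - phnup.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

With a good fit, $R^2$ approaches the values the paper reports (0.8-0.9); the same design-matrix pattern applies to the yield and protein models that use PNup and PHNup as predictors.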

The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

  • Lee, Jong-Ho;Jung, Yun-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.147-187
    • /
    • 2008
  • Online game researchers have focused on flow and the factors that influence it. Flow is conceptualized as an optimal experience state and is useful for explaining the online game experience. Many game studies have examined customer loyalty and flow in online game play; in describing the specific game experience, however, they do not examine the multidimensional experience process. Flow is not a construct that captures the absorbing process but one that captures its result; hence, flow is not adequate for examining the multidimensional experience of games. Online games belong to hedonic consumption. Hedonic consumption is a relatively new field in consumer research that explores the consumption experience from an experiential view (Hirschman and Holbrook 1982): not as an information-processing event but as a phenomenological, primarily subjective state. It includes playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. The online game experience is therefore best approached through the experiential view of hedonic consumption. The objective of this paper was to make up for gaps in our understanding of the online game experience by developing a framework that gives better insight into its hedonic experience. We developed this framework by integrating and extending existing research in marketing, online games, and hedonic responses; we then discussed several expectations for the framework, and concluded by discussing the results of the study and providing general recommendations and directions for future research. In hedonic response research, Lacher (1994) and Jongho Lee and Yunhee Jung (2005; 2006) served as the fundamental starting point of our work.
A common element in this line of research is the repeated identification of four hedonic responses: sensory, imaginal, emotional, and analytical. The validity of these four constructs has been shown for music (Lacher 1994) and movies (Jongho Lee and Yunhee Jung 2005; 2006). However, previous research on hedonic response did not show that the constructs stand in cause-effect relations, and although hedonic responses can differ by stimulus properties, the effects of stimulus properties were not shown. To fill this gap, while building largely on Lacher (1994) and Jongho Lee and Yunhee Jung (2005, 2006), we made several important adaptations with the primary goal of bringing the model into the online game context and compensating for the gaps in previous research. We retained the four constructs of hedonic response proposed by Lacher (1994): sensory, imaginal, emotional, and analytical. In this study, the sensory response is typified by physical movement (Yingling 1962); the imaginal response is typified by images, memories, or situations that the game evokes (Myers 1914); the emotional response represents the feelings one experiences when playing, such as pleasure, arousal, and dominance; and the analytical response captures the cognition seeking that the player engages in while playing (Myers 1912). This paper, however, differs in several important respects. We attempted to model the multi-dimensional experience process in online games and the cause-effect relations among hedonic responses, and we investigated the moderating effect of perceived complexity, since previous studies of hedonic responses did not show the influence of stimulus properties. According to Berlyne's theory (1960, 1974) of aesthetic response, perceived complexity is an important construct because it affects pleasure.
Pleasure in response to an object increases with complexity up to an optimal level; beyond that level, further increases in complexity cause pleasure to decline. We therefore expected perceived complexity to influence hedonic response in the game experience. We discussed the rationale for these changes and the assumptions of the resulting framework, and developed expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey measuring our theoretical constructs based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at the final survey. We investigated the proposed framework through a convenience sample, soliciting participation in a self-report survey from respondents with different levels of knowledge. All respondents participated, to different degrees, in these habitually practiced activities and received no compensation. Questionnaires were distributed to graduates, and 381 completed questionnaires were used in the analysis. The sample consisted of more men (n=225) than women (n=156). The study used multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL VIII; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity; the evidence from both the factor analysis and the reliability analysis supports the scales' internal consistency and construct validity. Second, we tested the hypothesized structural model.
We then divided the sample into two complexity groups and analyzed the hypothesized structural model for each group. The analysis suggests that hedonic responses play roles different from those hypothesized in our study. The results indicate that the hedonic responses (sensory, imaginal, emotional, and analytical) are positively related to respondents' game satisfaction, and that game satisfaction is related to higher game loyalty. Additionally, we found that perceived complexity matters for the online game experience: the importance of each hedonic response differs with perceived game complexity. Understanding the role of perceived complexity in hedonic response gives a better understanding of the mechanisms underlying the game experience. If a game is highly complex, the analytical response becomes the important one, so game producers and marketers should provide more cognitive stimuli; conversely, if a game has low complexity, the sensory response becomes important. Finally, we discussed several limitations of the study, suggested directions for future research, and concluded with a discussion of managerial implications. Our study provides managers with a basis for game strategies.
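Berlyne's inverted-U relation between complexity and pleasure, which motivates the moderating-variable design above, can be illustrated numerically. The quadratic form and numbers below are illustrative assumptions, not taken from the study.

```python
def pleasure(complexity, optimum=5.0, scale=1.0):
    """Toy inverted-U (Berlyne-style) response: pleasure peaks when
    complexity equals `optimum` and falls off on both sides of it.
    The functional form and parameters are illustrative only."""
    return scale * (1.0 - ((complexity - optimum) / optimum) ** 2)

# Pleasure rises toward the optimum, then declines past it.
low, peak, high = pleasure(2.0), pleasure(5.0), pleasure(8.0)
print(low, peak, high)
```

Splitting a sample at such an optimum is one way to form the low- and high-complexity groups used in the multigroup structural analysis.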


The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are equally diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms, but most do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm is appropriate for the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms is not fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. The purpose of this study is therefore to empirically analyze whether meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, meta-features of multi-class datasets were organized into two factors, data structure and data complexity, and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market-concentration measure, in the meta-features to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. Each dataset was classified with the algorithms selected for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM), applying 10-fold cross-validation to each dataset.
Oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of linear classifier, and hub score; F1-score was selected as the dependent variable. The results show that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant for classification performance; (2) the number of features, unlike the number of classes, has a significant and positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by classification algorithm were also consistent, except that in the per-algorithm regressions the Naïve Bayes algorithm, unlike the others, showed no significant effect for the number of features. This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. As a practical contribution, (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics.
(2) Many data scientists search for the optimal algorithm for a situation by repeatedly testing and tuning algorithm parameters because data characteristics differ, a process that wastes hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The study consists of an introduction, related research, the research model, the experiment, and a conclusion and discussion.
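The HHI meta-feature the paper adopts in place of the imbalance ratio is simply the sum of squared class shares, so it summarizes class imbalance in one number. A minimal sketch (the use of `Counter` and the sample labels are illustrative):

```python
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman Index of a label vector: the sum of squared
    class shares. It equals 1/n_classes for a perfectly balanced set and
    approaches 1.0 as a single class dominates."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

balanced = ["a"] * 50 + ["b"] * 50
skewed = ["a"] * 90 + ["b"] * 10
print(hhi(balanced))  # 0.5 (two equal classes)
print(hhi(skewed))    # ~0.82 (dominated by one class)
```

Computed per fold after oversampling, such a value can serve directly as a regressor against the fold's F1-score, as in the study's design.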

Development of the Monte Carlo Simulation Radiation Dose Assessment Procedure for NORM added Consumer Adhere·Non-Adhere Product based on ICRP 103 (ICRP 103 권고기반의 밀착형·비밀착형 가공제품 사용으로 인한 몬테칼로 전산모사 피폭선량 평가체계 개발)

  • Go, Ho-Jung;Noh, Siwan;Lee, Jae-Ho;Yeom, Yeon-Soo;Lee, Jai-Ki
    • Journal of Radiation Protection and Research
    • /
    • v.40 no.3
    • /
    • pp.124-131
    • /
    • 2015
  • Radiation exposure to humans can be caused by gamma rays emitted from the natural radioactive elements (such as uranium, thorium, and potassium and their decay products) of Naturally Occurring Radioactive Materials (NORM) or Technologically Enhanced Naturally Occurring Radioactive Materials (TENORM) added to consumer products. This study assumes activities of $1Bq{\cdot}g^{-1}$ for $^{238}U$, $^{235}U$, and $^{232}Th$ and $10Bq{\cdot}g^{-1}$ for $^{40}K$, with the gamma rays emitted from these natural radioactive elements in radioactive equilibrium. End-user circumstances were reflected, and the annual exposure dose for each product was evaluated with the Monte Carlo method based on the ICRP reference voxel phantoms and ICRP Recommendation 103. The consumer products were classified by whether they adhere to the skin (bracelet, necklace, wrist belt, ankle belt, knee belt, moxa stone) or not (gypsum board, anion wallpaper, anion paint). The geometric modeling reflected the Republic of Korea report "Residential Living Trend-distributions and Design Guidelines For Common Types of Household": a conservatively closed room model ($3m{\times}4m{\times}2.8m$) was designed, together with 3D segmentation and modeling of the ICRP reference phantoms. The end-user's usage time was taken from "Development and Application of Korean Exposure Factors," or conservatively assumed to be 24 hours when unknown. The resulting effective doses were 0.00003 ~ 0.47636 mSv per year, and the equivalent dose rates of the belt products confirmed the need for geometric modeling with the ICRP reference phantoms.
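The final bookkeeping step the abstract describes, multiplying each product's simulated effective dose rate by its annual usage time (conservatively 24 h/day when usage is unknown), reduces to simple arithmetic. All dose rates and usage hours below are invented placeholders, not the study's Monte Carlo results.

```python
# Sketch of converting per-product effective dose rates (mSv/h, assumed
# here, in reality from the Monte Carlo transport) into annual doses.
HOURS_PER_YEAR = 24 * 365  # conservative continuous use when usage unknown

products = {
    # product: (effective dose rate in mSv/h, usage hours per year)
    "bracelet":     (5.0e-6, HOURS_PER_YEAR),  # adhere-type, worn continuously
    "gypsum board": (1.0e-8, HOURS_PER_YEAR),  # non-adhere, room occupancy
    "moxa stone":   (2.0e-5, 100),             # short, intermittent use
}

annual_dose = {name: rate * hours for name, (rate, hours) in products.items()}
for name, dose in annual_dose.items():
    print(f"{name}: {dose:.5f} mSv/year")
```

The same pattern, summed over the products a given end-user actually uses, yields the per-year totals reported in the 0.00003 ~ 0.47636 mSv range.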

Health Care Utilization Pattern and Its Related Factors of Low-income Population with Abnormal Results through Health Examination (저소득층 건강검진 유소견자의 의료이용 양상 및 관련요인)

  • Kwon, Bog-Soon;Kam, Sin;Han, Chang-Hyun
    • Journal of agricultural medicine and community health
    • /
    • v.28 no.2
    • /
    • pp.87-105
    • /
    • 2003
  • Objectives: The purpose of this study was to examine the health care utilization pattern, and its related factors, of a low-income population with abnormal health examination results. Methods: Data were collected through a questionnaire survey of 263 persons aged 30 years or over who had abnormal results from a health examination at a Health Center. The survey was conducted in March 2003. The study employed Andersen's prediction model, the best-known medical demand model, and the data were analyzed by χ²-test and multiple logistic regression analysis. Results: The proportion of subjects using medical care for a thorough examination or treatment was 51.0%. In multiple logistic regression with medical utilization as the dependent variable, the variables affecting utilization were 'feeling about the abnormal result (anxiety versus no anxiety: odds ratio 2.25, 95% confidence interval 1.07-4.75)', 'type of health security (medicaid type I versus health insurance: odds ratio 2.82, 95% confidence interval 1.04-7.66; medicaid type II versus health insurance: odds ratio 3.22, 95% confidence interval 1.37-7.53)', 'experience of health examination during the past 2 years (odds ratio 2.39, 95% confidence interval 1.09-5.21)', and 'family members' response to the abnormal result (recommendation of medical utilization versus no response: odds ratio 4.90, 95% confidence interval 1.75-13.75; a family member recommending utilization and accompanying him/her versus no response: odds ratio 19.47, 95% confidence interval 5.01-75.73)'. The timing of medical utilization was 8-15 days after receiving the result (29.9%), 16-30 days after (27.6%), and 2-7 days after (20.9%), in that order. The most common reason for not utilizing medical care was that the result seemed insignificant (32.4%).
Conclusions: To promote medical utilization by the low-income population, health education about abnormal results and their management is needed for family members as well as for the person with the abnormal result, and a follow-up management program for persons with abnormal health examination results, such as home-visit health care, is also needed.
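The odds ratios with 95% confidence intervals reported above come from logistic regression, but the underlying quantity can be sketched from a 2×2 table using the standard log-OR normal approximation (Woolf method). The cell counts below are hypothetical; the abstract does not report them.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
         a = exposed & utilized,    b = exposed & not utilized,
         c = unexposed & utilized,  d = unexposed & not utilized.
    Uses the Woolf (log-OR normal approximation) interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for an 'anxiety vs no anxiety' comparison.
or_, lo, hi = odds_ratio_ci(40, 30, 25, 42)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A multiple logistic regression additionally adjusts each such ratio for the other predictors, which is why the study reports adjusted odds ratios rather than raw 2×2 values.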
