• Title/Summary/Keyword: biased response

Search Result 66

Development of A Performance Model of the Foodservice Industry

  • Seo, Kyung Hwa;Jeon, Yu Jung Jennifer;Lee, Soo Bum
    • Culinary Science and Hospitality Research, v.22 no.6, pp.132-144, 2016
  • This study reviewed previous research on the competence selection of foodservice firms and, based on the results, develops a firm performance model. Factors were classified into core competence, differentiation strategy, and management performance. Of 400 survey responses from executives and employees who had worked for over three years at headquarters (sales, finance, marketing/planning, R&D, etc.), a total of 302 questionnaires were used for the final analysis after excluding those with missing values and biased responses (response rate: 75.5%). The final research model showed the following fit: ${\chi}^2(df=170)=384.88$, ${\chi}^2/df=2.26$, GFI=0.90, NFI=0.92, CFI=0.95, RMSEA=0.07. The results indicated that CEO leadership, organizational culture, and human resource competencies are a driving force behind all of the competitive differentiation strategies. In addition, the R&D innovation, service, and marketing differentiation strategies are positively related to performance. The results validate that foodservice firms can reinforce strategic decision-making through a variety of core competencies and achieve sustained performance through competitive strategies.
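The reported fit statistics can be screened against conventional structural-equation-model cutoffs; a minimal sketch, where the index values are taken from the abstract but the cutoffs are common rules of thumb rather than the paper's own criteria:

```python
# SEM fit-index screening. Values come from the abstract; the cutoff
# thresholds are widely used conventions, not taken from the paper.
fit = {"chi2": 384.88, "df": 170, "GFI": 0.90, "NFI": 0.92,
       "CFI": 0.95, "RMSEA": 0.07}

checks = {
    "chi2/df < 3":   fit["chi2"] / fit["df"] < 3,
    "GFI >= 0.90":   fit["GFI"] >= 0.90,
    "NFI >= 0.90":   fit["NFI"] >= 0.90,
    "CFI >= 0.90":   fit["CFI"] >= 0.90,
    "RMSEA <= 0.08": fit["RMSEA"] <= 0.08,
}
print(round(fit["chi2"] / fit["df"], 2), all(checks.values()))
```

All five indices clear their conventional thresholds, consistent with the abstract's reading of the model as acceptable.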

Elimination of Outlier from Technology Growth Curve using M-estimator for Defense Science and Technology Survey (M-추정을 사용한 국방과학기술 수준조사 기술성장모형의 이상치 제거)

  • Kim, Jangheon
    • Journal of the Korea Institute of Military Science and Technology, v.23 no.1, pp.76-86, 2020
  • Technology growth curve methodology is commonly used in technology forecasting. A technology growth curve represents the path of product performance in relation to time or R&D investment. It is a useful tool for comparing technological performance between Korea and advanced nations and for describing inflection points, the improvement limit of a technology, technology innovation strategies, etc. However, fitting the curve to survey data often leads to model mis-specification, biased parameter estimation, and incorrect results, since expert survey data frequently contain outliers arising from subjective response characteristics. This paper proposes a method to eliminate outliers from a technology growth curve using an M-estimator. Experimental results from several pilot tests on real data in Defense Science and Technology Survey reports demonstrate an overall improvement in the fitted technology growth curves.
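The abstract does not give the paper's exact estimator or growth-curve form, so the sketch below illustrates the general idea on a simple linear fit: a Huber M-estimator, solved by iteratively reweighted least squares, downweights observations with large residuals so a single gross outlier barely moves the fit.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weights: 1 inside the threshold k, k/|r| outside,
    with residuals standardized by a robust (MAD-based) scale."""
    s = np.median(np.abs(residuals)) / 0.6745
    s = max(s, 1e-8)                      # guard against zero scale
    r = residuals / s
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

def irls_fit(x, y, iters=20):
    """Fit y = a + b*x by iteratively reweighted least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    for _ in range(iters):
        w = huber_weights(y - X @ beta)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Example: one gross outlier barely moves the robust slope.
x = np.arange(10.0)
y = 2 * x + 1.0
y[9] += 30.0                               # inject an outlier
beta = irls_fit(x, y)
print("robust intercept, slope:", beta.round(2))
```

The same reweighting idea carries over to nonlinear growth-curve fitting, where the weighted least-squares step is replaced by a weighted nonlinear solver.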

A Systematic Comparison of Time Use Instruments: Time Diary and Experience Sampling Method (생활시간 연구를 위한 측정도구의 비교 : 경험표집법과 시간일지)

  • Jeong, Jae-Ki
    • Survey Research, v.9 no.1, pp.43-68, 2008
  • This study compares two instruments for time use research: the time diary and the Experience Sampling Method (ESM). While previous studies show that the ESM and the full diary produce similar aggregate estimates, no previous study has examined the concordance rates of individual records from the two instruments. Based on the subsample who completed both instruments during the same time period in the 500 Family Study conducted by the Alfred P. Sloan Center on Parents, Children, and Work at the University of Chicago, we systematically compare the two instruments and evaluate their relative strengths. The results suggest that time diaries provide less biased time use estimates, and that, compared to the time diary, the ESM provides a more detailed description of everyday life. Implications for further research are discussed.
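A record-level concordance rate, as opposed to an aggregate comparison, can be illustrated as the fraction of matched time slots in which the two instruments record the same activity; the activity codes below are hypothetical, not the study's data:

```python
# Hypothetical activity codes recorded for the same person in the
# same time slots, once by a time diary and once at ESM beeps.
diary = ["work", "work", "meal", "tv", "sleep", "work"]
esm   = ["work", "meal", "meal", "tv", "sleep", "tv"]

matches = sum(d == e for d, e in zip(diary, esm))
concordance = matches / len(diary)
print(f"concordance rate: {concordance:.2f}")  # 4 of 6 slots agree
```

Aggregate totals can agree even when slot-by-slot records like these diverge, which is why the record-level comparison is informative.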


Development of the framework for quantitative cyber risk assessment in nuclear facilities

  • Kwang-Seop Son;Jae-Gu Song;Jung-Woon Lee
    • Nuclear Engineering and Technology, v.55 no.6, pp.2034-2046, 2023
  • Industrial control systems in nuclear facilities face increasing cyber threats due to the widespread use of information and communication equipment. To implement cyber security programs effectively under RG 5.71, cyber risks need to be assessed quantitatively. This is challenging, however, because historical threat data are limited and the Critical Digital Assets (CDAs) in nuclear facilities are highly customized. Previous works have focused on identifying data flows and the assets where data are stored and processed, which means those methods are heavily biased toward information security concerns. In nuclear facilities, cyber threats additionally need to be analyzed from a safety perspective. In this study, we use system-theoretic process analysis to identify system-level threat scenarios that could violate safety constraints. Instead of quantifying the likelihood of exploiting vulnerabilities, we quantify the Security Control Measures (SCMs) applied against the identified threat scenarios. We classify the system and CDAs into the four consequence-based classes presented in NEI 13-10 to analyze the adversary impact on CDAs. This allows the identified threat scenarios to be ranked according to the quantified SCMs. The proposed framework enables stakeholders to rank cyber risks more effectively and accurately, and to establish security and response strategies.
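One plausible reading of the ranking step is: order scenarios first by consequence class (NEI 13-10 style, highest impact first) and, within a class, rank weaker SCM coverage as higher risk. The scenario names, class labels, and scores below are purely illustrative, not taken from the paper:

```python
# Hypothetical threat scenarios: each carries a consequence-based
# class (A = highest impact .. D = lowest, in the spirit of NEI 13-10)
# and a quantified score for the security control measures (SCMs)
# applied against it (0..1, higher = better protected). All names and
# numbers here are illustrative assumptions.
scenarios = [
    {"name": "spoofed sensor input",       "cls": "A", "scm": 0.80},
    {"name": "HMI malware via USB",        "cls": "A", "scm": 0.55},
    {"name": "engineering WS phishing",    "cls": "B", "scm": 0.40},
    {"name": "log server tampering",       "cls": "D", "scm": 0.30},
]

CLASS_RANK = {"A": 0, "B": 1, "C": 2, "D": 3}

# Highest-consequence class first; within a class, lower SCM score
# (weaker protection) ranks as higher risk.
ranked = sorted(scenarios, key=lambda s: (CLASS_RANK[s["cls"]], s["scm"]))
for i, s in enumerate(ranked, 1):
    print(i, s["name"])
```

Note how a low-consequence scenario with weak controls still ranks below a well-protected high-consequence one, reflecting the consequence-first ordering.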

An Alternative Model for Determining the Optimal Fertilizer Level (수도(水稻) 적정시비량(適正施肥量) 결정(決定)에 대한 대체모형(代替模型))

  • Chang, Suk-Hwan
    • Korean Journal of Soil Science and Fertilizer, v.13 no.1, pp.21-32, 1980
  • Linear models, with and without site variables, have been investigated in order to develop an alternative methodology for determining optimal fertilizer levels. The resultant models are: (1) Model I is an ordinary quadratic response function formed by combining the simple response functions estimated at each site in block diagonal form, with parameters [${\gamma}^{(1)}_{m{\ell}}$], for m=1, 2, ${\cdots}$, n sites and degrees of polynomial ${\ell}$=0, 1, 2. (2) Model II is a multiple regression model with a set of site variables (including an intercept) repeated for each fertilizer level and the linear and quadratic terms of the fertilizer variables arranged in block diagonal form as in Model I. The parameters are [${\beta}_h\;{\gamma}^{(2)}_{m{\ell}}$] for h=0, 1, 2, ${\cdots}$, k site variables, m=1, 2, ${\cdots}$, n, and ${\ell}$=1, 2. (3) Model III is a classical response surface model, i.e., a common quadratic polynomial model for the fertilizer variables augmented with site variables and interactions between the site variables and the linear fertilizer terms. The parameters are [${\beta}_h\;{\gamma}_{\ell}\;{\theta}_{h'}$], for h=0, 1, ${\cdots}$, k, ${\ell}$=1, 2, and h'=1, 2, ${\cdots}$, k. (4) Model IV has the same basic structure as Model I, but its estimation procedure involves two stages. In stage 1, yields at each fertilizer level are regressed on the site variables, and the resulting predicted yields for each site are then regressed on the fertilizer variables in stage 2. Each model has been evaluated under the assumption that Model III is the postulated true response function. Under this assumption, Models I, II and IV give biased estimators of the linear fertilizer response parameter, with bias depending on the interaction between site variables and applied fertilizer variables. When the interaction is significant, Model III is the most efficient for calculating the optimal fertilizer level. It has been found that Model IV is always more efficient than Models I and II, with efficiency depending on the magnitude of ${\lambda}_m$, the mth diagonal element of $X(X'X)^{-1}X'$, where X is the site variable matrix. When the site variable by linear fertilizer interaction parameters are zero, or when the estimated interactions are unimportant, Model IV is demonstrated to be a reasonable alternative for calculating the optimal fertilizer level. The efficiencies of the models are compared using data from 256 fertilizer trials on rice conducted in Korea. Although Model III is usually preferred, the empirical results support the feasibility of using Model IV in practice when the estimated interaction between measured soil organic matter and applied nitrogen is not important.
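Whichever model supplies the quadratic coefficients, the optimal level follows from equating marginal yield value to the fertilizer price. A worked sketch with illustrative numbers (the coefficients and price ratio below are assumptions, not the paper's estimates):

```python
# Given an estimated quadratic response y = b0 + b1*N + b2*N**2
# (b2 < 0), the profit-maximizing rate solves b1 + 2*b2*N = r, where
# r is the fertilizer-to-crop price ratio. Numbers are illustrative.
b1, b2 = 0.50, -0.001   # linear and quadratic response coefficients
r = 0.20                # price of N per kg / price of rice per kg

n_opt = (r - b1) / (2 * b2)
print(f"optimal N rate: {n_opt:.0f} kg/ha")
```

The intercept b0 drops out of the calculation, which is why the bias results above focus on the linear and interaction terms rather than the overall yield level.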


Statistical Analysis of Clustered Interval-Censored Data with Informative Cluster Size (정보적군집 크기를 가진 군집화된 구간 중도절단자료 분석을 위한결합모형의 적용)

  • Kim, Yang-Jin;Yoo, Han-Na
    • Communications for Statistical Applications and Methods, v.17 no.5, pp.689-696, 2010
  • Interval-censored data are commonly found in studies of diseases that progress without symptoms and thus require clinical evaluation for detection. Several techniques have been suggested under an independence assumption. However, that assumption is not valid if the observations come from clusters. Furthermore, when cluster size is related to the response variable, commonly used methods can yield biased results. For example, in a study of lymphatic filariasis, a parasitic disease in which worms build several nests in the infected person's lymphatic vessels and reside there until adulthood, the response variable of interest is the nest-extinction time. Since nest extinction is checked by repeated ultrasound examinations, exact extinction times are not observed. Instead, the data consist of two examination points: the last examination time with living worms and the first examination time with dead worms. Furthermore, as Williamson et al. (2008) pointed out, larger numbers of nests show a tendency toward lower clearance rates; this association is referred to as an informative cluster size. To analyze the relationship between the number of nests and the interval-censored nest-extinction times, this study proposes a joint model for cluster size and clustered interval-censored failure data.
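The core of any interval-censored analysis is that each observation contributes P(L < T ≤ R) to the likelihood, where L and R are the last-alive and first-dead examination times. A minimal sketch under a simple exponential extinction-time model (the paper's joint model is far more elaborate; the interval data below are hypothetical):

```python
import math

def interval_loglik(rate, intervals):
    """Log-likelihood of interval-censored times under an exponential
    model: each (L, R) pair contributes P(L < T <= R) = S(L) - S(R),
    where S(t) = exp(-rate * t)."""
    ll = 0.0
    for L, R in intervals:
        ll += math.log(math.exp(-rate * L) - math.exp(-rate * R))
    return ll

# Hypothetical (last-alive, first-dead) examination times in months.
data = [(2, 5), (0, 3), (4, 9), (1, 6)]

# Crude grid search for the maximum-likelihood extinction rate.
rates = [i / 100 for i in range(1, 101)]
mle = max(rates, key=lambda r: interval_loglik(r, data))
print("MLE extinction rate:", mle)
```

A right-censored nest (never observed dead) is the special case R = ∞, where the contribution reduces to S(L).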

Quantitative Approaches for the Determination of Volatile Organic Compounds (VOC) and Its Performance Assessment in Terms of Solvent Types and the Related Matrix Effects

  • Ullah, Md. Ahsan;Kim, Ki-Hyun;Szulejko, Jan E.;Choi, Dal Woong
    • Asian Journal of Atmospheric Environment, v.11 no.1, pp.1-14, 2017
  • For the quantitative analysis of volatile organic compounds (VOC), the use of a proper solvent is crucial to reduce biased results or interference, whether in direct analysis by gas chromatograph (GC) or in thermal desorption analysis, due to matrix effects such as a broad solvent peak tail that overlaps early eluters. In this work, the relative performance of different solvents was evaluated using standards containing 19 VOCs in three solvents (methanol, pentane, and hexane). Comparison of the response factors of the detected VOCs shows that the means for methanol and hexane are higher than that for pentane by 84% and 27%, respectively. In light of the solvent vapor pressure at the initial GC column temperature ($35^{\circ}C$), the enhanced sensitivity in methanol suggests a role for solvent vapor expansion in the hot injector (split ON), which leads to solvent trapping on the column. In contrast, when the recurrent relationships between homologues were evaluated using an effective carbon number (ECN) additivity approach, the comparability, assessed as percent difference, improved in the order methanol (26.5%), hexane (6.73%), and pentane (5.24%). As such, the relative performance of GC in direct injection-based VOC analysis can be affected considerably by the choice of solvent.
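The response-factor comparison rests on a simple ratio: detector response per unit of injected analyte, then the relative enhancement of one solvent over another. The peak areas and masses below are illustrative placeholders, not the paper's calibration data:

```python
# Response factor (RF) = peak area / injected analyte mass.
# Peak areas and masses are illustrative, not the paper's data.
def response_factor(peak_area, mass_ng):
    return peak_area / mass_ng

rf_pentane  = response_factor(52_000, 10)   # counts per ng
rf_methanol = response_factor(95_680, 10)   # counts per ng

# Relative enhancement of the methanol-based standard over pentane.
enhancement = (rf_methanol - rf_pentane) / rf_pentane * 100
print(f"methanol RF enhancement over pentane: {enhancement:.0f}%")
```

The illustrative numbers are chosen to reproduce the 84% mean enhancement quoted in the abstract; the actual per-compound response factors vary.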

Seismic Rehabilitation of Nonductile Reinforced Concrete Gravity Frame (비연성 철근 콘크리트 중력 프레임에 의한 지진 보강)

  • Dong Choon Choi;Javeed A. Munsh;Kwang W. Kim
    • Magazine of the Korean Society of Agricultural Engineers, v.43 no.5, pp.116-123, 2001
  • This paper presents the results of an effort to seismically rehabilitate a 12-story nonductile reinforced concrete frame building. The frame, located in the most severe seismic area (zone 4), is assumed to have been designed and detailed for gravity load requirements only. Both pushover and nonlinear time-history analyses are carried out to determine the strength, deformation capacity, and vulnerability of the building. The analysis indicates a drift concentration at the $1^{st}$ floor level due to inadequate strength and ductility of the ground floor columns. The capacity curve of the structure, when superimposed on the average demand response spectrum for the ensemble of scaled earthquakes, indicates that the structure is extremely weak and requires a major retrofit. A retrofit using viscoelastic (VE) dampers is attempted first. The dampers at each floor level are sized to reduce the elastic story drift ratios to within 1%. It is found that this requires dampers so large as to be impractical. With dampers of practical size, analyses of the viscoelastically damped building indicate that the dampers are not sufficient to remove the biased response and drift concentration of the building. The results indicate that VE dampers alone cannot rehabilitate such a concrete frame; concrete buildings, being generally stiffer, require larger dampers. The second rehabilitation strategy uses concrete shearwalls. Shearwalls increased the stiffness and strength of the building, which reduced the drift significantly. The effectiveness of VE dampers in conjunction with stiff shearwalls was also studied. Considering economy and effectiveness, it is concluded that shearwalls are the most feasible solution for the seismic rehabilitation of such buildings.
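The 1% story drift target is a straightforward check once floor displacements are available from the analysis. A sketch with hypothetical displacements and story height (not the analyzed building's values), deliberately showing a first-story drift concentration like the one the abstract describes:

```python
# Story drift ratio = (difference in lateral displacement between
# adjacent floors) / story height. Displacements (mm) and the 3.5 m
# story height are illustrative, not the building in the paper.
story_height_mm = 3500
displacements_mm = [0, 45, 78, 105, 126, 142]   # ground to roof

drifts = [(displacements_mm[i + 1] - displacements_mm[i]) / story_height_mm
          for i in range(len(displacements_mm) - 1)]
worst = max(drifts)
print(f"max drift ratio: {worst:.3%}, within 1% limit: {worst <= 0.01}")
```

Here the first story governs (45 mm over 3.5 m exceeds 1%), which is the kind of drift concentration the retrofit aims to eliminate.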


Comparative Analysis of Low Fertility Response Policies (Focusing on Unstructured Data on Parental Leave and Child Allowance) (저출산 대응 정책 비교분석 (육아휴직과 아동수당의 비정형 데이터 중심으로))

  • Eun-Young Keum;Do-Hee Kim
    • The Journal of the Convergence on Culture Technology, v.9 no.5, pp.769-778, 2023
  • This study compared and analyzed parental leave and child allowance, two major policies addressing the current serious low fertility problem, using unstructured data, and on that basis sought future directions and implications for related response policies. The collection keywords were "low fertility + parental leave" and "low fertility + child allowance", and the data were analyzed in the following order: text frequency analysis, centrality analysis, network visualization, and CONCOR analysis. As a result, first, parental leave was found to be a realistic and practical policy response to low fertility, as the data showed more diverse and systematic discussion than for child allowance. Second, for child allowance, the data showed a high level of information and interest in cash grant benefit systems, including child allowance, but no other distinctive features or active discussion. As future improvements, both policies need to build on the existing systems: parental leave requires improvements to the working environment and its blind spots in order to expand the system, while child allowance requires a move away from the uniform and biased payment scheme, together with an expansion of the eligible age range.
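The first two steps of the pipeline (text frequency, then centrality on a keyword co-occurrence network) can be sketched on toy data; the tokens below are hypothetical English stand-ins, not the study's Korean corpus:

```python
from collections import Counter
from itertools import combinations

# Toy "documents" already tokenized into keywords; the real study
# used Korean-language unstructured text. This is only illustrative.
docs = [
    ["parental-leave", "workplace", "low-fertility"],
    ["child-allowance", "cash-benefit", "low-fertility"],
    ["parental-leave", "low-fertility", "blind-spot"],
]

# Step 1: text frequency analysis.
freq = Counter(tok for doc in docs for tok in doc)

# Step 2: degree centrality on the keyword co-occurrence network,
# where two keywords are linked if they appear in the same document.
edges = set()
for doc in docs:
    edges |= {frozenset(pair) for pair in combinations(set(doc), 2)}
degree = Counter(tok for edge in edges for tok in edge)

print(freq.most_common(1), degree.most_common(1))
```

Network visualization and CONCOR analysis would then operate on the same co-occurrence structure, typically via dedicated tools rather than hand-rolled code.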

Weighting Effect on the Weighted Mean in Finite Population (유한모집단에서 가중평균에 포함된 가중치의 효과)

  • Kim, Kyu-Seong
    • Survey Research, v.7 no.2, pp.53-69, 2006
  • Weights can be created and imposed at both the sample design stage and the analysis stage of a sample survey. While design-stage weights are related to sample data acquisition quantities such as the sample selection probability and the response rate, analysis-stage weights are connected with external quantities, for instance population quantities and auxiliary information. The final weight is the product of the weights from both stages. In the present paper, we focus on analysis-stage weights and investigate their effect on the weighted mean when estimating the population mean. We consider a finite population with a pair of fixed survey value and weight in each unit, and assume an equal selection probability design. Under these conditions we derive formulas for the bias and mean square error of the weighted mean and show that the weighted mean is biased, with the direction and amount of the bias explained by the correlation between the survey variate and the weight: if the correlation coefficient is positive, the weighted mean over-estimates the population mean; if negative, it under-estimates. The magnitude of the bias also grows as the correlation coefficient increases. In addition to the theoretical derivation, we conduct a simulation study to quantify the bias and mean square error numerically. In the simulation, nine weights having correlation coefficients with the survey variate from -0.2 to 0.6 are generated, four sample sizes from 100 to 400 are considered, and the bias and mean square error are calculated for each case. As a result, for a sample size of 400 and a correlation coefficient of 0.55, the squared bias of the weighted mean accounts for up to 82% of the mean square error, which shows that the weighted mean can be seriously biased in some cases.
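The sign-of-bias result is easy to reproduce numerically: when the analysis-stage weights are positively correlated with the survey variable, the weighted mean exceeds the simple population mean. A small simulation sketch (the population and weight construction are illustrative, not the paper's simulation design):

```python
import random

random.seed(1)

# Finite population of survey values.
y = [random.gauss(50, 10) for _ in range(1000)]
pop_mean = sum(y) / len(y)

# Analysis-stage weights positively correlated with y; this simple
# construction (weight grows with the survey value plus small noise)
# is an illustrative assumption.
w = [1 + 0.02 * yi + random.gauss(0, 0.1) for yi in y]

weighted_mean = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
print(weighted_mean > pop_mean)  # positive correlation -> over-estimate
```

Flipping the sign of the 0.02 coefficient makes the correlation negative and the weighted mean under-estimate, matching the direction result derived in the paper.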
