• Title/Summary/Keyword: interval factor method


Clinical Use of PFA®-100 in Pre-surgical Screening for Platelet Function Test (수술 전 혈소판 기능 검사를 위한 PFA®-100의 임상적 이용)

  • Kim, Sung-Man;Yang, Seung-Bae;Lee, Jehoon
    • Korean Journal of Clinical Laboratory Science / v.41 no.1 / pp.1-5 / 2009
  • The Platelet Function Analyzer (PFA)®-100 measures the ability of platelets activated in a high-shear environment to occlude an aperture in a membrane treated with collagen and epinephrine (CEPI) or collagen and ADP (CADP). The time taken for flow across the membrane to stop (closure time, CT) is recorded. The aim of this study was to assess the potential of the PFA®-100 as a primary clinical screening tool, using a wide spectrum of clinical samples assessed for platelet function, and to develop an optimal algorithm for its use. We established the reference interval in 460 hospital inpatients defined as having normal platelet function based on classical laboratory tests. The reference interval, taken as the range between the 5th and 95th percentiles, was 84~251 seconds for male CEPI-CT and 85~249 seconds for female CEPI-CT. A total of 1,200 inpatients were enrolled to identify impaired hemostasis before surgical interventions. The abnormal group with prolonged CEPI-CT comprised 303 cases (18.9%); only 3 cases had both abnormal CEPI-CT and CADP-CT. Several factors, including sample errors, drugs, and hematologic abnormalities, contributed to unexpectedly prolonged CEPI-CT in the screening test. The von Willebrand factor (vWF:Ag) assay was performed in only one patient to verify the algorithm for the use of the PFA®-100. The PFA®-100 is a sensitive and rapid method for primary screening of platelet dysfunction, so it can substitute for the bleeding time in routine clinical practice.
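
The percentile-based reference interval described above can be sketched as follows. This is a minimal illustration with numpy; the function name and the sample closure times are hypothetical, not data from the study.

```python
import numpy as np

def reference_interval(values, lower_pct=5, upper_pct=95):
    """Non-parametric reference interval: the central range between the
    5th and 95th percentiles, as used for the CEPI-CT range above."""
    v = np.asarray(values, dtype=float)
    return float(np.percentile(v, lower_pct)), float(np.percentile(v, upper_pct))

# Hypothetical closure times (seconds) for 460 subjects -- illustrative only
ct_lo, ct_hi = reference_interval(np.linspace(60, 280, 460))
```

A non-parametric interval like this makes no normality assumption, which suits closure-time data that are typically right-skewed.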


A Study on the Improvement of Reliability of Safety Instrumented Function of Hydrodesulfurization Reactor Heater (수소화 탈황 반응기 히터의 안전계장기능 신뢰도 향상에 관한 연구)

  • Kwak, Heung Sik;Park, Dal Jae
    • Journal of the Korean Society of Safety / v.32 no.4 / pp.7-15 / 2017
  • International standards such as IEC 61508 and IEC 61511 require Safety Integrity Levels (SILs) for Safety Instrumented Functions (SIFs) in the process industries. SIL verification is one of the methods for demonstrating process safety. In some cases, SIL verification indicates that several SIFs do not satisfy the required SIL, which creates problems of cost and risk for the industries. This study was performed to improve the reliability of a safety instrumented function (SIF) installed on a hydrodesulfurization reactor heater using Partial Stroke Testing (PST). The emergency shutdown system was chosen as the SIF in this study. SIL verification was performed for cases chosen through the layer of protection analysis method. The probability of failure on demand (PFD) for the SIFs, obtained by fault tree analysis, was 4.82×10⁻³. As a result, the SIFs did not meet the required risk reduction factor (RRF), although they satisfied their target of SIL 2. Therefore, PST intervals from 1 to 4 years were applied to the SIFs. At a PST interval of one year, the PFD of the SIFs was 2.13×10⁻³ and the RRF was 469, which satisfies the RRF requirement in this case. It was also found that a shorter PST interval yields higher SIF reliability.
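
The reported figures follow the reciprocal relation between PFD and RRF. A minimal sketch, using the standard IEC 61508 low-demand approximation PFD_avg ≈ λ_DU·TI/2 for a single channel; the λ_DU value below is hypothetical, and the paper's own figures come from a fault tree, not from this simplified formula.

```python
def pfd_avg(lambda_du, test_interval_h):
    """Simplified IEC 61508 low-demand approximation for a single (1oo1)
    channel: average probability of failure on demand over one proof-test
    interval, given the dangerous-undetected failure rate lambda_du."""
    return lambda_du * test_interval_h / 2.0

def risk_reduction_factor(pfd):
    """RRF is the reciprocal of the average PFD."""
    return 1.0 / pfd

# The study's one-year-PST result obeys the reciprocal relation:
# RRF = 1 / 2.13e-3, which rounds to 469 as reported.
rrf = risk_reduction_factor(2.13e-3)
```

Shortening the test interval TI shrinks PFD_avg proportionally in this approximation, which is why more frequent partial stroke testing raises the achievable RRF.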

Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty (몬테카를로 방법과 ISO-GUM 방법의 불확도 평가 결과 비교)

  • Ha, Young-Cheol;Her, Jae-Young;Lee, Seung-Jun;Lee, Kang-Jin
    • Transactions of the Korean Society of Mechanical Engineers B / v.38 no.7 / pp.647-656 / 2014
  • To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) even under a non-normal probability distribution of the measurand, MCM provides an accurate coverage interval; (2) even if a probability distribution formed by combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be detected from the probability distribution of the combined variance; and (3) if type-A standard uncertainties are involved in the evaluation, GUM generally gives an undervalued coverage interval. However, this problem can be solved by Bayesian evaluation of the type-A standard uncertainty; in that case the effective degrees of freedom for the combined variance are not required in evaluating the expanded uncertainty, and the appropriate coverage factor for a 95% level of confidence was determined to be 1.96.
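
A Monte Carlo evaluation of a coverage interval of the kind compared here can be sketched as follows, in the style of GUM Supplement 1. The additive measurement model and the input distributions below are assumptions for illustration, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcm_coverage_interval(model, input_samplers, n=200_000, coverage=0.95):
    """Propagate input distributions through a measurement model by Monte
    Carlo and return the probabilistically symmetric coverage interval of
    the output (the MCM alternative to GUM's k-factor expansion)."""
    draws = model(*(sampler(n) for sampler in input_samplers))
    alpha = (1.0 - coverage) / 2.0
    return float(np.quantile(draws, alpha)), float(np.quantile(draws, 1.0 - alpha))

# Example: Y = X1 + X2 with one normal and one rectangular input
lo, hi = mcm_coverage_interval(
    lambda a, b: a + b,
    [lambda n: rng.normal(10.0, 0.1, n),    # normal input (type-A-like)
     lambda n: rng.uniform(-0.2, 0.2, n)],  # rectangular input (type B)
)
```

Because the interval is read directly off the empirical output distribution, no effective degrees of freedom or coverage factor is needed, which is the practical advantage the comparison above highlights.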

Development of Improvement Effect Prediction System of C.G.S Method based on Artificial Neural Network (인공신경망을 기반으로 한 C.G.S 공법의 개량효과 예측시스템 개발)

  • Kim, Jeonghoon;Hong, Jongouk;Byun, Yoseph;Jung, Euiyoup;Seo, Seokhyun;Chun, Byungsik
    • Journal of the Korean GEO-environmental Society / v.14 no.9 / pp.31-37 / 2013
  • In this study, the installation diameter, interval, area replacement ratio, and ground hardness of ground treated by the C.G.S method were modeled together with the surrounding ground. An optimum artificial neural network was selected through a parameter study, and a prediction model was developed from the relationship between the numerical analysis results and the neural network. As a result, C.G.S pile settlement and ground settlement could each be presented as a single curve in terms of diameter, interval, area replacement ratio, and ground hardness, which means that ground treated by the C.G.S method behaves in a consistent pattern; on this basis, the artificial neural network was found capable of learning the 3D behavior. In the study of the network's internal parameters, using 10 neurons in the hidden layer, a momentum constant of 0.2, and a learning rate of 0.2 represented the input-output relationship properly. Evaluating the ground behavior of the C.G.S method with this optimum network structure, the coefficient of determination was 0.8737 for C.G.S pile settlement, 0.7339 for ground settlement, and 0.7212 for ground heaving, indicating sufficient reliability.
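
A minimal sketch of the network structure the study found optimal: one hidden layer of 10 neurons, trained by gradient descent with learning rate 0.2 and momentum 0.2. The tanh hidden activation, linear output, and the toy settlement-like target below are assumptions not stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_mlp(X, y, hidden=10, lr=0.2, momentum=0.2, epochs=5000):
    """Single-hidden-layer regression network trained by full-batch
    gradient descent with momentum; returns a prediction function."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
    vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden layer
        err = (H @ W2 + b2) - y            # linear output minus target
        gW2, gb2 = H.T @ err / n, err.mean(0)           # output gradients
        dH = (err @ W2.T) * (1.0 - H ** 2)              # backprop through tanh
        gW1, gb1 = X.T @ dH / n, dH.mean(0)             # hidden gradients
        vW2 = momentum * vW2 - lr * gW2; W2 += vW2      # momentum updates
        vb2 = momentum * vb2 - lr * gb2; b2 += vb2
        vW1 = momentum * vW1 - lr * gW1; W1 += vW1
        vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()

# Fit a smooth 1-D relation as a stand-in for settlement vs. pile interval
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2
predict = train_mlp(X, y)
```

Momentum here simply accumulates a velocity on each weight, which damps oscillation at the fairly large 0.2 learning rate the study settled on.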

Multiple Regression Analysis to Determine the Reservoir Classification in the Empirical Area-Reduction Method (경험적 면적감소법을 위한 저수지 분류에 관한 연구)

  • Kwon, Oh-Hoon (권오훈)
    • Water for future / v.10 no.1 / pp.95-100 / 1977
  • The empirical area-reduction method of W.M. Borland and C.R. Miller, and its revision by W.T. Moody, were developed by fitting the area and storage curves to the Van't Hul distributions. It should be noted that, in applying the method, the reservoir is classified into one of four standard types on the basis of its topographical features. In other words, the method took only the shape of the reservoir as the governing factor and did not account for several other considerable factors affecting the mode of sediment deposition. This is why the method occasionally creates ambiguity in classification and accordingly leads to an unexpected mode of deposition. This paper derives a formula for deciding among the four standard types of Van't Hul distributions, quantitatively taking into account the sediment-loss percent and the capacity-inflow ratio as well as the shape of the reservoir, by multiple regression analysis using the least-squares method to obtain a better fit to the design curves. The result is expressed as Y = -1.95 + 55.8X₁ + 0.14X₂ + 0.12X₃, in which the value of Y locates the standard type, I through IV, in the range from ten to forty at intervals of ten. The regression correlated well, with standard errors of estimate of around two except for type IV. The formula does not differ greatly from Borland's work in general situations, but it demonstrates acceptable results, giving somewhat more precise answers for specific reservoirs. Its application to the Soyang Lake, one of the largest reservoirs in the country, clearly indicated type II, while the original method placed it on the boundary between type II and type III.
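
The regression formula can be evaluated directly. The nearest-type rounding rule below is an assumption, since the abstract only states that Y locates types I through IV at 10, 20, 30, and 40.

```python
def reservoir_type_score(x1, x2, x3):
    """Regression score from the paper: Y = -1.95 + 55.8*X1 + 0.14*X2 + 0.12*X3,
    where the X_i are the shape, sediment-loss, and capacity-inflow predictors."""
    return -1.95 + 55.8 * x1 + 0.14 * x2 + 0.12 * x3

def classify(y):
    """Map the score to the nearest standard type anchor (10, 20, 30, 40
    for types I-IV); an assumed rule, as the paper states only the scale."""
    return min((10, 20, 30, 40), key=lambda t: abs(t - y))
```

A score near a type boundary (e.g. Y ≈ 25) is exactly the ambiguous case the regression was built to resolve more sharply than shape alone.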


An application of image processing technique for bed materials analysis in gravel bed stream: focusing Namgang (자갈하천의 하상재료분석을 위한 화상해석법 적용: 남강을 중심으로)

  • Kim, Ki Heung;Jung, Hea Reyn
    • Journal of Korea Water Resources Association / v.51 no.8 / pp.655-664 / 2018
  • A riverbed material survey investigates the particle size distribution, specific gravity, porosity, etc. as basic data for river channel planning, such as calculating sediment transport and riverbed change. In principle, the survey points are spaced at 1 km intervals in the longitudinal direction of the river, with 3 or more points per cross section. Therefore, depending on the longitudinal length of the river to be investigated, the number of survey sites is very large, and the time and cost of the investigation grow correspondingly. This study compares the volumetric method and the image-analysis method of particle size analysis in terms of work efficiency and cost, and examines the applicability of the image-analysis method. It was confirmed that the equivalent-circle diameter computed by the image-analysis method can be applied to the analysis of bed-material particle size. In gravel streams with particle sizes of less than 10 cm and a large shape factor, the image-analysis results for the bed material are accurate; however, as the shape factor decreases with increasing particle size, the error increases. In addition, analysis of work efficiency and cost showed that the image-analysis method reduces them by about 80% compared with the volumetric method.
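
The equivalent-circle conversion at the heart of the image-analysis method can be sketched as follows; the pixel area and pixel-scale values are hypothetical illustrations.

```python
import math

def equivalent_circle_diameter(area_px, mm_per_px):
    """Diameter of the circle whose area equals a particle's projected
    area in the image -- the size measure validated in the study above."""
    area_mm2 = area_px * mm_per_px ** 2   # convert pixel area to mm^2
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# e.g. a grain covering ~7854 px at 0.5 mm/px is about 50 mm across
d = equivalent_circle_diameter(7854, 0.5)
```

Because this measure ignores particle elongation, its error grows as the shape factor drops, matching the trend reported for the larger gravels.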

A novel evidence theory model and combination rule for reliability estimation of structures

  • Tao, Y.R.;Wang, Q.;Cao, L.;Duan, S.Y.;Huang, Z.H.H.;Cheng, G.Q.
    • Structural Engineering and Mechanics / v.62 no.4 / pp.507-517 / 2017
  • Due to the discontinuous nature of uncertainty quantification in conventional evidence theory (ET), the computational cost of reliability analysis based on an ET model is very high. A novel ET model based on fuzzy distributions, and a corresponding combination rule to synthesize the judgments of experts, are put forward in this paper. The intersection and union of the membership functions are defined as the belief and plausibility membership functions, respectively, and Murphy's average combination rule is adopted to combine the basic probability assignments of the focal elements. The combined membership functions are then transformed into an equivalent probability density function by a normalizing factor. Finally, a reliability analysis procedure for structures with a mixture of epistemic and aleatory uncertainties is presented, in which the equivalent normalization method is adopted to solve for the upper and lower bounds of reliability. The effectiveness of the procedure is demonstrated by a numerical example and an engineering example. The results show that the reliability interval calculated by the suggested method is almost identical to that solved by the conventional method, while its computational cost is much lower. The suggested ET model provides a new way to flexibly represent epistemic uncertainty and an efficient method to estimate the reliability of structures with mixed epistemic and aleatory uncertainties.
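
The belief/plausibility bookkeeping and Murphy's averaging step mentioned above can be sketched as follows. This is a simplified illustration with focal elements as tuples; the full Murphy rule also Dempster-combines the averaged assignment with itself, which is omitted here.

```python
def murphy_average_bpa(bpas):
    """Murphy's averaging step: arithmetic mean of several experts' basic
    probability assignments over the union of their focal elements."""
    focal_elements = set().union(*bpas)
    n = len(bpas)
    return {fe: sum(m.get(fe, 0.0) for m in bpas) / n for fe in focal_elements}

def belief_plausibility(bpa, hypothesis):
    """Belief sums the masses of focal elements contained in the
    hypothesis; plausibility sums those merely intersecting it."""
    hyp = set(hypothesis)
    bel = sum(m for fe, m in bpa.items() if set(fe) <= hyp)
    pl = sum(m for fe, m in bpa.items() if set(fe) & hyp)
    return bel, pl
```

The [belief, plausibility] pair is the discrete interval that the paper's fuzzy-membership model smooths into a continuous description.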

Optimum design of lead-rubber bearing system with uncertainty parameters

  • Fan, Jian;Long, Xiaohong;Zhang, Yanping
    • Structural Engineering and Mechanics / v.56 no.6 / pp.959-982 / 2015
  • In this study, a non-stationary random earthquake Clough-Penzien model is used to describe earthquake ground motion. Using stochastic direct integration in combination with an equivalent linear method, a solution is established to describe the non-stationary response of a lead-rubber bearing (LRB) system to a stochastic earthquake. Two parameters are used to develop an optimization method for bearing design: the post-yielding stiffness and the normalized yield strength of the isolation bearing. Taking the minimization of the maximum energy response level of the upper structure subjected to an earthquake as the objective function, with the constraints that the bearing failure probability is no more than 5% and the second shape factor of the bearing is less than 5, a calculation method for the two optimal design parameters is presented. In this optimization process, a radial basis function (RBF) response surface was applied in place of the implicit objective function and constraints, and a sequential quadratic programming (SQP) algorithm was used to solve the optimization problems. To account for the uncertainties of the structural parameters and seismic ground motion input parameters in the bearing design optimization, convex set models (such as the interval model and ellipsoidal model) are used to describe the uncertainty parameters. The optimal bearing design parameters were then expanded at their median values into first-order Taylor series, and the Lagrange multiplier method was used to determine the upper and lower boundaries of the parameters. Finally, using a calculation example, the impacts of site soil conditions, input peak ground acceleration, bearing diameter, and rubber Shore hardness on the optimization parameters are investigated.
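
The first-order Taylor bounding of a quantity over an interval box, as applied to the design parameters above, can be sketched generically. This uses finite-difference gradients on an explicit toy function, not the paper's Lagrange-multiplier formulation or its response surface.

```python
def taylor_interval_bounds(f, x_med, half_widths, h=1e-6):
    """Bound f over the box x_med +/- half_widths by expanding it to first
    order at the median point: f(x_med) +/- sum_i |df/dx_i| * dx_i."""
    f0 = f(x_med)
    spread = 0.0
    for i, dx in enumerate(half_widths):
        xp = list(x_med)
        xp[i] += h                       # forward finite difference
        spread += abs((f(xp) - f0) / h) * dx
    return f0 - spread, f0 + spread

# Toy response: bounds of 2*x1 - x2 over [0.9, 1.1] x [0.8, 1.2]
lo, hi = taylor_interval_bounds(lambda x: 2 * x[0] - x[1], [1.0, 1.0], [0.1, 0.2])
```

For a linear function the first-order bounds are exact; for the nonlinear response surfaces in the paper they are an approximation valid for small interval half-widths.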

Inter-rater agreement among shoulder surgeons on treatment options for proximal humeral fractures

  • Kim, Hyojune;Song, Si-Jung;Jeon, In-Ho;Koh, Kyoung Hwan
    • Clinics in Shoulder and Elbow / v.25 no.1 / pp.49-56 / 2022
  • Background: The treatment approach for proximal humeral fractures is determined by various factors, including patient age, sex, dominant arm, fracture pattern, presence of osteoporosis, preexisting arthritis, rotator cuff status, and medical comorbidities. However, there is a lack of consensus in the literature regarding the optimal treatment for displaced proximal humeral fractures. This study aimed to assess and quantify the decision-making process for either conservative or surgical treatment and the choice of surgical method among shoulder surgeons when treating proximal humeral fractures. Methods: Forty sets of true anteroposterior view, scapular Y projection view, and three-dimensional computed tomography of proximal humeral fractures were provided to 12 shoulder surgeons along with clinical information. Surveys regarding Neer classification, decisions between conservative and surgical treatments, and chosen methods were conducted twice with an interval of 2 months. The factors affecting the treatment plans were also assessed. Results: The inter-rater agreement was fair for Neer classification (kappa=0.395), moderate for the decision between conservative and surgical treatments (kappa=0.528), and substantial for the chosen method of surgical treatment (kappa=0.740). The percentage of agreement was 71.1% for Neer classification, 84.6% for the decision between conservative and surgical treatment, and 96.4% for the chosen method of surgical treatment. The fracture pattern was the most crucial factor in deciding between conservative and surgical treatments, followed by age and physical activity. Conclusions: The decision between conservative and surgical treatment for proximal humeral fractures showed good agreement, while the chosen method between osteosynthesis and arthroplasty showed substantial agreement among shoulder surgeons.
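
The chance-corrected agreement statistic used here can be illustrated with the two-rater form of kappa. The study pools 12 raters, for which a multi-rater variant such as Fleiss' kappa would be used in practice; this sketch shows the principle with hypothetical ratings.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same cases:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from the marginal frequencies."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical treatment choices by two surgeons for four fractures
k = cohens_kappa(["plate", "nail", "plate", "conservative"],
                 ["plate", "nail", "arthroplasty", "conservative"])
```

On the conventional interpretation scale, 0.21~0.40 is fair, 0.41~0.60 moderate, and 0.61~0.80 substantial agreement, matching the labels used in the results above.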

A Study on the Coastal Forest Landscape Management Considering Parallax Effect in Gangneung (패럴랙스 효과를 고려한 강릉 해안림의 경관 관리에 관한 연구)

  • Seo, Mi-Ryeong;Kim, Choong-Sik;An, Kyoung-Jin
    • Journal of the Korean Institute of Landscape Architecture / v.40 no.4 / pp.18-27 / 2012
  • This paper proposes a management method for a coastal black pine forest landscape that considers the parallax effect. For the study, 10 coastal black pine forests in Gangneung were surveyed for average forest width and the average diameters and intervals of the pines. Combinations of five scene types (sea, field, mountain, residential area, commercial area), three diameters (16 cm, 22 cm, 28 cm), and three intervals (5 m, 7 m, 10 m) were used to produce a total of 45 scenic simulations. Scenic preferences were investigated using the 45 simulation images with semantic differential and Likert scales. The results were as follows. In the comparison of scenic preferences, natural landscapes (sea, field, and mountain) ranked high, while built landscapes (residential area, commercial area) ranked low. The highest scenic preference was shown for the seascape with a 7 m interval between trees. In contrast, the correlation between the visual quantity of scenic elements (greenery, sky, buildings, roads, etc.) and scenic preference was very low. Factor analysis showed that three factors, "depth" (78.0%), "diversity" (15.6%), and "spatiality" (6.4%), explained coastal scenic preferences. "Spatiality" showed significant differences between tree intervals of 5~7 m and 10 m, indicating that coastal forest management based on a 10 m interval standard affects scenic preference.