• Title/Summary/Keyword: E-Metrics

Search Results: 194

Impact of nonphysician, technology-guided alert level selection on rates of appropriate trauma triage in the United States: a before and after study

  • Megan E. Harrigan; Pamela A. Boremski; Bryan R. Collier; Allison N. Tegge; Jacob R. Gillen
    • Journal of Trauma and Injury / v.36 no.3 / pp.231-241 / 2023
  • Purpose: Overtriage and undertriage rates are critical metrics in trauma, influenced by both trauma team activation (TTA) criteria and compliance with these criteria. Analysis of undertriaged patients at a level I trauma center revealed suboptimal compliance with existing criteria. This study assessed triage patterns after implementing compliance-focused process interventions. Methods: A physician-driven, free-text alert system was modified to a nonphysician, hospital dispatcher-guided system. The latter employed dropdown menus to maximize compliance with criteria. The preintervention period included patients who presented between May 12, 2020, and December 31, 2020. The postintervention period incorporated patients who presented from May 12, 2021, through December 31, 2021. We evaluated appropriate triage, overtriage, and undertriage using the Standardized Trauma Assessment Tool. Statistical analyses were conducted with an α level of 0.05. Results: The new system was associated with improved compliance with existing TTA criteria (from 70.3% to 79.3%, P=0.023) and decreased undertriage (from 6.0% to 3.2%, P=0.002) at the expense of increasing overtriage (from 46.6% to 57.4%, P<0.001), ultimately decreasing the appropriate triage rate (from 78.4% to 74.6%, P=0.007). Conclusions: This study assessed a workflow change designed to improve compliance with TTA criteria. Improved compliance decreased undertriage to below the target threshold of 5%, albeit at the expense of increased overtriage. The decrease in appropriate triage despite compliance improvements suggests that the current criteria at this institution are not adequately tailored to optimally balance the minimization of undertriage and overtriage. This finding underscores the importance of improved compliance in evaluating the efficacy of TTA criteria.
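
The before-and-after comparison above reduces to two-proportion tests on triage rates at the stated α = 0.05 level. As a minimal sketch only, with hypothetical counts chosen to mirror the reported 6.0% to 3.2% undertriage shift (this is not the study's data or analysis code):

```python
# Hypothetical two-proportion comparison of an undertriage rate, pre vs. post
# intervention; illustrative only, not the study's statistical analysis.
from statsmodels.stats.proportion import proportions_ztest

undertriaged = [30, 16]   # assumed counts of undertriaged patients (pre, post)
totals = [500, 500]       # assumed numbers of eligible patients (pre, post)

stat, p_value = proportions_ztest(undertriaged, totals)
print(f"pre = {undertriaged[0]/totals[0]:.1%}, "
      f"post = {undertriaged[1]/totals[1]:.1%}, p = {p_value:.3f}")
```

The same pattern of test would apply to the overtriage and appropriate-triage comparisons reported in the abstract.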

A Study on the Application of Modeling to predict the Distribution of Legally Protected Species Under Climate Change - A Case Study of Rodgersia podophylla - (기후변화에 따른 법정보호종 분포 예측을 위한 종분포모델 적용 방법 검토 - Rodgersia podophylla를 중심으로 -)

  • Yoo, Youngjae; Hwang, Jinhoo; Jeon, Seong-woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.27 no.3 / pp.29-43 / 2024
  • Legally protected species are one of the crucial considerations in natural ecology when conducting environmental impact assessments (EIAs). The occurrence of legally protected species, especially 'Endangered Wildlife' designated by the Ministry of Environment, significantly influences the progression of projects subject to EIA, necessitating clear investigation and presentation of their habitats. Statistically, a minimum of 30 occurrence coordinates is required for distribution prediction, but most endangered wildlife species have too few recorded coordinates, which poses challenges for distribution prediction through modeling. Consequently, this study proposes modeling methodologies applicable when coordinate data are limited, focusing on Rodgersia podophylla as a species representative of endangered wildlife and northern plants. Assuming scarce survey data, 30 randomly sampled coordinates were used as input, and modeling was performed with the individual models included in BIOMOD2. The modeling results were then evaluated in terms of discrimination capacity and how well they reflect reality. An optimal modeling approach was proposed by ensembling the remaining models after excluding the MaxEnt model, which proved less reliable in the results. Alongside a discussion of the discrimination capacity metrics (e.g., TSS and AUC) reported with the modeling results, this study offers insights and suggestions for improvement, although its universal applicability is limited because it was not conducted on a variety of species. By supporting survey site selection in EIA processes, this research is expected to help minimize cases where protected species are overlooked in survey results.
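
The study itself runs BIOMOD2 in R; purely to illustrate how the discrimination metrics it discusses (AUC and TSS) are defined, here is a small Python sketch on made-up presence/absence data. The labels, probabilities, and the 0.5 threshold are all assumptions, not outputs of the paper's models:

```python
# Minimal sketch of the two discrimination metrics discussed above (AUC and TSS)
# on synthetic presence/absence labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])      # presence (1) / absence (0)
y_prob = np.array([0.9, 0.7, 0.4, 0.2, 0.3, 0.6, 0.8, 0.1, 0.65, 0.35])

auc = roc_auc_score(y_true, y_prob)

threshold = 0.5                                          # assumed presence cutoff
y_pred = (y_prob >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
tss = sensitivity + specificity - 1                      # True Skill Statistic, in [-1, 1]

print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")
```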

The Structural and Functional Analysis of Landscape Changes in Daegu Metropolitan Sphere using Landscape Indices & Ecosystem Service Value (경관지수와 생태계용역가치를 활용한 대구광역도시권 경관의 구조적·기능적 변화 분석)

  • Choi, Won-Young; Jung, Sung-Gwan; Oh, Jeong-Hak; You, Ju-Han
    • Journal of the Korean Association of Geographic Information Studies / v.8 no.4 / pp.102-113 / 2005
  • An ecosystem is composed of humans together with the biotic and abiotic environment, and a landscape is the ecosystem that appears within a unit region: a spatiotemporal land mosaic combining various landscape elements. Land use and land cover changes are important drivers of changes in landscape structure. This study focuses on analyzing the spatiotemporal change patterns of the forest landscape of the Daegu metropolitan sphere, using landscape indices and Ecosystem Service Value (ESV), which quantify ecosystem structure and function. The results are as follows. The encroachment and fragmentation of forest were due to linear developments, i.e., road construction, rather than large-scale developments such as residential land or industrial complexes. The core area percentage of the landscape gradually decreased, which could deteriorate the soundness of forest areas by reducing the core areas that serve as species habitats. In addition, there was a close relationship between ESV and forest landscape area. The results of this study can serve as objective standards for impartial judgement between the competing logics of development and conservation, and as basic standards for establishing development plans, i.e., metropolitan plans, that adequately reflect ecosystem value.
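
For readers unfamiliar with ESV, it is typically aggregated as the sum over land-cover classes of area multiplied by a per-hectare value coefficient. The sketch below uses placeholder areas and coefficients loosely in the spirit of commonly cited global estimates, not the values computed for the Daegu metropolitan sphere in the paper:

```python
# Illustrative aggregation of Ecosystem Service Value (ESV):
# ESV = sum over land-cover classes of (area_k * value coefficient_k).
# Areas and per-hectare coefficients below are placeholders, not study values.
areas_ha = {"forest": 120_000, "cropland": 45_000, "urban": 60_000, "water": 8_000}
value_coeff_usd_per_ha = {"forest": 969, "cropland": 92, "urban": 0, "water": 8_498}  # assumed

esv_by_class = {k: areas_ha[k] * value_coeff_usd_per_ha[k] for k in areas_ha}
total_esv = sum(esv_by_class.values())
print(esv_by_class)
print(f"total ESV: ${total_esv:,.0f}/yr")
```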

Multiple-biometric Attributes of Biomarkers and Bioindicators for Evaluations of Aquatic Environment in an Urban Stream Ecosystem and the Multimetric Eco-Model (도심하천 생태계의 수환경 평가를 위한 생지표 바이오마커 및 바이오인디케이터 메트릭 속성 및 다변수 생태 모형)

  • Kang, Han-Il; Kang, Nami; An, Kwang-Guk
    • Journal of Environmental Impact Assessment / v.22 no.6 / pp.591-607 / 2013
  • The objectives of the study were to evaluate the aquatic environment of an urban stream using various ecological parameters of biological biomarkers, physical habitat quality, and chemical water quality, and to develop a "Multimetric Eco-Model" (Mm-E Model) for ecosystem evaluation. For application of the Mm-E model, three zones were designated, a control zone (CZ) in the headwaters, a transition zone (TZ) in the mid-stream, and an impacted zone (IZ) downstream, and seasonal variations of the model values were analyzed. The DNA biomarkers, based on the comet assay approach of single-cell gel electrophoresis (SCGE), were analyzed using blood samples of Zacco platypus as the target species, and the parameters used in the bioassay were tail moment, tail DNA (%), and tail length (μm). DNA damage was evident in the impacted zone but not in the control zone. The condition factor (CF), a key indicator for population evaluation, was analyzed along with the weight-length relation and individual abnormality. Four metrics of the Qualitative Habitat Evaluation Index (QHEI) were added for the evaluation of physical habitat. In addition, chemical water quality parameters were used as eutrophication indicators: nitrogen (N), phosphorus (P), chemical oxygen demand (COD), and conductivity. Overall, our results suggest that the biomarker and bioindicator attributes in the impacted zone (IZ) responded sensitively, largely to chemical stress (eutrophication indicators) and partially to physical habitat quality, compared with those in the control zone.
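
As one concrete example of the population-level metrics mentioned above, a condition factor is commonly computed in the Fulton form CF = 10^5 · W / L^3 (W in g, L in mm). The constant, units, and measurements below are assumptions for illustration, not values from the study's Zacco platypus samples:

```python
# Fulton-style condition factor: CF = 1e5 * W / L^3 with W in g and L in mm
# (equivalently 100 * W / L^3 with L in cm). Sample values are hypothetical.
def condition_factor(weight_g: float, length_mm: float) -> float:
    return 1e5 * weight_g / length_mm**3

fish = [(12.4, 105.0), (9.8, 98.0), (15.1, 112.0)]   # hypothetical (weight g, length mm)
for w, l in fish:
    print(f"W={w} g, L={l} mm -> CF={condition_factor(w, l):.2f}")
```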

A Study on Designing Method of VoIP QoS Management Framework Model under NGN Infrastructure Environment (NGN 기반환경 에서의 VoIP QoS 관리체계 모델 설계)

  • Noh, Si-Choon; Bang, Kee-Chun
    • Journal of Digital Contents Society / v.12 no.1 / pp.85-94 / 2011
  • QoS (Quality of Service) is defined by ITU-T Rec. E.800 as "the collective effect of service performance which determines the degree of satisfaction of a user of the service." While VoIP (Voice over Internet Protocol) has been widely deployed, persistent problems with QoS remain a very important issue that needs to be solved. This research identifies the tasks of VoIP QoS management, derives how the control system should be managed, and presents a QoS control process and framework under an NGN (Next Generation Network) environment. The trial framework models the QoS measurement metrics, instruments, equipment, measurement methods, the measurement cycle, and the methodology for analyzing measurement results. This research underlines that the QoS of the VoIP protocol, despite its vulnerability, can be guaranteed when product quality and management are controlled and measured systematically. Sustained research on VoIP QoS measurement and control is especially important now, as a major shift in the network technology paradigm is spreading. In addition, when the proposed method is applied, it can reduce overall delay and contribute to improved service quality with respect to signaling, voice processing, and filtering.
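
The paper defines its framework at the process and architecture level; as a concrete example of one metric such a framework would measure, the sketch below estimates RTP interarrival jitter with the smoothing rule from RFC 3550 (J ← J + (|D| − J)/16) on hypothetical timestamps. This is only an illustration of a common VoIP QoS measurement, not code from the paper:

```python
# RTP-style interarrival jitter estimator (RFC 3550, section 6.4.1):
# J = J + (|D| - J) / 16, where D is the change in relative transit time
# between consecutive packets.
def interarrival_jitter(send_times, recv_times):
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Hypothetical timestamps (seconds): packets sent every 20 ms, arriving with small variation.
send = [0.000, 0.020, 0.040, 0.060, 0.080]
recv = [0.050, 0.071, 0.093, 0.111, 0.134]
print(f"estimated jitter = {interarrival_jitter(send, recv) * 1000:.2f} ms")
```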

Water Level Prediction on the Golok River Utilizing Machine Learning Technique to Evaluate Flood Situations

  • Pheeranat Dornpunya; Watanasak Supaking; Hanisah Musor; Oom Thaisawasdi; Wasukree Sae-tia; Theethut Khwankeerati; Watcharaporn Soyjumpa
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.31-31 / 2023
  • During December 2022, the northeast monsoon, which dominates the south and the Gulf of Thailand, brought significant rainfall that impacted the lower southern region, causing flash floods, landslides, blustery winds, and rivers exceeding their banks. The Golok River, located in Narathiwat and forming the border between Thailand and Malaysia, was also affected by the rainfall. In flood management, instruments for measuring precipitation and water level are important for assessing and forecasting the trend of situations and areas at risk. However, because the region is an international border, the installed telemetry system cannot measure rainfall and water level over the entire area. This study aims to predict the water level 72 hours ahead and to evaluate the situation as information that supports the government in making water management decisions, publicizing them to relevant agencies, and warning citizens during crisis events. This research applies machine learning (ML) to water level prediction for the Golok River at the Lan Tu Bridge area, Sungai Golok Subdistrict, Su-ngai Golok District, Narathiwat Province, one of the major monitored rivers. The eXtreme Gradient Boosting (XGBoost) algorithm, a tree-based ensemble machine learning algorithm, was used to predict hourly water levels through the R programming language. Model training and testing were carried out using observed hourly rainfall from the STH010 station and hourly water level data from the X.119A station between 2020 and 2022 as the main prediction inputs. Furthermore, the model uses hourly spatial rainfall forecasts from the Weather Research and Forecasting and Regional Ocean Modeling System models (WRF-ROMS) provided by the Hydro-Informatics Institute (HII) as input, allowing it to predict the hourly water level in the Golok River. Evaluation of predictive performance with statistical metrics delivered an R-square of 0.96, validating the results as robust forecasting outcomes. The results show that the predicted water level at the X.119A telemetry station (Golok River) is in a steady decline, consistent with the decreasing 72-hour rainfall predicted by WRF-ROMs. In short, the relationship between input and result can be used to evaluate flood situations. The data contribute to operational support for the Special Water Resources Management Operation Center in Southern Thailand for flood preparedness and response, enabling intelligent water management decisions during crisis occurrences and helping to prevent loss and harm to citizens.
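
The study implemented its model with the XGBoost package in R; a comparable minimal sketch in Python is shown below, with synthetic data and assumed feature and lag choices standing in for the STH010 rainfall, WRF-ROMs forecast, and X.119A water-level inputs. It illustrates the general workflow only, not the paper's actual model:

```python
# Minimal XGBoost regression sketch for hourly water-level prediction.
# Feature names, lags, and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "rain_obs_mm": rng.gamma(0.5, 2.0, n),    # stand-in for observed hourly rainfall
    "rain_fcst_mm": rng.gamma(0.5, 2.0, n),   # stand-in for WRF-ROMs forecast rainfall
})
# Synthetic water level: lagged accumulation of rainfall plus noise (illustrative only).
df["level_m"] = (0.02 * df["rain_obs_mm"].rolling(24, min_periods=1).sum()
                 + 0.01 * df["rain_fcst_mm"] + rng.normal(0, 0.05, n) + 1.5)
df["level_lag1"] = df["level_m"].shift(1)
df = df.dropna()

X, y = df[["rain_obs_mm", "rain_fcst_mm", "level_lag1"]], df["level_m"]
split = int(0.8 * len(df))
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])
print("R^2 on held-out hours:", round(r2_score(y.iloc[split:], model.predict(X.iloc[split:])), 3))
```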

A Study on Low-Light Image Enhancement Technique for Improvement of Object Detection Accuracy in Construction Site (건설현장 내 객체검출 정확도 향상을 위한 저조도 영상 강화 기법에 관한 연구)

  • Jong-Ho Na; Jun-Ho Gong; Hyu-Soung Shin; Il-Dong Yun
    • Tunnel and Underground Space / v.34 no.3 / pp.208-217 / 2024
  • Considerable research effort has gone into developing and implementing deep learning-based surveillance systems to manage health and safety issues on construction sites. In particular, work on deep learning-based object detection under various environmental changes has progressed, because such changes degrade the detection performance of the model. Among the various environmental variables, the accuracy of an object detection model drops significantly under low illuminance, and consistent detection accuracy cannot be secured even when the model is trained on low-light images. Accordingly, low-light image enhancement is needed to maintain performance under low illuminance. This paper therefore presents a comparative study of several deep learning-based low-light image enhancement models (GLADNet, KinD, LLFlow, Zero-DCE) using image data acquired from a construction site. The enhanced low-light images were verified visually and analyzed quantitatively using image quality evaluation metrics such as PSNR, SSIM, and Delta-E. In the experiments, GLADNet showed excellent low-light enhancement performance in both quantitative and qualitative evaluation and was judged suitable as a low-light image enhancement model. If low-light image enhancement is applied as preprocessing for a deep learning-based object detection model in the future, consistent object detection performance is expected to be secured in low-light environments.
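
As a pointer for reproducing the quantitative comparison, the three image-quality metrics named above can be computed with scikit-image as sketched below. The file names are placeholders, 3-channel RGB images of equal size are assumed, and this is not the paper's evaluation code:

```python
# Sketch of the PSNR, SSIM, and Delta-E metrics mentioned above, computed with
# scikit-image on a reference/enhanced image pair (file names are placeholders).
from skimage import io, color
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = io.imread("well_lit.png")            # hypothetical reference image
enhanced = io.imread("enhanced_low_light.png")   # hypothetical enhanced output

psnr = peak_signal_noise_ratio(reference, enhanced)
ssim = structural_similarity(reference, enhanced, channel_axis=-1)
delta_e = color.deltaE_ciede2000(color.rgb2lab(reference), color.rgb2lab(enhanced)).mean()

print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}, mean Delta-E = {delta_e:.2f}")
```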

Automatic Detection and Classification of Rib Fractures on Thoracic CT Using Convolutional Neural Network: Accuracy and Feasibility

  • Qing-Qing Zhou; Jiashuo Wang; Wen Tang; Zhang-Chun Hu; Zi-Yi Xia; Xue-Song Li; Rongguo Zhang; Xindao Yin; Bing Zhang; Hong Zhang
    • Korean Journal of Radiology / v.21 no.7 / pp.869-879 / 2020
  • Objective: To evaluate the performance of a convolutional neural network (CNN) model that can automatically detect and classify rib fractures, and output structured reports from computed tomography (CT) images. Materials and Methods: This study included 1079 patients (median age, 55 years; men, 718) from three hospitals, between January 2011 and January 2019, who were divided into a monocentric training set (n = 876; median age, 55 years; men, 582), five multicenter/multiparameter validation sets (n = 173; median age, 59 years; men, 118) with different slice thicknesses and image pixels, and a normal control set (n = 30; median age, 53 years; men, 18). Three classifications (fresh, healing, and old fracture) combined with fracture location (corresponding CT layers) were detected automatically and delivered in a structured report. Precision, recall, and F1-score were selected as metrics to measure the optimum CNN model. Detection/diagnosis time, precision, and sensitivity were employed to compare the diagnostic efficiency of the structured report and that of experienced radiologists. Results: A total of 25054 annotations (fresh fracture, 10089; healing fracture, 10922; old fracture, 4043) were labelled for training (18584) and validation (6470). The detection efficiency was higher for fresh fractures and healing fractures than for old fractures (F1-scores, 0.849, 0.856, 0.770, respectively, p = 0.023 for each), and the robustness of the model was good in the five multicenter/multiparameter validation sets (all mean F1-scores > 0.8 except validation set 5 [512 x 512 pixels; F1-score = 0.757]). The precision of the five radiologists improved from 80.3% to 91.1%, and the sensitivity increased from 62.4% to 86.3% with artificial intelligence-assisted diagnosis. On average, the diagnosis time of the radiologists was reduced by 73.9 seconds. Conclusion: Our CNN model for automatic rib fracture detection could assist radiologists in improving diagnostic efficiency, reducing diagnosis time and radiologists' workload.
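
For reference, the three metrics used to select the optimum CNN model are defined from true positives, false positives, and false negatives; the counts below are made up for illustration and are not the study's results:

```python
# Precision, recall, and F1 from hypothetical per-lesion detection counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=850, fp=120, fn=180)   # assumed counts for one class
print(f"precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}")
```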

Exploring the Role of Preference Heterogeneity and Causal Attribution in Online Ratings Dynamics

  • Chu, Wujin; Roh, Minjung
    • Asia Marketing Journal / v.15 no.4 / pp.61-101 / 2014
  • This study investigates when and how disagreements in online customer ratings prompt more favorable product evaluations. Among the three metrics of volume, valence, and variance that feature in the research on online customer ratings, volume and valence have exhibited consistently positive patterns in their effects on product sales or evaluations (e.g., Dellarocas, Zhang, and Awad 2007; Liu 2006). Ratings variance, or the degree of disagreement among reviewers, however, has shown rather mixed results, with some studies reporting positive effects on product sales (e.g., Clement, Proppe, and Rott 2007) and others finding negative effects on product evaluations (e.g., Zhu and Zhang 2010). This study aims to resolve these contradictory findings by introducing preference heterogeneity as a possible moderator and causal attribution as a mediator to account for the moderating effect. The main proposition of this study is that when preference heterogeneity is perceived as high, a disagreement in ratings is attributed more to reviewers' different preferences than to unreliable product quality, which in turn prompts better quality evaluations of a product. Because disagreements mostly result from differences in reviewers' tastes or the low reliability of a product's quality (Mizerski 1982; Sen and Lerman 2007), a greater level of attribution to reviewer tastes can mitigate the negative effect of disagreement on product evaluations. Specifically, if consumers infer that reviewers' heterogeneous preferences result in subjectively different experiences and thereby highly diverse ratings, they would not disregard the overall quality of a product. However, if consumers infer that reviewers' preferences are quite homogeneous and thus the low reliability of the product quality contributes to such disagreements, they would discount the overall product quality. Therefore, consumers would respond more favorably to disagreements in ratings when preference heterogeneity is perceived as high rather than low. This study furthermore extends this prediction to various levels of average ratings. The heuristic-systematic processing model indicates that engagement in effortful systematic processing occurs only when sufficient motivation is present (Hann et al. 2007; Maheswaran and Chaiken 1991; Martin and Davies 1998). One of the key factors affecting this motivation is the aspiration level of the decision maker: only under conditions that meet or exceed this aspiration level does the decision maker tend to engage in systematic processing (Patzelt and Shepherd 2008; Stephanous and Sage 1987). Therefore, systematic causal attribution processing regarding ratings variance is likely more activated when the average rating is high enough to meet the aspiration level than when it is too low to meet it. Considering that the interaction between ratings variance and preference heterogeneity occurs through the mediation of causal attribution, this greater activation of causal attribution under high (versus low) average ratings should make the interaction between ratings variance and preference heterogeneity more pronounced under high (versus low) average ratings. Overall, this study proposes that the interaction between ratings variance and preference heterogeneity is more pronounced when the average rating is high as compared to when it is low. Two laboratory studies lend support to these predictions.
Study 1 reveals that participants exposed to a high-preference heterogeneity book title (i.e., a novel) attributed disagreement in ratings more to reviewers' tastes, and thereby more favorably evaluated books with such ratings, compared to those exposed to a low-preference heterogeneity title (i.e., an English listening practice book). Study 2 then extended these findings to the various levels of average ratings and found that this greater preference for disagreement options under high preference heterogeneity is more pronounced when the average rating is high compared to when it is low. This study makes an important theoretical contribution to the online customer ratings literature by showing that preference heterogeneity serves as a key moderator of the effect of ratings variance on product evaluations and that causal attribution acts as a mediator of this moderation effect. A more comprehensive picture of the interplay among ratings variance, preference heterogeneity, and average ratings is also provided by revealing that the interaction between ratings variance and preference heterogeneity varies as a function of the average rating. In addition, this work provides some significant managerial implications for marketers in terms of how they manage word of mouth. Because a lack of consensus creates some uncertainty and anxiety over the given information, consumers experience a psychological burden regarding their choice of a product when ratings show disagreement. The results of this study offer a way to address this problem. By explicitly clarifying that there are many more differences in tastes among reviewers than expected, marketers can allow consumers to speculate that differing tastes of reviewers rather than an uncertain or poor product quality contribute to such conflicts in ratings. Thus, when fierce disagreements are observed in the WOM arena, marketers are advised to communicate to consumers that diverse, rather than uniform, tastes govern reviews and evaluations of products.

Olympic Advertisers Win Gold, Experience Stock Price Gains During and After the Games (奥运选手作为广告代言人赢得金牌, 比赛中和比赛后的股票价格上扬)

  • Tomovick, Chuck; Yelkur, Rama
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.80-88 / 2010
  • There has been considerable research examining the relationship between stockholders' equity and various marketing strategies. These include studies linking stock price performance to advertising, customer service metrics, new product introductions, research and development, celebrity endorsers, brand perception, brand extensions, brand evaluation, company name changes, and sports sponsorships. Another facet of marketing investment which has received heightened scrutiny for its purported influence on stockholder equity is television advertising embedded within specific sporting events such as the Super Bowl. Research indicates that firms which advertise in Super Bowls experience stock price gains. Given this reported relationship between advertising investment and increased shareholder value, for both general and special events, it is surprising that relatively little research attention has been paid to investigating the relationship between advertising in the Olympic Games and its subsequent impact on stockholder equity. While attention has been directed at examining the effectiveness of sponsoring the Olympic Games, much less focus has been placed on the financial soundness of advertising during the telecasts of these Games. Notable exceptions include Peters (2008), Pfanner (2008), Saini (2008), and Keller Fay Group (2009). This paper presents a study of Olympic advertisers who ran TV ads on NBC in the American telecasts of the 2000, 2004, and 2008 Summer Olympic Games. Five hypotheses were tested: H1: The stock prices of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics (referred to as O-Stocks) will outperform the S&P 500 during this same period of time (i.e., the Monday before the Games through to the Friday after the Games). H2: O-Stocks will outperform the S&P 500 in the medium term, that is, for the period from the Monday before the Games through to the end of each Olympic calendar year (December 31 of 2000, 2004, and 2008, respectively). H3: O-Stocks will outperform the S&P 500 in the longer term, that is, for the period from the Monday before the Games through to the midpoint of the following years (June 30 of 2001, 2005, and 2009, respectively). H4: There will be no difference in the performance of these O-Stocks versus the S&P 500 in the non-Olympic control periods (i.e., three months earlier for each of the Olympic years). H5: The annual revenue of firms which advertised on American telecasts of the 2008, 2004, and 2000 Olympics will be higher for those years than the revenue of those same firms in the years preceding those three Olympics, respectively. In this study, we recorded stock prices of the companies that advertised during the last three Summer Olympic Games (i.e., Beijing in 2008, Athens in 2004, and Sydney in 2000). We identified these advertisers using Google searches as well as with the help of the television network (i.e., NBC) that hosted the Games. NBC held the American broadcast rights to all three Olympic Games studied. We used Internet sources to verify the parent companies of the brands that were advertised each year. Stock prices of these parent companies were found using Yahoo! Finance. Only companies that were publicly held and traded were used in the study. We identified changes in Olympic advertisers' stock prices over the four-week period spanning the Monday before through the Friday after the Games.
In total, there were 117 advertisers across the U.S. telecasts of the 2008, 2004, and 2000 Olympics. Figure 1 provides a breakdown of those advertisers by industry sector. Results indicate the stock of the firms that advertised (O-Stocks) out-performed the S&P 500 during the period of interest and under-performed the S&P 500 during the earlier control periods. These same O-Stocks also outperformed the S&P 500 from the start of these Games through to the end of each Olympic year, and for six months beyond that. Price pressure linkage, signaling theory, high involvement viewers, and corporate activation strategies are believed to contribute to these positive results. Implications for advertisers and researchers are discussed, as are study limitations and future research directions.
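
To make the event-window comparison concrete, a toy calculation of cumulative return for an advertiser portfolio versus the S&P 500 over a shortened window might look like the following. The prices and dates are synthetic, not the study's Yahoo! Finance data:

```python
# Toy event-window return comparison: an "Olympic advertiser" portfolio index
# vs. the S&P 500 over a few business days (synthetic prices, hypothetical window).
import pandas as pd

prices = pd.DataFrame(
    {"o_stock_portfolio": [100.0, 101.2, 102.5, 103.1, 104.8],
     "sp500":             [100.0, 100.4, 100.9, 100.7, 101.3]},
    index=pd.date_range("2008-08-04", periods=5, freq="B"),
)
window_return = prices.iloc[-1] / prices.iloc[0] - 1
print(window_return)   # e.g., O-Stocks +4.8% vs. S&P 500 +1.3% over this toy window
```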