• Title/Summary/Keyword: correlated decision

180 search results

Evaluation of the Relationships Between Kellgren-Lawrence Radiographic Score and Knee Osteoarthritis-related Pain, Function, and Muscle Strength

  • Kim, Si-hyun;Park, Kyue-nam
    • Physical Therapy Korea
    • /
    • v.26 no.2
    • /
    • pp.69-75
    • /
    • 2019
  • Background: The Kellgren-Lawrence radiographic score is commonly used in diagnosing knee osteoarthritis (OA) and in assessing its severity alongside measures of pain, function, and muscle strength. However, the association between Kellgren-Lawrence scores and functional/clinical outcomes remains controversial in patients with knee OA. Objects: The purpose of this study was to examine the relationships between Kellgren-Lawrence scores and OA-related knee pain, function during daily living and sports activities, quality of life, and knee muscle strength in patients with knee OA. Methods: We recruited 66 patients with tibiofemoral knee OA and determined knee joint Kellgren-Lawrence scores from standing anteroposterior radiographs. Self-reported knee pain, daily living function, sports/recreation function, and quality of life were measured using the Knee Injury and Osteoarthritis Outcome Score (KOOS). Knee extensor and flexor strength was assessed with a handheld dynamometer. We performed Spearman's rank correlation analyses to evaluate the relationships between Kellgren-Lawrence scores and KOOS scores or muscle strength. Results: Kellgren-Lawrence scores were significantly negatively correlated with KOOS scores for knee pain, daily living function, sports/recreation function, and quality of life. A statistically significant negative correlation was found between Kellgren-Lawrence scores and knee extensor strength, but not flexor strength. Conclusion: Higher Kellgren-Lawrence scores were associated with more severe knee pain and lower daily living function, sports/recreation function, quality of life, and knee extensor strength in patients with knee OA. Assessment of knee OA via the self-reported KOOS and knee extensor strength may therefore be a cost-effective alternative to radiological examination.
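The study's analysis, Spearman's rank correlation between ordinal Kellgren-Lawrence grades and KOOS scores, can be sketched in plain Python: rank both variables (with average ranks for ties, which KL grades produce) and take the Pearson correlation of the ranks. The sample values below are hypothetical, not the study's data.

```python
def rank(values):
    """Assign average ranks, handling ties (as Spearman's rho requires)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: KL grades (0-4) vs. KOOS pain subscale (higher = less pain)
kl = [1, 2, 2, 3, 4, 1, 3, 4]
koos = [85, 70, 75, 55, 40, 90, 60, 35]
print(spearman(kl, koos))  # strongly negative, mirroring the reported direction
```

A negative rho here corresponds to the study's finding that higher radiographic grades go with worse self-reported outcomes.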

Low Complexity Linear Receiver Implementation of SOQPSK-TG Signal Using the Cross-correlated Trellis-Coded Quadrature Modulation (XTCQM) Technique (SOQPSK-TG 신호의 교차상관 격자부호화 직교변조(XTCQM) 기법을 사용한 저복잡도 선형 수신기 구현)

  • Kim, KyunHoi;Eun, Changsoo
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.50 no.3
    • /
    • pp.193-201
    • /
    • 2022
  • SOQPSK-TG is a modulation used for aircraft telemetry with excellent frequency and power efficiency. In this paper, the phase waveform of the partial-response SOQPSK-TG modulation is linearly approximated and modeled as a full-response double-duobinary SOQPSK (SOQPSK-DD) signal. Using the XTCQM method and the Laurent decomposition method, the SOQPSK-DD signal was then approximated as OQPSK with linear pulse waveforms, and the two methods were shown to yield the same result. It was also confirmed that the Laurent decomposition waveform of the SOQPSK-DD signal approximates that of the original SOQPSK-TG signal. Finally, a decision-feedback IQ-detector that applies the Laurent decomposition waveform of SOQPSK-DD to the detection filter was shown to achieve nearly the same performance as before with a simpler waveform.

Underwater Navigation of AUVs Using Uncorrelated Measurement Error Model of USBL

  • Lee, Pan-Mook;Park, Jin-Yeong;Baek, Hyuk;Kim, Sea-Moon;Jun, Bong-Huan;Kim, Ho-Sung;Lee, Phil-Yeob
    • Journal of Ocean Engineering and Technology
    • /
    • v.36 no.5
    • /
    • pp.340-352
    • /
    • 2022
  • This article presents a modeling method for the uncorrelated measurement error of the ultra-short baseline (USBL) acoustic positioning system for aiding navigation of underwater vehicles. The Mahalanobis distance (MD) and principal component analysis are applied to decorrelate the errors of USBL measurements, which are correlated in the x- and y-directions and vary according to the relative direction and distance between a reference station and the underwater vehicles. The proposed method can decouple the radial-direction error and angular direction error from each USBL measurement, where the former and latter are independent and dependent, respectively, of the distance between the reference station and the vehicle. With the decorrelation of the USBL errors along the trajectory of the vehicles in every time step, the proposed method can reduce the threshold of the outlier decision level. To demonstrate the effectiveness of the proposed method, simulation studies were performed with motion data obtained from a field experiment involving an autonomous underwater vehicle and USBL signals generated numerically by matching the specifications of a specific USBL with the data of a global positioning system. The simulations indicated that the navigation system is more robust in rejecting outliers of the USBL measurements than conventional ones. In addition, it was shown that the erroneous estimation of the navigation system after a long USBL blackout can converge to the true states using the MD of the USBL measurements. The navigation systems using the uncorrelated error model of the USBL, therefore, can effectively eliminate USBL outliers without loss of uncontaminated signals.
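The outlier test at the heart of the abstract rests on the Mahalanobis distance of a 2-D measurement error under a correlated x/y covariance. A minimal sketch, with a hypothetical 2×2 USBL error covariance (not the paper's values), shows why decorrelation matters: an error of the same magnitude is far more "surprising" across the correlated direction than along it.

```python
def mahalanobis_2d(dx, dy, cov):
    """Mahalanobis distance of a 2-D error (dx, dy) under covariance cov.

    cov is [[sxx, sxy], [sxy, syy]]; the 2x2 inverse is taken in closed form.
    """
    (sxx, sxy), (_, syy) = cov
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det  # inverse of symmetric 2x2
    d2 = dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy
    return d2 ** 0.5

# Hypothetical USBL error covariance: x- and y-errors strongly correlated
cov = [[4.0, 3.0], [3.0, 4.0]]

# An error along the correlated direction is statistically "expected" ...
along = mahalanobis_2d(2.0, 2.0, cov)
# ... while the same-size error across it is far less likely, hence flagged
across = mahalanobis_2d(2.0, -2.0, cov)
print(along, across)  # the cross-direction distance is larger
```

A measurement is rejected when its distance exceeds a threshold; modeling the correlation properly, as the paper proposes, lets that threshold be tightened without discarding valid measurements.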

Predicting the Baltic Dry Bulk Freight Index Using an Ensemble Neural Network Model (통합적인 인공 신경망 모델을 이용한 발틱운임지수 예측)

  • SU MIAO
    • Korea Trade Review
    • /
    • v.48 no.2
    • /
    • pp.27-43
    • /
    • 2023
  • The maritime industry plays an increasingly vital part in global economic expansion, and the Baltic Dry Index (BDI) in particular is highly correlated with global commodity prices; hence the growing importance of BDI prediction research. However, as the global situation has become more volatile, accurately predicting the BDI has become methodologically more difficult. This paper proposes an integrated machine-learning strategy for accurately forecasting BDI trends, combining the benefits of a convolutional neural network (CNN) and a long short-term memory (LSTM) network. We collected daily BDI data spanning over 27 years for model fitting. The findings indicate that the CNN successfully extracts features from the BDI data, on the basis of which the LSTM predicts the BDI accurately, attaining an R² of 94.7%. Our research offers a novel, integrated machine-learning approach to the study of shipping economic indicators and provides a foundation for risk-management decision-making by shipping institutions and financial investors.
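The abstract does not detail how the daily series is fed to the CNN-LSTM, but a standard preprocessing step for such models is sliding-window supervision: each training pair is a window of past daily values and the next day's value as the target. The sketch below shows only that windowing step, with made-up BDI-like numbers.

```python
def make_windows(series, window):
    """Turn a daily index series into (window, next-value) training pairs."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])  # model input: the last `window` days
        ys.append(series[i + window])    # regression target: the next day
    return xs, ys

# Hypothetical BDI-like daily closing values (illustration only)
bdi = [1371, 1390, 1402, 1385, 1360, 1355, 1377, 1401]
xs, ys = make_windows(bdi, window=5)
print(len(xs), xs[0], ys[0])
```

Each `xs[i]` would then be passed through convolutional feature extraction before the recurrent layer, per the CNN-then-LSTM division of labor the abstract describes.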

Prognostic factors and predictive models in hot gallbladder surgery: A prospective observational study in a high-volume center

  • Giovanni Domenico Tebala;Amanda Shabana;Mahul Patel;Benjamin Samra;Alan Chetwynd;Mickaela Nixon;Siddhee Pradhan;Bara'a Elhag;Gabriel Mok;Alexandra Mighiu;Diandra Antunes;Zoe Slack;Roberto Cirocchi;Giles Bond-Smith
    • Annals of Hepato-Biliary-Pancreatic Surgery
    • /
    • v.28 no.2
    • /
    • pp.203-213
    • /
    • 2024
  • Backgrounds/Aims: The standard treatment for acute cholecystitis, biliary pancreatitis, and intractable biliary colic ("hot gallbladder") is emergency laparoscopic cholecystectomy (LC). This paper aims to identify prognostic factors and build statistical models to predict the outcomes of emergency LC for the "hot gallbladder." Methods: A prospective observational cohort study was conducted on 466 patients undergoing emergency LC over 17 months. The primary endpoint was "suboptimal treatment," defined as the use of escape strategies owing to the impossibility of completing the LC. Secondary endpoints were postoperative morbidity and length of postoperative stay. Results: About 10% of patients had "suboptimal treatment," predicted by age and low albumin. Postoperative morbidity was 17.2%, predicted by age, admission day, and male sex. Postoperative length of stay was correlated with age, low albumin, and delayed surgery. Conclusions: Several prognostic factors were found to be related to poor emergency LC outcomes. These can be useful in decision-making and in informing patients of the risks and benefits of an emergency vs. delayed LC for the hot gallbladder.

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important source for target marketing and personalized advertising on digital marketing channels such as email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are in many cases anonymous. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and prone to false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere on a webpage, the activity is logged in semi-structured website log files. Such data show what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to be correlated with demographics: search keywords; frequency and intensity by time, day, and month; the variety of websites visited; text information from the pages visited; and so on. The demographic attributes to be predicted also vary by paper, covering gender, age, job, location, income, education, marital status, and the presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used for prediction model building. However, this line of research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated to build the best prediction model.
The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics based on the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and from the results of previous research, 64 clickstream attributes are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles comprising the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression for modeling. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable: for example, age prediction performs best with decision-tree-based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate with SVM and no dimension reduction.
We conclude that the online behaviors of Internet users, captured through clickstream data analysis, can be used to predict their demographics and can thereby be utilized for digital marketing.
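The evaluation step above relies on 5-fold cross-validation (done in the study with IBM SPSS Modeler). The split logic itself is tool-independent and can be sketched in a few lines; the sample size here is small for illustration, not the study's 5,000 users.

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Split n sample indices into k disjoint folds for cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)           # fixed seed for reproducibility
    return [idx[i::k] for i in range(k)]       # round-robin over the shuffle

def cross_validate(n, k=5):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n, k)
    for i, test in enumerate(folds):
        train = [j for f_i, f in enumerate(folds) if f_i != i for j in f]
        yield train, test

for train, test in cross_validate(20, k=5):
    assert len(test) == 4 and len(train) == 16  # disjoint 80/20 splits
```

Each demographic model is fitted on the four training folds and scored on the held-out fold, and the five scores are averaged, which is what makes the reported accuracy comparisons reliable.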

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because subtly different activities are hard to distinguish from limited information; the difficulty grows especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built to distinguish ten different activities using only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the tree, the set of all classes is split into two subsets by a binary classifier; at each child node, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that makes multi-class predictions.
Depending on how the set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than others, but the best tree can hardly be identified without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample; by combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that handles a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classification include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes per activity from 5 volunteers.
Of the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END classified all ten activities with a fairly high accuracy of 98.4%, whereas a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine achieved 97.6%, 96.5%, and 97.6%, respectively.
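The tree-building step the abstract describes, recursively splitting the class set into two random subsets down to single-class leaves, can be sketched directly. The per-node binary classifiers (random forests in the paper) are omitted here, so this shows only how an ensemble of random nested dichotomies is constructed over the ten activity classes.

```python
import random

def build_dichotomy(classes, rng):
    """Recursively split a class set into a random nested dichotomy tree.

    Each internal node is a (left_subtree, right_subtree) pair; at
    classification time a binary classifier (omitted here) picks one side.
    """
    if len(classes) == 1:
        return classes[0]                    # leaf: a single activity class
    shuffled = list(classes)
    rng.shuffle(shuffled)
    cut = rng.randint(1, len(shuffled) - 1)  # both sides non-empty
    return (build_dichotomy(shuffled[:cut], rng),
            build_dichotomy(shuffled[cut:], rng))

def leaves(tree):
    """Collect the class labels at the leaves of a dichotomy tree."""
    if not isinstance(tree, tuple):
        return [tree]
    return leaves(tree[0]) + leaves(tree[1])

activities = ['Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill',
              'Walking Downhill', 'Running Uphill', 'Running Downhill',
              'Falling', 'Hobbling']
rng = random.Random(42)
ensemble = [build_dichotomy(activities, rng) for _ in range(4)]  # a small END
assert all(sorted(leaves(t)) == sorted(activities) for t in ensemble)
```

At prediction time, each tree routes a sample from the root to a leaf through its binary classifiers, and the ensemble combines the per-tree class predictions, which is what makes END robust to any single unlucky split.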

A study on the regulation of negative emotions in the Ultimatum Game: Comparison between Korean older and young adults (최후통첩게임 상황에서의 부정정서 조절에 관한 연구: 한국 노인과 청년 비교)

  • Jeon, Dasom;Ghim, Hei-Rhee;Hur, Ahjeong;Park, Sunwoo;Kim, Moongeol
    • 한국노년학
    • /
    • v.39 no.4
    • /
    • pp.921-939
    • /
    • 2019
  • According to socioemotional selectivity theory (SST), despite the disadvantages of their life conditions, older adults experience fewer negative emotions because they regulate their emotions by avoiding negative stimuli or situations. Based on the SST, this study examined whether older adults are better able than young adults to regulate negative emotions in the Ultimatum Game (UG). In a UG, a proposer offers to distribute a portion of a sum of money to a responder, who must decide whether to accept or reject the offer. If the responder accepts, the proposer and the responder each receive their share as proposed; if the responder rejects, both get nothing. Thus, considering only economic benefit, it is more rational to accept an unfair offer, no matter how low, than to reject it. To accept an unfair offer, however, the responder must regulate the anger felt toward the proposer. If older adults can regulate anger better than young adults, they should be less likely to reject unfair offers. Fifty-seven older adults and 60 university students participated in this study. Both groups accepted most fair offers. In contrast, older adults accepted unfair offers at a significantly higher rate than young adults and reported anger at the unfair offers less frequently. Accepting unfair offers was negatively correlated with anger reports but positively correlated with emotion regulation as measured by the ERQ, and the ERQ score was negatively correlated with anger reports. Emotion regulation partially mediated the relationship between age group and acceptance of unfair offers. These results show that older adults accepted unfair offers at a higher rate than young adults because they could better regulate the negative emotions those offers elicited.
This study provided new evidence for the claim that improving emotional regulation is a major developmental change in adulthood.

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In this line of research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, whereas NN and SVM ensemble studies have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can degrade when the multiple classifiers of an ensemble are highly correlated with one another, giving rise to a multicollinearity problem, and have proposed differentiated learning strategies to cope with it. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms but yields no remarkable improvement for stable ones. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers.
By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation produces a multicollinearity problem, which degrades the performance of the ensemble. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM; it reports that the stable learning algorithms NN and SVM have higher predictability than the unstable DT, while for ensemble learning the DT ensemble shows more improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically demonstrates that the performance degradation of the ensemble is due to multicollinearity, and the work proposes that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of an NN ensemble (CO-NN) to improve its performance. Coverage optimization is a technique for choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to solve the coverage optimization problem. The GA chromosomes are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We used Microsoft Excel and the GA software package Evolver.
Experiments on company-failure prediction show that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with the ensemble's correlations in mind. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. Nevertheless, further research issues remain: first, a decision optimization process to find the optimal combination function should be considered; second, various learning strategies for dealing with data noise should be introduced in more advanced future research.
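The VIF constraint above can be made concrete in the simplest case: with exactly two predictors, VIF = 1 / (1 − r²), where r is their Pearson correlation, so near-duplicate ensemble members push the VIF toward infinity. The sketch below uses hypothetical classifier outputs, not the paper's data; the general many-predictor VIF would require a regression of each member on all the others.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def vif_two_predictors(x, y):
    """For exactly two predictors, VIF = 1 / (1 - r^2)."""
    r = pearson(x, y)
    return 1.0 / (1.0 - r * r)

# Hypothetical scores from two NN ensemble members on the same samples;
# they are nearly identical, so the VIF blows up and one should be dropped
member_a = [0.9, 0.8, 0.2, 0.7, 0.1, 0.95]
member_b = [0.85, 0.82, 0.25, 0.65, 0.15, 0.9]
print(vif_two_predictors(member_a, member_b))
```

A GA chromosome that switches off one of two such members lowers the ensemble's VIF, which is exactly the diversity pressure the fitness constraint applies.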

To Assess Whether Lee's Grading System for Central Lumbar Spinal Stenosis Can Be Used as a Decision-Making Tool for Surgical Treatment (요추 중심 신경관 협착에 있어서 Lee's Grade를 통한 MRI 평가방법이 수술적 치료 결정에 유용한가에 대한 연구)

  • Do Yeon Ahn;Hee Jin Park;Jung Woo Yi;Ji Na Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.1
    • /
    • pp.102-111
    • /
    • 2022
  • Purpose To evaluate the correlation between Lee's grades and surgical intervention for central lumbar spinal stenosis (CLSS) and to assess whether this grading system can be used as a decision-making tool for the surgical treatment of this condition. Materials and Methods This retrospective study included 290 patients (M:F = 156:134; mean age, 46 ± 16 years). Radiologists assessed the presence and grade of CLSS at the stenosis point according to Lee's grading system, in which CLSS is classified into four grades according to the shape of the cauda equina. Correlation coefficients (rs) between Lee's grades and the operation were calculated with Spearman rank correlation. Results Among the operated patients, grade 2 was the most commonly assigned grade (50%-58%), grade 3 was less common (35%), and grade 0 was the least common (2%-3%). Among the non-operated patients, grade 1 was the most common (63%-65%), grade 0 was less common (15%-16%), and grade 3 was the least common (8%). The distribution of grades differed between the operated and non-operated groups (p < 0.001). Less than 25% of patients who underwent surgery were assigned grades 0 and 1, and more than 88% were assigned grades 2 and 3. A moderate correlation was found between the grade and surgical intervention (rs = 0.632 and rs = 0.583). Conclusion Lee's grade was moderately correlated with surgical intervention. Lee's grading system can be a decision-making tool for the surgical treatment of CLSS.