• Title/Summary/Keyword: Error performance


A simulation study for various propensity score weighting methods in clinical problematic situations (임상에서 발생할 수 있는 문제 상황에서의 성향 점수 가중치 방법에 대한 비교 모의실험 연구)

  • Siseong Jeong;Eun Jeong Min
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.5
    • /
    • pp.381-397
    • /
    • 2023
  • The most representative design used in clinical trials is randomization, which is used to estimate the treatment effect accurately. However, comparison between the treatment group and the control group in an observational study without randomization is biased by various unadjusted differences, such as differences in patient characteristics. Propensity score weighting is a widely used method to address these problems and to minimize bias by adjusting for such confounding when assessing treatment effects. Inverse probability weighting, the most popular method, assigns weights proportional to the inverse of the conditional probability of receiving a specific treatment assignment, given observed covariates. However, this method often suffers from extreme propensity scores, resulting in biased estimates and excessive variance. Several alternative methods, including trimming, overlap weights, and matching weights, have been proposed to mitigate these issues. In this paper, we conduct a simulation study to compare the performance of various propensity score weighting methods under diverse situations, such as limited overlap, a misspecified propensity score model, and treatment contrary to prediction. In the simulation results, overlap weights and matching weights consistently outperform inverse probability weighting and trimming in terms of bias, root mean squared error, and coverage probability.
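
A minimal numerical sketch of the three weighting schemes compared above, assuming a propensity score has already been fitted; the function names and the simple weighted difference-in-means estimator are illustrative, not the authors' implementation:

```python
import numpy as np

def ps_weights(ps, treated):
    """Weights from a fitted propensity score.
    ps      : estimated P(T=1 | X) for each subject
    treated : 0/1 treatment indicator
    Returns inverse probability, overlap, and matching weights."""
    ipw      = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    overlap  = np.where(treated == 1, 1.0 - ps, ps)
    matching = np.minimum(ps, 1.0 - ps) / np.where(treated == 1, ps, 1.0 - ps)
    return ipw, overlap, matching

def weighted_effect(y, treated, w):
    """Weighted difference in mean outcomes between treated and control."""
    t, c = treated == 1, treated == 0
    return np.average(y[t], weights=w[t]) - np.average(y[c], weights=w[c])
```

Extreme propensity scores near 0 or 1 blow up the inverse probability weights, while the overlap and matching weights stay bounded, which is the intuition behind the bias and variance comparison reported above.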

Optimization-based Deep Learning Model to Localize L3 Slice in Whole Body Computerized Tomography Images (컴퓨터 단층촬영 영상에서 3번 요추부 슬라이스 검출을 위한 최적화 기반 딥러닝 모델)

  • Seongwon Chae;Jae-Hyun Jo;Ye-Eun Park;Jin-Hyoung Jeong;Sung Jin Kim;Ahnryul Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.331-337
    • /
    • 2023
  • In this paper, we propose a deep learning model that detects the lumbar 3 (L3) slice in CT images to determine the occurrence and degree of sarcopenia. We also propose an optimization technique that uses the oversampling ratio and class weight as design parameters to address the performance degradation caused by the data imbalance between the L3-level and non-L3-level portions of the CT data. To train and test the model, a total of 150 whole-body CT images of 104 prostate cancer patients and 46 bladder cancer patients who visited Gangneung Asan Medical Center were used. The deep learning model was ResNet50, and the design parameters of the optimization technique were selected as six types: the model hyperparameters, the data augmentation ratio, and the class weight. The proposed optimization-based L3 level extraction model reduced the median L3 error by about 1.0 slice compared to the control model (a model that optimized only five types of hyperparameters). These results show that accurate L3 slice detection is possible, and additionally demonstrate that the data imbalance problem can be addressed effectively through oversampling via data augmentation and class weight adjustment.
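
A rough sketch of the class-weighted training idea described above, using PyTorch and torchvision's ResNet50; the single-channel input layer, the class-weight values, and the helper names are placeholders rather than the paper's optimized design parameters:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary slice classifier: 0 = non-L3 slice, 1 = L3 slice.
model = models.resnet50(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel CT slices
model.fc = nn.Linear(model.fc.in_features, 2)

# Up-weighting the rare L3 class counteracts the slice-level imbalance;
# the ratio here is illustrative, not the value found by the optimization.
class_weight = torch.tensor([1.0, 8.0])
criterion = nn.CrossEntropyLoss(weight=class_weight)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(x, y):
    """x: (batch, 1, H, W) CT slices, y: (batch,) slice labels."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setting, the class weight and the oversampling (augmentation) ratio would themselves be searched over alongside the other hyperparameters rather than fixed as above.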

A 2×2 MIMO Spatial Multiplexing 5G Signal Reception in a 500 km/h High-Speed Vehicle using an Augmented Channel Matrix Generated by a Delay and Doppler Profiler

  • Suguru Kuniyoshi;Rie Saotome;Shiho Oshiro;Tomohisa Wada
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.1-10
    • /
    • 2023
  • This paper proposes a method to extend Inter-Carrier Interference (ICI) canceling Orthogonal Frequency Division Multiplexing (OFDM) receivers for 5G mobile systems to spatial multiplexing 2×2 MIMO (Multiple Input Multiple Output) systems, in order to support high-speed ground transportation services by linear motor cars traveling at 500 km/h. In Japan, a linear-motor high-speed ground transportation service is scheduled to begin in 2027. To expand the coverage area of base stations, 5G mobile systems for high-speed trains will have multiple base station antennas transmitting the same downlink (DL) signal, forming an expanded cell along the train rails. For a 5G terminal in a fast-moving train, the signals from the forward and backward antennas are Doppler-shifted in opposite directions, so the receiver in the train may have trouble estimating the exact channel transfer function (CTF) for demodulation. A receiver in such a high-speed train sees a transmission channel composed of multiple Doppler-shifted propagation paths, and the resulting loss of sub-carrier orthogonality in the Doppler-spread channel causes ICI. The ICI canceller is realized in three steps. First, using the Demodulation Reference Symbol (DMRS) pilot signals, it estimates three parameters of each multipath component: attenuation, relative delay, and Doppler shift. Second, based on these parameter sets, the CTF from transmitter sub-carrier n to receiver sub-carrier l is generated; for n ≠ l, the CTF corresponds to an ICI factor. Third, once the ICI factors are obtained, ICI canceling is realized by applying the inverse ICI operation with a multi-tap equalizer. ICI canceling performance has been simulated under severe channel conditions, namely 500 km/h and an 8-path reverse Doppler shift, for QPSK, 16QAM, 64QAM, and 256QAM modulations. In particular, for the 2×2 MIMO QPSK and 16QAM modulation schemes, BER (Bit Error Rate) improvement was observed when the number of taps in the multi-tap equalizer was set to 31 or more, at a moving speed of 500 km/h in the 8-path reverse Doppler shift environment.
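
A simplified sketch of the second step, building the sub-carrier coupling matrix (the CTF whose off-diagonal entries are ICI factors) from per-path attenuation, delay, and Doppler shift, assuming an idealized cyclic-prefix OFDM symbol; the function and parameter names are illustrative and the paper's delay and Doppler profiler is more elaborate:

```python
import numpy as np

def ici_channel_matrix(gains, delays_s, dopplers_hz, n_sc, fs_hz):
    """Build the N x N frequency-domain channel matrix G for one OFDM symbol.
    G[l, n] couples transmit sub-carrier n into receive sub-carrier l;
    entries with n != l are the ICI factors."""
    N = n_sc
    k = np.arange(N)                              # time-sample index within the symbol
    n_idx = np.arange(N)
    G = np.zeros((N, N), dtype=complex)
    for g, tau, fd in zip(gains, delays_s, dopplers_hz):
        eps = fd / fs_hz                          # Doppler shift normalised per sample
        delay_phase = np.exp(-2j * np.pi * n_idx * tau * fs_hz / N)
        d = n_idx[None, :] - n_idx[:, None]       # sub-carrier offset n - l
        # spectral leakage term: (1/N) * sum_k exp(j2*pi*(eps + d/N)*k)
        s = np.exp(2j * np.pi * (eps + d[..., None] / N) * k).sum(axis=-1) / N
        G += g * s * delay_phase[None, :]
    return G

# Toy usage: two paths with opposite Doppler signs (forward/backward antennas)
G = ici_channel_matrix(gains=[1.0, 0.6], delays_s=[0.0, 1e-6],
                       dopplers_hz=[+1600.0, -1600.0],
                       n_sc=64, fs_hz=30.72e6)
```

With zero Doppler the matrix is diagonal; the off-diagonal mass introduced by the opposing Doppler shifts is what the multi-tap equalizer has to invert in the third step.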

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology
    • /
    • v.22 no.2
    • /
    • pp.168-178
    • /
    • 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for the EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for the EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN were accurate, stable, and efficient. This method is promising for automatically evaluating aortic morphology and alleviating the workload of radiologists in the near future.
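
For reference, a minimal sketch of the Dice coefficient used above to score segmentation overlap; the function name and the toy random masks are illustrative only:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks, e.g. the predicted
    and manual segmentations of the entire aorta, true lumen, or false lumen."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy usage with two random 3D masks (a real case would use CT segmentations)
rng = np.random.default_rng(0)
a = rng.random((32, 32, 32)) > 0.5
b = rng.random((32, 32, 32)) > 0.5
print(round(dice(a, b), 3))
```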

The Measurement Algorithm for Microphone's Frequency Character Response Using OATSP (OATSP를 이용한 마이크로폰의 주파수 특성 응답 측정 알고리즘)

  • Park, Byoung-Uk;Kim, Hack-Yoon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.61-68
    • /
    • 2007
  • The frequency response of a microphone, which indicates the frequency range over which the microphone can produce output within the approved level, is one of the most significant standards used to characterize a microphone. At present, conventional methods of measuring the frequency response are complicated and involve expensive equipment. To address these disadvantages, this paper suggests a new algorithm that can measure the frequency response of a microphone in a simple manner. The suggested algorithm generates the Optimized Aoshima's Time Stretched Pulse (OATSP) signal from a computer, plays it through a standard speaker, and measures the impulse response of the microphone by convolving the inverse OATSP signal with the signal received by the microphone under test. The frequency response of the microphone is then calculated from this impulse response. The performance of the suggested algorithm was tested through a comparative analysis of the reference frequency response data and the frequency response of the microphone measured by the algorithm. The test showed that the algorithm is suitable for measuring the frequency response of a microphone and that, despite a few errors, all results fell within the error tolerance.
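
A rough sketch of the TSP-based measurement idea, assuming the standard OATSP construction (quadratic phase, hence all-pass, spectrum) and using spectral division by the sweep as the inverse filter; the parameterization and sign conventions in the paper may differ:

```python
import numpy as np

def oatsp(n_fft, m):
    """Time-domain OATSP signal of length n_fft; m controls the sweep length
    (roughly how many samples the energy is stretched over)."""
    k = np.arange(n_fft // 2 + 1)
    H = np.exp(1j * 4 * m * np.pi * k**2 / n_fft**2)   # quadratic phase = linear group delay
    h = np.fft.irfft(H, n_fft)
    return np.roll(h, -(n_fft // 2 - m))               # centre the sweep in the frame

def impulse_response(recorded, tsp, n_fft):
    """Recover the impulse response by dividing the recorded spectrum by the
    TSP spectrum (equivalent to circular convolution with the inverse TSP)."""
    R = np.fft.rfft(recorded, n_fft)
    S = np.fft.rfft(tsp, n_fft)
    return np.fft.irfft(R / S, n_fft)

# Toy usage: the frequency response is the FFT of the measured impulse response.
n_fft, m = 2**14, 2**12
sweep = oatsp(n_fft, m)
recorded = sweep.copy()          # placeholder; a real run records the mic capture of the sweep
freq_resp = np.fft.rfft(impulse_response(recorded, sweep, n_fft))
```

Because the OATSP spectrum has unit magnitude, the division is numerically stable, which is what makes this a simple substitute for sweep-based measurement hardware.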

Development of an Anomaly Detection Algorithm for Verification of Radionuclide Analysis Based on Artificial Intelligence in Radioactive Wastes (방사성폐기물 핵종분석 검증용 이상 탐지를 위한 인공지능 기반 알고리즘 개발)

  • Seungsoo Jang;Jang Hee Lee;Young-su Kim;Jiseok Kim;Jeen-hyeng Kwon;Song Hyun Kim
    • Journal of Radiation Industry
    • /
    • v.17 no.1
    • /
    • pp.19-32
    • /
    • 2023
  • The amount of radioactive waste is expected to increase dramatically with the decommissioning of nuclear power plants such as Kori-1, the first nuclear power plant in South Korea. Accurate nuclide analysis is necessary to manage radioactive waste safely, but research on verifying radionuclide analysis is not yet well established. This study aimed to develop a technology that can verify the results of radionuclide analysis based on artificial intelligence. We propose an anomaly detection algorithm for inspecting errors in radionuclide analysis. We used data from 'Updated Scaling Factors in Low-Level Radwaste' (NP-5077) published by EPRI (Electric Power Research Institute), and resampling was performed with the SMOTE (Synthetic Minority Oversampling Technique) algorithm to augment the data. A total of 149,676 data points augmented with the SMOTE algorithm were used to train the artificial neural networks (classification and anomaly detection networks), and 324 NP-5077 report data points were used to verify the performance of the networks. The anomaly detection algorithm for radionuclide analysis was divided into two modules: one detects cases in which radioactive waste is incorrectly classified, and the other discriminates abnormal data such as missing or incorrectly entered data. The classification network was constructed from fully connected layers, and the anomaly detection network was composed of an encoder and a decoder; the latter operates on the latent vector loaded from the last layer of the classification network. This study also conducted exploratory data analysis (statistics, histograms, correlation, covariance, PCA, k-means clustering, DBSCAN). The analysis showed that the types of radioactive waste are difficult to distinguish because their data distributions overlap. In spite of this complexity, our deep learning based algorithm can distinguish abnormal data from normal data. Radionuclide analysis was verified using our anomaly detection algorithm, and meaningful results were obtained.
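
A minimal sketch of the data pipeline described above, combining SMOTE oversampling, a fully connected classifier, and a reconstruction-error anomaly check. Unlike the paper's design, the anomaly detector here is a standalone autoencoder rather than one driven by the classifier's latent vector, and all feature names, sizes, and thresholds are illustrative:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPClassifier, MLPRegressor

# X: scaling-factor features per waste sample, y: waste-type label (synthetic stand-ins)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 3, size=500)

# 1) Balance the classes with SMOTE before training, as in the study.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# 2) Fully connected classification network for the waste type.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X_bal, y_bal)

# 3) Encoder-decoder style anomaly detector: reconstruct the input and flag
#    samples whose reconstruction error is far above what was seen in training.
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=1000).fit(X_bal, X_bal)
train_err = np.mean((ae.predict(X_bal) - X_bal) ** 2, axis=1)
threshold = np.percentile(train_err, 99)

def is_anomalous(x_new):
    """x_new: 2-D array of feature rows; True where reconstruction error exceeds threshold."""
    err = np.mean((ae.predict(x_new) - x_new) ** 2, axis=1)
    return err > threshold
```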

Problems and Improvement Measures of Private Consulting Firms Working on Rural Area Development (농촌지역개발 민간컨설팅회사의 실태와 개선방안)

  • Kim, Jung Tae
    • Journal of Agricultural Extension & Community Development
    • /
    • v.21 no.2
    • /
    • pp.1-28
    • /
    • 2014
  • Private consulting firms currently participating in bottom-up rural area development projects are involved in nearly all areas of rural area development, and the policy environment that emphasizes the bottom-up approach will further expand their participation. Reviews of private consulting firms, which started out amid high expectations, are now becoming rather negative. Expertise is the key issue in the controversy over private consulting firms, and existing analyses tend to locate the causes of the problems within the firms themselves. This study was conducted on the premise that the cause and structure of these problems are rooted in how the policy has been promoted, since the government authorities are responsible for managing and supervising the implementation of policies, not for developing them. The current issues with consulting firms emerged because private consulting was hastily introduced, following the government policy trend without sufficient consideration, and because the policy environment demanded short-term outcomes even though the purpose of bottom-up rural area development lies in the ideology of endogenous development, which focuses on changes in residents' perceptions. This research examined how the problems of private consulting firms that emerged and were addressed in this context influenced the consulting market, using current data on the firms' business performance. In the type analysis, firms were divided into three groups: top performers including market leaders (9), excellent performers (36), and average performers (34). An analysis of the correlation between each type's business performance and managerial resources such as expertise revealed a correlation between human resources and regional development only among the excellent performers, with no correlation found for the other types. These results imply that external factors other than a firm's capabilities (e.g., expertise) play a significant role in the criteria for selecting private consulting firms. Thus, government authorities must reflect on their error of hastily adopting private consulting without sufficient consideration and must urgently establish corrective measures.

Enhancing Predictive Accuracy of Collaborative Filtering Algorithms using the Network Analysis of Trust Relationship among Users (사용자 간 신뢰관계 네트워크 분석을 활용한 협업 필터링 알고리즘의 예측 정확도 개선)

  • Choi, Seulbi;Kwahk, Kee-Young;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.113-127
    • /
    • 2016
  • Among recommendation techniques, collaborative filtering (CF) is commonly recognized as the most effective for implementing recommender systems. CF has been widely studied and adopted in both academic and real-world applications. The basic idea of CF is to create recommendations by finding correlations between users of a recommender system. A CF system compares users based on how similar they are, and recommends products by using like-minded users' evaluations of each product. Thus, computing the similarities among users is very important in CF because the recommendation quality depends on it. Typical CF uses users' explicit numeric ratings of items (i.e., quantitative information) when computing these similarities. In other words, numeric ratings have been the sole source of user preference information in traditional CF. However, user ratings do not always fully reflect users' actual preferences. According to several studies, users may more readily accept recommendations from people they trust when purchasing goods. Thus, trust relationships can be regarded as an informative source for accurately identifying user preferences. Against this background, we propose a new hybrid recommender system that fuses CF and social network analysis (SNA). The proposed system adopts a recommendation algorithm that additionally reflects the results of SNA. In detail, our proposed system is based on conventional memory-based CF, but it is designed to use both numeric ratings and trust relationship information between users when calculating user similarities. For this, the system creates and uses not only the user-item rating matrix but also a user-to-user trust network. As methods for calculating similarity between users, we propose two alternatives. The first calculates the degree of similarity between users using in-degree and out-degree centrality, the indices representing central location in a social network; we name these approaches 'Trust CF - All' and 'Trust CF - Conditional'. The second reflects a neighbor's score more strongly when the target user trusts that neighbor directly or indirectly, where the direct or indirect trust relationship is identified by searching the users' trust network; we call this approach 'Trust CF - Search'. To validate the applicability of the proposed system, we used experimental data provided by LibRec, crawled from the entire FilmTrust website; it consists of movie ratings and a trust network indicating which users trust each other. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and UCINET 6. To examine the effectiveness of the proposed system, we compared its performance with that of a conventional CF system, evaluating recommender performance using the average MAE (mean absolute error). The results showed that applying the in-degree centrality index of the trust network to all users without conditions (Trust CF - All) yielded lower accuracy (MAE = 0.565134) than conventional CF (MAE = 0.564966). Applying the in-degree centrality index only to users whose out-degree centrality exceeds a certain threshold (Trust CF - Conditional) improved the accuracy slightly (MAE = 0.564909) compared to traditional CF. However, the algorithm based on searching the trust network (Trust CF - Search) showed the best performance (MAE = 0.564846), and a paired-samples t-test indicated that Trust CF - Search outperformed conventional CF at the 10% statistical significance level. Our study sheds light on the use of trust relationship network information for facilitating electronic commerce by recommending appropriate items to users.
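
A minimal sketch of the general "trust-boosted" neighbor weighting idea behind Trust CF - Search, assuming a Pearson-similarity memory-based CF; the boost factor, hop limit, and function names are illustrative and not the authors' exact scheme:

```python
import numpy as np
import networkx as nx

def predict_rating(ratings, trust_edges, target, item, boost=1.5, k_hops=2):
    """Memory-based CF prediction in which a neighbour's similarity is boosted
    when the target user trusts the neighbour directly or within k hops.
    ratings: users x items matrix with NaN for unrated items."""
    G = nx.DiGraph(trust_edges)
    mu_t = np.nanmean(ratings[target])
    num = den = 0.0
    for u in range(ratings.shape[0]):
        if u == target or np.isnan(ratings[u, item]):
            continue
        both = ~np.isnan(ratings[target]) & ~np.isnan(ratings[u])
        if both.sum() < 2:
            continue
        sim = np.corrcoef(ratings[target, both], ratings[u, both])[0, 1]
        if np.isnan(sim):
            continue
        try:
            trusted = nx.shortest_path_length(G, target, u) <= k_hops
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            trusted = False
        w = sim * boost if trusted else sim
        num += w * (ratings[u, item] - np.nanmean(ratings[u]))
        den += abs(w)
    return mu_t if den == 0 else mu_t + num / den

# Toy usage: 4 users x 3 items (NaN = unrated), trust edges as (truster, trustee)
R = np.array([[5, 3, np.nan], [4, np.nan, 2], [5, 4, 1], [2, 5, 4]], dtype=float)
print(predict_rating(R, [(0, 2), (2, 1)], target=0, item=2))
```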

Performance Test of Portable Hand-Held HPGe Detector Prototype for Safeguard Inspection (안전조치 사찰을 위한 휴대형 HPGe 검출기 시제품 성능평가 실험)

  • Kwak, Sung-Woo;Ahn, Gil Hoon;Park, Iljin;Ham, Young Soo;Dreyer, Jonathan
    • Journal of Radiation Protection and Research
    • /
    • v.39 no.1
    • /
    • pp.54-60
    • /
    • 2014
  • The IAEA has employed various types of radiation detectors, including HPGe, NaI, and CZT, for the accountancy of nuclear material. Among them, HPGe has mainly been used in verification activities requiring high accuracy. Because of its essential cooling component (a liquid-nitrogen or mechanical cooling system), it is large and heavy and needs a long cooling time before use. A new hand-held portable HPGe detector has been developed to address these problems. This paper presents the results of a performance evaluation test of the new hand-held portable HPGe prototype used during IAEA inspection activities. The radiation spectra obtained with the new portable HPGe showed different characteristics depending on the types and enrichments of the nuclear materials inspected. Gamma rays from daughter radioisotopes in the decay series of $^{235}U$ and $^{238}U$, as well as characteristic x-rays from uranium, could be clearly separated from other peaks in the spectra. The relative error of the enrichment measured by the new portable HPGe was in the range of 9 to 27%. The enrichment measurement results partially failed to meet the IAEA requirement because of the small size of the radiation-sensing material; this problem might be solved through further study. This paper discusses how to determine the enrichment of nuclear material and how to apply the new hand-held portable HPGe to safeguards inspection. Few papers have dealt with IAEA inspection activities in Korea for verifying the accountancy of nuclear material in national nuclear facilities, so this paper would contribute to analyzing the results of safeguards inspections. It is also expected that the discussion of further detector improvements will contribute to the development of radiation detectors in the related field.

Mathematical Models to Predict Staphylococcus aureus Growth on Processed Cheeses

  • Kim, Kyungmi;Lee, Heeyoung;Moon, Jinsan;Kim, Youngjo;Heo, Eunjeong;Park, Hyunjung;Yoon, Yohan
    • Journal of Food Hygiene and Safety
    • /
    • v.28 no.3
    • /
    • pp.217-221
    • /
    • 2013
  • This study developed predictive models for the kinetic behavior of Staphylococcus aureus on processed cheeses. Mozzarella slice cheese and cheddar slice cheese were inoculated with 0.1 ml of a S. aureus strain mixture (ATCC13565, ATCC14458, ATCC23235, ATCC27664, and NCCP10826). The inoculated samples were then stored at $4^{\circ}C$ (1440 h), $15^{\circ}C$ (288 h), $25^{\circ}C$ (72 h), and $30^{\circ}C$ (48 h), and the growth of all bacteria and of S. aureus were enumerated on tryptic soy agar and mannitol salt agar, respectively. The Baranyi model was fitted to the growth data of S. aureus to calculate growth rate (${\mu}_{max}$; ${\log}CFU{\cdot}g^{-1}{\cdot}h^{-1}$), lag phase duration (LPD; h), lower asymptote (log CFU/g), and upper asymptote (log CFU/g). The growth parameters were further analyzed using the square root model as a function of temperature. The model performance was validated with observed data, and the root mean square error (RMSE) was calculated. At $4^{\circ}C$, S. aureus cell growth was not observed on either processed cheese, but S. aureus growth on the mozzarella and cheddar cheeses was observed at $15^{\circ}C$, $25^{\circ}C$, and $30^{\circ}C$. The ${\mu}_{max}$ values increased, but LPD values decreased as storage temperature increased. In addition, the developed models showed acceptable performance (RMSE = 0.3500-0.5344). This result indicates that the developed kinetic model should be useful in describing the growth pattern of S. aureus in processed cheeses.