• Title/Summary/Keyword: Random measure.

Search results: 473

A Study on Predictive Modeling of I-131 Radioactivity Based on Machine Learning (머신러닝 기반 고용량 I-131의 용량 예측 모델에 관한 연구)

  • Yeon-Wook You;Chung-Wun Lee;Jung-Soo Kim
    • Journal of Radiological Science and Technology / v.46 no.2 / pp.131-139 / 2023
  • High-dose I-131 used for the treatment of thyroid cancer causes localized exposure among the radiology technologists who handle it. Because there is a delay between the calibration date and the date on which a dose of I-131 is administered to a patient, the radioactivity of the administered dose must be measured directly with a dose calibrator. In this study, we applied machine learning models to external dose rates measured from shielded I-131 in order to predict its radioactivity. External dose rates were measured at distances of 1 m, 0.3 m, and 0.1 m from a shielded container holding the I-131, for a total of 868 sets of measurements. For modeling, we used the hold-out method to partition the data at a 7:3 ratio (609 training samples : 259 test samples). For the machine learning algorithms, we chose linear regression, decision tree, random forest, and XGBoost. To evaluate the models, we calculated the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) for accuracy, and R² for explanatory power. The evaluation results are as follows: linear regression (RMSE 268.15, MSE 71901.87, MAE 231.68, R² 0.92), decision tree (RMSE 108.89, MSE 11856.92, MAE 19.24, R² 0.99), random forest (RMSE 8.89, MSE 79.10, MAE 6.55, R² 0.99), and XGBoost (RMSE 10.21, MSE 104.22, MAE 7.68, R² 0.99). The random forest model achieved the highest predictive ability. Further improving the model's performance is expected to help lower exposure among radiology technologists.
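
A minimal sketch of the hold-out evaluation pipeline this abstract describes, using scikit-learn. The feature matrix (dose rates at 1 m, 0.3 m, and 0.1 m), the synthetic target, and the random forest hyperparameters are placeholders and assumptions, not the authors' data or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Placeholder data: rows = measurement sets, columns = dose rates at 1 m, 0.3 m, 0.1 m.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 50.0, size=(868, 3))
y = X @ np.array([10.0, 3.0, 1.0]) + rng.normal(0, 5, size=868)  # synthetic "radioactivity"

# 7:3 hold-out split (609 training / 259 test samples, as in the abstract).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)  # assumed hyperparameters
model.fit(X_train, y_train)
pred = model.predict(X_test)

mse = mean_squared_error(y_test, pred)
print("RMSE:", np.sqrt(mse))
print("MSE: ", mse)
print("MAE: ", mean_absolute_error(y_test, pred))
print("R2:  ", r2_score(y_test, pred))
```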

Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society / v.19 no.9 / pp.1647-1658 / 2016
  • RGB+D databases have been widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth-sensing technologies has enabled the widespread construction of indoor RGB+D databases, there are only a few outdoor RGB+D databases, largely due to an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images with a high-resolution stereo camera and then obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching, we develop an approach that estimates the confidence of depth maps based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure-based approaches on various benchmark datasets.
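
The paper's confidence measure is learned without supervision; purely as a hedged illustration of where stereo depth-confidence estimation starts, the sketch below computes a disparity map with OpenCV's semi-global matching and a classical left-right consistency mask. The file names `left.png`/`right.png` and the SGBM parameters are placeholders, and the consistency check is a standard baseline, not the authors' method.

```python
import cv2
import numpy as np

# Hypothetical input paths; replace with your own rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching for the left-view disparity (fixed-point /16 output).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disp_left = sgbm.compute(left, right).astype(np.float32) / 16.0

# Right-view disparity via the horizontal-flip trick (avoids opencv-contrib).
disp_right = sgbm.compute(cv2.flip(right, 1), cv2.flip(left, 1)).astype(np.float32) / 16.0
disp_right = cv2.flip(disp_right, 1)

# Left-right consistency: a pixel is confident if the right-view disparity at the
# matched location agrees with the left-view disparity within one pixel.
h, w = disp_left.shape
xs = np.arange(w)[None, :].repeat(h, axis=0)
target = np.clip(xs - np.round(disp_left).astype(int), 0, w - 1)
lrc_error = np.abs(disp_left - disp_right[np.arange(h)[:, None], target])
confidence = (lrc_error < 1.0).astype(np.float32)  # 1 = trusted depth, 0 = likely error
```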

Reliability-Based Topology Optimization Using Performance Measure Approach (성능함수법을 이용한 신뢰성기반 위상 최적설계)

  • Ahn, Seung-Ho;Cho, Seon-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.23 no.1 / pp.37-43 / 2010
  • In this paper, a reliability-based design optimization (RBDO) method is developed for the topology design of linear structures using a performance measure approach. The spatial domain is discretized using three-dimensional Reissner-Mindlin plate elements, and the design variables are taken as the material property of each element. A continuum-based adjoint variable method is employed for the efficient computation of sensitivities with respect to the design and random variables. The performance measure approach of RBDO is employed to evaluate the probabilistic constraints. The topology optimization problem is formulated with probabilistic displacement constraints, and uncertainties such as material properties and external loads are considered. Numerical examples show that the developed topology optimization method effectively yields reliable designs compared with other methods such as deterministic, safety factor, and worst-case approaches.
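
For context, a standard statement of the performance measure approach (PMA) used to evaluate such probabilistic constraints; the notation here ($\mathbf{d}$ for design variables, $\mathbf{X}$ for random variables, $T$ for the mapping to standard normal space, $\beta_t$ for the target reliability index) is assumed and may differ from the paper's.

```latex
% Probabilistic displacement constraint:
P\!\left[\, G(\mathbf{d}, \mathbf{X}) < 0 \,\right] \;\le\; \Phi(-\beta_t)
% PMA evaluates this constraint through the performance measure
G_p(\mathbf{d}) \;=\; \min_{\|\mathbf{u}\| = \beta_t} G\!\left(\mathbf{d},\, T^{-1}(\mathbf{u})\right),
\qquad \text{design feasible if } G_p(\mathbf{d}) \ge 0,
% where u denotes the random variables X mapped to standard normal space by T
% (a first-order reliability approximation of the probabilistic constraint).
```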

Analysis of Arduino Timer Callback for IoT Devices (IoT 디바이스를 위한 아두이노 타이머 콜백 분석)

  • Gong, Dong-Hwan;Shin, Seung-Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.6 / pp.139-143 / 2018
  • Arduino, an open-source hardware platform, is used in many IoT devices, and IoT devices require multitasking to handle various inputs and outputs. Among the methods used for multitasking on Arduino, we compare three: timing calls using millis(), the SimpleTimer library, and the Timer library. To measure the time delay of each method and the execution errors it causes, two test situations are created and analyzed. In the first situation, ten random tasks of a certain size are generated to measure the time delay of each method. In the second situation, ten random tasks of a certain size are generated to compare the execution errors caused by the time delay of the Timer library. In the first situation, the millis() timing-call method and the SimpleTimer library show similar time delays, while the Timer library shows a larger delay. In the second situation, an execution error occurred in which small tasks were not called back at the correct time because of the time delay.
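
Arduino sketches are written in C/C++; purely to illustrate the millis()-style non-blocking pattern being compared here, the sketch below uses Python's time.monotonic() as an analogue of millis() to run two periodic tasks from a single polling loop. The task names, periods, and loop duration are made up for the example.

```python
import time

def make_task(name, period_s):
    """Return a callback that fires whenever its period has elapsed (millis()-style polling)."""
    state = {"last": time.monotonic(), "period": period_s}
    def task(now):
        if now - state["last"] >= state["period"]:
            state["last"] = now
            print(f"{name} fired at {now:.3f}s")
    return task

# Two periodic tasks, polled cooperatively from one loop -- no blocking delays.
tasks = [make_task("blink", 0.5), make_task("sensor", 1.0)]

end = time.monotonic() + 3.0           # run the demo loop for about 3 seconds
while time.monotonic() < end:
    now = time.monotonic()
    for task in tasks:
        task(now)                      # each task decides for itself whether it is due
    time.sleep(0.001)                  # tiny sleep so the demo doesn't spin at 100% CPU
```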

Deep recurrent neural networks with word embeddings for Urdu named entity recognition

  • Khan, Wahab;Daud, Ali;Alotaibi, Fahd;Aljohani, Naif;Arafat, Sachi
    • ETRI Journal / v.42 no.1 / pp.90-100 / 2020
  • Named entity recognition (NER) continues to be an important task in natural language processing because it is featured as a subtask and/or subproblem in information extraction and machine translation. In Urdu language processing, it is a very difficult task. This paper proposes various deep recurrent neural network (DRNN) learning models with word embeddings. Experimental results demonstrate that they improve upon current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of the long short-term memory and backpropagation through time approaches. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. The effectiveness of the DRNN models with word embeddings for NER in Urdu is demonstrated using three datasets. The results reveal that the proposed approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measure values achieved on the three benchmark datasets using the proposed deep learning approaches are 81.1%, 79.94%, and 63.21%, respectively.
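
A minimal sketch of a bidirectional-LSTM sequence tagger with word embeddings, of the general kind evaluated in this paper; the vocabulary size, tag count, embedding dimension, sequence length, and layer widths below are placeholders, not the authors' settings, and the paper's language-dependent/independent feature inputs are omitted.

```python
import tensorflow as tf

# Placeholder sizes -- not the paper's configuration.
vocab_size, num_tags, emb_dim, max_len = 20000, 9, 100, 50

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),                                  # padded word-ID sequences
    tf.keras.layers.Embedding(vocab_size, emb_dim, mask_zero=True),    # word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
# Training would use integer word IDs and per-token tag IDs, e.g.:
# model.fit(X_train, y_train, batch_size=32, epochs=5)
```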

An efficient Reliability Analysis Method Based on The Design of Experiments Augmented by The Response Surface Method (실험계획법과 반응표면법을 이용한 효율적인 신뢰도 기법의 개발)

  • 이상훈;곽병만
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.700-703 / 2004
  • A reliability analysis and design procedure based on the design of experiments (DOE) is combined with the response surface method (RSM) for numerical efficiency. The procedure is based on a $3^n$ full factorial DOE for numerical quadrature, using explicit formulas for the optimum levels and weights derived for general distributions. The full factorial moment method (FFMM) shows good performance in terms of accuracy and its ability to treat non-normally distributed random variables. However, the FFMM becomes very inefficient because the number of function evaluations required increases exponentially with the number of random variables. To enhance efficiency, the response surface moment method (RSMM) is proposed. In the RSMM, only the experiments with high probability are conducted, and the rest of the data are complemented by a quadratic response surface approximation without mixed terms. The response surface is updated by conducting experiments one by one until the failure probability converges. The failure probability is calculated using the Pearson system and the four statistical moments obtained from the experimental data. A measure for checking the relative importance of an experimental point, named the influence index, is proposed. During the update of the response surface, mixed terms can be added to the formulation.
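
A rough sketch of the full factorial moment idea for standard normal inputs: three-point Gauss-Hermite levels and weights per variable, a $3^n$ tensor-product grid, and the first four statistical moments of the response. The limit-state function below is made up, and the paper's explicit level/weight formulas for general (non-normal) distributions, response surface update, and Pearson-system step are not reproduced.

```python
import itertools
import numpy as np

# Three-point Gauss-Hermite rule for a standard normal weight (probabilists' form);
# the nodes are 0 and +/-sqrt(3) with weights proportional to 2/3 and 1/6.
nodes, weights = np.polynomial.hermite_e.hermegauss(3)
weights = weights / weights.sum()              # normalize so the weights sum to 1

def g(x):
    """Made-up limit-state function of two standard normal variables."""
    return 3.0 - x[0] ** 2 - 0.5 * x[1]

n = 2                                          # number of random variables -> 3**n points
raw_moments = np.zeros(4)
for idx in itertools.product(range(3), repeat=n):
    x = nodes[list(idx)]                       # quadrature point
    w = np.prod(weights[list(idx)])            # product weight
    gx = g(x)
    raw_moments += w * gx ** np.arange(1, 5)   # accumulate E[g], E[g^2], E[g^3], E[g^4]

mean = raw_moments[0]
variance = raw_moments[1] - mean ** 2
print("mean =", mean, " variance =", variance)
# The paper then fits a Pearson-system distribution to the four moments to
# estimate the failure probability; that step is omitted in this sketch.
```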


Facebook Spam Post Filtering based on Instagram-based Transfer Learning and Meta Information of Posts (인스타그램 기반의 전이학습과 게시글 메타 정보를 활용한 페이스북 스팸 게시글 판별)

  • Kim, Junhong;Seo, Deokseong;Kim, Haedong;Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers / v.43 no.3 / pp.192-202 / 2017
  • This study develops a text spam filtering system for Facebook based on two categories of variables: keywords learned from Instagram and the meta-information of Facebook posts. Since there are no explicit labels for spam/ham posts, we utilize hashtags on Instagram to train the classification models. In addition, the filtering accuracy is enhanced by considering the meta-information of Facebook posts. To verify the proposed filtering system, we conduct an empirical experiment on a total of 1,795,067 Facebook documents and 761,861 Instagram documents. Employing random forest as the base classification algorithm, the experimental results show that the proposed filtering system yields 99% filtering accuracy and a 98% F1-measure. We expect that the proposed filtering scheme can be applied to other web services that suffer from massive numbers of spam posts but have no explicit spam labels available.
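
A toy sketch of the overall pipeline shape (text keywords plus post meta-information fed to a random forest, evaluated with accuracy and F1). The posts, meta-features, and labels below are invented stand-ins; in the paper the labels are transferred from Instagram hashtags rather than hand-set.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Invented posts, meta-features (e.g. link count, all-caps flag), and spam labels.
texts = ["win a free phone now", "lunch with the team", "cheap followers click here",
         "photos from the weekend trip", "limited offer buy now", "reading a good book"]
meta = np.array([[3, 1], [0, 0], [2, 1], [0, 0], [4, 1], [0, 0]])
labels = np.array([1, 0, 1, 0, 1, 0])           # 1 = spam, 0 = ham

X_text = TfidfVectorizer().fit_transform(texts)          # keyword features
X = hstack([X_text, csr_matrix(meta)]).tocsr()           # text + meta-information
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.33,
                                          random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), " F1:", f1_score(y_te, pred))
```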

VaR Estimation with Multiple Copula Functions (다차원 Copula 함수를 이용한 VaR 추정)

  • Hong, Chong-Sun;Lee, Won-Yong
    • The Korean Journal of Applied Statistics / v.24 no.5 / pp.809-820 / 2011
  • VaR (value at risk) is a measure used in market risk management and needs to be estimated for multivariate distributions. In this paper, Copula functions are used to generate the distributions of multivariate random variables. The dependence structure of the random variables is classified into exchangeable Copulas, fully nested Copulas, and partially nested Copulas. For the rate-of-return data of four Korean industries, the parameters of Archimedean Copula functions, including the Clayton, Gumbel, and Frank Copulas, are estimated under these three kinds of dependence structure. The Copula functions are then fitted to the data so that the corresponding VaR values are obtained and explored.
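
A small bivariate sketch of the Copula-based VaR idea (the paper works with four series and nested structures): estimate a Clayton parameter from Kendall's tau, simulate from the Clayton copula by the gamma-frailty (Marshall-Olkin) algorithm, map the uniforms back through empirical marginals, and read VaR off the simulated portfolio returns. The two return series and the equal weighting are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy stand-ins for two return series (the paper uses four Korean industries).
r1 = rng.normal(0.0005, 0.01, 1000)
r2 = 0.6 * r1 + rng.normal(0.0, 0.008, 1000)

# Clayton parameter from Kendall's tau:  theta = 2*tau / (1 - tau).
tau, _ = stats.kendalltau(r1, r2)
theta = 2 * tau / (1 - tau)

# Sample from the Clayton copula (gamma-frailty / Marshall-Olkin algorithm).
n = 100_000
v = rng.gamma(1.0 / theta, 1.0, n)                  # frailty variable
e = rng.exponential(1.0, (n, 2))
u = (1.0 + e / v[:, None]) ** (-1.0 / theta)        # uniforms with Clayton dependence

# Map the uniforms back through the empirical marginals and form a portfolio.
sims = np.column_stack([np.quantile(r1, u[:, 0]), np.quantile(r2, u[:, 1])])
portfolio = sims.mean(axis=1)                       # equally weighted portfolio return

var_95 = -np.quantile(portfolio, 0.05)              # 95% VaR reported as a positive loss
print("Clayton theta:", round(theta, 3), " 95% VaR:", round(var_95, 5))
```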

Construction of bivariate asymmetric copulas

  • Mukherjee, Saikat;Lee, Youngsaeng;Kim, Jong-Min;Jang, Jun;Park, Jeong-Soo
    • Communications for Statistical Applications and Methods / v.25 no.2 / pp.217-234 / 2018
  • Copulas are a tool for constructing multivariate distributions and for formalizing the dependence structure between random variables. A review of the copula literature shows that only a few asymmetric copulas are available so far, while data collected from the real world often exhibit asymmetry; this necessitates developing asymmetric copulas. In this study, we discuss a method to construct a new class of bivariate asymmetric copulas based on products of symmetric (and sometimes asymmetric) copulas with powered arguments, in order to determine whether the proposed construction offers added value for modeling asymmetric bivariate data. With these newly constructed copulas, we investigate dependence properties and measures of association between random variables. In addition, a test of the symmetry of data and the estimation of hyper-parameters by the maximum likelihood method are discussed. Using two real examples, car rental data and economic indicators data, we perform goodness-of-fit tests of our proposed asymmetric copulas. For these data, some of the proposed models turned out to be successful, whereas the existing copulas were mostly unsuccessful. The method presented here can be useful in fields such as finance, climate, and social science.
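
For context, a well-known construction of this "product of copulas with powered arguments" type is the Khoudraji device, stated below; it is shown only as background, and the paper's family may differ in its exact parameterization.

```latex
% Khoudraji-type construction: an asymmetric bivariate copula built as a
% product of two copulas C_1, C_2 evaluated at powered arguments.
C_{\alpha,\beta}(u,v) \;=\; C_1\!\left(u^{1-\alpha},\, v^{1-\beta}\right)\,
                            C_2\!\left(u^{\alpha},\, v^{\beta}\right),
\qquad \alpha,\beta \in [0,1],
% which is generally non-exchangeable (asymmetric) when \alpha \neq \beta,
% even if C_1 and C_2 are themselves exchangeable.
```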

Impact of Mobility on the Ad Hoc Network Performance (이동성이 Ad Hoc 망의 성능에 미치는 영향)

  • Ahn, Hong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.5 / pp.201-208 / 2010
  • A mobile ad hoc network (MANET) has a highly dynamic topology and hence presents a great challenge for network performance evaluation and network protocol design. We proposed the total path break-up time, $\sum_{i}T_i$, as a metric to measure the performance of the total system as well as of an individual connection. In this paper, we evaluate and analyze the performance of three mobility models (Random Waypoint, Manhattan, and Blocked Manhattan) by applying the total path break-up metric, and investigate why network parameters such as packet delivery ratio and end-to-end delay vary across mobility models. We also present an analysis of how much the AODV buffer improves the packet delivery ratio and increases the end-to-end delay despite path break-ups.
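
A small sketch of how a total path break-up time $\sum_{i}T_i$ could be accumulated for one source-destination pair from a route-availability trace. The event format (chronological (timestamp, route_up) pairs, e.g. parsed from a simulator trace) is an assumption made for illustration, not the paper's simulation output.

```python
def total_path_breakup_time(events, end_time):
    """Sum the durations during which no route exists, given chronologically
    ordered (timestamp, route_up) events for one source-destination pair."""
    total, down_since = 0.0, None
    for t, route_up in events:
        if not route_up and down_since is None:
            down_since = t                    # path just broke
        elif route_up and down_since is not None:
            total += t - down_since           # path repaired: add this break interval T_i
            down_since = None
    if down_since is not None:                # still broken when the run ends
        total += end_time - down_since
    return total

# Example: the path breaks at t=12s and is repaired at t=15s, then breaks again at
# t=40s and stays broken until the 50s run ends -> 3s + 10s = 13s in total.
events = [(0.0, True), (12.0, False), (15.0, True), (40.0, False)]
print(total_path_breakup_time(events, end_time=50.0))   # 13.0
```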