• Title/Summary/Keyword: Paired dataset


An extended cloud analysis method for seismic fragility assessment of highway bridges

  • Sfahani, Mohammad Ghalami;Guan, Hong
    • Earthquakes and Structures
    • /
    • v.15 no.6
    • /
    • pp.605-616
    • /
    • 2018
  • In this paper, an extended Cloud analysis method is developed for seismic fragility assessment of existing highway bridges in the southeast Queensland region. This method extends the original Cloud analysis dataset by performing scaled Cloud analyses, and the original and scaled Cloud datasets are then paired to generate seismic fragility curves. The seismic hazard in this region is critically reviewed, and ground motion records are selected for time-history analysis based on various record selection criteria. A parametric highway bridge model is developed in the OpenSees analysis software, and a sampling technique is employed to quantify the uncertainties of highway bridges ubiquitous in this region. Technical recommendations are also given for the seismic performance evaluation of highway bridges in such low-to-moderate seismic zones. Finally, a probabilistic fragility study is conducted by performing a total of 8000 time-history analyses, and representative bridge fragility curves are generated. The seismic fragility curves generated by the proposed extended Cloud analysis method are shown to be in close agreement with those obtained by the rigorous incremental dynamic analysis method. The results also reveal that more than 50% of the highway bridges existing in southeast Queensland would be damaged at a peak ground acceleration of 0.14 g.
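Fragility curves like those above are conventionally modeled as a lognormal CDF of the ground motion intensity measure. A minimal sketch in Python; the median capacity `theta` and dispersion `beta` below are illustrative assumptions, not values fitted from the paper's paired Cloud datasets:

```python
import math

def fragility(pga, theta=0.14, beta=0.6):
    """Lognormal fragility curve: P(damage exceedance | PGA).

    theta: median capacity in g; beta: lognormal dispersion.
    Both parameter values are illustrative, not from the paper.
    """
    return 0.5 * (1.0 + math.erf(math.log(pga / theta) / (beta * math.sqrt(2.0))))

# At the median capacity the curve returns exactly 0.5, which is the
# sense in which ">50% damaged" corresponds to shaking at or above
# the median capacity of the bridge population.
p_at_median = fragility(0.14)
```

In a real Cloud analysis, `theta` and `beta` would be estimated by regressing the logarithm of the structural demand on the logarithm of the intensity measure over the paired datasets.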

Implementing Rule-based Healthcare Edits

  • Abdullah, Umair;Shaheen, Muhammad;Ujager, Farhan Sabir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.1
    • /
    • pp.116-132
    • /
    • 2022
  • Automated medical claims processing and billing is a popular application domain of information technology. Managing medical data is a tedious job for healthcare professionals, which distracts them from their main job of healthcare. The technology used in data management has a sound impact on the quality of healthcare data. Most Information Technology (IT) organizations use conventional software development technology to implement healthcare systems. The objective of this experimental study is to devise a mechanism for using rule-based expert systems for medical edits and to compare it with conventional software development technology. A sample of 100 medical edits was selected as a dataset to be implemented using both technologies. Besides empirical analysis, a paired t-test was used to validate the statistical significance of the difference between the two techniques. Conventional software development technology took 254.5 working hours to process these edits, while rule-based technology took 81 hours, making it roughly three times more efficient; the difference is significant at a 95% confidence level, with a reliability measure of 0.462 (< 0.5).
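The paired t-test used above compares matched measurements, here the effort to implement the same edit under each technology. A minimal sketch of the statistic; the per-edit hours below are hypothetical stand-ins, not the study's data:

```python
import math
import statistics

def paired_t_statistic(a, b):
    """Paired t-test statistic for two matched samples.

    Returns (t, degrees_of_freedom). The test operates on the
    per-pair differences, not on the two samples independently.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical hours per edit: conventional vs. rule-based technology
conv = [2.5, 3.0, 2.8, 2.6, 3.1, 2.4]
rule = [0.8, 0.9, 0.7, 0.8, 1.0, 0.8]
t, df = paired_t_statistic(conv, rule)
```

The resulting t-statistic would then be compared against the t-distribution with `df` degrees of freedom to obtain a p-value (e.g., via `scipy.stats.ttest_rel` in practice).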

Development of an Algorithm for Automatic Extraction of Lower Body Landmarks Using Grasshopper Programming Language (Grasshopper 프로그래밍 기반 3D 인체형상의 하반신 기준점 자동탐색 알고리즘 설계)

  • Eun Joo Ryu;Hwa Kyung Song
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.47 no.1
    • /
    • pp.171-190
    • /
    • 2023
  • This study aims to develop algorithms for automatic extraction of landmarks from the lower body of women aged 20-54 using the Grasshopper programming language, based on 3D scan data in the 8th SizeKorea dataset. First, 11 landmarks were defined using the morphological features of 3D body surfaces and clothing applications, from which automatic landmark extraction algorithms were developed. To verify the accuracy of the algorithms, this study developed an additional algorithm that automatically measures 16 items, and the algorithm-derived measurements were compared with the SizeKorea measurements using paired t-test analysis, with an allowable tolerance per ISO 20685-1:2018. The algorithms successfully identified most items except for the crotch point and the gluteal fold point; for landmarks with significant differences, the algorithms were modified. This study is significant because scan editing, landmark search, and measurement extraction were all performed successfully in one interface, and the developed algorithms have high efficiency and strong adaptability.

A Study on Automatic Alignment System based on Object Detection and Homography Estimation (객체 탐지 및 호모그래피 추정을 이용한 안저영상 자동 조정체계 시스템 연구)

  • In, Sanggyu;Beom, Junghyun;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.401-403
    • /
    • 2021
  • This system holds a paired dataset of conventional fundus images and ultra-widefield fundus images taken from the same patients, and aims to automate the process of matching the images' size and resolution and fine-adjusting the positions of the macula, optic disc, and blood vessels. The process consists of a Scaling stage, which crops each image around the macula to match image sizes, and a Warping stage, which fine-adjusts a pair of macula-centered crops so that the macula, optic disc, and vessels coincide when the two images are overlaid. In the Scaling stage, because the fields of view of conventional and ultra-widefield fundus images differ markedly, the images must first be cropped to capture the macular degeneration region well; object detection of the optic disc will be used for this. In the Warping stage, the same macular degeneration information must appear at the same location, so standardization and position adjustment are essential; rotation, translation, and other transformation operations are then applied to match features within the fundus images, by computing an image transformation matrix through homography estimation. The automatically aligned fundus images will later serve as training data for a GAN-based fundus image generation model; experiments are currently under way on 2,500 pairs, with a final target of 30,000 pairs of fundus images.
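The Warping stage maps pixel coordinates of one fundus image onto the other through a 3×3 homography matrix. A minimal sketch of how such a matrix transforms a point; the matrices below are illustrative (in practice the matrix would be estimated from matched features, e.g., with RANSAC in OpenCV):

```python
def apply_homography(H, x, y):
    """Map a pixel (x, y) through a 3x3 homography matrix H.

    H is a list of 3 rows; the projective result (xp, yp, w) is
    normalized by the third coordinate w.
    """
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w

# Identity homography leaves points unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A pure translation by (10, -5) pixels, a special case of homography.
T = [[1, 0, 10], [0, 1, -5], [0, 0, 1]]
```

Rotation, scaling, and perspective distortion are all expressible in the same 3×3 matrix, which is why a single estimated homography can perform the alignment described above.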

Low-dose CT Image Denoising Using Classification Densely Connected Residual Network

  • Ming, Jun;Yi, Benshun;Zhang, Yungang;Li, Huixin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.6
    • /
    • pp.2480-2496
    • /
    • 2020
  • Considering that high-dose X-ray radiation during CT scans may pose potential risks to patients, the medical imaging industry has placed increasing emphasis on low-dose CT. Due to the complex statistical characteristics of the noise found in low-dose CT images, many traditional methods have difficulty preserving structural details effectively while suppressing noise and artifacts. Inspired by deep learning techniques, we propose a densely connected residual network (DCRN) for low-dose CT image noise cancelation, which combines the ideas of dense connection and residual learning. On one hand, dense connection maximizes information flow between layers in the network, which helps maintain structural details when denoising images. On the other hand, residual learning paired with batch normalization allows for reduced training time and better noise-reduction performance. The experiments are performed on 100 CT images selected from a public medical dataset, TCIA (The Cancer Imaging Archive). Compared with three other competitive denoising algorithms, both the subjective visual effect and objective evaluation indexes, including PSNR, RMSE, MAE and SSIM, show that the proposed network improves LDCT image quality more effectively while maintaining a low computational cost. On the objective evaluation indexes, the proposed method achieves the best PSNR of 33.67, RMSE of 5.659, MAE of 1.965 and SSIM of 0.9434. For RMSE in particular, the proposed network improves on the best-performing comparison algorithm by 7 percentage points.
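The objective indexes reported above (PSNR, RMSE, MAE) can be computed directly from pixel values. A minimal sketch over flattened pixel lists; the sample values are made up for illustration:

```python
import math

def rmse(ref, est):
    """Root-mean-square error between reference and estimate."""
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

def mae(ref, est):
    """Mean absolute error between reference and estimate."""
    return sum(abs(r - e) for r, e in zip(ref, est)) / len(ref)

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB (8-bit peak assumed)."""
    return 20.0 * math.log10(peak / rmse(ref, est))

# Toy flattened pixel intensities: a reference image and a denoised estimate
ref = [100.0, 120.0, 130.0, 140.0]
est = [101.0, 119.0, 131.0, 139.0]
```

Lower RMSE/MAE and higher PSNR indicate better denoising; SSIM, the fourth index in the paper, additionally models local structure and is usually taken from an image-processing library rather than computed by hand.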

A Non-parametric Analysis of the Tam-Jin River : Data Homogeneity between Monitoring Stations (탐진강 수질측정 지점 간 동질성 검정을 위한 비모수적 자료 분석)

  • Kim, Mi-Ah;Lee, Su-Woong;Lee, Jae-Kwan;Lee, Jung-Sub
    • Journal of Korean Society on Water Environment
    • /
    • v.21 no.6
    • /
    • pp.651-658
    • /
    • 2005
  • Non-parametric analysis is powerful for testing data, especially non-normal water quality data. The data at three monitoring stations of the Tam-Jin River were evaluated for normality using skewness, Q-Q plots, and Shapiro-Wilk tests. Various water quality constituents, including temperature, pH, DO, SS, BOD, COD, TN and TP over the period January 1994 to December 2004, were used as the dataset. The Shapiro-Wilk normality test was carried out at the 5% significance level. Most water quality data, except DO at monitoring stations 1 and 2, were not normally distributed, indicating that non-parametric methods must be used for such water quality data. Therefore, a homogeneity test was conducted using the Mann-Whitney U test (p<0.05), with the three stations combined into three pairs. The differences between stations 1 and 2 and between stations 1 and 3 for pH, BOD, COD, TN and TP were significant, whereas those between stations 2 and 3 were not; in addition, a narrow gap between water quality ranges did not amount to a significant difference. TN and TP were the constituents for which all three pairs of stations (1 and 2, 2 and 3, 1 and 3) in the Tam-Jin River showed differences in water quality. The results of this research suggest an appropriate approach to homogeneity testing of water quality data and reasonable management of pollutant sources.
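The Mann-Whitney U statistic used above for the station-pair homogeneity tests can be sketched as a pairwise comparison count. The station values below are toy data (no tie correction; a statistics package would be used for real significance testing):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    U counts, over all cross-sample pairs, how often a value in `a`
    exceeds a value in `b`; exact ties count 0.5. No tie correction
    or normal approximation is applied here.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical BOD-like concentrations at two monitoring stations
s1 = [1.2, 1.5, 1.9, 2.1]
s2 = [0.8, 1.0, 1.4, 1.6]
```

A useful sanity check is that the two directional statistics always sum to the number of cross-sample pairs, `len(s1) * len(s2)`.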

Clinical Validation of a Deep Learning-Based Hybrid (Greulich-Pyle and Modified Tanner-Whitehouse) Method for Bone Age Assessment

  • Kyu-Chong Lee;Kee-Hyoung Lee;Chang Ho Kang;Kyung-Sik Ahn;Lindsey Yoojin Chung;Jae-Joon Lee;Suk Joo Hong;Baek Hyun Kim;Euddeum Shim
    • Korean Journal of Radiology
    • /
    • v.22 no.12
    • /
    • pp.2017-2025
    • /
    • 2021
  • Objective: To evaluate the accuracy and clinical efficacy of a hybrid Greulich-Pyle (GP) and modified Tanner-Whitehouse (TW) artificial intelligence (AI) model for bone age assessment. Materials and Methods: A deep learning-based model was trained on an open dataset of multiple ethnicities. A total of 102 hand radiographs (51 male and 51 female; mean age ± standard deviation = 10.95 ± 2.37 years) from a single institution were selected for external validation. Three human experts performed bone age assessments based on the GP atlas to develop a reference standard. Two study radiologists performed bone age assessments with and without AI model assistance in two separate sessions, for which the reading time was recorded. The performance of the AI software was assessed by comparing the mean absolute difference between the AI-calculated bone age and the reference standard. The reading time was compared between reading with and without AI using a paired t test. Furthermore, the reliability between the two study radiologists' bone age assessments was assessed using intraclass correlation coefficients (ICCs), and the results were compared between reading with and without AI. Results: The bone ages assessed by the experts and the AI model were not significantly different (11.39 ± 2.74 years and 11.35 ± 2.76 years, respectively, p = 0.31). The mean absolute difference was 0.39 years (95% confidence interval, 0.33-0.45 years) between the automated AI assessment and the reference standard. The mean reading time of the two study radiologists was reduced from 54.29 to 35.37 seconds with AI model assistance (p < 0.001). The ICC of the two study radiologists slightly increased with AI model assistance (from 0.945 to 0.990). Conclusion: The proposed AI model was accurate for assessing bone age. Furthermore, this model appeared to enhance the clinical efficacy by reducing the reading time and improving the inter-observer reliability.

A Novel, Deep Learning-Based, Automatic Photometric Analysis Software for Breast Aesthetic Scoring

  • Joseph Kyu-hyung Park;Seungchul Baek;Chan Yeong Heo;Jae Hoon Jeong;Yujin Myung
    • Archives of Plastic Surgery
    • /
    • v.51 no.1
    • /
    • pp.30-35
    • /
    • 2024
  • Background Breast aesthetics evaluation often relies on subjective assessments, leading to the need for objective, automated tools. We developed the Seoul Breast Esthetic Scoring Tool (S-BEST), a photometric analysis software that utilizes a DenseNet-264 deep learning model to automatically evaluate breast landmarks and asymmetry indices. Methods S-BEST was trained on a dataset of frontal breast photographs annotated with 30 specific landmarks, divided into an 80-20 training-validation split. The software requires the distances of sternal notch to nipple or nipple-to-nipple as input and performs image preprocessing steps, including ratio correction and 8-bit normalization. Breast asymmetry indices and centimeter-based measurements are provided as the output. The accuracy of S-BEST was validated using a paired t-test and Bland-Altman plots, comparing its measurements to those obtained from physical examinations of 100 females diagnosed with breast cancer. Results S-BEST demonstrated high accuracy in automatic landmark localization, with most distances showing no statistically significant difference compared with physical measurements. However, the nipple to inframammary fold distance showed a significant bias, with a coefficient of determination ranging from 0.3787 to 0.4234 for the left and right sides, respectively. Conclusion S-BEST provides a fast, reliable, and automated approach for breast aesthetic evaluation based on 2D frontal photographs. While limited by its inability to capture volumetric attributes or multiple viewpoints, it serves as an accessible tool for both clinical and research applications.
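The Bland-Altman analysis above summarizes agreement between S-BEST and physical examination by the bias (mean difference) and the 95% limits of agreement. A minimal sketch; the paired measurements below are made up for illustration:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman summary for two measurement methods.

    Returns (bias, (lower_loa, upper_loa)): the mean of the paired
    differences and the 95% limits of agreement (bias +/- 1.96 SD).
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical distances in cm: software-derived vs. physical exam
soft = [10.0, 11.0, 12.0, 13.0]
phys = [9.5, 11.2, 11.8, 13.1]
bias, loa = bland_altman(soft, phys)
```

A systematic bias, such as the one the paper reports for the nipple-to-inframammary-fold distance, shows up as a bias far from zero relative to the width of the limits of agreement.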

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends the items a customer is expected to purchase in the future according to his or her previous purchase behavior. It has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion: the 'overall rating'. However, this has critical limitations for understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and they are easy to handle and analyze because they are quantitative. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results from 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with overall ratings for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset on POI (point-of-interest) recommendation.
Providing personalized POI recommendations is attracting more attention as the popularity of location-based services such as Yelp and Foursquare increases. The dataset was collected from university students via a Web-based online survey system. Using the survey system, we collected the overall ratings as well as the ratings for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria include 'food or taste', 'price' and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 (80%) are used as the training dataset, and the remaining 10 (20%) as the validation dataset. To examine the effectiveness of the proposed system (i.e., the hybrid selective model), we compared its performance against two comparison models: traditional CF and CF with multicriteria ratings. Performance was evaluated using two metrics: average MAE (mean absolute error) and precision-in-top-N, the percentage of truly high overall ratings among the N items the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed traditional CF (avg. MAE = 0.591) as well as multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies; this supports the premise of our study that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all the comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7.
The results of the paired samples t-test showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% significance level and multicriteria CF at the 1% significance level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
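Precision-in-top-N, as defined above, can be sketched as follows; the POI names, rating values, and the threshold for a "truly high" rating are illustrative assumptions:

```python
def precision_in_top_n(predicted, actual, n, threshold=4.0):
    """Precision-in-top-N for one user.

    Among the N items the model ranks highest by predicted rating,
    returns the fraction whose true overall rating is at least
    `threshold` (the "truly high" cutoff, assumed here to be 4.0).
    predicted/actual: dicts mapping item -> rating.
    """
    top_n = sorted(predicted, key=predicted.get, reverse=True)[:n]
    hits = sum(1 for item in top_n if actual[item] >= threshold)
    return hits / n

# Hypothetical predicted vs. true overall ratings for four POIs
pred = {"poi_a": 4.8, "poi_b": 4.2, "poi_c": 3.9, "poi_d": 2.5}
truth = {"poi_a": 5.0, "poi_b": 3.0, "poi_c": 4.5, "poi_d": 2.0}
```

The study's reported figures would be averages of this per-user quantity over all users, for N = 3, 5, and 7.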

Prediction of the remaining time and time interval of pebbles in pebble bed HTGRs aided by CNN via DEM datasets

  • Mengqi Wu;Xu Liu;Nan Gui;Xingtuan Yang;Jiyuan Tu;Shengyao Jiang;Qian Zhao
    • Nuclear Engineering and Technology
    • /
    • v.55 no.1
    • /
    • pp.339-352
    • /
    • 2023
  • Prediction of the time-related traits of pebble flow inside pebble-bed HTGRs is of great significance for reactor operation and design. In this work, an image-driven approach aided by a convolutional neural network (CNN) is proposed to predict the remaining time of initially loaded pebbles and the time interval of paired flow images of the pebble bed. Two types of strategies are put forward: one adds FC layers to classic classification CNN models and uses regression training; the other is CNN-based deep expectation (DEX), which regards time prediction as a deep classification task followed by softmax expected-value refinement. The current dataset is obtained from discrete element method (DEM) simulations. Results show that the CNN-aided models generally make satisfactory predictions of the remaining time, with a coefficient of determination larger than 0.99. Among these models, VGG19+DEX performs best, and its CumScore (the proportion of the test set with prediction error within 0.5 s) reaches 0.939. Moreover, the remaining time of additional test sets and new cases can also be well predicted, indicating good generalization ability. In the task of predicting the time interval of image pairs, the VGG19+DEX model has also generated satisfactory results. In particular, the trained model, with promising generalization ability, has demonstrated great potential for accurately and instantaneously predicting the traits of interest, without the need for additional computationally intensive DEM simulations. Nevertheless, data diversity and model optimization need to be improved to achieve the full potential of the CNN-aided prediction tool.
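The DEX refinement above turns per-bin classification scores into a continuous time estimate via the softmax expected value. A minimal sketch; the logits and time-bin centers below are illustrative, not taken from the paper's networks:

```python
import math

def dex_expected_value(logits, bin_centers):
    """Deep EXpectation (DEX) refinement.

    Treats prediction as classification over discrete time bins, then
    returns the expected value of the bin centers under the softmax
    distribution of the raw logits.
    """
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * c for p, c in zip(probs, bin_centers))
```

With uniform logits over bins centered at 1-4 s the estimate is the plain average, 2.5 s; as one bin's logit dominates, the estimate approaches that bin's center, which is how DEX recovers sub-bin resolution from a classification head.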