Title/Summary/Keyword: Error level


Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: A multicenter study

  • Sung-Hoon Han;Jisup Lim;Jun-Sik Kim;Jin-Hyoung Cho;Mihee Hong;Minji Kim;Su-Jung Kim;Yoon-Ji Kim;Young Ho Kim;Sung-Hoon Lim;Sang Jin Sung;Kyung-Hwa Kang;Seung-Hak Baek;Sung-Kwon Choi;Namkug Kim
    • The korean journal of orthodontics
    • /
    • v.54 no.1
    • /
    • pp.48-58
    • /
    • 2024
  • Objective: To quantify the effects of midline-related landmark identification on midline deviation measurements in posteroanterior (PA) cephalograms using a cascaded convolutional neural network (CNN). Methods: A total of 2,903 PA cephalogram images obtained from 9 university hospitals were divided into training, internal validation, and test sets (n = 2,150, 376, and 377, respectively). As the gold standard, 2 orthodontic professors marked the bilateral landmarks, including the frontozygomatic suture point and latero-orbitale (LO), and the midline landmarks, including the crista galli, anterior nasal spine (ANS), upper dental midpoint (UDM), lower dental midpoint (LDM), and menton (Me). For the test, Examiner-1 and Examiner-2 (3-year and 1-year orthodontic residents) and the cascaded-CNN models marked the landmarks. After computing the point-to-point errors of landmark identification, the successful detection rate (SDR) and the distance and direction of midline landmark deviation from the midsagittal line (ANS-mid, UDM-mid, LDM-mid, and Me-mid) were measured, and statistical analysis was performed. Results: The cascaded-CNN algorithm showed a clinically acceptable level of point-to-point error (1.26 mm vs. 1.57 mm in Examiner-1 and 1.75 mm in Examiner-2). The average SDR within the 2 mm range was 83.2%, with high accuracy at the LO (right, 96.9%; left, 97.1%) and UDM (96.9%). The absolute measurement errors were less than 1 mm for ANS-mid, UDM-mid, and LDM-mid compared with the gold standard. Conclusions: The cascaded-CNN model may be considered an effective tool for the auto-identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients, regardless of variations in the image acquisition method.
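The point-to-point error and SDR used above are standard landmark-detection metrics. A minimal sketch of how they can be computed, with hypothetical coordinates (not the study's data or code):

```python
import numpy as np

def point_to_point_errors(pred_mm, gold_mm):
    """Euclidean distance between predicted and gold-standard landmark
    coordinates, both in millimeters, shape (n_landmarks, 2)."""
    return np.linalg.norm(pred_mm - gold_mm, axis=1)

def successful_detection_rate(errors_mm, threshold_mm=2.0):
    """Fraction of landmarks whose error falls within the threshold."""
    return float(np.mean(errors_mm <= threshold_mm))

# Hypothetical example with 5 landmarks
pred = np.array([[10.1, 20.3], [30.0, 41.8], [52.5, 60.2], [70.9, 81.1], [90.0, 99.5]])
gold = np.array([[10.0, 20.0], [30.5, 42.0], [55.0, 60.0], [71.0, 81.0], [90.2, 99.9]])
errs = point_to_point_errors(pred, gold)
print(errs, successful_detection_rate(errs))  # SDR within the 2 mm range
```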

Study on Developing the Information System for ESG Disclosure Management (ESG 정보공시 관리를 위한 정보시스템 개발에 관한 연구)

  • Kim, Seung-wook
    • Journal of Venture Innovation
    • /
    • v.7 no.1
    • /
    • pp.77-90
    • /
    • 2024
  • While discussions on ESG are actively taking place in Europe and elsewhere, the number of countries moving to mandate disclosure of listed companies' non-financial ESG information is rapidly increasing. However, as companies respond to mandatory global ESG disclosure, problems are emerging, such as the stringent requirements of global ESG disclosure standards, the complexity of data management, and a lack of understanding and preparation regarding the ESG regime itself. Disclosure also requires a reasoned analysis of how business opportunities and risk factors arising from climate change affect a company's financial position, so producing results that satisfy the disclosure standards is expected to be quite difficult. Tasks such as ESG management activities and information disclosure require data of various types and from various sources, and an information system is necessary to measure the data transparently, collect it without error, and manage it without omission. Therefore, in this study, we designed an integrated ESG data management model to integrate and manage the related indicators and data so that a company's ESG activities can be conveyed transparently and efficiently to stakeholders through ESG disclosure, and we developed a framework for implementing an information system to support this management. These results can help companies facing practical difficulties in ESG disclosure to manage it efficiently. In addition, the integrated data management model derived from an analysis of the ESG disclosure work process, and the information system developed to support ESG disclosure, are academically significant for future ESG research.
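The paper describes its integrated ESG data-management model only at a conceptual level. As a purely illustrative sketch of the kind of schema such a system might use (all names and fields below are hypothetical, not from the paper):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ESGIndicator:
    """One disclosure indicator tracked by the system (hypothetical schema)."""
    code: str           # e.g., an indicator code from a disclosure standard
    category: str       # "E", "S", or "G"
    unit: str           # e.g., "tCO2e" for emissions
    source_system: str  # internal system the raw data is collected from

@dataclass
class ESGRecord:
    """One measured value for an indicator in a reporting period."""
    indicator: ESGIndicator
    period_end: date
    value: float
    verified: bool = False  # audit/verification status for error-free disclosure

# Integrated store keyed by indicator code; a missing key or an empty
# period list is an omission that the disclosure workflow can flag.
records: dict[str, list[ESGRecord]] = {}
```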

Development of an Anomaly Detection Algorithm for Verification of Radionuclide Analysis Based on Artificial Intelligence in Radioactive Wastes (방사성폐기물 핵종분석 검증용 이상 탐지를 위한 인공지능 기반 알고리즘 개발)

  • Seungsoo Jang;Jang Hee Lee;Young-su Kim;Jiseok Kim;Jeen-hyeng Kwon;Song Hyun Kim
    • Journal of Radiation Industry
    • /
    • v.17 no.1
    • /
    • pp.19-32
    • /
    • 2023
  • The amount of radioactive waste is expected to increase dramatically with the decommissioning of nuclear power plants such as Kori-1, the first nuclear power plant in South Korea. Accurate nuclide analysis is necessary to manage radioactive waste safely, but research on verifying radionuclide analysis is not yet well established. This study aimed to develop a technology that can verify the results of radionuclide analysis based on artificial intelligence, and we propose an anomaly detection algorithm for inspecting errors in radionuclide analysis. We used data from 'Updated Scaling Factors in Low-Level Radwaste' (NP-5077) published by EPRI (Electric Power Research Institute), and resampling was performed with the SMOTE (Synthetic Minority Oversampling Technique) algorithm to augment the data. A total of 149,676 SMOTE-augmented samples were used to train the artificial neural networks (classification and anomaly detection networks), and 324 data points from the NP-5077 report were used to verify their performance. The anomaly detection algorithm was divided into two modules: one detects cases in which radioactive waste was incorrectly classified, and the other discriminates abnormal data such as missing or incorrectly entered values. The classification network was constructed from fully connected layers, and the anomaly detection network was composed of an encoder and a decoder; the latter operates on the latent vector loaded from the final layer of the classification network. This study also conducted exploratory data analysis (statistics, histograms, correlation, covariance, PCA, k-means clustering, and DBSCAN). The analysis showed that the types of radioactive waste are difficult to distinguish because their data distributions overlap. Despite this complexity, our deep learning-based algorithm can distinguish abnormal data from normal data. Radionuclide analysis was verified using our anomaly detection algorithm, and meaningful results were obtained.
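A hedged sketch of the two-module design described above: a fully connected classifier whose final-layer latent vector feeds an encoder-decoder anomaly detector. The layer sizes, dimensions, and thresholding strategy are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Fully connected classifier; its last hidden layer serves as the latent vector."""
    def __init__(self, n_features, n_classes, latent_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU())
        self.head = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.backbone(x)          # latent vector reused by the detector
        return self.head(z), z

class AnomalyDetector(nn.Module):
    """Encoder-decoder that reconstructs the latent vector; a large
    reconstruction error flags abnormal data (e.g., missing or mistyped values)."""
    def __init__(self, latent_dim=16, bottleneck=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(latent_dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, latent_dim)

    def forward(self, z):
        return self.decoder(self.encoder(z))

# Usage sketch: anomaly score = reconstruction error of the latent vector
clf, det = Classifier(n_features=10, n_classes=5), AnomalyDetector()
x = torch.randn(8, 10)                      # hypothetical nuclide-vector batch
logits, z = clf(x)
score = ((det(z) - z) ** 2).mean(dim=1)     # threshold chosen on validation data
```

In practice, the SMOTE-style augmentation mentioned in the abstract would be applied to the training set (for example, via the imbalanced-learn package) before fitting the classifier.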

Enhanced Indoor Localization Scheme Based on Pedestrian Dead Reckoning and Kalman Filter Fusion with Smartphone Sensors (스마트폰 센서를 이용한 PDR과 칼만필터 기반 개선된 실내 위치 측위 기법)

  • Harun Jamil;Naeem Iqbal;Murad Ali Khan;Syed Shehryar Ali Naqvi;Do-Hyeun Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.4
    • /
    • pp.101-108
    • /
    • 2024
  • Indoor localization is a critical component of numerous applications, ranging from navigation in large buildings to emergency response. This paper presents an enhanced Pedestrian Dead Reckoning (PDR) scheme using smartphone sensors, integrating neural network-aided motion recognition, Kalman filter-based error correction, and multi-sensor data fusion. The proposed system leverages data from the accelerometer, magnetometer, gyroscope, and barometer to accurately estimate a user's position and orientation. A neural network processes the sensor data to classify motion modes and provide real-time adjustments to stride length and heading calculations. The Kalman filter further refines these estimates, reducing cumulative errors and drift. Experimental results, collected with a smartphone across various floors of a university building, demonstrate the scheme's ability to accurately track vertical movements and changes in heading direction. Comparative analyses show that the proposed CNN-LSTM model outperforms conventional CNN and deep CNN models in angle prediction. Additionally, the integration of barometric pressure data enables precise floor-level detection, enhancing the system's robustness in multi-story environments. The proposed comprehensive approach significantly improves the accuracy and reliability of indoor localization, making it viable for real-world applications.
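A minimal sketch of the PDR-plus-Kalman fusion idea described above: the stride/heading step drives the prediction, and a noisy position fix drives the correction. All matrices and noise levels are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

x = np.zeros(2)       # position estimate [east, north], meters
P = np.eye(2) * 1.0   # estimate covariance
Q = np.eye(2) * 0.05  # process noise (stride/heading uncertainty)
R = np.eye(2) * 0.5   # measurement noise

def pdr_predict(x, P, stride_m, heading_rad):
    """Dead-reckoning prediction: advance the position by one stride."""
    step = stride_m * np.array([np.sin(heading_rad), np.cos(heading_rad)])
    return x + step, P + Q

def kalman_update(x, P, z):
    """Standard Kalman correction with an identity observation model."""
    K = P @ np.linalg.inv(P + R)          # Kalman gain
    return x + K @ (z - x), (np.eye(2) - K) @ P

# One step: neural-network-adjusted stride/heading, then an external fix
x, P = pdr_predict(x, P, stride_m=0.7, heading_rad=np.deg2rad(45))
x, P = kalman_update(x, P, z=np.array([0.45, 0.55]))
```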

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • The collaborative filtering (CF) algorithm has been popularly used for implementing recommender systems, and many prior studies have sought to improve its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF by using additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. Our proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it is designed to amplify the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and to attenuate the similarity when the neighbor has higher in-degree centrality in the distrust relationship network. The algorithm considers four types of user relationship in total - direct trust, indirect trust, direct distrust, and indirect distrust - and uses four adjusting coefficients that set the level of amplification or attenuation for the in-degree centrality values derived from the direct and indirect trust and distrust networks, as sketched after this abstract. To determine the optimal adjusting coefficients, genetic algorithms (GA) were adopted; accordingly, we named our algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set called the 'Extended Epinions dataset' provided by trustlet.org. The data set contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA), but we also used UCINET 6 to calculate the in-degree centrality of the trust/distrust networks and Palisade Software's Evolver, a commercial implementation of genetic algorithms. To examine the effectiveness of our proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and ignores trust/distrust relationships entirely. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. Performance was evaluated using the average MAE (mean absolute error). The experiment showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively, implying that distrust relationships between users are more important than trust ones in recommender systems. In terms of recommendation accuracy, SNACF-GA (avg. MAE = 0.111943), which reflects both direct and indirect trust/distrust relationship information, outperformed conventional CF (avg. MAE = 0.112638) and also showed better accuracy than SNACF (avg. MAE = 0.112209). To confirm whether these differences are statistically significant, we applied paired-samples t-tests: the difference between SNACF-GA and conventional CF was significant at the 1% level, and the difference between SNACF-GA and SNACF at the 5% level. Our study found that trust/distrust relationships can be important information for improving the performance of recommendation algorithms. In particular, distrust relationship information had a greater impact on the performance improvement of CF, implying that we should pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
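A minimal sketch of the similarity amplification/attenuation idea referenced above. The functional form and coefficient values here are illustrative assumptions; the paper optimizes four coefficients (direct/indirect trust and distrust) with a GA:

```python
def adjusted_similarity(base_sim, trust_centrality, distrust_centrality,
                        a_trust=1.4, a_distrust=1.5):
    """Amplify a neighbor's similarity by their in-degree centrality in the
    trust network and attenuate it by their centrality in the distrust
    network. Illustrative form only; not the paper's exact equation."""
    return base_sim * (1 + a_trust * trust_centrality) / (1 + a_distrust * distrust_centrality)

# Hypothetical neighbor: Pearson similarity 0.6, widely trusted, rarely distrusted
print(adjusted_similarity(0.6, trust_centrality=0.3, distrust_centrality=0.05))
```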

Determination of $^{14}C$ in Environmental Samples Using $CO_2$ Absorption Method ($CO_2$ 흡수법에 의한 환경시료중 $^{14}C$ 정량)

  • Lee, Sang-Kuk;Kim, Chang-Kyu;Kim, Cheol-Su;Kim, Yong-Jae;Rho, Byung-Hwan
    • Journal of Radiation Protection and Research
    • /
    • v.22 no.1
    • /
    • pp.35-46
    • /
    • 1997
  • A simple and precise method was developed to analyze $^{14}C$ in environmental samples using a commercially available $^{14}CO_2$ absorbent and a liquid scintillation counter. An air sampler and a combustion system were developed to collect HTO and $^{14}CO_2$ from air and biological samples simultaneously. The collection yield of $^{14}CO_2$ by the air sampler was in the range of 73-89%, and the yield of the combustion system was 97%. In preparing samples for counting, the optimum mixing ratio of $CO_2$ absorbent to scintillator was 1:1. No variation in the specific activity of $^{14}C$ in the counting sample was observed up to 70 days after sample preparation. The detection limit for $^{14}C$ was 0.025 Bq/gC, a level applicable to the natural abundance of $^{14}C$. The analytical results of $^{14}C$ obtained by the present method were within ${\pm}6%$ relative error of those obtained by benzene synthesis. The specific activity of $^{14}C$ in air collected at Taejon during October 1996 ranged from 0.26 to 0.27 Bq/gC, and that in air collected 1 km from the Wolsong nuclear power plant, a 679 MWe PHWR, was $0.54{\pm}0.03$ Bq/gC. The specific activities of $^{14}C$ in pine needles and vegetation from the areas around the Wolsong nuclear power plant ranged over 0.56-0.67 Bq/gC and 0.23-1.41 Bq/gC, respectively.
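For illustration only, converting a net liquid-scintillation count rate into a specific activity in Bq per gram of carbon might look like the following; the counting efficiency and absorbed carbon mass are hypothetical calibration values, not figures from the paper:

```python
def specific_activity_bq_per_gc(net_cpm, efficiency, carbon_g):
    """Convert a net count rate (counts per minute) from liquid scintillation
    counting to specific activity in Bq per gram of carbon. Illustrative only;
    efficiency and carbon mass come from instrument/sample calibration."""
    net_cps = net_cpm / 60.0                 # counts per second
    return net_cps / (efficiency * carbon_g)  # Bq/gC

# Hypothetical: 9 net cpm, 60% counting efficiency, 1.1 g carbon absorbed
print(round(specific_activity_bq_per_gc(9.0, 0.60, 1.1), 3))  # ~0.227 Bq/gC
```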


Development of Empirical Fragility Function for High-speed Railway System Using 2004 Niigata Earthquake Case History (2004 니가타 지진 사례 분석을 통한 고속철도 시스템의 지진 취약도 곡선 개발)

  • Yang, Seunghoon;Kwak, Dongyoup
    • Journal of the Korean Geotechnical Society
    • /
    • v.35 no.11
    • /
    • pp.111-119
    • /
    • 2019
  • The high-speed railway system is mainly composed of tunnels, bridges, and viaducts to provide the straightness needed to maintain speeds of up to 400 km/h. Seismic fragility of high-speed railway infrastructure can be assessed in two ways: one is to study each element of the infrastructure analytically or numerically, which requires extensive research effort given the wide range of components in a railway system; the other is an empirical method that assesses the fragility of the entire system efficiently but requires case history data. In this study, we collected case history data from the 2004 Mw 6.6 Niigata earthquake to develop an empirical seismic fragility function for a railway system. Five types of intensity measure (IM) and damage levels were assigned to all segments of the target system, each with a unit length of 200 m. From statistical analysis, the probability of exceeding a certain damage level (DL) was calculated as a function of IM, and a log-normal CDF was fitted to those probability data points using the maximum likelihood estimation (MLE) method, forming a fragility function for each damage level (see the sketch after this abstract). Evaluating the fitted fragility functions, we observe that the 3.0-second spectral acceleration (SAT3.0) is superior to the other IMs, with a lower log-normal standard deviation and a lower fitting error. This indicates that long-period ground motion has a greater impact on railway infrastructure such as tunnels and bridges. We observe that when SAT3.0 = 0.1 g, P(DL>1) = 2%, and when SAT3.0 = 0.2 g, P(DL>1) = 23.9%.
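A sketch of an MLE fit of a log-normal fragility curve in the standard form $P(DL \geq dl \mid IM) = \Phi((\ln IM - \ln\theta)/\beta)$; the segment data and starting values are hypothetical, and the paper's exact fitting procedure may differ:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_lognormal_fragility(im, exceeded):
    """Fit P(exceed | IM) = Phi((ln(im) - ln(theta)) / beta) by maximum
    likelihood, where `exceeded` is 1 if a segment reached the damage level."""
    def neg_log_lik(params):
        ln_theta, beta = params
        p = norm.cdf((np.log(im) - ln_theta) / beta)
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard the log
        return -np.sum(exceeded * np.log(p) + (1 - exceeded) * np.log(1 - p))
    res = minimize(neg_log_lik, x0=[np.log(0.3), 0.5], method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]   # median theta (g), dispersion beta

# Hypothetical segment data: SAT3.0 values (g) and exceedance flags
im = np.array([0.05, 0.1, 0.15, 0.2, 0.3, 0.4])
exceeded = np.array([0, 1, 0, 1, 1, 1])
print(fit_lognormal_fragility(im, exceeded))
```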

DISEASE DIAGNOSED AND DESCRIBED BY NIRS

  • Tsenkova, Roumiana N.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1031-1031
    • /
    • 2001
  • The mammary gland is made up of remarkably sensitive tissue that can produce a large volume of secretion, milk, under normal or healthy conditions. When bacteria enter the gland and establish an infection (mastitis), inflammation is initiated, accompanied by an influx of white cells from the blood stream, altered secretory function, and changes in the volume and composition of secretion. Cell numbers in milk are closely associated with inflammation and udder health, and these somatic cell counts (SCC) are accepted as the international standard measurement of milk quality in the dairy industry and for mastitis diagnosis. NIR spectra of unhomogenized composite milk samples from 14 cows (healthy and mastitic) were measured 7 days after parturition and during the next 30 days of lactation. Different multivariate analysis techniques were used to diagnose the disease at a very early stage and to determine how the spectral properties of milk vary with its composition and animal health. A PLS model for prediction of somatic cell count (SCC) from the NIR milk spectra was built (see the sketch after this abstract). The best accuracy of determination for the 1100-2500 nm range was found using smoothed absorbance data and 10 PLS factors: the standard error of prediction for an independent validation set was 0.382, the correlation coefficient 0.854, and the coefficient of variation 7.63%. SCC determination from NIR milk spectra was found to be indirect, based on the related changes in milk composition. The spectral changes showed that when mastitis occurred, the most significant factors simultaneously influencing the milk spectra were alteration of milk proteins and changes in the ionic concentration of milk, consistent with the results we subsequently obtained with two-dimensional correlation spectroscopy (2D-COS). Two-dimensional correlation analysis of the NIR milk spectra was performed to assess the changes in milk composition that occur when SCC levels vary. The synchronous correlation map revealed that when SCC increases, protein levels increase while water and lactose levels decrease, and analysis of the asynchronous plot indicated that changes in water and fat absorptions occur before those of other milk components. In addition, the technique was used to assess changes in milk during a period when SCC levels did not vary appreciably; results indicated that the milk components were in equilibrium, with no appreciable change in one component with respect to another. This was found in both healthy and mastitic animals. However, milk components were found to vary with SCC content regardless of the range considered, an important finding demonstrating that 2-D correlation analysis may be used to track even subtle changes in milk composition in individual cows. To find the right SCC threshold for mastitis diagnosis at the cow level, classification of milk samples was performed using soft independent modeling of class analogy (SIMCA) and different spectral data pretreatments. Two SCC levels, 200,000 cells/$m\ell$ and 300,000 cells/$m\ell$, were set up and compared as thresholds to discriminate between healthy and mastitic cows. The best detection accuracy was found with 200,000 cells/$m\ell$ as the mastitis threshold and smoothed absorbance data: 98% of the milk samples in the calibration set and 87% of the samples in the independent test set were correctly classified. Study of the spectral information showed that successful mastitis diagnosis was based on revealing the spectral changes related to the corresponding changes in milk composition. NIRS, combined with appropriate spectral data mining, can provide a faster, nondestructive alternative to current methods of mastitis diagnosis and a new insight into understanding the disease at the molecular level.
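A sketch of a PLS calibration of the kind described above, with synthetic arrays standing in for the smoothed absorbance spectra and SCC values (10 factors as in the study; everything else is an assumption):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# Synthetic stand-ins: 60 milk "spectra" with 700 wavelength channels,
# and a hypothetical log10(SCC) response for each sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))
y = rng.normal(loc=5.3, scale=0.4, size=60)

pls = PLSRegression(n_components=10)     # 10 PLS factors, as in the abstract
pls.fit(X[:45], y[:45])                  # calibration set
y_pred = pls.predict(X[45:]).ravel()     # independent validation set

# Standard error of prediction (RMSE on the held-out samples)
sep = np.sqrt(mean_squared_error(y[45:], y_pred))
print(round(sep, 3))
```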


Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.127-138
    • /
    • 2011
  • The core service of most research portal sites is providing research papers that match each researcher's interests. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, most users of the service are not acquainted with concrete bibliographic information, so they inevitably go through repeated trial and error in keyword-based search. Retrieving a relevant research paper is especially difficult when a user is a novice in the research domain and does not know the appropriate keywords. In this case, a user must search iteratively: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) perform another search with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and search mechanism. To overcome this inefficiency, some leading research portal sites adopt an association rule mining-based keyword recommendation service similar to the product recommendations of online shopping malls. However, keyword recommendation based only on association analysis has the limitation that it can show only simple, direct relationships between two keywords; the association analysis itself cannot present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among the keywords used in research papers. The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords to extract frequent keyword patterns that satisfy predefined thresholds of confidence, support, and lift, and iii) the frequent keyword patterns are schematized as a network showing the core keywords of each research area and the connecting keywords between two or more research areas (see the sketch after this abstract). To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for the association analysis and the R software for the social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network; the results are summarized in Section 4 of the paper. The main contributions of the proposed approach are the following: i) the keyword network can provide an initial roadmap of a research area to researchers who are new to the domain, ii) a researcher can grasp the distribution of keywords neighboring a given keyword, and iii) researchers can get ideas for converging different research areas by observing the connecting keywords in the network. Further studies should address the following. First, the current version of our approach does not implement a standard meta-dictionary; for practical use, homonym, synonym, and multilingual problems should be resolved with a standard meta-dictionary. Additionally, clearer guidelines for clustering research areas and for defining core and connecting keywords should be provided. Finally, intensive experiments should be performed not only on Korean research papers but also on international papers.
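A compact sketch of the keyword-network construction referenced above. The paper used SAS Enterprise Miner and R; this Python version, with hypothetical papers and thresholds, only illustrates the idea of mining frequent keyword pairs and drawing them as a network:

```python
from itertools import combinations
from collections import Counter
import networkx as nx

# Each paper's keywords are treated as co-purchased items.
papers = [{"ontology", "semantic web"}, {"data mining", "clustering"},
          {"semantic web", "ontology", "data mining"},
          {"clustering", "data mining", "ontology"}]
n = len(papers)
item_cnt = Counter(k for p in papers for k in p)
pair_cnt = Counter(frozenset(c) for p in papers for c in combinations(sorted(p), 2))

G = nx.Graph()
for pair, cnt in pair_cnt.items():
    a, b = tuple(pair)
    support = cnt / n
    confidence = max(cnt / item_cnt[a], cnt / item_cnt[b])
    lift = cnt * n / (item_cnt[a] * item_cnt[b])
    if support >= 0.5 and confidence >= 0.6:   # predefined thresholds
        G.add_edge(a, b, lift=lift)

# High-degree nodes suggest core keywords of an area; nodes bridging
# clusters suggest connecting keywords between areas.
print(sorted(G.degree, key=lambda t: -t[1]))
```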

Hydrological Drought Assessment and Monitoring Based on Remote Sensing for Ungauged Areas (미계측 유역의 수문학적 가뭄 평가 및 감시를 위한 원격탐사의 활용)

  • Rhee, Jinyoung;Im, Jungho;Kim, Jongpil
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.525-536
    • /
    • 2014
  • In this study, a method to assess and monitor hydrological drought using remote sensing was investigated for regions with limited observation data and applied to the Upper Namhan-gang basin in South Korea, which was seriously affected by the 2008-2009 drought. Drought information can be obtained more easily from meteorological data based on the water balance than from hydrological data, which are hard to estimate. Air temperature at 2 m above ground level (AGL) was estimated from remotely sensed data, evapotranspiration was estimated from the air temperature, and the correlations between precipitation minus potential evapotranspiration (P-PET) percentiles and streamflow percentiles were examined. Land Surface Temperature data with $1{\times}1km$ spatial resolution and Atmospheric Profile data with $5{\times}5km$ spatial resolution from the MODIS sensor on board the Aqua satellite were used to estimate monthly maximum and minimum air temperature over South Korea. Evapotranspiration was estimated from the maximum and minimum air temperatures using the Hargreaves method (see the sketch after this abstract), and the estimates were compared with the existing University of Montana data based on the Penman-Monteith method, showing smaller coefficients of determination but also smaller errors. Precipitation was obtained from TRMM monthly rainfall data, and the correlations of the 1-, 3-, 6-, and 12-month P-PET percentiles with streamflow percentiles were analyzed for the Upper Namhan-gang basin. The 1-month P-PET percentiles during JJA (r = 0.89, tau = 0.71) and SON (r = 0.63, tau = 0.47) are highly correlated with the streamflow percentile at the 95% confidence level. Since the effect of precipitation in the basin is especially strong, the correlation between the evapotranspiration percentile and the streamflow percentile is positive. These results indicate that remote sensing-based P-PET estimates can be used for the assessment and monitoring of hydrological drought, and the high-spatial-resolution estimates can support decision-making to minimize the adverse impacts of hydrological drought and to establish differentiated drought-coping measures.
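A sketch of the Hargreaves reference-evapotranspiration estimate referenced above, in its standard form (Hargreaves and Samani, 1985); the inputs are hypothetical and the paper's exact formulation and data handling may differ:

```python
import math

def hargreaves_pet_mm_day(tmax_c, tmin_c, ra_mj_m2_day):
    """Hargreaves reference evapotranspiration (mm/day) from daily max/min
    air temperature (deg C) and extraterrestrial radiation Ra (MJ m-2 day-1).
    The 0.408 factor converts MJ m-2 day-1 to mm/day of evaporated water."""
    tmean = (tmax_c + tmin_c) / 2.0
    return 0.0023 * 0.408 * ra_mj_m2_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

# Hypothetical summer day: Tmax 29 C, Tmin 19 C, Ra 40 MJ m-2 day-1
print(round(hargreaves_pet_mm_day(29.0, 19.0, 40.0), 2))  # roughly 5 mm/day
```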