• Title/Summary/Keyword: Dimensional accuracy


Echocardiographic Diagnosis of Pulmonary Arterial Hypertension in Chronic Lung Disease with Hypoxemia (만성 저산소성 폐질환의 폐동맥 고혈압에 대한 심초음파 검사)

  • Chang, Jung-Hyun
    • Tuberculosis and Respiratory Diseases
    • /
    • v.46 no.6
    • /
    • pp.846-855
    • /
    • 1999
  • Background: Secondary pulmonary hypertension is an important final endpoint in patients with chronic hypoxic lung disease, accompanied by deterioration of pulmonary hemodynamics. The clinical diagnosis of pulmonary hypertension and/or cor pulmonale can be difficult, and simple noninvasive evaluation of pulmonary artery pressure has been a relevant clinical challenge for many years. Doppler echocardiography may be a more reliable method than M-mode echocardiography for evaluating pulmonary hemodynamics in such patients, in terms of accuracy, reproducibility, and the ease of obtaining an appropriate echocardiographic window. The aim of this study was to assess echocardiographic parameters associated with pulmonary arterial hypertension, defined by an increased right ventricular systolic pressure (RVSP) calculated from the trans-tricuspid gradient, in patients with chronic hypoxic lung disease. Method: We examined 19 patients with chronic hypoxic lung disease and clinically suspected pulmonary hypertension by two-dimensional echocardiography via the left parasternal and subcostal approaches in the supine position. Doppler echocardiography measured RVSP from the tricuspid regurgitant velocity in continuous-wave mode with a 2.5 MHz transducer, and the acceleration time (AT) at the right ventricular outflow tract in pulsed-wave mode, to estimate pulmonary arterial pressure. Results: On echocardiography, moderate to severe pulmonary arterial hypertension was defined as an RVSP greater than 40 mmHg in the presence of tricuspid regurgitation. An increased right ventricular end-systolic diameter and a shortened AT were noted in the increased-RVSP group, and RVSP was negatively correlated with AT. Other clinical data, including pulmonary function parameters, arterial blood gas analysis, and M-mode echocardiographic parameters, did not change significantly with increased RVSP. Conclusion: These findings suggest that a shortened AT on pulsed-wave Doppler can be useful for quantifying pulmonary arterial pressure in patients with chronic hypoxic lung disease and increased RVSP. Doppler echocardiography is a useful, noninvasive option for assessing pulmonary hypertension in chronic hypoxic lung disease in routine clinical practice.

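The abstract says RVSP was calculated from the trans-tricuspid gradient but does not spell out the formula; the conventional approach is the simplified Bernoulli equation. A minimal sketch, with the right atrial pressure as an assumed placeholder rather than a value from the paper:

```python
# Hedged illustration of the simplified Bernoulli estimate conventionally used
# to convert a tricuspid regurgitant jet velocity into RVSP. The assumed right
# atrial pressure (RAP) is a placeholder, not a value from the study.
def rvsp_from_tr_velocity(tr_velocity_m_per_s: float,
                          ra_pressure_mmhg: float = 10.0) -> float:
    """RVSP (mmHg) ~= 4 * v^2 + RAP, where v is the peak TR velocity in m/s."""
    return 4.0 * tr_velocity_m_per_s ** 2 + ra_pressure_mmhg

# Example: a 3.0 m/s TR jet with RAP 10 mmHg gives ~46 mmHg, above the
# 40 mmHg threshold this study used for moderate-to-severe hypertension.
print(rvsp_from_tr_velocity(3.0))
```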

A study to 3D dose measurement and evaluation for Respiratory Motion in Lung Cancer Stereotactic Body Radiotherapy Treatment (폐암의 정위적체부방사선치료시 호흡 움직임에 따른 3D 선량 측정평가)

  • Choi, Byeong-Geol;Choi, Chang-Heon;Yun, Il-Gyu;Yang, Jin-Seong;Lee, Dong-Myeong;Park, Ju-Mi
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.59-67
    • /
    • 2014
  • Purpose: This study aims to evaluate the 3D dosimetric impact of the MIP image and of each phase image in stereotactic body radiotherapy (SBRT) for lung cancer using volumetric modulated arc therapy (VMAT). Materials and Methods: For each of 5 patients with non-small-cell lung tumors, a respiration-correlated four-dimensional computed tomography (4DCT) study was performed, yielding ten 3D CT images corresponding to the phases of a breathing cycle. Treatment plans were generated on the MIP CT image and on each phase 3D CT. Dose verification of the treatment planning system was performed with an ion chamber and COMPASS. The dose distribution reconstructed in 3D on the MIP CT image was compared with the dose distribution on each corresponding phase of the 4DCT data. Results: Gamma evaluation was performed to assess the accuracy of dose delivery for the MIP CT data and the 4DCT data of the 5 patients. The average percentage of points passing the 2 mm/2% gamma criteria was about 99%. The average homogeneity index difference between the MIP plan and each patient's 3D data was 0.03-0.04, the average difference in PTV maximum dose was 3.30 cGy, the average difference in spinal cord dose was 3.30 cGy, and the average difference in lung $V_{20}$, $V_{10}$, and $V_5$ was -0.04% to 2.32%. The average homogeneity index difference between the MIP plan and each phase 3D data set over all patients was -0.03 to 0.03. The average PTV maximum dose difference was smallest at the 10% phase and largest at the 70% phase, and the average spinal cord maximum dose difference was smallest at the 0% phase and largest at the 50% phase. The average differences in lung $V_{20}$, $V_{10}$, and $V_5$ showed no clear trend. Conclusion: There was no consistent trend in the dose differences between the MIP plan and the 3D CT data of each phase, but appreciable differences were observed at specific phases. Further study is needed on patient groups with similar tumor locations and breathing motion, comparing the dose distributions on each phase 3D image and on the MIP image, to determine the appropriate image data for treatment planning.
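As a rough illustration of the gamma evaluation cited above (2 mm / 2% criteria), the sketch below computes a global gamma pass rate for two dose planes on a common grid. It is a simplified stand-in, not the COMPASS or OmniPro implementation, and the low-dose threshold and search range are assumptions.

```python
# Simplified global gamma evaluation: for each reference point, find the
# minimum combined dose-difference / distance-to-agreement metric over a
# local search window in the evaluated distribution.
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dta_mm=2.0, dd_percent=2.0,
                    threshold=0.1):
    """ref, evl: 2D dose arrays on the same grid; returns fraction with gamma <= 1."""
    ref = np.asarray(ref, float)
    evl = np.asarray(evl, float)
    norm = dd_percent / 100.0 * ref.max()           # global dose-difference norm
    ny, nx = ref.shape
    search = int(np.ceil(3 * dta_mm / spacing_mm))  # assumed search radius (3x DTA)
    yy, xx = np.mgrid[-search:search + 1, -search:search + 1]
    dist2 = ((yy * spacing_mm) ** 2 + (xx * spacing_mm) ** 2) / dta_mm ** 2

    pad = np.pad(evl, search, mode="edge")
    gammas = []
    for j in range(ny):
        for i in range(nx):
            if ref[j, i] < threshold * ref.max():
                continue                             # skip the low-dose region
            window = pad[j:j + 2 * search + 1, i:i + 2 * search + 1]
            dose2 = ((window - ref[j, i]) / norm) ** 2
            gammas.append(np.sqrt((dose2 + dist2).min()))
    return float((np.asarray(gammas) <= 1.0).mean())
```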

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.82-98
    • /
    • 2019
  • The importance of spatial information is rising rapidly. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with imagery gives objects high visibility and realism through texturing. However, occlusion areas are inevitably generated in some textures by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners present at the time of image acquisition. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D models. Various studies have been conducted to resolve occlusion areas, and recently deep learning algorithms have been investigated for detecting and resolving them. Deep learning requires sufficient training data, and the quality of the collected training data directly affects its performance and results. Therefore, this study analyzed the ability to detect occlusion areas in images of various qualities, to verify how deep learning performance depends on the quality of the training data. Images containing occlusion-causing objects were generated at each artificially quantified quality level and applied to the implemented deep learning algorithm. The study found that for brightness adjustment the detection ratio dropped to 0.56 for the brighter images, and that for pixel size and artificial noise the detection ratio fell rapidly once images were degraded beyond a middle level relative to the original. In the F-measure evaluation, the change for noise-adjusted image resolution was the largest, at 0.53 points. The occlusion-detection ability by image quality can serve as a valuable criterion for future applications of deep learning, and specifying a required image quality at acquisition time is expected to contribute substantially to its practical use.
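The detection ratio and F-measure reported above reduce to standard precision/recall arithmetic. A minimal sketch (the paper's exact object-matching rule is not given here, and the example counts are invented):

```python
# Detection ratio (recall), precision, and F-measure from raw detection counts.
def detection_metrics(true_positive: int, false_positive: int,
                      false_negative: int):
    recall = true_positive / (true_positive + false_negative)      # detection ratio
    precision = true_positive / (true_positive + false_positive)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

# Hypothetical example: 56 of 100 occluded objects found, with 20 false alarms.
print(detection_metrics(56, 20, 44))
```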

Effect of abutment superimposition process of dental model scanner on final virtual model (치과용 모형 스캐너의 지대치 중첩 과정이 최종 가상 모형에 미치는 영향)

  • Yu, Beom-Young;Son, Keunbada;Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.57 no.3
    • /
    • pp.203-210
    • /
    • 2019
  • Purpose: The purpose of this study was to verify the effect of the abutment superimposition process on the final virtual model when scanning single-crown and 3-unit bridge models with a dental model scanner. Materials and methods: Gypsum models for a single crown and a 3-unit bridge were fabricated for evaluation, and working casts with removable dies were made using the Pindex system. A dental model scanner (3Shape E1 scanner) was used to obtain the CAD reference model (CRM) and the CAD test model (CTM). The CRM was scanned with the sectioned abutments left seated in the working cast; the CTM was then scanned with the sectioned abutments separated and was superimposed onto the CRM (n=20). Finally, three-dimensional analysis software (Geomagic Control X) was used to compute the root mean square (RMS) deviation, and the Mann-Whitney U test was used for statistical analysis (α = .05). Results: The mean RMS was 10.93 μm for the single full-crown abutment and 6.9 μm for the 3-unit bridge abutments, a statistically significant difference (P<.001). The mean positive and negative errors were 9.83 μm and -6.79 μm for the single abutment, and 6.22 μm and -3.3 μm for the 3-unit bridge abutments, respectively; both were significantly lower for the 3-unit bridge abutments (P<.001). Conclusion: Although the number of abutments increased when scanning the working cast with removable dies, the error due to abutment superimposition did not increase. The error was significantly higher for the single abutment, but remained within the range of clinically acceptable scan accuracy.
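Geomagic Control X summarizes the superimposition error as an RMS value plus mean positive and negative deviations. A minimal sketch of those summary statistics, assuming the per-point signed deviations between the superimposed surfaces are already available (the sign convention and units are up to the inspection tool):

```python
# Summary statistics of a surface comparison: RMS of the signed deviations,
# and the means of the positive and negative deviations separately.
import numpy as np

def rms_and_signed_means(signed_deviations):
    """signed_deviations: per-point signed distances (e.g. in micrometres)
    from the test surface to the reference surface after superimposition."""
    d = np.asarray(signed_deviations, dtype=float)
    rms = np.sqrt(np.mean(d ** 2))
    mean_pos = d[d > 0].mean() if np.any(d > 0) else 0.0
    mean_neg = d[d < 0].mean() if np.any(d < 0) else 0.0
    return rms, mean_pos, mean_neg
```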

A Study on the Impacters of the Disabled Worker's Subjective Career Success in the Competitive Labour Market: Application of the Multi-Level Analysis of the Individual and Organizational Properties (경쟁고용 장애인근로자의 주관적 경력성공에 대한 영향요인 분석: 개인 및 조직특성에 대한 다층분석의 적용)

  • Kwon, Jae-yong;Lee, Dong-Young;Jeon, Byong-Ryol
    • 한국사회정책
    • /
    • v.24 no.1
    • /
    • pp.33-66
    • /
    • 2017
  • Based on the premise that a systematic career process is one of the core elements of successful achievement for workers in the general labor market, at both the individual and the organizational level, this study conducted an empirical, multilevel analysis of individual- and organizational (corporate)-level factors influencing the subjective career success of disabled workers in competitive employment, in order to provide practically and statistically grounded implications for the direction of their career management and vocational lives. For this purpose, a structured questionnaire was administered to 126 disabled workers at 48 companies in Seoul, Gyeonggi, Chungcheong, and Gangwon, collecting data on individual and organizational characteristics. The influential factors were then analyzed with a multilevel analysis technique that accounts for organizational effects. The results show that organizational characteristics explained 32.1% of the total variance in subjective career success, which confirms the importance of organizational variables and the appropriateness of the multilevel model. The significant influential factors were the degree of disability, desire for growth, self-initiated career attitude, and value-oriented career attitude at the individual level, and the provision of disability-related accommodations, career support, personnel support, and interpersonal support at the organizational level. The organizational-level factors also showed significant moderating effects on the influence of the individual-level variables on subjective career success. These findings call for plans to increase subjective career success by activating individual factors on the basis of organizational effects. The study accordingly proposed and discussed integrated individual-corporate practice strategies, including setting up an accommodation support system that reflects disability characteristics, applying a worker support program, establishing a career development support system, and supporting a personal network.
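A hedged sketch of the kind of multilevel (mixed-effects) analysis described above, using statsmodels; the file and column names are hypothetical, not taken from the study. The intraclass correlation from the null model corresponds to the share of variance attributed to organizations (reported as 32.1% in the paper).

```python
# Two-level random-intercept models: workers nested within companies.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("disabled_workers_survey.csv")   # hypothetical data file

# Null model: how much variance in subjective career success lies between companies?
null = smf.mixedlm("career_success ~ 1", data=df, groups=df["company"]).fit()
between = float(null.cov_re.iloc[0, 0])           # between-company variance
within = null.scale                               # residual (within-company) variance
print("ICC:", between / (between + within))

# Full model with individual-level predictors and company random intercepts.
full = smf.mixedlm(
    "career_success ~ disability_degree + growth_desire + "
    "self_initiated_attitude + value_oriented_attitude",
    data=df, groups=df["company"],
).fit()
print(full.summary())
```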

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha;Lee, Jipyeong;Jang, Seonghyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.249-263
    • /
    • 2023
  • Collaborative filtering, a representative recommendation system methodology, consists of two approaches: neighbor methods and latent factor models. Among these, the latent factor model based on matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices and predicts an item's rating through their product. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method is superior to neighbor-based methods in scalability, accuracy, and flexibility. However, it has a fundamental drawback: it struggles to reflect the diversity of different individuals' preferences for items with no ratings, which leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. It adaptively learns the preference for each item by using the item description, which provides a detailed summary and explanation of the item: ADLFM takes the item description as input, calculates latent vectors for the user and the item, and reflects personal diversity through an attention score. However, because it requires a dataset that includes item descriptions, the domains in which ADLFM can be applied are limited, restricting its generality. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to overcome this limitation. First, we use the item ID, commonly available in recommendation systems, as input instead of the item description. In addition, we apply improved deep learning structures such as Self-Attention, Multi-head Attention, and Multi-Conv1D. We conducted experiments on various datasets with changes to the input and to the model structure. The results showed that when only the input was changed, MAE increased slightly compared with ADLFM because of the accompanying information loss, decreasing recommendation performance; however, the average training speed per epoch improved significantly as the amount of information to be processed decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure showed performance similar to ADLFM, sufficiently compensating for the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast training and inference.
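As a reminder of the latent factor mechanism the abstract builds on, here is a minimal matrix factorization sketch: user and item factor matrices are learned so that their product approximates the observed ratings. This is the textbook baseline, not ADLFM or G-ADLFRM themselves, and all hyperparameters are placeholders.

```python
# Plain matrix factorization trained by stochastic gradient descent.
import numpy as np

def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.02, epochs=20):
    """ratings: iterable of (user_index, item_index, rating) triples."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]                    # prediction error
            P[u] += lr * (err * Q[i] - reg * pu)   # SGD updates with L2 regularization
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# The predicted rating of item i by user u is the inner product P[u] @ Q[i].
```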

One-Dimensional Consolidation Simulation of Kaolinte using Geotechnical Online Testing Method (온라인 실험을 이용한 카올리나이트 점토의 일차원 압밀 시뮬레이션)

  • Kwon, Youngcheul
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4C
    • /
    • pp.247-254
    • /
    • 2006
  • The online testing method is a numerical experiment method that feeds experimental information directly into a numerical analysis. Its advantage is that the analysis can be conducted without an idealized mechanical model, because the mechanical properties used in the numerical analysis are updated in real time from an element test. The online testing method has mainly been used in geotechnical earthquake engineering, where the main target material is sand. A version applicable to consolidation problems has recently been developed, and laboratory and field verifications have been attempted. Related research has so far mainly placed an element test at the center of the consolidation layer and updated the averaged response for the numerical analysis, but it has been pointed out that the accuracy of this approach can deteriorate as the consolidation layer becomes thicker. To clarify the effectiveness and the applicable scope of the online testing method for consolidation problems, the results must be reviewed under experimental conditions that completely exclude this factor. This research therefore examined how well the online consolidation test reproduces the consolidation settlement and the dissipation of excess pore water pressure of a clay specimen, by comparing an online consolidation test with a separated-type consolidation test carried out under the same conditions. The online consolidation test reproduced the change in the compressibility of the clay with effective stress without major discrepancy. In terms of the dissipation rate of excess pore water pressure, however, the online consolidation test was slightly faster. In conclusion, the experimental procedure needs to be improved so that hydraulic conductivity can also be updated in real time, in order to predict the dissipation of excess pore water pressure more precisely. Further work is also needed on the consolidation settlement that continues after the excess pore water pressure has fully dissipated.
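For context, the quantity tracked here is governed by Terzaghi's one-dimensional consolidation equation, ∂u/∂t = c_v ∂²u/∂z². The sketch below is a plain explicit finite-difference solution of that equation for illustration only; it is not the online element-test coupling used in the paper, and the material values are placeholders.

```python
# Explicit finite-difference dissipation of excess pore water pressure u(z, t)
# for a uniformly loaded layer with drained boundaries.
import numpy as np

def dissipate(u0, cv, dz, dt, steps, drained_top=True, drained_bottom=True):
    """Return the excess pore pressure profile after 'steps' time steps."""
    u = np.asarray(u0, dtype=float).copy()
    alpha = cv * dt / dz ** 2
    assert alpha <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])  # du/dt = cv * d2u/dz2
        if drained_top:
            u[0] = 0.0
        if drained_bottom:
            u[-1] = 0.0
    return u

# Example: 1 m double-drained layer, uniform 100 kPa initial excess pressure,
# with an assumed cv of 1e-7 m^2/s (placeholder, not a value from the paper).
profile = dissipate(np.full(21, 100.0), cv=1e-7, dz=0.05, dt=10.0, steps=5000)
```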

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The study approaches the construction of the predictive models from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction horizon is set to one, two, and three years ahead. A total of six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST, and C5.0; among them, we use the C5.0 algorithm, which is the most recently developed and performs better than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A financial analysis record consists of 89 variables, comprising 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as input variables for each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies for the period after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for the period before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The results also show that the stability-related indices have a major impact on conducting a rights issue in short-term prediction, whereas long-term prediction of a rights issue is affected by indices of profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account the stability-related indices for short-term prediction and a broader set of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression, and SVM. Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
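A hedged sketch of the modelling setup described above. The paper used the C5.0 node in PASW Modeler, for which scikit-learn has no direct equivalent, so a CART-style DecisionTreeClassifier stands in here; the file and column names and tree hyperparameters are hypothetical.

```python
# 60/40 split of financial analysis indices, tree induction, and test accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")          # hypothetical TS2000 export
X = df.drop(columns=["rights_issue"])              # the 84 input indices
y = df["rights_issue"]                             # issued / not issued

# 60% of the data for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=42, stratify=y)

tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=30)
tree.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```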

Dose verification for Gated Volumetric Modulated Arc Therapy according to Respiratory period (호흡연동 용적변조 회전방사선치료에서 호흡주기에 따른 선량전달 정확성 검증)

  • Jeon, Soo Dong;Bae, Sun Myung;Yoon, In Ha;Kang, Tae Young;Baek, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.137-147
    • /
    • 2014
  • Purpose: The purpose of this study is to verify the accuracy of dose delivery according to the patient's breathing cycle in gated volumetric modulated arc therapy (VMAT). Materials and Methods: TrueBeam STx™ (Varian Medical Systems, Palo Alto, CA) was used in this experiment. Computed tomography (CT) images acquired with a RANDO phantom (Alderson Research Laboratories Inc., Stamford, CT, USA) were used in the treatment planning system (Eclipse 10.0, Varian, USA) to create VMAT plans with a 10 MV FFF beam, prescriptions of 1500 cGy/fx (cases 1, 2, 3) and 220 cGy/fx (cases 4, 5, 6), and a dose rate of 1200 MU/min. Regular respiratory periods of 1.5, 2.5, 3.5, and 4.5 s and patient respiratory periods of 2.2 and 3.5 s were reproduced with the QUASAR™ Respiratory Motion Phantom (Modus Medical Devices Inc.), and the beam was delivered in phase-gated mode over the 30-70% range. The results were measured under each respiratory condition with a 2-dimensional ion chamber array detector (I'mRT Matrixx, IBA Dosimetry, Germany) and a MultiCube phantom (IBA Dosimetry, Germany), and the gamma pass rates (3 mm, 3%) were compared with the IMRT analysis program (OmniPro I'mRT system software, version 1.7b, IBA Dosimetry, Germany). Results: The gamma pass rates of cases 1-6 were 100.0, 97.6, 98.1, 96.3, 93.0, and 94.8% at a regular respiratory period of 1.5 s; 98.8, 99.5, 97.5, 99.5, 98.3, and 99.6% at 2.5 s; 99.6, 96.6, 97.5, 99.2, 97.8, and 99.1% at 3.5 s; and 99.4, 96.3, 97.2, 99.0, 98.0, and 99.3% at 4.5 s. When patient breathing was reproduced, the rates were 97.7, 95.4, 96.2, 98.9, 96.2, and 98.4% at an average respiratory period of 2.2 s, and 97.3, 97.5, 96.8, 100.0, 99.3, and 99.8% at 3.5 s. Conclusion: The experiment showed clinically reliable gamma pass rates of 95% or more for regular breathing periods of 2.5 s or longer and for the reproduced patient breathing. Although cases 5 and 6 yielded 93.0% and 94.8% at a regular breathing period of 1.5 s, accurate dose delivery can be expected under most respiratory conditions, because an analysis of the respiratory periods of 100 patients found none who sustained breathing as fast as 1.5 s. Nevertheless, pretreatment dose verification should be performed first, because errors due to extremely short respiratory periods cannot be excluded, and breathing training at simulation and careful monitoring are necessary so that the patient maintains stable breathing. With these measures, more reliable and accurate treatments can be delivered.
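One practical consequence of phase gating over 30-70% of the cycle, touched on in the conclusion, is that shorter breathing periods force many more beam hold/resume cycles for the same monitor units. A back-of-the-envelope sketch (the 3000 MU arc below is an assumed figure, not from the paper):

```python
# Gated-delivery timing: with a 30-70% phase window the beam is on for roughly
# 40% of each breathing cycle, so total MU and dose rate fix the beam-on time,
# and the breathing period fixes how many on/off cycles are needed.
def gated_delivery(total_mu: float, dose_rate_mu_per_min: float,
                   period_s: float, gate_fraction: float = 0.4):
    beam_on_s = total_mu / dose_rate_mu_per_min * 60.0
    cycles = beam_on_s / (period_s * gate_fraction)   # breathing cycles needed
    elapsed_s = cycles * period_s                      # wall-clock delivery time
    return cycles, elapsed_s

# Example with an assumed 3000 MU arc at 1200 MU/min:
# a 1.5 s period needs ~250 on/off cycles, a 4.5 s period only ~83.
print(gated_delivery(3000, 1200, 1.5))
print(gated_delivery(3000, 1200, 4.5))
```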