• Title/Summary/Keyword: model rank


Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan;Jie Lin;Haixia Li;Jun Xu;Tianjing Zhang;Hao Chen;Henry C. Woodruff;Guangyao Wu;Siqi Zhang;Yikai Xu;Philippe Lambin
    • Korean Journal of Radiology
    • /
    • v.22 no.6
    • /
    • pp.983-993
    • /
    • 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield unit [HU]) than that of the hybrid (66.3 ± 10.5 HU, p < 0.001) and a similar noise level to model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by the model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv with a mean 93.9% reduction compared to standard-dose CT. 
Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
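The PSNR and SSIM gains reported above can be grounded with a minimal PSNR computation (a generic sketch, not the paper's CycleGAN pipeline; the 2000 HU dynamic range is an illustrative assumption):

```python
import numpy as np

def psnr(reference, test, data_range=2000.0):
    """Peak signal-to-noise ratio in dB; data_range is an assumed
    dynamic range for chest CT images in Hounsfield units."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)
```

A denoised ULDCT image that tracks the standard-dose reference more closely yields a higher PSNR; the 2.0 dB gain reported above corresponds to roughly a 37% reduction in mean squared error.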

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, ranking search results effectively and efficiently will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking methods have played an essential role in the World Wide Web (WWW), and nowadays many people recognize their effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure-based ranking method seems to be highly applicable to ranking Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail, to a certain degree for their algorithm to properly work. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected obtain higher scores than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems identified in the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach entertained by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way people evaluate things in the real world, and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect, and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed previously even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research. The mathematical analysis enabled us to simplify the calculation procedure. 
Finally, we summarize our experimental results and discuss further research issues.
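The link-analysis baseline the abstract builds on can be sketched as a minimal power-iteration PageRank (a generic illustration, not the authors' class-oriented algorithm; the graph and damping factor are arbitrary):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power-iteration PageRank; adj[i, j] = 1 means node i links to j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Rows with no out-links (dangling nodes) spread their rank uniformly.
    transition = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * (transition.T @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
```

On a 3-node cycle every page refers to exactly one other, so all ranks converge to 1/3; adding extra in-links to one node raises its rank, which is exactly the "referred to by many other pages" intuition described above.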

A Study on the Method for the Estimate of Construction Management in the Program Management (종합건설사업관리 사업관리비용산정을 위한 방법연구 - 기획단계에서 실시설계 입찰까지 -)

  • Baek, Myeongchang;Park, Junmo;Park, Gilbeom;Kim, Okkyue
    • Korean Journal of Construction Engineering and Management
    • /
    • v.15 no.5
    • /
    • pp.3-12
    • /
    • 2014
  • As program management projects grow in scale and complexity, more accurate and precise methods for management cost estimation are demanded. However, most project management cost estimates are based on similar cases and hence cannot reflect the distinct features of each project. The precision of these estimates also falls short of standards, which hampers policy-making and budget allocation. Therefore, a typical project model for comprehensive project management cost estimation is developed, making it easier to manage the level of effort and allocate costs by applying the characteristic factors of each project. To develop the typical model, this study categorized the project package by phase: planning, detailed design, and bid procurement. By designating the specific level of effort required for each field and rank, the study aims to improve the method for calculating detailed and objective program management costs. The outcome of this study will help prevent conflicts between client and contractor, allow the client to calculate the program management contract cost accurately, and serve as a reference for the contractor to receive rational and practical payment for their work.

Estimation and assessment of natural drought index using principal component analysis (주성분 분석을 활용한 자연가뭄지수 산정 및 평가)

  • Kim, Seon-Ho;Lee, Moon-Hwan;Bae, Deg-Hyo
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.6
    • /
    • pp.565-577
    • /
    • 2016
  • The objective of this study is to propose a method for computing the Natural Drought Index (NDI) that does not consider man-made drought facilities. Principal Component Analysis (PCA) was used to estimate the NDI. Three-month moving cumulative runoff, soil moisture and precipitation were selected as input data for the NDI during 1977~2012. Observed precipitation data were collected from the KMA ASOS (Korea Meteorological Administration Automatic Synoptic Observation System), while model-driven runoff and soil moisture from the Variable Infiltration Capacity model (VIC model) were used. Time series analysis, drought characteristic analysis and spatial analysis were used to assess the utility of the NDI and compare it with the existing SPI, SRI and SSI. The NDI precisely reflected the onset and termination of past drought events, with a mean absolute error of 0.85 in the time series analysis. It described drought duration and inter-arrival time well, with errors of 1.3 and 1.0 respectively, in the drought characteristic analysis. The NDI also reflected regional drought conditions well in the spatial analysis. The accuracy rank for drought onset, termination, duration and inter-arrival time was calculated using the NDI, SPI, SRI and SSI; the result showed that the NDI is more precise than the others. The NDI overcomes the limitation of univariate drought indices and can be useful for drought analysis as a representative measure of different types of drought, such as meteorological, hydrological and agricultural droughts.
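The PCA step behind a composite index like the NDI can be sketched as follows (an illustrative reconstruction, not the authors' code; it assumes the three input series have already been accumulated over the 3-month window):

```python
import numpy as np

def natural_drought_index(runoff, soil_moisture, precipitation):
    """First-principal-component composite of three standardized
    drought-related series (a sketch of the PCA step only)."""
    X = np.column_stack([runoff, soil_moisture, precipitation]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]       # leading eigenvector
    if pc1.sum() < 0:                          # fix sign: wetter -> higher index
        pc1 = -pc1
    return X @ pc1
```

Projecting onto the leading eigenvector is what lets a single index summarize the shared meteorological, hydrological and agricultural drought signal.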

Kinetic of Catalytic CO2 Gasification for Cyprus Coal by Gas-Solid Reaction Model (기-고체 반응모델을 이용한 Cyprus탄의 CO2 저온촉매가스화 반응거동)

  • Hwang, Soon Choel;Lee, Do Kyun;Kim, Sang Kyum;Lee, Si Hyun;Rhee, Young Woo
    • Korean Chemical Engineering Research
    • /
    • v.53 no.5
    • /
    • pp.653-662
    • /
    • 2015
  • In general, coal gasification has to be operated at high temperature (1300~1400°C) and pressure (30~40 bar); maintaining these conditions requires excessive energy. In this work, to reduce the process temperature, alkali catalysts such as K2CO3 and Na2CO3 were added to Cyprus coal, and we investigated the kinetics of Cyprus char-CO2 gasification. To determine the gasification conditions, the coal (with and without catalysts) was gasified under fixed variables (catalyst loading, catalytic effects of Na2CO3 and K2CO3, and temperature) using TGA. When catalysts were added to Cyprus coal by the physical mixing method, the reaction rate of coal with 7 wt% Na2CO3 added was faster than that of raw coal for Cyprus char-CO2 gasification. The activation energy of the coal with 7 wt% Na2CO3 was calculated as 63 kJ/mol, lower than that of the raw char. This indicates that Na2CO3 can improve the reactivity of char-CO2 gasification.
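The reported activation energy can be illustrated with a standard Arrhenius fit over rate constants at several temperatures (a generic sketch, not the paper's gas-solid reaction model; the temperatures and pre-exponential factor in the test are hypothetical):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, rate_constants):
    """Estimate the Arrhenius activation energy (J/mol) from rate
    constants measured at several temperatures:
    ln k = ln A - Ea / (R T), so the slope of ln k vs 1/T is -Ea/R."""
    inv_T = 1.0 / np.asarray(temps_K, dtype=float)
    ln_k = np.log(np.asarray(rate_constants, dtype=float))
    slope, _intercept = np.polyfit(inv_T, ln_k, 1)
    return -slope * R
```

A lower fitted Ea, as for the 7 wt% Na2CO3 sample, means the reaction rate is less sensitive to temperature and the char gasifies faster at the reduced process temperature.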

A Study on the Financial Strength of Households on House Investment Demand (가계 재무건전성이 주택투자수요에 미치는 영향에 관한 연구)

  • Rho, Sang-Youn;Yoon, Bo-Hyun;Choi, Young-Min
    • Journal of Distribution Science
    • /
    • v.12 no.4
    • /
    • pp.31-39
    • /
    • 2014
  • Purpose - This study investigates the following two issues. First, we attempt to find the important determinants of housing investment and to identify their significance rank using survey panel data. Recently, the expansion of global uncertainty in the real estate market has directly and indirectly influenced the Korean housing market; households demonstrate a sensitive reaction to changes in that market. Therefore, this study aims to draw conclusions from understanding how the impact of financial strength of the household is related to house investment. Second, we attempt to verify the effectiveness of diverse indices of financial strength such as DTI, LTV, and PIR as measures to monitor the housing market. In the continuous housing market recession after the global crisis, the government places top priority on residence stability. However, the government still imposes forceful restraints on indices of financial strength. We believe this study verifies the utility of these regulations when used in the housing market. Research design, data, and methodology - The data source for this study is the "National Survey of Tax and Benefit" from 2007 (1st) to 2011 (5th) by the Korea Institute of Public Finance. Based on this survey data, we use panel data of 3,838 households that have been surveyed continuously for 5 years. We sort the base variables according to relevance of house investment criteria using the decision tree model (DTM), which is the standard decision-making model for data-mining techniques. The DTM method is known as a powerful methodology to identify contributory variables for predictive power. In addition, we analyze how important explanatory variables and the financial strength index of households affect housing investment with the binary logistic multi-regressive model. Based on the analyses, we conclude that the financial strength index has a significant role in house investment demand. 
Results - The results of this research are as follows: 1) The determinants of housing investment are age, consumption expenditure, income, total assets, rent deposit, housing price, habits satisfaction, housing scale, number of household members, and debt related to housing. 2) The impact of these determinants has changed somewhat annually due to economic situations and housing market conditions. The levels of consumption expenditure and income were the main determinants before 2009; however, the determinants of housing investment changed to indices of the financial strength of households, i.e., DTI, LTV, and PIR, after 2009. 3) Most of all, since 2009, housing loans have been a more important variable than the level of consumption in making housing market decisions. Conclusions - The results of this research show that sound financing of households has a stronger effect on housing investment than reduced consumption expenditure. At the same time, the key indices that must be monitored by the government under economic emergency conditions differ from those requiring monitoring under normal market conditions; therefore, policy indices to encourage and promote the housing market must be differentiated based on market conditions.
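The binary logistic regression step in the methodology can be sketched in a few lines (a generic gradient-descent illustration, not the authors' model; the single toy feature stands in for a financial strength index such as DTI):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=5000):
    """Minimal binary logistic regression fitted by gradient descent
    on the log-loss; X is (n_samples, n_features), y is 0/1."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # average log-loss gradient
    return w

def predict(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

The fitted coefficients then indicate how strongly each financial index shifts the odds of a household investing in housing, which is the kind of significance ranking the study reports.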

A Comparison of Machine Learning Species Distribution Methods for Habitat Analysis of the Korea Water Deer (Hydropotes inermis argyropus) (고라니 서식지 분석을 위한 기계학습식 종분포모형 비교)

  • Song, Won-Kyong;Kim, Eun-Young
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.1
    • /
    • pp.171-180
    • /
    • 2012
  • The field of wildlife habitat conservation research has attracted attention as part of integrated biodiversity management strategies. Considering the status of species survey data and environmental variables in Korea, the GARP and Maxent models, optimized for presence-only data, could be among the most suitable models for habitat modeling. To verify their applicability in the domestic environment, we applied these machine learning species distribution models to analyze habitats of the Korea water deer (Hydropotes inermis argyropus) in the Sapgyocheon watershed, Chungcheong province. We used the 3rd National Natural Environment Survey data and 10 environmental variables selected by literature review for the modeling. Analysis results showed that habitat for the Korea water deer was predicted at 16.3% (Maxent) and 27.1% (GARP), respectively. In terms of accuracy (training/test), the Maxent (0.85/0.69) was higher than the GARP (0.65/0.61), and the Spearman's rank correlation coefficient of the Maxent (ρ = 0.71, p < 0.01) was higher than that of the GARP (ρ = 0.55, p < 0.05). However, results may depend on sites and target species; therefore, selecting an appropriate model for the situation will be important in analyzing habitats.
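The Spearman's rank correlation used above to validate the models can be sketched directly (a minimal tie-free version; in practice a library routine such as SciPy's `spearmanr` would be used):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks (no tie correction in this minimal sketch)."""
    rank_x = np.argsort(np.argsort(x)).astype(float)
    rank_y = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rank_x, rank_y)[0, 1]
```

Because it compares ranks rather than raw values, it rewards a model whose predicted suitability *ordering* matches the observations, even when the absolute scores differ.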

PM2.5 Simulations for the Seoul Metropolitan Area: (III) Application of the Modeled and Observed PM2.5 Ratio on the Contribution Estimation (수도권 초미세먼지 농도모사: (III) 관측농도 대비 모사농도 비율 적용에 따른 기여도 변화 검토)

  • Bae, Changhan;Yoo, Chul;Kim, Byeong-Uk;Kim, Hyun Cheol;Kim, Soontae
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.33 no.5
    • /
    • pp.445-457
    • /
    • 2017
  • In this study, we developed an approach to better account for uncertainties in contribution estimates from fine particulate matter (PM2.5) modeling. Our approach computes a Concentration Correction Factor (CCF), the ratio of observed concentrations to baseline model concentrations, and multiplies modeled direct contribution estimates by the CCF to obtain revised contributions. Overall, the modeling system showed reasonably good performance, with a correlation coefficient R of 0.82 and a normalized mean bias of 2%, although the model underestimated some PM species concentrations. We also noticed that model biases vary seasonally. We compared contribution estimates of major source sectors before and after applying CCFs and observed that different source sectors showed variable sensitivity to the CCF application. For example, the total primary PM2.5 contribution increased by 2.4 μg/m³, or 63%, after the CCF application. Of the 2.4 μg/m³ increment, line sources and area sources made up 1.3 μg/m³ and 0.9 μg/m³, which is 92% of the total contribution change. We postulated two major reasons for the variations in estimated contributions after the CCF application: (1) monthly variability of unadjusted contributions due to emission source characteristics, and (2) physico-chemical differences in the environmental conditions that emitted precursors undergo. Since the emissions-to-PM2.5-concentration conversion rate is an important piece of information for prioritizing control strategies, we examined the effects of CCF application on the estimated conversion rates. We found that the application of CCFs can alter the rank of conversion efficiencies of source sectors. Finally, we discussed caveats of our current approach, such as not considering ion neutralization, which warrants further study.
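The CCF adjustment described above reduces to an elementwise rescaling (a minimal sketch with hypothetical numbers, not the study's full modeling workflow):

```python
import numpy as np

def adjust_contributions(contributions, observed, modeled_baseline):
    """Scale modeled source-sector contributions by the Concentration
    Correction Factor, CCF = observed / modeled baseline, per period."""
    ccf = np.asarray(observed, float) / np.asarray(modeled_baseline, float)
    return np.asarray(contributions, float) * ccf
```

When the model underestimates observed PM2.5 (CCF > 1), the revised contribution of each sector grows proportionally, which is how the total primary contribution could rise by 63% after the adjustment.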

Risk-based Safety Impact Assessment for Construction Projects (위험도 접근방법에 의한 건설사업 안전영향평가방안에 관한 연구)

  • Choi Hyun-Ho;Jung Pyung-Ki;Seo Jong-Won;Choi Ook
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • 2004.11a
    • /
    • pp.504-509
    • /
    • 2004
  • Safety assessment of construction projects may be affected by various factors, such as the type and scale of the project, construction methods, procedures, climate, and site conditions. Presently, in the planning and design phases, designers are still often uncertain of their responsibilities and lack information and training on safety. Therefore, designers are still failing to exploit their potential to eliminate and reduce risks on site. In this study, the concept of safety impact assessment is introduced in order to promote design for safety in the design phase. For this purpose, a framework for a risk-based safety impact assessment model for construction projects is suggested. The suggested model includes an information survey and scenarios, a classification of safety impact factors arising from design and construction, and a quantitative estimation of magnitude and frequency. Moreover, a checklist that identifies the relationships between safety impact factors and design factors is developed, and the methodology of the risk-based safety impact assessment model is also proposed.


Elaborate Image Quality Assessment with a Novel Luminance Adaptation Effect Model (새로운 광적응 효과 모델을 이용한 정교한 영상 화질 측정)

  • Bae, Sung-Ho;Kim, Munchurl
    • Journal of Broadcast Engineering
    • /
    • v.20 no.6
    • /
    • pp.818-826
    • /
    • 2015
  • Recently, objective image quality assessment (IQA) methods that elaborately reflect the visual quality perception characteristics of the human visual system (HVS) have been actively studied. Among those characteristics of the HVS, the luminance adaptation (LA) effect, whereby the HVS has different sensitivities to distortions depending on background luminance values, has widely been reflected in many existing IQA methods via a Weber's law model. In this paper, we first reveal that the LA effect based on the Weber's law model has been inaccurately reflected in conventional IQA methods. To solve this problem, we derive a new LA-effect-based Local weight Function (LALF) that can elaborately reflect the LA effect in IQA methods. We validate the effectiveness of the proposed LALF by applying it to the SSIM (Structural SIMilarity) and PSNR methods. Experimental results show that SSIM based on LALF yields a remarkable performance improvement of 5 percentage points over the original SSIM in terms of the Spearman rank-order correlation coefficient between estimated visual quality values and measured subjective visual quality scores. Moreover, PSNR (Peak Signal-to-Noise Ratio) based on LALF yields a performance improvement of 2.5 percentage points over the original PSNR.
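The Weber's-law intuition behind luminance adaptation can be illustrated with a toy luminance-weighted error metric (an illustrative stand-in, not the paper's LALF):

```python
import numpy as np

def weber_weighted_mse(reference, distorted, eps=1.0):
    """MSE with a Weber's-law luminance weight: distortion on a
    background of luminance L is scaled by 1/(L + eps), mimicking the
    HVS's reduced sensitivity to errors on bright backgrounds."""
    ref = reference.astype(float)
    weight = 1.0 / (ref + eps)          # Weber: visibility ~ dL / L
    return np.mean(weight * (ref - distorted.astype(float)) ** 2)
```

Under this weighting, the same absolute distortion counts for more on a dark background than on a bright one, which is the sensitivity difference the LA effect describes.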