• Title/Summary/Keyword: information usefulness


Comparison of 99mTc-Tin colloid and 99mTc-DISIDA Hepatoscintigraphy in Miniature Pigs (미니돼지에서 99mTc-Tin colloid와 99mTc-DISIDA를 사용한 간신티그라피의 비교 연구)

  • Shim, Kyung-Mi;Kim, Se-Eun;Lee, Won-Guk;Koong, Sung-Soo;Bae, Chun-Sik;Lee, Jae-Yeong;Choi, Seok-Hwa;Han, Ho-Jae;Kang, Seong-Soo;Park, Soo-Hyun
    • Journal of Life Science
    • /
    • v.16 no.6
    • /
    • pp.1060-1065
    • /
    • 2006
  • Non-invasive evaluation of liver function in animal models remains a challenge. Hepatoscintigraphy provides information about changes in liver size and shape and allows general liver function to be assessed. Furthermore, it is readily used to diagnose complications of liver transplantation such as hepatitis, rejection and biliary complications. In this study, we investigated the usefulness of $^{99m}Tc-Tin$ colloid and $^{99m}Tc-DISIDA$, the radiopharmaceuticals most commonly used in human medicine, for evaluating liver function in miniature pigs. $^{99m}Tc-Tin$ colloid was taken up by the lung, liver, gastric wall and kidney in miniature pigs, whereas $^{99m}Tc-DISIDA$ showed sequential uptake in the heart, lung, liver, gallbladder and duodenum, similar to the pattern seen in humans. We therefore conclude that $^{99m}Tc-Tin$ colloid is not suitable for evaluating hepatic function because of its nonspecific affinity, whereas $^{99m}Tc-DISIDA$ scintigraphy would be an effective method for assessing hepatobiliary function in miniature pigs.

Study on the Availability of Repeated Flexible Bronchoscopy (RFB) (반복적 굴곡성 기관지경검사(RFB)의 유용성에 대한 연구)

  • Lee, Hong-Lyeol;Moon, Tae-Hoon;Cho, Jae-Hwa;Ryu, Jeong-Seon;Kwak, Seung-Min;Cho, Chul-Ho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.3
    • /
    • pp.365-376
    • /
    • 2000
  • Background : Ever since flexible fiberoptic bronchoscopy was introduced into clinical practice, it has played an important role in both the diagnosis and therapy of respiratory diseases, and repeated bronchoscopic examinations are not uncommon. This study was designed prospectively to assess the clinical availability of repeated flexible bronchoscopy (RFB). Methods : Pre-established indications were as follows: 1) to confirm the diagnosis or cell type in proven malignancy, 2) to diagnose or localize hemoptysis, 3) to follow up or confirm recurrence, and 4) to use in therapy. We performed RFB and analyzed the data in 156 patients during a 28-month period. Results : The frequency of RFB was 23.0%. The indication of confirming the diagnosis or cell type of malignancy accounted for 25 cases, of which 2 were confirmed by a third bronchoscopic examination and 3 by surgical procedures. Localization of the bleeding site was achieved in 53.8%. RFB for small cell lung cancer yielded additional information on residual or recurrent lesions not apparent even on CT in 30%. Bronchostenosis due to endobronchial tuberculosis was shown to have worsened in 66.7% of previously examined cases. Therapeutic manipulations were done in 126 cases, of which bronchial suction was the most common. Complications showed a decreasing tendency with repeated examinations. Conclusion : RFB for the diagnosis or cell type of malignancy was useful in that confirmation of the diagnosis was possible in 85.7% of malignancies; more aggressive procedures such as TBLB or TBNA should be employed. RFB showed possible usefulness in the follow-up of patients with small cell lung cancer. For patients with hemoptysis or endobronchial tuberculosis, RFB did not show significance because its results did not influence the diagnosis, therapy or clinical course.


Analysis of the Efficiency of Gyeonggi-do Senior Welfare Centers by DEA Model (DEA를 이용한 경기도 노인복지관 효율성 분석)

  • Kim, Keum Hwan;Pak, Ae Kyung;Ryu, Seo Hyun;Lee, Nam Sik
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.8 no.3
    • /
    • pp.165-177
    • /
    • 2013
  • The purpose of this study was to examine the efficiency of senior welfare centers and the causes of the differences in efficiency among them, and to investigate the factors that influence those differences and the size of their influence. We reviewed which methods would be effective for assessing the efficiency of senior welfare centers in light of their circumstances, and post-hoc analyses were performed using data envelopment analysis (DEA) and the modified DEA/AP model, which are useful tools for evaluating relative efficiency. Twenty senior welfare centers located in Gyeonggi-do were selected, and their yearly operating data for 2009 were utilized. The evaluation data released by the Gyeonggi Welfare Foundation were analyzed by DEA, a nonparametric method, and significant results were obtained on the regional operating efficiency of social welfare centers in 14 metropolitan cities and provinces, the causes and degree of their inefficiency, and the reference groups to which they could refer. Because county-level data were utilized in this study, fully accurate results on the relative efficiency of individual senior welfare centers could not be produced, but the study is significant in that it suggested how to evaluate the overall operating efficiency of senior welfare centers in the counties involved, including the degree of their operating inefficiency, the improvements that should be made and the possible reference groups, and it provided information on the usefulness of the DEA model.
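To illustrate the kind of relative-efficiency calculation a DEA model performs, the following is a minimal sketch of an input-oriented CCR DEA evaluation solved as a linear program with SciPy. The input/output names and sample figures are hypothetical and are not taken from the study.

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data, not the study's).
import numpy as np
from scipy.optimize import linprog

# rows = decision-making units (e.g. welfare centers); columns = inputs / outputs
X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0], [4.0, 2.0]])   # inputs: staff, budget
Y = np.array([[60.0], [70.0], [90.0], [50.0]])                    # output: service users

def ccr_efficiency(k, X, Y):
    """Efficiency of DMU k: minimize theta subject to
    sum_j lam_j * X_j <= theta * X_k  and  sum_j lam_j * Y_j >= Y_k."""
    n, m = X.shape          # number of DMUs, number of inputs
    s = Y.shape[1]          # number of outputs
    # decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    # input constraints:  X.T @ lam - theta * X_k <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # output constraints: -Y.T @ lam <= -Y_k
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # theta = 1.0 means the DMU lies on the efficient frontier

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k, X, Y):.3f}")
```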


Dynamic Prefetch Filtering Schemes to Enhance the Usefulness of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • The prefetching technique is an effective way to reduce the latency caused by memory access. However, excessively aggressive prefetching not only leads to cache pollution, which can cancel out the benefits of prefetching, but also increases bus traffic, leading to overall performance degradation. In this paper, a prefetch filtering scheme is proposed that dynamically decides whether to commence prefetching by referring to a filtering table, thereby reducing the cache pollution caused by unnecessary prefetches. First, the prefetch hashing table 1bitSC filtering scheme (PHT1bSC) is analyzed to expose the lock problem of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a two-state 1-bit value. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT) is then proposed as the main idea of this paper and exhibits the most exact filtering performance. This scheme has the same table length as the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and a recently prefetched but not-yet-referenced data block address is mapped 1:1 to an entry of the filter table. Commonly used prefetch schemes were simulated on general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme improved performance by up to 22%, and its cache miss ratio decreased by 7.9% compared with the conventional PHT2bSC owing to its enhanced filtering accuracy. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
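As a rough illustration of the decision such a filter table makes, here is a toy software model of a lookup-table prefetch filter in the spirit of the PBALT idea described above (a block-address table consulted before issuing a prefetch). The table size, indexing and replacement behavior are assumptions made for the sketch, not the paper's hardware design.

```python
# Toy model of a lookup-table prefetch filter (illustrative only).
class PrefetchFilter:
    def __init__(self, num_entries=64, block_size=64):
        self.num_entries = num_entries
        self.block_size = block_size
        # each slot stores a full block address (or None)
        self.table = [None] * num_entries

    def _index(self, block_addr):
        # direct-mapped indexing into the filter table
        return block_addr % self.num_entries

    def should_prefetch(self, addr):
        """Return True if the block containing addr should be prefetched.
        A block whose address is already present was prefetched recently and
        not yet proven useful, so a new prefetch for it is filtered out."""
        block_addr = addr // self.block_size
        slot = self._index(block_addr)
        if self.table[slot] == block_addr:
            return False               # filter: likely a useless (polluting) prefetch
        self.table[slot] = block_addr  # record the issued prefetch
        return True

    def mark_referenced(self, addr):
        """On a demand access, clear the entry so the block may be prefetched again."""
        block_addr = addr // self.block_size
        slot = self._index(block_addr)
        if self.table[slot] == block_addr:
            self.table[slot] = None

# usage sketch
f = PrefetchFilter()
print(f.should_prefetch(0x1000))  # True  -> issue prefetch
print(f.should_prefetch(0x1000))  # False -> filtered (already outstanding)
f.mark_referenced(0x1000)
print(f.should_prefetch(0x1000))  # True again after the block was actually used
```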

Efficient Application of Westgard Multi-Rules and Quality Control Implementation Improvement (Westgard Multi-Rules의 효율적 적용과 조치사항의 개선)

  • Jung, Heung Soo;Oh, Youn Jung;Bae, Jin Soo;Baek, Jin Young;Hwang, Bo ra;Shin, Yong Hwan
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.60-64
    • /
    • 2017
  • Purpose: Westgard multi-rules application, based on test quality improvement and commercialized international standards, has been widely used in quality control. However, it is difficult to apply the Westgard multi-rules to nuclear medicine in vitro tests because of the large sample sizes and the simultaneous measurement of quality control material and patient samples. This study investigated the usefulness of applying the Westgard multi-rules in nuclear medicine in vitro tests. Materials and Methods: A total of 282 systematic-error multi-rule violations ($2_{2s}$, $10_{1s}$) recorded in the Samsung Medical Center computer system from January 2013 to June 2016, along with 117 cases of corrective measures, were analyzed. The quality control actions recorded in the hospital information system were divided into four high-level areas: quality control material error, experimental procedure error, kit lot number management error, and others. To prevent quality control material errors, the existing practice in which each staff member used his or her own method was changed: one designated staff member managed the quality control material, and the daily consumption for every test was strictly controlled by that person. To prevent the other errors, every test step was standardized so that the entire test procedure is implemented identically. Results: There were 117 corrective-action cases in total: 62 quality control material errors, 24 experimental procedure errors, 18 kit lot number control errors, and 13 other errors. After the quality control material handling was corrected, materials could be used fresh within 2 days after thawing, the systematic errors attributable to quality control material decreased, and monthly consumption of quality control material was reduced by more than 10 vials on average. In addition, the experimental procedure and kit lot number errors were improved by test standardization. Consequently, the $10_{1s}$ and $2_{2s}$ systematic-error rule violations decreased by at least 2 cases per month on average. Conclusion: To confirm systematic errors quickly through multi-rule application, management of the QC material, target values and standard deviations is essential. Moreover, when a systematic error occurs, it is important to record the corrective measures based on an analysis of the cause. These results are expected to contribute to internal quality control improvement and to prompt and accurate result reporting through error recording and causal analysis based on Westgard multi-rules.
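For illustration, the following is a minimal sketch of how the two systematic-error rules mentioned above ($2_{2s}$: two consecutive controls beyond the same 2SD limit; $10_{1s}$/10x: ten consecutive controls on the same side of the mean) could be checked against a series of QC measurements. The target value, SD and sample data are hypothetical.

```python
# Minimal check of two Westgard systematic-error rules (hypothetical QC data).
def violates_2_2s(values, mean, sd):
    """2_2s: two consecutive results beyond the same +/-2SD limit."""
    for a, b in zip(values, values[1:]):
        if (a > mean + 2 * sd and b > mean + 2 * sd) or \
           (a < mean - 2 * sd and b < mean - 2 * sd):
            return True
    return False

def violates_10_1s(values, mean):
    """10_1s (10x): ten consecutive results on the same side of the mean."""
    run, last_side = 0, 0
    for v in values:
        side = 1 if v > mean else (-1 if v < mean else 0)
        run = run + 1 if side == last_side and side != 0 else (1 if side != 0 else 0)
        last_side = side
        if run >= 10:
            return True
    return False

# hypothetical QC series with target mean 100 and SD 5
qc = [101, 99, 111, 112, 103, 98, 102, 104, 101, 103, 105, 102, 101, 104, 103, 106]
print("2_2s violated :", violates_2_2s(qc, mean=100, sd=5))
print("10_1s violated:", violates_10_1s(qc, mean=100))
```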


Evaluation of the Jaw-Tracking Technique for Volume-Modulated Radiation Therapy in Brain Cancer and Head and Neck Cancer (뇌암 및 두경부암 체적변조방사선치료시 Jaw-Tracking 기법의 선량학적 유용성 평가)

  • Kim, Hee Sung;Moon, Jae Hee;Kim, Koon Joo;Seo, Jung Min;Lee, Joung Jin;Choi, Jae Hoon;Kim, Sung Ki;Jang, In-Gi
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.177-183
    • /
    • 2018
  • Purpose : Volumetric modulated arc therapy (VMAT) has the advantage of irradiating the tumor uniformly and precisely, conformed to the tumor shape, while reducing the risk of radiation damage to normal tissues, and it is used to treat cancers such as brain cancer, head and neck cancer, and prostate cancer. The purpose of this study is to evaluate the dosimetric usefulness of the jaw-tracking technique (JTT) in VMAT for brain and head and neck cancer. Materials and Methods : We selected eight patients with brain and head and neck cancer (4 brain, 4 head and neck) who were treated with VMAT. Contouring information of each patient's tumor and normal organs was fused to the Rando phantom using the deformable registration of Velocity (Varian, USA). A treatment plan was developed in Varian Eclipse (ver. 15.5, Varian, USA) with the same beam parameters as the patient's actual plan except for the use of jaw tracking. As evaluation indices, the maximum and mean doses of the target and OARs were compared, and portal dosimetry was performed for treatment plan verification. Results : With JTT, the mean relative dose of the OARs decreased by 5.24 % and the maximum dose by 7.05 % compared with the static-jaw technique (SJT). Across the various OARs, the mean-dose and maximum-dose reductions ranged from 0.01 to 3.16 Gy and from 0.12 to 6.27 Gy, respectively. For the target, the maximum dose of the GTV, CTV and PTV decreased by 0.17 %, 0.43 % and 0.37 % with JTT, and the mean dose decreased by 0.24 %, 0.47 % and 0.47 %, respectively. In the gamma analysis, the JTT and SJT passing rates were $98{\pm}1.73%$ and $97{\pm}1.83%$, respectively, at the 3 %/3 mm criterion. Comparing the doses of all OARs in the experiment, the use of JTT resulted in a significant dose decrease owing to the additional jaw shielding beyond the MLC compared with SJT. Conclusion : In radiation therapy using a VMAT treatment plan, JTT can be applied when the tumor and normal organs are adjacent, as in brain cancer and head and neck cancer, and in radiotherapy requiring large fields and high energy, which increases the leakage dose through the MLC. It is considered that the target dose to the PTV can be increased by lowering the dose to the normal tissue surrounding the tumor.
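As a simple illustration of the dose-metric comparison reported above, the sketch below computes mean-dose and maximum-dose reductions of an OAR between two plans from voxel dose arrays. The dose values are hypothetical, and this is not the study's evaluation pipeline, which used Eclipse DVH statistics and portal dosimetry.

```python
# Sketch: compare OAR dose statistics between a static-jaw and a jaw-tracking plan.
import numpy as np

# hypothetical OAR voxel doses (Gy) for the same structure under two plans
oar_sjt = np.array([12.4, 30.1, 25.7, 8.9, 41.2, 18.3])   # static-jaw technique
oar_jtt = np.array([11.8, 28.0, 24.9, 8.1, 38.6, 17.5])   # jaw-tracking technique

def dose_metrics(dose):
    return {"mean": dose.mean(), "max": dose.max()}

sjt, jtt = dose_metrics(oar_sjt), dose_metrics(oar_jtt)
for metric in ("mean", "max"):
    abs_red = sjt[metric] - jtt[metric]
    rel_red = 100.0 * abs_red / sjt[metric]
    print(f"{metric} dose: {sjt[metric]:.2f} Gy -> {jtt[metric]:.2f} Gy "
          f"(reduction {abs_red:.2f} Gy, {rel_red:.1f} %)")
```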


Thought Experiments: on the Working Imagination and its Limitation (사고실험 - 상상의 작용과 한도에 대해)

  • Hwang, Hee-sook
    • Journal of Korean Philosophical Society
    • /
    • v.146
    • /
    • pp.307-328
    • /
    • 2018
  • The use of thought experiments has a long history in many disciplines, including science. In philosophy, thought experiments appear frequently in the literature of contemporary analytic philosophy. A thought experiment is a synthetic scenario in which the designer of the experiment, using his or her intuition and imagination, tests common-sense knowledge. It can be understood as a conceptual tool for testing the validity of the common understanding of an issue or a phenomenon. However, we are not certain about the usefulness or efficacy of thought experiments in knowledge production. The design of a thought experiment is meant to lure readers into believing what the experiment intends, so regardless of its purpose, many readers who encounter the experiment may feel deceived. In this paper, to analyze the logic of thought experiments and to seek the source of the uneasiness that readers and critics may feel about them, I draw lessons from three renowned thought experiments: Thomson's 'ailing violinist', Putnam's 'brain in a vat', and Searle's 'Chinese room'. Imaginative thought experiments are usually constructed around a gap between reality and the knowledge or information at hand. Several lessons can be learned from the three experiments. First, the evidence of such a gap provided by a thought experiment can serve as an argument for counterfactual situations. At the same time, the credibility and efficacy of a thought experiment is damaged as soon as it is carried out with inappropriate or murky directions regarding its procedures or the background of the study. According to D. R. Hofstadter and D. C. Dennett (1981), the 'knob settings' of a thought experiment can be altered in the middle of a simulation of the experimental condition, and then the implications of the thought experiment change altogether, indicating that an entirely different conclusion can be deduced from it. Lastly, the presuppositions and biases of the experiment designers play a considerable role in the validity and the chances of success of a thought experiment; it is therefore recommended that experiment designers refrain from exercising too much of their imagination, in order to avoid contaminating the design of the experiment or wrongly accepting preconceived or misguided conclusions.

Analysis of Waterbody Changes in Small and Medium-Sized Reservoirs Using Optical Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 광학 위성영상을 이용한 중소규모 저수지 수체 변화 분석)

  • Younghyun Cho;Joonwoo Noh
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.363-375
    • /
    • 2024
  • Waterbody change detection using satellite images has recently been carried out in various regions in South Korea, utilizing multiple types of sensors. This study utilizes optical satellite images from Landsat and Sentinel-2 based on Google Earth Engine (GEE) to analyze long-term surface water area changes in four monitored small and medium-sized water supply dams and agricultural reservoirs in South Korea. The analysis covers 19 years for the water supply dams and 27 years for the agricultural reservoirs. By employing image analysis methods such as the normalized difference water index (NDWI), Canny edge detection, and Otsu's thresholding for waterbody detection, the study reliably extracted water surface areas, allowing clear annual changes in waterbodies to be observed. When comparing the time series of surface water areas derived from satellite images to actual measured water levels, a high correlation coefficient above 0.8 was found for the water supply dams. However, the agricultural reservoirs showed a lower correlation, between 0.5 and 0.7, attributed to the characteristics of agricultural reservoir management and the inadequacy of the comparative data rather than to the satellite image analysis itself. The analysis also revealed several inconsistencies in the results for smaller reservoirs, indicating the need for further studies on these reservoirs. The changes in surface water area calculated using GEE provide valuable spatial information on waterbody changes across the entire watershed, which cannot be identified solely by measuring water levels. This highlights the usefulness of efficiently processing extensive long-term satellite imagery data. Based on these findings, it is expected that future research could apply this method to a larger number of dam reservoirs with varying sizes, shapes, and monitoring statuses, potentially yielding additional insights into different reservoir groups.
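As a minimal illustration of the water-extraction step described above (NDWI followed by Otsu's thresholding), the sketch below computes an NDWI image from green and NIR bands and thresholds it with scikit-image. The band arrays are placeholders; the study's actual processing ran on Landsat/Sentinel-2 imagery in Google Earth Engine.

```python
# Sketch: NDWI + Otsu thresholding for surface-water extraction (placeholder data).
import numpy as np
from skimage.filters import threshold_otsu

# placeholder reflectance bands; in practice these come from Landsat/Sentinel-2 scenes
green = np.random.rand(100, 100).astype(np.float32)
nir = np.random.rand(100, 100).astype(np.float32)

# NDWI = (green - NIR) / (green + NIR); water pixels tend toward positive values
ndwi = (green - nir) / (green + nir + 1e-9)

# Otsu's method picks a threshold separating water from non-water automatically
thresh = threshold_otsu(ndwi)
water_mask = ndwi > thresh

# surface water area in hectares, assuming 10 m pixels (Sentinel-2 resolution)
pixel_area_m2 = 10 * 10
area_ha = water_mask.sum() * pixel_area_m2 / 10_000
print(f"Otsu threshold: {thresh:.3f}, estimated water area: {area_ha:.1f} ha")
```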

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.17-35
    • /
    • 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions on new markets based on the future demand estimated by the models. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among these, the Bass model, which explains demand in terms of two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure qualified results. In the beginning of a new market, however, the observations are not sufficient for the models to estimate the market's future demand precisely. For this reason, demand inferred from the most adjacent markets is often used as a reference in such cases. Reference markets can be those whose products are developed with the same categorical technologies. A market's demand may be expected to follow a pattern similar to that of a reference market when the adoption pattern of a product in the market is determined mainly by the technology related to the product. However, this process does not always ensure satisfactory results, because the similarity between markets is judged by intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach. One is the abundance of candidate reference markets to consider, and the other is the difficulty of calculating the similarity between markets. First, there can be too many markets to consider in selecting reference markets. Mostly, markets in the same category of an industrial hierarchy can serve as reference markets because they are usually based on similar technologies. However, markets can be classified into different categories even if they are based on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Next, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand. Even when no reference markets are missing, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For this reason, this study proposes a case-based expert system that helps experts overcome these drawbacks in discovering reference markets. First, the study proposes the use of the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters. Then, missing markets with the characteristics of each cluster are searched for. Potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates. Finally, the new market's parameters are estimated from the reference markets. Two techniques are used in the model: one is the clustering data mining technique, and the other is the content-based filtering used in recommender systems. The proposed system implemented with these techniques can determine the most adjacent markets based on whether a user accepts the candidate markets.
Experiments were conducted with five ICT experts to validate the usefulness of the system. In the experiments, the experts were given a list of 16 ICT markets whose parameters were to be estimated. For each market, the experts first estimated its growth curve parameters by intuition, and then with the system. A comparison of the experimental results shows that the estimated parameters are closer to the actual values when the experts use the system than when they guess them without the system.
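To make the estimation step concrete, here is a minimal sketch of fitting Bass model parameters (p for innovators, q for imitators, m for market potential) to a short demand series with SciPy. The sales figures are hypothetical, and the expert system described above additionally uses reference-market clustering, which is not shown.

```python
# Sketch: fitting Bass diffusion model parameters to a demand series (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Cumulative adoptions m * F(t) under the Bass model."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# hypothetical yearly sales of a new product (units), converted to cumulative demand
sales = np.array([120, 310, 650, 1100, 1500, 1700, 1600, 1300])
cum_sales = np.cumsum(sales)
t = np.arange(1, len(sales) + 1, dtype=float)

# initial guesses: small p, larger q, m a bit above observed cumulative demand
(p, q, m), _ = curve_fit(bass_cumulative, t, cum_sales,
                         p0=[0.03, 0.4, cum_sales[-1] * 1.5],
                         maxfev=10_000)
print(f"p = {p:.4f}, q = {q:.4f}, m = {m:.0f}")

# forecast cumulative demand a few periods ahead with the fitted parameters
future_t = np.arange(1, 13, dtype=float)
forecast = bass_cumulative(future_t, p, q, m)
print("forecast cumulative demand:", np.round(forecast).astype(int))
```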

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock trading, technical analysis is more suitable for short-term trading prediction and for the application of statistical and mathematical techniques. Most studies using these technical indicators have built models that predict stock prices through a binary classification (rising or falling) of future market movements, usually for the next trading day. However, such binary classification has many shortcomings for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we try to predict the stock index by extending the existing binary scheme to a multi-class system of stock index trends (upward trend, boxed, downward trend). This multi-classification problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA) or artificial neural networks (ANN); instead, we propose an optimization model that uses a genetic algorithm as a wrapper to improve the performance of multi-class support vector machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and the selection of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional MSVM, which has been known to show the best prediction performance so far, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT and CBR. In particular, it was confirmed that instance selection plays a very important role in predicting the stock index trend and contributes more to the improvement of the model than other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI 200 stock index. Our research is primarily aimed at predicting trend segments in order to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as the price and volatility indices (2004-2017) of the KOSPI 200 stock index in Korea and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the data for each class was used for training and the remaining 30% for verification. To verify the performance of the proposed model, several comparative experiments with MDA, MLOGIT, CBR, ANN and MSVM were conducted.
The MSVM adopted the one-against-one (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all the comparative models.
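A compact sketch of the wrapper idea described above follows: a genetic algorithm evolves a binary mask over features and training instances (plus an SVM C-parameter index) and evaluates each candidate by the validation accuracy of a one-vs-one multi-class SVM. The data, GA settings and fitness design are simplified placeholders, not the paper's experimental setup.

```python
# Sketch: GA wrapper for joint feature/instance selection with a multi-class SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# placeholder 3-class data standing in for the technical/macro indicators
X, y = make_classification(n_samples=400, n_features=15, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

N_FEAT, N_INST = X_tr.shape[1], X_tr.shape[0]
C_GRID = [0.1, 1.0, 10.0, 100.0]

def fitness(chrom):
    """Validation accuracy of an OvO SVM trained on the selected features/instances."""
    f_mask = chrom[:N_FEAT].astype(bool)
    i_mask = chrom[N_FEAT:N_FEAT + N_INST].astype(bool)
    C = C_GRID[int(chrom[-1]) % len(C_GRID)]
    if f_mask.sum() == 0 or len(np.unique(y_tr[i_mask])) < 3:
        return 0.0  # degenerate chromosome
    clf = SVC(C=C, kernel="rbf", decision_function_shape="ovo")
    clf.fit(X_tr[i_mask][:, f_mask], y_tr[i_mask])
    return clf.score(X_val[:, f_mask], y_val)

def random_chrom():
    return np.r_[rng.integers(0, 2, N_FEAT + N_INST), rng.integers(0, len(C_GRID))]

def crossover(a, b):
    cut = rng.integers(1, len(a) - 1)
    return np.r_[a[:cut], b[cut:]]

def mutate(chrom, rate=0.02):
    chrom = chrom.copy()
    flip = rng.random(N_FEAT + N_INST) < rate
    chrom[:N_FEAT + N_INST][flip] = 1 - chrom[:N_FEAT + N_INST][flip]
    if rng.random() < rate:
        chrom[-1] = rng.integers(0, len(C_GRID))
    return chrom

# simple generational GA with tournament selection and elitism
pop = [random_chrom() for _ in range(20)]
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    new_pop = [pop[int(scores.argmax())]]  # keep the best chromosome
    while len(new_pop) < len(pop):
        i, j = rng.choice(len(pop), 2, replace=False)
        k, l = rng.choice(len(pop), 2, replace=False)
        p1 = pop[i] if scores[i] >= scores[j] else pop[j]
        p2 = pop[k] if scores[k] >= scores[l] else pop[l]
        new_pop.append(mutate(crossover(p1, p2)))
    pop = new_pop
    print(f"generation {gen}: best validation accuracy = {scores.max():.3f}")
```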