• Title/Summary/Keyword: 자동 (automatic)


Analysis of Patient Effective Dose in PET/CT; Using CT Dosimetry Programs (CT 선량 측정 프로그램을 이용한 PET/CT 검사 환자의 예측 유효 선량의 분석)

  • Kim, Jung-Sun;Jung, Woo-Young;Park, Seung-Yong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.77-82
    • /
    • 2010
  • Purpose: As PET/CT has come into wide use, radiation exposure in clinical practice has increased, and the Korea Food and Drug Administration has issued a patient DRL (Diagnostic Reference Level) for CT scans. In this study, to build a basis for patient dose reduction, we analyzed the effective dose delivered by the CT transmission scan. Materials and Methods: From February to March 2010, 180 patients (age: $55{\pm}16$ years, weight: $61.0{\pm}10.4$ kg) who underwent $^{18}F$-FDG PET/CT at Asan Medical Center were studied. Biograph Truepoint 40 (SIEMENS, Germany), Biograph Sensation 16 (SIEMENS, Germany) and Discovery STe8 (GE Healthcare, USA) scanners were used. For each scanner, doses were analyzed for 30 male and 30 female patients. Because body weight has the largest influence on the automatic exposure control system that modulates the dose, patients were also divided into three weight groups (under 50 kg, 50-60 kg, and over 60 kg) and the mean dose of each group was compared. Effective doses calculated with CT-Expo v1.7 and ImPACT v1.0 were compared, and the relationship between body weight and effective dose was analyzed. Results: With CT-Expo v1.7, the effective doses for BIO40, BIO16 and DSTe8 were $6.46{\pm}1.18$ mSv, $9.36{\pm}1.96$ mSv and $9.36{\pm}1.96$ mSv, respectively, for the 30 male patients, and $6.29{\pm}0.97$ mSv, $10.02{\pm}2.42$ mSv and $9.05{\pm}2.27$ mSv, respectively, for the 30 female patients. With ImPACT v1.0, the effective doses for BIO40, BIO16 and DSTe8 were $6.54{\pm}1.21$ mSv, $8.36{\pm}1.69$ mSv and $9.74{\pm}2.55$ mSv, respectively, for the 30 male patients, and $5.87{\pm}1.09$ mSv, $8.43{\pm}1.89$ mSv and $9.19{\pm}2.29$ mSv, respectively, for the female patients. When the patients were divided into the three weight groups (under 50 kg, 50-60 kg, and over 60 kg), the effective doses were 6.27 mSv, 7.67 mSv and 9.33 mSv, respectively, with CT-Expo v1.7, and 5.62 mSv, 7.22 mSv and 8.91 mSv, respectively, with ImPACT v1.0. Body weight and effective dose showed a strong positive correlation (r=0.743, r=0.693). Conclusion: Such dose evaluation programs make it possible to predict and evaluate the effective dose without performing a phantom study, and they can be used to collect basic data for CT dose management.

  • PDF
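A minimal sketch (not part of the paper) of the kind of analysis described in the abstract above: grouping per-patient effective doses by body weight and computing the Pearson correlation between weight and dose. The arrays and numbers are illustrative placeholders, not the study's data.

```python
# Sketch: group effective doses by body weight and correlate weight with dose.
# All numbers below are made-up placeholders, not data from the study.
import numpy as np
from scipy import stats

weight_kg = np.array([45.0, 52.0, 58.0, 63.0, 70.0, 81.0])      # hypothetical patient weights
effective_dose_msv = np.array([5.9, 6.8, 7.4, 8.6, 9.1, 10.2])  # hypothetical CT-Expo results

# Divide patients into the three weight groups used in the study (<50, 50-60, >60 kg)
groups = {"<50 kg": weight_kg < 50,
          "50-60 kg": (weight_kg >= 50) & (weight_kg <= 60),
          ">60 kg": weight_kg > 60}
for name, mask in groups.items():
    print(f"{name}: mean dose = {effective_dose_msv[mask].mean():.2f} mSv")

# Pearson correlation between body weight and effective dose
r, p = stats.pearsonr(weight_kg, effective_dose_msv)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```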

The Photography as Technological Aesthetics (데크놀로지 미학으로서의 사진)

  • Jin, Dong-Sun
    • Journal of Science of Art and Design
    • /
    • v.11
    • /
    • pp.221-249
    • /
    • 2007
  • Today, photography faces a crisis of identity and a dilemma of ontology arising from digital imaging, the new technological form. To say that the traditional photographic medium has changed the way we view the world and ourselves is perhaps an understatement; photography has transformed our essential understanding of reality. Photographic images are no longer regarded as true automatic recordings, innocent evidence, or a mirror of reality. Rather, photography constructs the world for our entertainment, helping to create the comforting illusions by which we live. The recognition that photographs are constructions rather than mere reflections of reality is the basis of the actual condition of the contemporary photographic world, and it comes as a shock. The aim of this thesis is to examine the problems of photographic identity and the ontological crisis that controls and regulates digital photographic imagery in the era of electronic simulation and reproduction. Photography loses the special aesthetic status it held when traditional film and paper stood both for technological accuracy and for a medium-specific aesthetic, and it is no longer pure information or exclusive evidence. As a result, photography faces two crises: one of photographic ontology (the introduction of computerized digital images) and one of photographic epistemology (broader changes in ethics, knowledge, and culture). Taken together, these crises apparently threaten us with the death of photography, with the 'end' of photography and the culture it sustains. The thesis examines the dilemma of photography's ontology and epistemology, especially the automatic index and the digital code, in terms of the medium's origin, meaning, and identity as a technological medium. In particular, it focuses on the presence of the analog image, grounded in the material world, and the presence of the digital image, grounded in the cultural situation of our society. It also examines how the main issues in the history of photography have been concentrated on ontological arguments since the discovery of photography in 1839. Photography has never been a single, static technological form; rather, its nearly two centuries of development have been marked by numerous competing technological innovations and self-revolutions. This thesis examines recent accounts of photography by analyzing the medium's concept, meaning, and identity between film-based and digital-based images from the perspectives of photographic ontology and epistemology. The structure of the thesis is fairly straightforward: it examines what appear to be two opposing views of photographic conditions and ontological situations. It contrasts views that locate the value of photography in its fundamental characteristics as a medium, and it seeks a possible solution to the dilemma of photographic ontology by tracing the medium's origin from the early years of the nineteenth century to the questions now being raised about the different meanings (analog/digital) of photography. Finally, the thesis concludes that the photographic ontological crisis reflects a paradoxical, dynamic structure left unresolved since the origins of the medium itself. Moreover, photography is not a single identity of photographic ontology; it cannot be understood as having a static identity or singular status within the dynamic field of technologies, practices, and images.

  • PDF

Influence of Age on The Adenosine Deaminase Activity in Patients with Exudative Pleural Effusion (연령의 증가가 삼출성 흉수 Adenosine Deaminase 활성도에 미치는 영향)

  • Yeon, Kyu-Min;Kim, Chong-Ju;Kim, Jeong-Soo;Kim, Chi-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.53 no.5
    • /
    • pp.530-541
    • /
    • 2002
  • Background: Pleural fluid adenosine deaminase (ADA) activity can be helpful in the differential diagnosis of an exudative pleural effusion because it is increased in tuberculous pleural effusion. ADA activity is determined mainly by lymphocyte function, and age-associated immune decline is characterized by a decrease in T-lymphocyte function. For that reason, the pleural fluid ADA level may be lower in older patients with exudative pleural effusion. This study focused on the influence of age on pleural fluid ADA activity in patients with exudative pleural effusion. Methods: A total of 81 patients with exudative pleural effusion were enrolled. In all patients, pleural fluid ADA activity was measured using an automated kinetic method. Results: The mean age of the patients was $52.7{\pm}21.2$ years. In all patients with exudative pleural effusion, pleural fluid ADA activity differed significantly between young patients (under 65 years of age) and old patients (p<0.05) and showed a negative correlation with age (r=-0.325, p<0.05). In the 60 patients with tuberculous pleural effusion, pleural fluid ADA activity also differed significantly between young and old patients ($103.5{\pm}36.9$ IU/L vs. $72.2{\pm}31.6$ IU/L, p<0.05) and showed a negative correlation with age (r=-0.384, p<0.05). In the 21 patients with non-tuberculous exudative pleural effusion, pleural fluid ADA activity was similar in young and old patients ($23.7{\pm}15.3$ IU/L vs. $16.1{\pm}10.2$ IU/L, p>0.05) and showed no correlation with age (r=-0.263, p>0.05). The diagnostic cutoff value of pleural fluid ADA activity for tuberculous pleural effusion was lower in the older patients (25.9 IU/L) than in the younger patients (49.1 IU/L) or in all patients (38.4 IU/L) with exudative pleural effusion. Conclusion: Tuberculous pleural effusion should remain an important consideration in older patients with a compatible clinical picture, even when no marked increase in pleural fluid ADA activity is detected; for diagnosing tuberculous pleural effusion in old patients, the ADA cutoff should be set lower.
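For illustration only: one common way to derive a diagnostic cutoff like those reported above is to maximize Youden's index on an ROC curve. The sketch below uses invented ADA values and scikit-learn; it is a generic stand-in, not the authors' actual statistical method.

```python
# Sketch: pick a diagnostic ADA cutoff by maximizing Youden's J on an ROC curve.
# The ADA values below are invented placeholders, not the study's measurements.
import numpy as np
from sklearn.metrics import roc_curve

ada_iu_l = np.array([110.0, 95.0, 72.0, 60.0, 41.0, 33.0, 24.0, 18.0, 12.0, 9.0])
is_tb    = np.array([1,     1,    1,    1,    1,    0,    0,    0,    0,    0])  # 1 = tuberculous

fpr, tpr, thresholds = roc_curve(is_tb, ada_iu_l)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"Suggested cutoff: {thresholds[best]:.1f} IU/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```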

Development of a split beam transducer for measuring fish size distribution (어체 크기의 자동 식별을 위한 split beam 음향 변환기의 재발)

  • 이대재;신형일
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.37 no.3
    • /
    • pp.196-213
    • /
    • 2001
  • A split beam ultrasonic transducer operating at a frequency of 70 kHz, for use in a fish sizing echo sounder, was developed and its acoustic radiation characteristics were experimentally analyzed. An amplitude shading method based on the Chebyshev polynomials was used to obtain side lobe levels below -20 dB and to optimize the relationship between main beam width and side lobe level; the amplitude shading coefficient of each element was realized by changing the amplitude contribution of the elements with 4 weighting transformers embedded in the planar array transducer assembly. The planar array split beam transducer assembly was composed of 36 rod-type piezoelectric ceramic elements (NEPEC N-21, Tokin), 10 mm in diameter and 18.7 mm in length, resonant at 70 kHz and arranged in a rectangular configuration, and 4 electrical inputs were supplied to the beamformer. A series of impedance measurements was conducted to check the uniformity of the individual quadrants, and the resonant frequency and the transmitting and receiving characteristics were measured in a water tank in both the transmission and reception configurations. The results are summarized as follows: 1. The average resonant and antiresonant frequencies of the electrical impedance for the four quadrants of the split beam transducer in water were 69.8 kHz and 83.0 kHz, respectively. The average electrical impedance of each quadrant was 49.2 $\Omega$ at the resonant frequency and 704.7 $\Omega$ at the antiresonant frequency. 2. The resonance peak in the transmitting voltage response (TVR) for all four quadrants was observed at 70.0 kHz, with a TVR of about 165.5 dB re 1 $\mu$Pa/V at 1 m and a bandwidth of 10.0 kHz between the -3 dB points. The resonance peak in the receiving sensitivity (SRT) for the four combined quadrants (quad LU+LL, quad RU+RL, quad LU+RU, quad LL+RL) was observed at 75.0 kHz, with an SRT of about -177.7 dB re 1 V/$\mu$Pa and a bandwidth of 10.0 kHz between the -3 dB points. The sum beam transmitting voltage response and receiving sensitivity was 175.0 dB re 1 $\mu$Pa/V at 1 m at 75.0 kHz, with a bandwidth of 10.0 kHz. 3. The sum beam of the split beam transducer was approximately circular, with a half beam angle of $9.0^\circ$ at the -3 dB points in both the horizontal and vertical planes. The first measured side lobe levels of the sum beam were -19.7 dB at $22^\circ$ and -19.4 dB at $-26^\circ$ in the horizontal plane, and -20.1 dB at $22^\circ$ and -22.0 dB at $-26^\circ$ in the vertical plane. 4. The developed split beam transducer was tested to estimate the angular position of a target in the beam through split beam phase measurements, and the beam pattern loss for target strength corrections was measured and analyzed.

  • PDF
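A minimal numerical sketch of the Chebyshev amplitude-shading idea used in the work above: weighting a small uniform line array with a Dolph-Chebyshev window designed for roughly -20 dB side lobes and inspecting the resulting array factor. The 6-element line geometry, half-wavelength spacing, and sound speed are illustrative assumptions, not the transducer's actual design values.

```python
# Sketch: Dolph-Chebyshev amplitude shading of a small line array at 70 kHz.
# Geometry and spacing are illustrative; only the shading principle matches the paper.
import numpy as np
from scipy.signal.windows import chebwin

c = 1500.0            # assumed speed of sound in water [m/s]
f = 70e3              # operating frequency [Hz]
lam = c / f           # wavelength [m]
n = 6                 # elements per side (the paper's array is 6 x 6 = 36 elements)
d = lam / 2           # assumed half-wavelength element spacing

weights = chebwin(n, at=20)          # Chebyshev window for ~ -20 dB side lobes
weights /= weights.max()

theta = np.radians(np.linspace(-90, 90, 1801))
k = 2 * np.pi / lam
positions = (np.arange(n) - (n - 1) / 2) * d

# Array factor of the shaded line array (one plane of the planar array)
af = np.abs(np.exp(1j * k * np.outer(np.sin(theta), positions)) @ weights)
af_db = 20 * np.log10(af / af.max())

# Exclude the main lobe (first null near ~24 deg for this design) and report
# the highest remaining side lobe; it should come out close to -20 dB.
main_lobe = np.abs(np.degrees(theta)) < 25.0
print(f"Shading weights: {np.round(weights, 3)}")
print(f"Peak side lobe level: {af_db[~main_lobe].max():.1f} dB")
```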

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons which learn parameters such as weights as inputs pass through the network to its outputs. A CNN has a layer structure well suited for image classification, as it is comprised of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel alone or of professional models wearing it. Such images may not be effective for training a classification model intended to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNN is composed of pre-training and fine-tuning stages, we divide the training step into two. First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. As we could not find any previously and publicly available runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model on images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, product image, or street fashion image. Specifically, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street style query images can be classified and labeled by brand or style during fashion editorial work, and website query images can be processed by e-commerce multi-complex services that provide item information or recommend similar items.
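A minimal transfer-learning sketch in the spirit of the pre-train/fine-tune procedure described above. It uses Keras with an ImageNet-pretrained InceptionV3 (a GoogLeNet-family stand-in for the paper's TF-Slim GoogLeNet checkpoint); the directory layout, image size, and training hyperparameters are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: fine-tune an ImageNet-pretrained Inception backbone on runway images.
# Paths, image size, and hyperparameters are placeholders, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_BRANDS = 32          # 32 fashion brands, as in the paper
IMG_SIZE = (299, 299)    # InceptionV3's expected input size

# 1) Pre-trained backbone (ImageNet weights), convolutional base only
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the backbone for the initial fine-tuning phase

# 2) New classification head for the runway-brand task
model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1, input_shape=IMG_SIZE + (3,)),  # scale to [-1, 1]
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# 3) Fine-tune on a hypothetical directory of runway images, one folder per brand
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/val", image_size=IMG_SIZE, batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=5)
```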

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy. There has been continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate market sizes for product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can address unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity, and the product group clustering method could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they will further improve the performance of the basic model conceptually proposed in this study.
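A minimal sketch of the pipeline described above, using gensim's Word2Vec: embed product-name tokens, pull names similar to an index term by cosine similarity, and sum the matching companies' sales. The product names, sales figures, similarity threshold, and toy corpus size are illustrative assumptions (the paper trains on 345,103 records with vector size 300 and window 15).

```python
# Sketch: estimate a product-group market size by Word2Vec similarity clustering.
# The corpus, sales table, and threshold are toy placeholders, not the study's data.
from gensim.models import Word2Vec

# Tokenized product names reported by (hypothetical) individual companies
product_names = [
    ["industrial", "robot", "arm"],
    ["robot", "vacuum", "cleaner"],
    ["collaborative", "robot", "arm"],
    ["office", "chair"],
    ["gaming", "chair"],
]
sales_by_product = {  # hypothetical annual sales per product record (KRW millions)
    "industrial robot arm": 1200, "robot vacuum cleaner": 800,
    "collaborative robot arm": 450, "office chair": 300, "gaming chair": 150,
}

# Tiny vectors/window for the toy corpus; the paper used size=300, window=15
model = Word2Vec(sentences=product_names, vector_size=50, window=5,
                 min_count=1, epochs=200, seed=1, workers=1)

index_word = "robot"   # stand-in for one KSIC index term
threshold = 0.3        # toy cosine-similarity cutoff; tuned on real data in the paper

group, market_size = [], 0
for name, sales in sales_by_product.items():
    tokens = name.split()
    sim = model.wv.n_similarity([index_word], tokens)   # cosine similarity of mean vectors
    if index_word in tokens or sim > threshold:
        group.append(name)
        market_size += sales

print("Product group:", group)
print("Estimated market size:", market_size, "million KRW")
```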

A Hybrid Recommender System based on Collaborative Filtering with Selective Use of Overall and Multicriteria Ratings (종합 평점과 다기준 평점을 선택적으로 활용하는 협업필터링 기반 하이브리드 추천 시스템)

  • Ku, Min Jung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.85-109
    • /
    • 2018
  • A recommender system recommends items that a customer is expected to purchase in the future based on his or her previous purchase behavior, and it has served as a tool for realizing one-to-one personalization for e-commerce service companies. Traditional recommender systems, especially those based on collaborative filtering (CF), the most popular recommendation algorithm in both academia and industry, are designed to generate the recommendation list using a single criterion, the 'overall rating'. However, this has critical limitations in understanding customers' preferences in detail. Recently, to mitigate these limitations, some leading e-commerce companies have begun to collect feedback from their customers in the form of 'multicriteria ratings'. Multicriteria ratings enable companies to understand their customers' preferences from multidimensional viewpoints, and because they are quantitative, they are easy to handle and analyze. However, recommendation using multicriteria ratings also has the limitation that it may omit detailed information on a user's preference, because in most cases it considers only three to five predetermined criteria. Against this background, this study proposes a novel hybrid recommender system that selectively uses the results of 'traditional CF' and 'CF using multicriteria ratings'. Our proposed system is based on the premise that some people have a holistic preference scheme, whereas others have a composite preference scheme. Thus, our system is designed to use traditional CF with the overall rating for users with holistic preferences, and CF with multicriteria ratings for users with composite preferences. To validate the usefulness of the proposed system, we applied it to a real-world dataset for POI (point-of-interest) recommendation. Personalized POI recommendation is attracting more attention as location-based services such as Yelp and Foursquare grow in popularity. The dataset was collected from university students via a Web-based online survey system, through which we collected overall ratings as well as ratings for each criterion for 48 POIs located near K university in Seoul, South Korea. The criteria were 'food or taste', 'price', and 'service or mood'. As a result, we obtained 2,878 valid ratings from 112 users. Among the 48 items, 38 items (80%) were used as the training dataset and the remaining 10 items (20%) as the validation dataset. To examine the effectiveness of the proposed system (the hybrid selective model), we compared its performance with that of two comparison models, traditional CF and CF with multicriteria ratings. The performance of the recommender systems was evaluated using two metrics, average MAE (mean absolute error) and precision-in-top-N. Precision-in-top-N represents the percentage of truly high overall ratings among the N items that the model predicted would be most relevant for each user. The experimental system was developed using Microsoft Visual Basic for Applications (VBA). The experimental results showed that our proposed system (avg. MAE = 0.584) outperformed both traditional CF (avg. MAE = 0.591) and multicriteria CF (avg. MAE = 0.608). We also found that multicriteria CF performed worse than traditional CF on our dataset, which contradicts the results of most previous studies. This result supports the premise of our study that people have two different types of preference schemes, holistic and composite. Besides MAE, the proposed system outperformed all comparison models in precision-in-top-3, precision-in-top-5, and precision-in-top-7. Paired-samples t-tests showed that, in terms of average MAE, our proposed system outperformed traditional CF at the 10% statistical significance level and multicriteria CF at the 1% level. The proposed system sheds light on how to understand and utilize users' preference schemes in the recommender systems domain.
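A compact sketch of the selective idea described above: predict a user's unknown overall rating twice, once from overall ratings only (traditional CF) and once from multicriteria ratings, then keep whichever scheme has served that user better historically. The tiny rating matrices, the cosine similarity choice, and the selection rule are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: selective hybrid of overall-rating CF and multicriteria CF.
# Ratings below are toy numbers; the selection rule is a simplified stand-in.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(target_profile, neighbor_profiles, neighbor_overall):
    """Similarity-weighted average of neighbors' overall ratings for one item."""
    sims = np.array([cosine(target_profile, p) for p in neighbor_profiles])
    return sims @ neighbor_overall / (np.abs(sims).sum() + 1e-9)

# Overall ratings (rows = neighbor users, cols = items the active user also rated)
overall_profiles = np.array([[4., 5., 3.], [2., 3., 4.], [5., 4., 4.]])
overall_on_target = np.array([4., 3., 5.])          # neighbors' ratings of the target item

# Multicriteria ratings for the same items: (food, price, service) flattened per user
multi_profiles = np.array([[4., 3., 5., 5., 4., 4., 3., 2., 3.],
                           [2., 4., 3., 3., 3., 4., 4., 5., 4.],
                           [5., 4., 5., 4., 4., 3., 4., 4., 5.]])

# The active user's past profile in both representations
user_overall = np.array([4., 4., 3.])
user_multi   = np.array([4., 3., 4., 4., 4., 4., 3., 3., 3.])

pred_overall = predict(user_overall, overall_profiles, overall_on_target)
pred_multi   = predict(user_multi, multi_profiles, overall_on_target)

# Selective step: use whichever scheme had the lower MAE for this user on held-out items
user_mae = {"holistic": 0.55, "composite": 0.70}     # hypothetical historical errors
prediction = pred_overall if user_mae["holistic"] <= user_mae["composite"] else pred_multi
print(f"overall-CF: {pred_overall:.2f}, multicriteria-CF: {pred_multi:.2f}, "
      f"selected: {prediction:.2f}")
```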

Effect of the Angle of Ventricular Septal Wall on Left Anterior Oblique View in Multi-Gated Cardiac Blood Pool Scan (게이트 심장 혈액풀 스캔에서 심실중격 각도에 따른 좌전사위상 변화에 대한 연구)

  • You, Yeon Wook;Lee, Chung Wun;Seo, Yeong Deok;Choi, Ho Yong;Kim, Yun Cheol;Kim, Yong Geun;Won, Woo Jae;Bang, Ji-In;Lee, Soo Jin;Kim, Tae-Sung
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.1
    • /
    • pp.13-19
    • /
    • 2016
  • Purpose: In order to calculate the left ventricular ejection fraction (LVEF) accurately, it is important to acquire the best septal view of the left ventricle in the multi-gated cardiac blood pool (GBP) scan. This study aims to acquire the best septal view by measuring the angle of the ventricular septal wall (${\theta}$) on enhanced CT and to compare it with the conventional method using the left anterior oblique (LAO) 45 view. Materials and Methods: From March to July 2015, we analyzed 253 patients who underwent both enhanced chest CT and a GBP scan in the department of nuclear medicine at the National Cancer Center. The angle (${\theta}$) between the ventricular septum and an imaginary midline was measured on the transverse image of the enhanced chest CT scan, and patients for whom the difference between ${\theta}$ and 45 degrees was more than 10 degrees were included. GBP scans were acquired in both the LAO 45 and LAO ${\theta}$ views, and LVEFs measured with the automated and manual region-of-interest modes (Auto-ROI and Manual-ROI) were analyzed. Results: The mean ${\pm}$ SD of ${\theta}$ for all 253 patients was $37.0{\pm}8.5^{\circ}$. Among them, 88 patients ($29.3{\pm}6.1^{\circ}$) showed a difference of more than ${\pm}10$ degrees between 45 degrees and ${\theta}$. In Auto-ROI mode, there was a statistically significant difference between LAO 45 and LAO ${\theta}$ (LVEF $45=62.0{\pm}6.6%$ vs. LVEF ${\theta}=64.0{\pm}5.6%$; P = 0.001). In Manual-ROI mode, there was also a statistically significant difference between LAO 45 and LAO ${\theta}$ (LVEF $45=66.7{\pm}7.2%$ vs. LVEF ${\theta}=69.0{\pm}6.4%$; P < 0.001). The intraclass correlation coefficients of both methods were more than 95%. Comparing Auto-ROI and Manual-ROI within each of the LAO 45 and LAO ${\theta}$ views, there was no statistically significant difference. Conclusion: We could measure the angle of the ventricular septal wall accurately using the transverse image of the enhanced chest CT and apply it to the LAO acquisition of the GBP scan. This may be an effective alternative method for acquiring the best septal LAO view, and we observed a significant difference between the conventional LAO 45 and LAO ${\theta}$ views.

  • PDF
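For reference, LVEF in a gated blood pool study is conventionally computed from background-corrected counts in the left-ventricular ROI at end-diastole and end-systole. The sketch below shows that standard textbook formula with placeholder counts; it is not the site's specific processing software.

```python
# Sketch: standard background-corrected LVEF from end-diastolic (ED) and
# end-systolic (ES) left-ventricular ROI counts in a gated blood pool scan.
def lvef_percent(ed_counts: float, es_counts: float, bkg_counts: float) -> float:
    """LVEF = (net ED - net ES) / net ED * 100, where net = ROI counts - background."""
    net_ed = ed_counts - bkg_counts
    net_es = es_counts - bkg_counts
    return 100.0 * (net_ed - net_es) / net_ed

# Placeholder counts for one hypothetical study (not values from the paper)
print(f"LVEF = {lvef_percent(ed_counts=24000, es_counts=11000, bkg_counts=4000):.1f}%")
```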

Utility of Wide Beam Reconstruction in Whole Body Bone Scan (전신 뼈 검사에서 Wide Beam Reconstruction 기법의 유용성)

  • Kim, Jung-Yul;Kang, Chung-Koo;Park, Min-Soo;Park, Hoon-Hee;Lim, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.83-89
    • /
    • 2010
  • Purpose: The Wide Beam Reconstruction (WBR) algorithm provided by UltraSPECT, Ltd. (U.S.) improves image resolution by eliminating the effect of the collimator line spread function and suppressing noise. It controls the resolution and noise level automatically and yields excellent image quality. The aim of this study was to evaluate the usefulness of WBR for whole-body bone scans in clinical application. Materials and Methods: Standard line source and single photon emission computed tomography (SPECT) spatial resolution measurements were performed on an INFINA (GE, Milwaukee, WI) gamma camera equipped with low energy high resolution (LEHR) collimators. Line source measurements were acquired with total counts of 200 kcps and 300 kcps, and the spatial resolution of the SPECT phantom was analyzed while changing the matrix size. A clinical evaluation was also performed with forty-three patients referred for bone scans. In the first group, the scan speed was varied between 20 and 30 cm/min with an administered dose of 740 MBq (20 mCi) of $^{99m}Tc$-HDP; in the second group, the dose of $^{99m}Tc$-HDP was varied between 740 and 1,110 MBq (20 mCi and 30 mCi) at the same scan speed. The acquired data were reconstructed using both the typical clinical protocol and the WBR protocol. Patient information was removed and a blind reading was done for each reconstruction method; for each reading, a questionnaire was completed in which the reader evaluated the images on a scale of 1 to 5 points. Results: Planar WBR data improved resolution by more than 10%, with the full width at half maximum (FWHM) improving by about 16% (standard: 8.45, WBR: 7.09). SPECT WBR data improved resolution by about 50% or more in terms of FWHM (standard: 3.52, WBR: 1.65). In the clinical evaluation, there was no statistically significant difference between the two methods in bone-to-soft-tissue ratio or image resolution (first group p=0.07, second group p=0.458). Conclusion: The WBR method makes it possible to shorten the acquisition time of bone scans while providing improved image quality, and to reduce the dosage of radiopharmaceuticals, thereby reducing radiation dose. Therefore, the WBR method can be applied to a wide range of clinical applications to provide clinical value as well as image quality.

  • PDF
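The resolution figures above are reported as the FWHM of a line-source profile. The sketch below shows one common way to read FWHM off a measured line spread function by linear interpolation at half maximum; the Gaussian test profile and its width are illustrative assumptions, not measured data.

```python
# Sketch: estimate FWHM of a line spread function by interpolating at half maximum.
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked profile, via linear interpolation."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the half-maximum crossings on both sides of the peak
    x_left = np.interp(half, [profile[left - 1], profile[left]], [x[left - 1], x[left]])
    x_right = np.interp(half, [profile[right + 1], profile[right]], [x[right + 1], x[right]])
    return x_right - x_left

# Synthetic line-source profile: a Gaussian with sigma = 3.6 mm (FWHM ~ 8.5 mm)
x = np.linspace(-30, 30, 601)               # position along the profile [mm]
profile = np.exp(-x**2 / (2 * 3.6**2))
print(f"FWHM ~ {fwhm(x, profile):.2f} mm")  # expect about 2.355 * 3.6 = 8.48 mm
```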

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords lack them, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques; keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as keywords for that document, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of the keywords assigned by authors do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experimental results also show that 37% of author-assigned keywords are not included in the full text. This is the reason we decided to adopt the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, named IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
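A minimal sketch of the five-step IVSM assignment loop described above: build term-frequency vectors for each candidate keyword set and for a new document, then assign the keyword sets with the highest cosine similarity. The tiny keyword sets, weights, document text, and top-k choice are illustrative assumptions, not the authors' implementation.

```python
# Sketch: assign keyword sets to a document by cosine similarity (IVSM-style).
# Keyword sets, weights, and the document text are toy placeholders.
import math
from collections import Counter

keyword_sets = {  # each candidate keyword set with per-keyword weights (step 1)
    "logistics":   {"logistics": 2.0, "shipping": 1.5, "port": 1.0},
    "retail":      {"distribution": 2.0, "retail": 1.5, "store": 1.0},
    "text mining": {"keyword": 2.0, "document": 1.5, "mining": 1.0},
}

document = """Automatic keyword generation assigns keywords to a document
by comparing document vectors with keyword set vectors"""

def cosine(u: dict, v: dict) -> float:
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Steps 2-3: preprocess/parse the target document and build its term-frequency vector
doc_vector = Counter(document.lower().replace(",", " ").split())

# Steps 4-5: score every keyword set against the document and keep the best ones
scores = {name: cosine(kw, doc_vector) for name, kw in keyword_sets.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:2]:
    print(f"{name}: similarity = {score:.3f}")
```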