• Title/Summary/Keyword: 비효율 (inefficiency)


Studies on the Effect of Diffusion Process to Decay Resistance of Mine Props (간이처리법(簡易處理法)에 의한 갱목(坑木)의 내부효력(耐腐効力)에 관한 연구(硏究))

  • Shim, Chong Supp;Shin, Dong So;Jung, Hee Suk
    • Journal of Korean Society of Forest Science / v.29 no.1 / pp.1-19 / 1976
  • This study was conducted to examine the present status of coal mine props, which are in great demand for coal production despite the serious shortage of timber resources in Korea, and to investigate the effect of the diffusion process on the decay resistance of mine props treated with the preservatives Malenit and chromated zinc chloride. The results are as follows. 1. Present status of coal mine props: Total demand for coal mine props in 1975 was approximately 456 thousand cubic meters. The main species used for mine props are conifers (mainly Pinus densiflora) and hardwoods (mainly Quercus), in roughly equal proportions. With no fixed specification, timber of widely varying size and form is used, and the volume of wood consumed per ton of coal produced also varies widely, from 0.017 to 0.03 cubic meters. 2. Decay resistance test: a) The decrease in oven-dry weight did not differ significantly between untreated and treated specimens, although their average values differed somewhat; this may be due to the short duration of the test. b) The compression strength test gave the same result as the weight-loss comparison, presumably for the same reason. c) The amount of extractives in one percent sodium hydroxide (NaOH) was larger for untreated specimens than for treated ones. 3. In field application, treated wood proved more economical than untreated wood over the long term, although its initial cost is somewhat higher than that of untreated wood.
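The decay-resistance comparison above rests on the percent loss of oven-dry weight between the start and end of the test. A minimal sketch of that calculation (the specimen masses below are hypothetical, not values from the study):

```python
def percent_weight_loss(oven_dry_before_g, oven_dry_after_g):
    """Percent of initial oven-dry mass lost during the decay test."""
    return (oven_dry_before_g - oven_dry_after_g) / oven_dry_before_g * 100.0

# Hypothetical specimen masses in grams, not values from the study.
print(percent_weight_loss(50.0, 46.0))   # untreated specimen, ~8 % loss
print(percent_weight_loss(50.0, 48.5))   # treated specimen, ~3 % loss
```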


Effects of Strains and Environmental Factors on Economic Traits in Korean Native Chicken (한국재래닭의 경제 형질에 미치는 계통 및 환경의 효과)

  • Sang, Byeong-Don;Choi, Cheol-Hwan;Kim, Hak-Gyu;Kim, Si-Dong;Jang, Byeong-Gwi;Na, Jae-Cheon;Yu, Dong-Jo;Lee, Sang-Jin;Lee, Jun-Heon
    • Korean Journal of Poultry Science / v.30 no.4 / pp.235-244 / 2003
  • This study was carried out to estimate the effects of strain and generation on the major production performances of Korean native chicken. The data were collected from 11,583 birds over 7 generations, from 1995 through 2001, at the National Livestock Research Institute, Korea. The results are summarized as follows. The body weights at 150 and 270 days of age were 1,557.7 and 1,880.7 g, 1,471.7 and 1,738.2 g, 1,393.5 and 1,694.9 g, 1,591.2 and 1,910.0 g, and 1,545.6 and 1,763.6 g in the Red Brown, Yellow Brown, Gray Brown, Black, and White strains, respectively. The coefficients of variation (CVs) of the body weights at 150 and 270 days of age were 9.65~13.79% and 13.29~15.16%. The ages at first egg were 150.0, 148.3, 149.5, 152.8, and 147.7 days in the Red Brown, Yellow Brown, Gray Brown, Black, and White strains, respectively, and the CVs were between 9.33 and 10.11%. The egg weights at first egg and at 270 days of age were 33.2 and 50.8 g, 32.2 and 49.2 g, 32.2 and 49.1 g, 33.0 and 50.0 g, and 30.7 and 47.8 g, respectively; the CVs were 13.54~15.27% and 6.93~7.36%. The numbers of eggs produced by 270 days of age were 77.0, 79.3, 77.3, 73.7, and 75.4, respectively, with CVs between 29.98 and 36.99%. Significant strain effects were found for all the major economic traits in Korean native chicken. The highest least squares (LS) means of body weight at 150 and 270 days of age were observed in the Black strain, at 1,594.38 and 1,911.57 g. The earliest LS mean age at first egg was 146.88 days, in the White strain. The heaviest LS mean egg weights at first egg and at 270 days of age were observed in the Red Brown strain, at 32.20 and 50.74 g. The largest LS mean egg production was 79.50 eggs, in the Yellow Brown strain.
Significant generation effects were also found for all the major economic traits. The highest LS means of body weight at 150 and 270 days of age were observed in generation 3, at 1,599.74 and 1,905.01 g. The earliest LS mean age at first egg was 143.31 days, in the 4th generation. The heaviest LS mean egg weights at first egg and at 270 days of age were observed in the 7th and 5th generations, at 35.68 and 50.42 g. The largest LS mean egg production was 78.53 eggs, in generation 6. In general, lighter body weight, earlier age at first egg, heavier egg weight, and larger egg production were observed as the generations proceeded.
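The coefficients of variation quoted throughout this abstract are the sample standard deviation expressed as a percentage of the mean. A minimal sketch of that calculation (the weights below are hypothetical, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical 150-day body weights (g) for six birds, not the study's data.
weights = [1450, 1520, 1610, 1380, 1575, 1490]
print(f"CV = {coefficient_of_variation(weights):.2f}%")
```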

Analysis and Improvement Strategies for Korea's Cyber Security Systems Regulations and Policies

  • Park, Dong-Kyun;Cho, Sung-Je;Soung, Jea-Hyen
    • Korean Security Journal / no.18 / pp.169-190 / 2009
  • Today, the rapid advance of scientific technology has brought about fundamental changes in the types and levels of terrorism, and a war against more than one thousand terrorist and criminal organizations around the world has already begun. A method highly likely to be employed by terrorist groups using 21st century state-of-the-art technology is cyber terrorism. In many instances, things imaginable only in fiction can be made real in cyberspace. A simple example would be randomly altering a letter in the blood type of a target in a health care data system, which could harm individuals and contribute to overturning an opponent's system or regime. The CIH virus crisis of April 26, 1999 had significant implications in various respects: a virus program of just a few lines, written by a Taiwanese college student without any specific objective, spread widely over the Internet, damaging 30,000 PCs in Korea and causing over 2 billion won in repair and data recovery costs. Despite such risks of cyber terrorism, a great number of Korean sites employ loose security measures; in fact, there are many cases where a company with millions of subscribers has very lax security systems. Nationwide preparation for cyber terrorism is called for. In this context, this research analyzes the current status of Korea's cyber security systems and laws from a policy perspective, and then proposes improvement strategies. This research suggests the following solutions. First, a National Cyber Security Management Act should be passed to serve as the effective national cyber security management regulation. With the Act's establishment, a more efficient and proactive response to cyber security management will become possible within a nationwide cyber security framework, and the Act's relationship with other related laws will be defined.
The newly passed National Cyber Security Management Act will eliminate inefficiencies caused by functional redundancies dispersed across individual sectors in current legislation. Second, to ensure efficient nationwide cyber security management, national cyber security standards and models should be proposed, and at the same time a national cyber security management organizational structure should be established to implement national cyber security policies across government agencies and social components. The National Cyber Security Center must serve as the comprehensive collection, analysis, and processing point for information related to national cyber crises, oversee each government agency, and build collaborative relations with the private sector. Also, a national, comprehensive response system in which both the private and public sectors participate should be set up, for advance detection and prevention of cyber crisis risks and for a consolidated and timely response using national resources in times of crisis.


The Correction Factor of Sensitivity in Gamma Camera - Based on Whole Body Bone Scan Image - (감마카메라의 Sensitivity 보정 Factor에 관한 연구 - 전신 뼈 영상을 중심으로 -)

  • Jung, Eun-Mi;Jung, Woo-Young;Ryu, Jae-Kwang;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology / v.12 no.3 / pp.208-213 / 2008
  • Purpose: A whole body bone scan is one of the most frequently performed examinations in nuclear medicine. Asan Medical Center uses various gamma camera systems, manufactured by PHILIPS (PRECEDENCE, BRIGHTVIEW), SIEMENS (ECAM, ECAM signature, ECAM plus, SYMBIA T2), and GE (INFINIA), to perform whole body scans. However, the sensitivities of these cameras are not the same, which makes consistent diagnosis of patients difficult. Our purpose is to exclude uncontrollable factors in whole body bone scans and to correct controllable factors such as the inherent sensitivity of each gamma camera. In this study, we measured each gamma camera's sensitivity and investigated reasonable correction factors so that patients can be followed up across different gamma cameras. Materials and Methods: We used a $^{99m}Tc$ flood phantom, prepared in accordance with IAEA recommendations based on the typical count rate of a whole body scan, and measured count rates on the gamma cameras (PRECEDENCE, BRIGHTVIEW, ECAM, ECAM signature, ECAM plus, SYMBIA T2, INFINIA) in the nuclear medicine department of Asan Medical Center. For the sensitivity measurements, all gamma cameras were equipped with LEHR (Low Energy High Resolution multi-parallel) collimators, the $^{99m}Tc$ energy window was set to about 15% around the 140 keV photopeak, and data were acquired for 60 and 120 seconds on each camera. To verify whether the calculated correction factors can be applied to whole body bone scans, we performed whole body bone scans on 27 patients and compared and analyzed the results. Results: In the $^{99m}Tc$ flood phantom experiment, the sensitivity of the ECAM plus was highest, followed by the ECAM signature, SYMBIA T2, ECAM, BRIGHTVIEW, INFINIA, and PRECEDENCE.
The resulting sensitivity correction factors express each gamma camera's relative sensitivity ratio, normalized to the sensitivity of the ECAM (ECAM plus 1.07, ECAM signature 1.05, SYMBIA T2 1.03, ECAM 1.00, BRIGHTVIEW 0.90, INFINIA 0.83, PRECEDENCE 0.72). The difference between the correction factors obtained from the $^{99m}Tc$ experiment and those obtained from the whole body bone scans was statistically insignificant (P>0.05) for whole body bone scan diagnosis. Conclusion: In diagnosing bone metastasis in cancer patients, whole body bone scans are performed as follow-up tests because of their advantages (high sensitivity, non-invasiveness, ease of performance). In follow-up, however, it is difficult to perform every whole body bone scan on the same gamma camera, and even when the same camera is used, changes in equipment performance over time must be considered. We therefore expect that applying sensitivity correction factors to patients who undergo regular whole body bone scans will add consistency to their diagnosis.
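One plausible way to apply the relative sensitivity ratios reported above is to divide a camera's counts by its factor, putting all cameras on the ECAM scale. The factors below are the ones in the abstract; the example count value and the normalization direction are illustrative assumptions, not the paper's stated procedure:

```python
# Relative sensitivity ratios reported in the abstract, normalized to ECAM = 1.00.
SENSITIVITY_FACTOR = {
    "ECAM plus": 1.07, "ECAM signature": 1.05, "SYMBIA T2": 1.03,
    "ECAM": 1.00, "BRIGHTVIEW": 0.90, "INFINIA": 0.83, "PRECEDENCE": 0.72,
}

def ecam_equivalent_counts(raw_counts, camera):
    """Divide out a camera's relative sensitivity to express its counts
    on the ECAM scale, so scans from different cameras can be compared."""
    return raw_counts / SENSITIVITY_FACTOR[camera]

# 720 kcounts on the PRECEDENCE correspond to ~1000 kcounts on the ECAM.
print(ecam_equivalent_counts(720.0, "PRECEDENCE"))
```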


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis continue to increase, which means that big data analysis will become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. The entry barriers to big data analysis are therefore gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis; among the many text mining techniques used for research, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze all documents at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to many documents, and it causes a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling over a large number of documents with limited system resources and improves processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, taking the global topics as the ideal answer, the deviation of each local topic from its global topic needs to be measured. Because of these difficulties, this approach has not been studied as extensively as other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling over the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
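The local-to-global mapping step described above can be sketched as assigning each local topic to its most similar global topic, for example by cosine similarity between topic-word distributions. The matching criterion and the toy vectors below are illustrative assumptions, not the paper's exact procedure:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two topic-word probability vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic to its most similar global topic.
    Returns (local_index, global_index, similarity) triples."""
    mapping = []
    for i, t in enumerate(local_topics):
        sims = [cosine(t, g) for g in global_topics]
        j = max(range(len(sims)), key=sims.__getitem__)
        mapping.append((i, j, sims[j]))
    return mapping

# Toy 4-word vocabulary; these topic vectors are illustrative, not the paper's.
global_topics = [[0.70, 0.20, 0.05, 0.05],   # global topic 0: word 0 dominant
                 [0.05, 0.05, 0.20, 0.70]]   # global topic 1: word 3 dominant
local_topics = [[0.60, 0.30, 0.05, 0.05],
                [0.10, 0.00, 0.30, 0.60]]
for i, j, s in map_local_to_global(local_topics, global_topics):
    print(f"local topic {i} -> global topic {j} (cos = {s:.2f})")
```

In practice the vectors would come from a topic model (e.g. LDA) fitted on each local set and on the reduced global set; the matching rule stays the same.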

A Study on the Improvement Plans of Police Fire Investigation (경찰화재조사의 개선방안에 관한 연구)

  • SeoMoon, Su-Cheol
    • Journal of Korean Institute of Fire Investigation / v.9 no.1 / pp.103-121 / 2006
  • We live in more comfortable circumstances thanks to social development and a higher standard of living, but we are also exposed to more fires on account of larger, taller, and deeper underground buildings and the use of various energy resources. Floor materials in modern residences have undergone various changes according to their use and are now employed as finishing materials for the floors of apartments, houses, and shops. There are many such materials in everyday contact, but first of all we need to experiment on fire spread with the heated floors (hypocausts) used in apartments and with commonly available floor coverings. Scientific investigators encounter incidents of arson or accidental fire closely connected with petroleum products on floor materials, which raise many problems. On this account, I propose that we conduct experiments on the fire patterns produced by each petroleum product and use them to discriminate accidental fire from arson. In an investigation, finding the ember seems essential to clearing up the cause of a fire, but it is not by itself the cause of the fire. Furthermore, all sorts of fire cases and accidents are covered by legislation and standards intended to minimize damage and allow an early response: we are supposed to install electric apparatus, automatic alarm equipment, and automatic fire extinguishers to protect ourselves from fire, check them regularly, escape promptly when a fire breaks out, and construct fire-proof buildings to prevent flames from spreading to neighboring areas.
Namely, several factors must be taken into consideration when investigating the cause of a fire-related case or accident, which means it is unreasonable for one investigator or one investigative team to clarify both the point of origin and the cause of a fire. Accordingly, this thesis confines its explanations to the judgement and verification of the cause of a fire and the specific fire-spreading area, through investigation of the actual spot where a fire broke out. Fire discernment likewise focuses on the early-stage fire-spreading area and the fire-outbreak sources, and the realities and problems of police fire investigation remain a matter of debate. The cause of a fire must be determined by logical judgement on the basis of abundant scientific knowledge and experience covering the whole of fire phenomena. The judgement of the cause should center on the fire-spreading situation at the scene, and verification should proceed by situational proof from the traces of fire spread back to the fire-outbreak sources. The causal relation of a fire outbreak should not be established by arbitrary opinion detached from concrete facts, and deduction from coincidence invites mistakes. It is absolutely necessary to observe objectively and grasp the situation of a fire when investigating its cause; looking at the scene with prejudice is not allowed. The source of the fire outbreak itself is liable to be treated as the cause of the fire, which makes us doubt results that vary according to the interests of the independent investigators.
So to speak, each party sets about its investigation with its own hopes: the police hoping it is not arson, the fire department hoping it is not a problem of installations or equipment, insurance companies hoping it is arson, electrical specialists hoping it is not an electrical defect, and gas specialists hoping it is not a gas problem. Under these conditions one cannot expect fair investigation or dispel misgivings, because the ignition source itself is taken as the cause of the fire, and civil or criminal responsibility attaches to it. Therefore, investigating the cause of a fire should be conducted with independent research, investigation, and appraisal, and finally the cause should be clarified by putting the results together.


Soil Surface Fixation by Direct Sowing of Zoysia japonica with Soil Improvement on the Dredged Soil Slope (해저준설토 사면에서 개량제 처리에 의한 한국들잔디 직파 지표고정 공법에 관한 연구)

  • Jeong, Yong-Ho;Lee, Im-Kyun;Seo, Kyung-Won;Lim, Joo-Hoon;Kim, Jung-Ho;Shin, Moon-Hyun
    • Journal of the Korean Society of Environmental Restoration Technology / v.14 no.4 / pp.1-10 / 2011
  • This study was conducted to compare the growth of Zoysia japonica under different soil treatments on the Saemangeum sea dike, which is filled with dredged soil. Zoysia japonica was planted by the sod-pitching method on the control plot; on the plots treated with forest soil and soil improvement, Zoysia japonica seeds were sprayed mechanically. Sixteen months after planting, coverage rate, leaf length, leaf width, and root length were measured and analyzed, and three Zoysia japonica samples per plot were collected for nutrient analysis. The coverage rate was 100% in the B treatment plot (dredged soil + $40kg/m^3$ soil improvement + forest soil), the C treatment plots (dredged soil + $60kg/m^3$ soil improvement + forest soil), and the D treatment plots (dredged soil + $60kg/m^3$ soil improvement), while only 43% of the soil surface was covered with Zoysia japonica on the control plots. Leaf width was greatest in the C treatment plots (3.79 mm), followed by the D (3.49 mm), B (2.40 mm), and control (1.97 mm) plots. The leaf and root lengths of the D treatment were 30.18 cm and 13.18 cm, the highest among the treatments; leaf length was followed by the C, B, and A treatments, and root length by the C, A, and B treatments. The nitrogen and phosphate contents of the above-ground part of Zoysia japonica were highest in the C treatment, followed by the D, B, and A treatments; those of the underground part were highest in the D treatment, followed by the C, A, and B treatments. The C and D treatments showed the best results in every aspect of grass growth. These results could be used to identify a cost-effective way to improve soil quality for soil surface fixation on reclaimed areas using grass species.

Quality Assurance for Intensity Modulated Radiation Therapy (세기조절방사선치료(Intensity Modulated Radiation Therapy; IMRT)의 정도보증(Quality Assurance))

  • Cho Byung Chul;Park Suk Won;Oh Do Hoon;Bae Hoonsik
    • Radiation Oncology Journal / v.19 no.3 / pp.275-286 / 2001
  • Purpose : To set up quality assurance (QA) procedures for implementing intensity modulated radiation therapy (IMRT) clinically, and to report the QA procedures performed for one patient with prostate cancer. Materials and methods : $P^3IMRT$ (ADAC) and a linear accelerator (Siemens) with a multileaf collimator (MLC) were used to implement IMRT. First, the positional accuracy and reproducibility of the MLC and the leaf transmission factor were evaluated. RTP commissioning was performed again to account for small-field effects. After RTP recommissioning, a test plan for a C-shaped PTV was made using 9 intensity modulated beams, and the calculated isocenter dose was compared with the one measured in a solid water phantom. As patient-specific IMRT QA, one patient with prostate cancer was planned using 6 beams with a total of 74 segmented fields. The same beams were used to recalculate dose in a solid water phantom; the dose of these beams was measured with a 0.015 cc micro-ionization chamber, a diode detector, films, and an array detector, and compared with the calculation. Results : The positioning accuracy of the MLC was about 1 mm, and the reproducibility was around 0.5 mm. For the leaf transmission factor for 10 MV photon beams, interleaf leakage was measured as $1.9\%$ and midleaf leakage as $0.9\%$ relative to a $10\times10\;cm^2$ open field. Penumbra measurements with film, the diode detector, the micro-ionization chamber, and a conventional 0.125 cc chamber showed that the $80\~20\%$ penumbra width measured with the 0.125 cc chamber was 2 mm larger than that of film, which means a 0.125 cc ionization chamber is unacceptable for measuring fields as small as a 0.5 cm beamlet. After RTP recommissioning, the discrepancy between the measured and calculated dose profiles for a small field of $1\times1\;cm^2$ was less than $2\%$.
The isocenter dose of the C-shaped PTV test plan was measured twice with the micro-ionization chamber in the solid phantom; errors of up to $12\%$ were found for individual beams, but the total delivered dose agreed with the calculation within $2\%$. The transverse dose distribution measured with EC-L film generally agreed with the calculation. The isocenter dose for the patient measured in the solid phantom agreed within $1.5\%$. On-axis dose profiles of each individual beam at the position of the central leaf, measured with film and the array detector, showed that outside the field the calculation underestimated the dose by about $2\%$, while inside the field the measurement agreed within $3\%$ except at some positions. Conclusion : IMRT requires tighter quality control of the MLC than conventional large-field treatment, and QA procedures that check the intensity pattern more efficiently need to be developed. In conclusion, we set up appropriate QA procedures for IMRT through a series of verifications, including measurement of the absolute dose at the isocenter with a micro-ionization chamber, film dosimetry to verify the intensity pattern, and measurement with an array detector to compare off-axis dose profiles.
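The agreement criteria quoted above (within 2%, 3%, etc.) amount to checking the percent deviation of a measured dose from the calculated one. A minimal sketch (the dose values below are hypothetical, not the paper's measurements):

```python
def percent_deviation(measured, calculated):
    """Percent deviation of a measured dose from the calculated dose."""
    return (measured - calculated) / calculated * 100.0

def within_tolerance(measured, calculated, tol_percent=2.0):
    """True if the measured dose agrees with the calculation within tol_percent."""
    return abs(percent_deviation(measured, calculated)) <= tol_percent

# Hypothetical isocenter doses in cGy, not the paper's measured values.
print(within_tolerance(198.5, 200.0))   # -0.75 % deviation, within 2 %
print(within_tolerance(190.0, 200.0))   # -5.0 % deviation, outside 2 %
```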


A Study of Guidelines for Genetic Counseling in Preimplantation Genetic Diagnosis (PGD) (착상전 유전진단을 위한 유전상담 현황과 지침개발을 위한 기초 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Kang, Inn-Soo;Jeong, Seon-Yong;Kim, Hyon-J.
    • Journal of Genetic Medicine / v.7 no.2 / pp.125-132 / 2010
  • Purpose: Preimplantation genetic diagnosis (PGD), also known as embryo screening, is a pre-pregnancy technique used to identify genetic defects in embryos created through in vitro fertilization. PGD is considered a means of prenatal diagnosis of genetic abnormalities. It is used when one or both genetic parents have a known genetic abnormality; testing is performed on an embryo to determine whether it also carries the abnormality. The main advantage of PGD is the avoidance of selective pregnancy termination, as it imparts a high likelihood that the baby will be free of the disease under consideration. The application of PGD to genetic practice, reproductive medicine, and genetic counseling is becoming a key component of fertility practice because a custom PGD design must be developed for each couple. Materials and Methods: In this study, a survey on the contents of genetic counseling in PGD was carried out via direct contact or e-mail with patients and specialists who had experience with PGD during the three months from February to April 2010. Results: A total of 91 persons responded: 60 patients (49 with a chromosomal disorder and 11 with a single gene disorder) and 31 PGD specialists. Analysis of the survey results revealed that all respondents were well aware of the importance of genetic counseling in all steps of PGD, including planning, operation, and follow-up. The patient group responded that the possibility of unexpected results (51.7%), genetic risk assessment and recurrence risk (46.7%), reproduction options (46.7%), the procedure and limitations of PGD (43.3%), and information on PGD technology (35.0%) should be included in genetic counseling. Approximately 96.7% of specialists replied that a non-M.D.
genetic counselor is necessary for effective and systematic genetic counseling in PGD, because it is difficult for physicians to offer satisfactory information to patients due to lack of counseling time and of specific knowledge of the disorders. Conclusions: The survey provides important insight into the overall present situation of genetic counseling for PGD in Korea. The results demonstrate a general awareness that genetic counseling is essential for PGD, suggesting that appropriate genetic counseling may play an important role in the success of PGD. The establishment of genetic counseling guidelines for PGD may contribute to better planning and management strategies for PGD.

Different Uptake of Tc-99m ECD and Tc-99m HMPAO in the Normal Brains: Analysis by Statistical Parametric Mapping (정상 뇌 혈류 영상에서 방사성의약품에 따라 혈류 분포에 차이가 있는가: 통계적 파라미터 지도를 사용한 분석)

  • Kim, Euy-Neyng;Jung, Yong-An;Sohn, Hyung-Sun;Kim, Sung-Hoon;Yoo, Ie-Ryung;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine / v.36 no.4 / pp.244-254 / 2002
  • Purpose: This study investigated the differences between technetium-99m ethyl cysteinate dimer (Tc-99m ECD) and technetium-99m hexamethylpropylene amine oxime (Tc-99m HMPAO) uptake in the normal brain by means of statistical parametric mapping (SPM) analysis. Materials and Methods: We retrospectively analyzed 53 age- and sex-matched cases of normal brain SPECT: 32 obtained with Tc-99m ECD and 21 with Tc-99m HMPAO. There were no abnormal findings on the brain MRIs. All SPECT images were spatially transformed to standard space, smoothed, and globally normalized. The differences between the Tc-99m ECD and Tc-99m HMPAO SPECT images were statistically analyzed using statistical parametric mapping (SPM'99) software, and differences between the two groups were considered significant at a threshold of corrected P values less than 0.05. Results: SPM analysis revealed significantly different uptake of Tc-99m ECD and Tc-99m HMPAO in the normal brain. On the Tc-99m ECD SPECT images, relatively higher uptake was observed in the frontal, parietal, and occipital lobes, in the basal ganglia and thalamus, and in the superior region of the cerebellum. On the Tc-99m HMPAO SPECT images, relatively higher uptake was observed in the subcortical areas of the frontal region, the temporal lobe, and the posterior portion of the inferior cerebellum. Conclusion: Uptake of Tc-99m ECD and Tc-99m HMPAO in the normal-looking brain differed significantly on SPM analysis. The selective use of Tc-99m ECD or Tc-99m HMPAO in brain SPECT imaging appears especially valuable for the interpretation of cerebral perfusion. Further investigation is necessary to determine which tracer is more accurate for diagnosing different clinical conditions.
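At each voxel, an SPM group comparison like this one reduces to a two-sample t statistic on the globally normalized uptake values. A minimal pooled-variance sketch (the subject values are hypothetical, and real SPM additionally applies smoothing and multiple-comparison correction):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(xs, ys):
    """Pooled-variance two-sample t statistic, the per-voxel quantity
    behind a basic SPM group contrast."""
    nx, ny = len(xs), len(ys)
    pooled = ((nx - 1) * variance(xs) + (ny - 1) * variance(ys)) / (nx + ny - 2)
    return (mean(xs) - mean(ys)) / sqrt(pooled * (1 / nx + 1 / ny))

# Globally normalized uptake at one voxel across subjects (hypothetical values).
ecd   = [1.12, 1.08, 1.15, 1.10, 1.09]
hmpao = [1.01, 0.98, 1.03, 1.00, 0.99]
print(f"t = {two_sample_t(ecd, hmpao):.2f}")
```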