• Title/Summary/Keyword: acquisition time


Real-time Body Surface Motion Tracking using the Couch Based Computer-controlled Motion Phantom (CBMP) and Ultrasonic Sensor: A Feasibility Study (CBMP (Couch Based Computer-Controlled Motion Phantom)와 초음파센서에 기반한 실시간 체표면 추적 시스템 개발: 타당성 연구)

  • Lee, Suk;Yang, Dae-Sik;Park, Young-Je;Shin, Dong-Ho;Huh, Hyun-Do;Lee, Sang-Hoon;Cho, Sam-Ju;Lim, Sang-Wook;Jang, Ji-Sun;Cho, Kwang-Hwan;Shin, Hun-Joo;Kim, Chul-Yong
    • Progress in Medical Physics
    • /
    • v.18 no.1
    • /
    • pp.27-34
    • /
    • 2007
  • Respiration gating radiotherapy, developed in consideration of the movement of the body surface and internal organs during respiration, is categorized into methods that analyze the respiratory volume for data processing and methods that track fiducial landmarks or dermatologic markers using radiography. However, these methods require high-priced equipment and are tied to specific radiotherapy systems, so a new method that avoids these problems is needed. This study aims to obtain body surface motion by using the couch based computer-controlled motion phantom (CBMP) and an ultrasonic (US) sensor, and to develop respiration gating techniques that adjust the patient's bed using the sign-inverted values of the acquired data. The CBMP, built to measure body surface motion, is composed of a BS II microprocessor, a sensor, a host computer, a stepping motor, etc., and a program to control and operate it was developed. After the CBMP was driven with random movement data and the phantom movements were acquired using the sensor, the two data sets were compared and analyzed. The movements caused by respiration were then acquired using a rabbit, and real-time respiration gating was achieved by driving the phantom with the sign-inverted values of the data. Analysis of the acquisition-correction delay time showed that the data values coincided within 1% and that the acquisition-correction delay was obtained in real time ($2.34{\times}10^{-4}$ sec). The maximum movement was 6 mm in the Z direction, with a respiratory cycle of 2.9 seconds. This study confirms the clinical applicability of respiration gating techniques using a CBMP and an ultrasonic sensor.

  • PDF
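The gating scheme above drives the couch with the sign-inverted surface signal so that the net target motion cancels. A minimal sketch of that idea, using a simulated sinusoidal breathing trace with the abstract's reported 2.9 s cycle and 6 mm peak-to-peak Z motion (the controller function and 100 Hz sampling rate are hypothetical, not the authors' implementation):

```python
import math

def compensating_couch_positions(surface_motion_mm):
    """Drive the couch with the sign-inverted surface signal so the
    net target position stays near zero (hypothetical ideal controller)."""
    return [-z for z in surface_motion_mm]

# Simulated 2.9 s respiratory cycle with 6 mm peak-to-peak Z motion,
# matching the figures reported in the abstract.
dt = 0.01                # 100 Hz sampling (assumed)
cycle, amp = 2.9, 3.0    # seconds, mm (amplitude = half of 6 mm)
t = [i * dt for i in range(int(cycle / dt))]
surface = [amp * math.sin(2 * math.pi * ti / cycle) for ti in t]
couch = compensating_couch_positions(surface)

# With a perfect, delay-free inversion the residual motion is zero;
# a real controller adds the acquisition-correction delay.
residual = max(abs(s + c) for s, c in zip(surface, couch))
print(f"max residual motion: {residual:.6f} mm")  # prints 0.000000 for ideal inversion
```

In practice the reported acquisition-correction delay ($2.34\times10^{-4}$ sec) would shift the couch trace slightly, leaving a small nonzero residual.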

The Differentiation of Benign from Malignant Soft Tissue Lesions using FDG-PET: Comparison between Semi-quantitative Indices (FDG-PET을 이용한 악성과 양성 연부조직 병변의 감별: 반정량적 지표간의 비교)

  • Choi, Joon-Young;Lee, Kyung-Han;Choe, Yearn-Seong;Choi, Yong;Kim, Sang-Eun;Seo, Jai-Gon;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.31 no.1
    • /
    • pp.90-101
    • /
    • 1997
  • The purpose of this study was to evaluate the diagnostic accuracy of various semi-quantitative indices for the differentiation of benign from malignant primary soft tissue tumors by FDG-PET. A series of 32 patients with histologically or clinically confirmed benign (20) or malignant (12) soft tissue lesions were evaluated with emission whole-body (5 min/bed position) PET after injection of [$^{18}F$]FDG. A regional 20 min transmission scan for attenuation correction and calculation of SUV was performed in 16 patients (10 benign, 6 malignant), followed by dynamic acquisition for 56 min; a post-injection transmission scan for attenuation correction and calculation of SUV was performed in the other 16 patients (10 benign, 6 malignant). The following indices were obtained: the peak and average SUV of lesions (pSUV, aSUV), the tumor-to-background ratio on images acquired at 51 min p.i. ($TBR_{51}$), the tumor-to-background ratio of the areas under the time-activity curves ($TBR_{area}$), and the ratio between the activity of the tumor ROI at 51 min p.i. and at the time at which the background ROI reaches maximum activity on the time-activity curves ($T_{51}/T_{max}$). The pSUV, aSUV, $TBR_{51}$, and $TBR_{area}$ in malignant lesions were significantly higher than those in benign lesions. We set the cut-off values of pSUV, aSUV, $TBR_{51}$, $TBR_{area}$ and $T_{51}/T_{max}$ for the differentiation of benign from malignant lesions at 3.5, 2.8, 5.1, 4.3 and 1.55, respectively. The sensitivity, specificity and accuracy were 91.7%, 80.0% and 84.4% by pSUV and aSUV; 83.3%, 85.0% and 84.4% by $TBR_{51}$; 83.3%, 100% and 93.8% by $TBR_{area}$; and 66.7%, 70.0% and 68.8% by $T_{51}/T_{max}$. The time-activity curves did not give additional information compared with SUV or TBR. The one false negative was a case of low-grade fibrosarcoma, and all four false positives were cases with inflammatory change on histology.
Visual analysis of FDG-PET also detected the metastatic lesions in malignant cases with comparable accuracy. In conclusion, pSUV, aSUV, $TBR_{51}$, and $TBR_{area}$ are all useful semi-quantitative metabolic indices with good accuracy for the differentiation of benign from malignant soft-tissue lesions.

  • PDF
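The cut-off-based differentiation described above reduces to counting true and false positives against a threshold. A minimal sketch with invented toy values (not the study's measurements), chosen so the pSUV cut-off of 3.5 reproduces the reported 91.7%/80.0%/84.4% figures for 12 malignant and 20 benign lesions:

```python
def diagnostic_metrics(values, labels, cutoff):
    """Sensitivity/specificity/accuracy for a 'value >= cutoff => malignant' rule.
    labels: True = malignant, False = benign."""
    tp = sum(v >= cutoff and l for v, l in zip(values, labels))
    fn = sum(v < cutoff and l for v, l in zip(values, labels))
    tn = sum(v < cutoff and not l for v, l in zip(values, labels))
    fp = sum(v >= cutoff and not l for v, l in zip(values, labels))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(values)
    return sens, spec, acc

# Toy pSUV data (invented): one false negative among 12 malignant,
# four false positives among 20 benign, as in the abstract.
malignant = [4.0] * 11 + [2.0]
benign = [2.5] * 16 + [5.0] * 4
values = malignant + benign
labels = [True] * 12 + [False] * 20

sens, spec, acc = diagnostic_metrics(values, labels, 3.5)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
# prints: sensitivity 91.7%, specificity 80.0%, accuracy 84.4%
```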

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations in selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform-independent characteristics. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focused on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a pre-development stage called the "feasibility study." This stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also to determine whether the project will be successful.
Third, METHONTOLOGY omits an explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to be applied in the conceptualization stage. Introducing methods of concept extraction from multiple informal sources, or methods of identifying relations, may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavyweight methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.4
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. The purpose of this essay is to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on the subject of library management to which he believes O.R. has great potential application, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part are discussed and summarized below. (1) Library location. In location problems it is usually necessary to strike a balance between accessibility and cost. Many mathematical methods are available for identifying the optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives by using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of the different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning.
Resource allocation problems are generally concerned with up to about one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty, so it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to make one in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence from controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials one needs to know, first, the implications of each policy for the services which depend on the stock, and second, the relative importance to be ascribed to each service for each class of user. By reducing the burden of the first, formal models will allow the librarian to concentrate his attention upon the value judgements necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which compare the skills necessary in the future with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data is available in libraries as a by-product of routine recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming.
One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph; more complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have simple means of telling whether performance could be regarded as satisfactory which, if it could not, would also provide pointers to what was wrong. (12) Data banks. It would appear worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must be assessed in relation to the corresponding measures for earlier periods of time and to a standard measure, which may be the corresponding measure in another library, the 'norm', the 'best practice', or user expectations.

  • PDF
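Point (2) above mentions comparing investment alternatives with discounted cash flow techniques. A minimal net-present-value sketch with hypothetical cost streams (both plans and the discount rate are invented for illustration):

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two hypothetical ways of achieving the same service objective:
# plan A has a large up-front cost and low running costs; plan B the reverse.
plan_a = [-100_000, -5_000, -5_000, -5_000, -5_000]
plan_b = [-40_000, -25_000, -25_000, -25_000, -25_000]
rate = 0.08  # assumed annual discount rate

# The less negative NPV is the more economical alternative.
print(npv(rate, plan_a) > npv(rate, plan_b))  # prints: True (plan A is cheaper)
```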

Feasibility Evaluation of Respiratory-Gated Radiation Therapy Simulation According to Respiratory Training in Lung Cancer Patients (폐암 환자의 호흡훈련에 의한 호흡동조 방사선치료계획의 유용성 평가)

  • Hong, Mi-Ran;Kim, Cheol-Jong;Park, Soo-Yeon;Choi, Jae-Won;Pyo, Hong-Ryeol
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.149-159
    • /
    • 2016
  • Purpose: To evaluate the usefulness of breathing training, we analyzed changes in the RPM signal and the diaphragm image before 4D respiratory-gated radiation therapy planning for lung cancer patients. Materials and Methods: Breathing training was performed on 11 patients receiving 4D respiratory-gated radiation therapy between April and August 2016. RPM signals and diaphragm images were obtained over three training steps: step 1, signal acquisition in the free-breathing state; step 2, respiratory signal acquisition guided by the displayed respiratory signal; step 3, acquisition of a regular respiratory signal after explanation and repeated training. The minimum, maximum, mean, and standard deviation of inspiration and expiration were obtained from the RPM signal and the diaphragm image at each step. Values were normalized to step 1, steps 2 and 3 were converted to relative ratios (%), and the change in the patient's internal respiratory motion was evaluated to assess the usefulness of breathing training for each patient. Results: The mean and standard deviation of each step were obtained with step 1 of the RPM signal and diaphragm amplitude as the 100% reference. In the RPM signal, the amplitude and standard deviation of 4 of the 11 patients (36.4%) decreased by an average of 18.1% and 27.6% in step 3, the standard deviation of 2 patients (18.2%) decreased by an average of 36.5%, and the amplitude alone of the other 4 patients (36.4%) decreased by an average of 13.1%. In step 3, the diaphragm image amplitude decreased by an average of 30% in 9 patients (81.8%) and increased by an average of 7.3% in 2 patients (18.2%). Compared with step 2, however, the step 3 amplitudes of the RPM signal and the diaphragm image decreased by an average of 52.6% and 42.1%, respectively, across all patients.
The relationship between the RPM signal and diaphragm image amplitudes showed consistent patterns of movement in steps 1, 2 and 3, except for patients No. 2 and No. 10. Conclusion: An optimized respiratory cycle can be induced when respiratory training is performed. By conducting respiratory training before treatment, the movement of the lung could be predicted and the patient's respiration controlled. Ultimately, breathing training is useful because it can minimize the systematic error of radiotherapy and allow more accurate treatment. This study was limited to analysis of data on respiratory training before treatment; verification with the actual CT plan and data acquired during treatment will be necessary in the future.

  • PDF
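The analysis above expresses each step's amplitude as a percentage of the step-1 (free-breathing) value. A minimal sketch of that normalization with hypothetical amplitudes (the 30% decrease mirrors the average reported for the diaphragm image, but the numbers are invented):

```python
def normalize_to_baseline(step_values, baseline):
    """Express each training step's amplitude as a percentage of the step-1 value."""
    return [100.0 * v / baseline for v in step_values]

# Hypothetical amplitudes (mm) for one patient across the three training steps
step1, step2, step3 = 10.0, 8.0, 7.0
ratios = normalize_to_baseline([step1, step2, step3], step1)
print(ratios)  # prints: [100.0, 80.0, 70.0]

change_step3 = 100.0 - ratios[2]
print(f"step-3 amplitude decreased by {change_step3:.1f}% relative to step 1")
# prints: step-3 amplitude decreased by 30.0% relative to step 1
```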

High Resolution MR Images from 3T Active-Shield Whole-Body MRI System (3T 능동차페형 전신 자기공명영상 장비로부터 얻어진 고해상도 자기공명영상)

  • Bo-Young Choe;Sei-Kwon Kang;Myoung-Ja Chu;Hyun-Man Baik;Euy-Neyng Kim
    • Investigative Magnetic Resonance Imaging
    • /
    • v.5 no.2
    • /
    • pp.138-148
    • /
    • 2001
  • Purpose: Within a clinically acceptable time frame, we obtained high-resolution MR images of the human brain, knee, foot and wrist from a 3T whole-body MRI system equipped with the world's first 3T active-shield magnet. Materials and Methods: Spin-echo (SE) and fast spin-echo (FSE) images were obtained from the brain, knee, foot and wrist of normal subjects using homemade birdcage and transverse electromagnetic (TEM) resonators operating in quadrature and tuned to 128 MHz. For acquisition of MR images of the knee, foot and wrist, we employed a homemade saddle-shaped RF coil. Typical common acquisition parameters were as follows: matrix = $512{\times}512$, field of view (FOV) = 20 cm, slice thickness = 3 mm, number of excitations (NEX) = 1. For T1-weighted MR images, we used TR = 500 ms and TE = 10 or 17.4 ms. For T2-weighted MR images, we used TR = 4000 ms and TE = 108 ms. Results: The signal-to-noise ratio (SNR) of the 3T system was measured to be 2.7 times greater than that of a prevalent 1.5T system. MR images obtained from the 3T system revealed numerous small venous structures throughout the image plane and provided reasonable delineation between gray and white matter. Conclusion: The present results demonstrate that MR images from the 3T system can provide better diagnostic quality in resolution and sensitivity than those of a 1.5T system. The elevated SNR observed in 3T high-field magnetic resonance imaging can be utilized to acquire images with a level of resolution approaching the microscopic structural level under in vivo conditions. These images represent a significant advance in our ability to examine small anatomical features with noninvasive imaging methods.

  • PDF

Three dimensional GPR survey for the exploration of old remains at Buyeo area (부여지역 유적지 발굴을 위한 3차원 GPR 탐사)

  • Kim Jung-Bo;Son Jeong-Sul;Yi Myeong-Jong;Lim Seong-Keun;Cho Seong-Jun;Jeong Ji-Min;Park Sam-Gyu
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.49-69
    • /
    • 2004
  • One of the important roles of geophysical exploration in archaeological surveys is to provide subsurface information for effective and systematic excavation of historical remains. Ground penetrating radar (GPR) can give us images of shallow subsurface structure with high resolution and is regarded as a useful and important technology in archaeological exploration. Since buried cultural relics are three-dimensional (3-D) objects in nature, a 3-D or areal survey is more desirable in archaeological exploration. A 3-D GPR survey, however, is in principle based on very dense data and can require much higher cost and longer exploration time than other geophysical methods, so it has not been applied to wide-area exploration as a routine procedure. It is therefore important to develop an effective way of conducting 3-D GPR surveys. In this study, we applied the 3-D GPR method to investigate possible historical remains of the Baekje Kingdom at Gatap-Ri, Buyeo city, prior to excavation. The principal purpose of the investigation was to provide high-resolution subsurface images for the excavation of the surveyed area. A further purpose was to investigate the applicability and effectiveness of a continuous data acquisition system newly devised for archaeological investigation. The system consists of two sets of GPR antennas and a precise measurement device that tracks the path of GPR antenna movement automatically and continuously. Besides this hardware, we adopted a data acquisition concept in which data were acquired arbitrarily rather than along pre-established profile lines, because establishing many profile lines would itself make the field work much longer and thus raise its cost. Owing to the newly devised system, we could acquire 3-D GPR data over a wide area of about $17,000 m^2$ in just two days of field work.
Although the 3-D GPR data were gathered randomly rather than along pre-established profile lines, we obtained high-resolution 3-D images showing many distinctive anomalies that could be interpreted as old agricultural lands, waterways, and artificial structures or remains. This case history led us to the conclusion that the 3-D GPR method can easily be used not only to examine a small anomalous area but also to investigate wider regions of archaeological interest. We expect that the 3-D GPR method will be applied as one of the standard exploration procedures for historical remains in Korea in the near future.

  • PDF

Evaluation of Contrast and Resolution on SPECT Before and After Scatter Correction (산란보정 전, 후의 SPECT 대조도 및 분해능 평가)

  • Seo, Myeong-Deok;Kim, Yeong-Seon;Jeong, Yo-Cheon;Lee, Wan-Kyu;Song, Jae-Beom
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.127-132
    • /
    • 2010
  • Purpose: Because of limitations of the image acquisition method and acquisition time, scatter correction cannot easily be performed in SPECT studies. In our hospital, however, scatter-corrected images could be provided to referring physicians after the introduction of a new-generation gamma camera with a simple scatter-correction function. Taking this opportunity, we compared scatter-corrected and non-scatter-corrected images from the point of view of image quality. Materials and Methods: We acquired 'Hoffman brain phantom' and '1 mm line phantom' SPECT images, 18 times each, with a GE Infinia Hawkeye 4 SPECT-CT gamma camera. First, we calculated the contrast of each axial slice of the scatter-corrected and non-scatter-corrected 'Hoffman brain phantom' SPECT images. Next, we calculated the horizontal and vertical FWHM of each axial slice of the scatter-corrected and non-scatter-corrected '1 mm line phantom' SPECT images. We then performed a t-test analysis with the SAS program on the contrast and resolution values of the scatter-corrected and non-scatter-corrected images. Results: With scatter correction, the contrast improved from 0.3509 to 0.3979 and the resolution value changed from 3.4822 to 3.6375; the p values were 0.0097 for contrast and <0.0001 for resolution, showing that scatter correction improved contrast and resolution. Conclusion: We obtained improved SPECT images through a simple and easy method, scatter correction, and expect to provide referring physicians with images improved in terms of both contrast and resolution.

  • PDF
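The comparison above computes a per-slice contrast and then a paired significance test over repeated acquisitions. A minimal sketch, assuming a Michelson-style contrast definition and hand-computing the paired t statistic (the abstract does not state its formulas, and the sample values are invented):

```python
import statistics

def roi_contrast(hot_mean, cold_mean):
    """Michelson-style contrast between hot and cold ROI means
    (assumed definition; the abstract does not give its formula)."""
    return (hot_mean - cold_mean) / (hot_mean + cold_mean)

def paired_t_statistic(a, b):
    """t statistic for a paired two-sample test (n - 1 degrees of freedom)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / n ** 0.5)

# Hypothetical per-acquisition contrasts, 18 acquisitions each
corrected = [0.40, 0.39, 0.41, 0.40, 0.38, 0.40] * 3
uncorrected = [0.35, 0.34, 0.36, 0.35, 0.35, 0.36] * 3

t = paired_t_statistic(corrected, uncorrected)
print(f"paired t = {t:.2f} over {len(corrected)} acquisitions")
```

A large positive t here corresponds to the abstract's finding that the corrected contrast is significantly higher (p = 0.0097 in their data).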

The Optimization of Reconstruction Method Reducing Partial Volume Effect in PET/CT 3D Image Acquisition (PET/CT 3차원 영상 획득에서 부분용적효과 감소를 위한 재구성법의 최적화)

  • Hong, Gun-Chul;Park, Sun-Myung;Kwak, In-Suk;Lee, Hyuk;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.13-17
    • /
    • 2010
  • Purpose: The partial volume effect (PVE) is a phenomenon that lowers the accuracy of an image through underestimation and occurs in PET/CT 3D image acquisition. The lower the resolution and the smaller the lesion, the larger the resulting error, which can influence the test result. We studied the optimal image reconstruction method by varying the parameters that influence the PVE. Materials and Methods: Images were acquired for 10 minutes for spheres of each size in a NEMA 2001 IEC phantom, with $^{18}F$-FDG injected into the hot spheres and background at a ratio of 4:1, on a GE Discovery STE 16. Iterative reconstruction was used, with the number of iterations varied from 2 to 50 and the number of subsets from 1 to 56. Regions of interest were fixed on detailed parts of the image, and the % difference and signal-to-noise ratio (SNR) were computed using $SUV_{max}$. Results: For the 10 mm sphere, with the iteration number fixed and the subset number changed to 2, 5, 8, 20 and 56, the SNR was 0.19, 0.30, 0.40, 0.48 and 0.45, respectively, and the total SNR over all spheres was 2.73, 3.38, 3.64, 3.63 and 3.38. Conclusion: From 6 to 20 iterations, the % difference and SNR showed similar values ($3.47{\pm}0.09$). Above 20 iterations, $SUV_{max}$ was increasingly underestimated owing to the influence of noise. At a given iteration number, the SNR was highest for subset numbers of 8 to 20. Therefore, considering reconstruction time, the partial volume effect of small lesions can be reduced by using 6 iterations and 8-20 subsets.

  • PDF
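The evaluation above derives a % difference and an SNR from repeated $SUV_{max}$ readings across reconstruction parameters. A minimal sketch with one plausible pair of definitions (the abstract does not state its formulas, and the readings are invented to mimic noise growth at high subset numbers):

```python
import statistics

def percent_difference(measured_suv, true_suv):
    """Recovery error of SUVmax against the known phantom activity ratio."""
    return 100.0 * (measured_suv - true_suv) / true_suv

def snr(suv_samples):
    """Mean over standard deviation of repeated SUVmax measurements
    (one plausible SNR definition; the abstract does not state its formula)."""
    return statistics.mean(suv_samples) / statistics.stdev(suv_samples)

# Hypothetical SUVmax readings of the 10 mm sphere at increasing subset numbers;
# the true hot-to-background ratio in the phantom is 4:1.
readings = {2: [1.0, 1.2, 0.8], 8: [2.2, 2.3, 2.1], 56: [2.6, 3.4, 2.0]}
for subsets, suvs in readings.items():
    print(subsets, round(snr(suvs), 2), round(percent_difference(max(suvs), 4.0), 1))
```

In this toy data the mid-range subset number gives the highest SNR, echoing the abstract's observation that 8-20 subsets work best at a fixed iteration count.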

Simulation and Measurement of Signal Intensity for Various Tissues near Bone Interface in 2D and 3D Neurological MR Images (2차원과 3차원 신경계 자기공명영상에서 뼈 주위에 있는 여러 조직의 신호세기 계산 및 측정)

  • Yoo, Done-Sik
    • Progress in Medical Physics
    • /
    • v.10 no.1
    • /
    • pp.33-40
    • /
    • 1999
  • Purpose: To simulate and measure the signal intensity of various tissues near the bone interface in 2D and 3D neurological MR images. Materials and Methods: In neurological proton-density (PD) weighted images, every component in the head including cerebrospinal fluid (CSF), muscle and scalp, with the exception of bone, is visualized. Images can be acquired in 2D or 3D. A 2D fast spin-echo (FSE) sequence was chosen for the 2D acquisition and a 3D gradient-echo (GE) sequence for the 3D acquisition. To find the signal intensities of CSF, muscle and fat (or scalp) for the 2D spin-echo (SE) and 3D gradient-echo (GE) imaging sequences, the theoretical signal intensities for 2D SE and 3D GE were calculated. For the 2D FSE sequence, a long TR (4000 ms) and short $TE_{eff}$ (22 ms) were employed to produce the PD-weighted image. For the 3D GE sequence, a low flip angle (8$^{\circ}$) with short TR (35 ms) and short TE (3 ms) was used to produce PD-weighted contrast. Results: The 2D FSE sequence depicted CSF, muscle and scalp with superior image contrast and an SNR of 39-57, while the 3D GE sequence depicted CSF, muscle and scalp with broadly similar image contrast and an SNR of 26-33. The SNR in the FSE image was better than that in the GE image, and the skull edges appeared very clearly in the FSE image owing to the edge enhancement effect of the FSE sequence. Furthermore, the contrast between CSF, muscle and scalp in the 2D FSE image was significantly better than in the 3D GE image, owing to the strong signal intensities (or SNR) from CSF, muscle and scalp and the enhanced edges of CSF. Conclusion: The signal intensity of various tissues near the bone interface in neurological MR images has been simulated and measured. Both the simulation and the imaging of the 2D SE and 3D GE sequences showed CSF, fat and muscle with broadly similar image intensity and SNR, succeeding in giving all tissues approximately the same signal.
Moreover, in the 2D FSE sequence, the image contrast between CSF, muscle and scalp was good, the SNR was relatively high, and the imaging time was relatively short.

  • PDF
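The theoretical signal intensities mentioned above follow the standard SE and spoiled gradient-echo equations. A minimal sketch evaluating them at the abstract's PD-weighted parameters, with literature-typical (not the paper's) relaxation times for CSF; the T2* value is an assumption:

```python
import math

def se_signal(pd, t1, t2, tr, te):
    """Spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

def spoiled_ge_signal(pd, t1, t2s, tr, te, flip_deg):
    """Spoiled gradient-echo (FLASH) signal:
    S = PD * sin(a) * (1 - E1) / (1 - cos(a) * E1) * exp(-TE/T2*), E1 = exp(-TR/T1)."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr / t1)
    return pd * math.sin(a) * (1 - e1) / (1 - math.cos(a) * e1) * math.exp(-te / t2s)

# Literature-typical CSF relaxation times (ms); illustrative only
csf = dict(pd=1.0, t1=4000.0, t2=2000.0)

# PD-weighted 2D SE parameters from the abstract: TR 4000 ms, TE 22 ms
print(round(se_signal(csf["pd"], csf["t1"], csf["t2"], 4000.0, 22.0), 3))   # prints: 0.625
# PD-weighted 3D GE parameters from the abstract: TR 35 ms, TE 3 ms, flip 8 deg
# (T2* of 200 ms is an assumed value)
print(round(spoiled_ge_signal(csf["pd"], csf["t1"], 200.0, 35.0, 3.0, 8.0), 3))  # prints: 0.065
```

The small low-flip-angle GE signal relative to the long-TR SE signal illustrates why the SNRs reported for the 3D GE sequence (26-33) fall below those of the 2D FSE sequence (39-57).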