• Title/Summary/Keyword: three-dimensional information


Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.81-90
    • /
    • 2012
  • The effect of setup uncertainties on the CTV dose and the correlation between setup uncertainties and setup margin were evaluated by Monte Carlo-based numerical simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the Varian Eclipse planning system, including the planned dose distribution and tumor volume, was used as input to the Monte Carlo simulation program. The program was developed for this study in a Linux environment using open-source packages, GNU C++ and the ROOT data analysis framework. All patient setup misalignments were assumed to follow Gaussian statistics, so systematic and random errors were generated from normal distributions with a given standard deviation as the simulation input parameter. After the setup error simulations, the change of dose in the CTV was analyzed from the simulation results. To verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed a total of 2,000 times for each input value of systematic and random error independently, with the standard deviation used to generate setup errors varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum CTV dose $D_{min}^{syst.}$ decreased from 100.40% to 72.50% and the mean dose $\bar{D}_{syst.}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV increased from 0.02% to 3.33%. Random errors likewise reduced the minimum and mean CTV dose: the minimum dose $D_{min}^{rand.}$ fell from 100.45% to 94.80% and the mean dose $\bar{D}_{rand.}$ from 100.46% to 97.87%, while the standard deviation of the CTV dose ${\Delta}D_{rand.}$ increased from 0.01% to 0.63%. After calculating the margin size for each systematic and random error, a "population ratio" was introduced and applied to verify the margin recipe. The conventional margin formula was found to satisfy the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo-based simulation program is expected to be useful for studying patient setup error and CTV dose coverage under variations of margin size and setup error.
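The abstract describes the simulation only at a high level (the actual program was written in C++ with ROOT). As an illustration, the following Python sketch reproduces the basic idea under stated assumptions: a toy dose grid with a uniform plateau over a CTV-plus-margin region, Gaussian systematic and random setup errors, and per-trial CTV dose statistics. The grid size, margin, fraction number and penumbra width are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift, gaussian_filter

rng = np.random.default_rng(0)
voxel_mm = 2.0

# Toy planned dose: ~100% plateau over the CTV plus a 5 mm margin, with a smooth
# penumbra. In the study the dose grid and CTV come from an Eclipse IMRT plan.
ctv = np.zeros((60, 60, 60), dtype=bool)
ctv[22:38, 22:38, 22:38] = True
m = int(round(5.0 / voxel_mm))                     # margin in voxels
ptv = np.zeros_like(ctv)
ptv[22 - m:38 + m, 22 - m:38 + m, 22 - m:38 + m] = True
dose = gaussian_filter(ptv.astype(float), sigma=1.5) * 100.45

def simulate(sigma_mm, n_trials=2000, n_fractions=25, systematic=True):
    """Shift the planned dose by Gaussian setup errors and collect CTV dose statistics."""
    d_min, d_mean = [], []
    for _ in range(n_trials):
        if systematic:
            # one error per simulated course, identical for every fraction
            err_vox = rng.normal(0.0, sigma_mm, size=3) / voxel_mm
            d = shift(dose, err_vox, order=1, mode="nearest")
        else:
            # random error: independent shift per fraction, delivered dose is the average
            d = np.mean([shift(dose, rng.normal(0.0, sigma_mm, 3) / voxel_mm,
                               order=1, mode="nearest") for _ in range(n_fractions)], axis=0)
        d_min.append(d[ctv].min())
        d_mean.append(d[ctv].mean())
    return np.mean(d_min), np.mean(d_mean), np.std(d_mean)

for sigma in range(1, 11):                          # 1 mm to 10 mm in 1 mm steps
    print(sigma, simulate(sigma, n_trials=20))      # small n_trials keeps the sketch fast
```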

A Study on the Response Plan through the Analysis of North Korea's Drones Terrorism at Critical National Facilities - Focusing on Improvement of Laws and Systems - (국가중요시설에 대한 북한의 드론테러 위협 분석을 통한 대응방안 연구 - 법적·제도적 개선을 중심으로 -)

  • Choong soo Ha
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.2
    • /
    • pp.395-410
    • /
    • 2023
  • Purpose: The purpose of this study was to analyze the current state of drone terrorism response at critical national facilities and derive improvements, in particular to identify problems in the relevant laws and systems so that anti-drone systems can be used effectively, and to present directions for improvement. Method: A qualitative research method was used, analyzing a variety of issues not discussed in existing research papers and policy documents through in-depth interviews with subject matter experts. The in-depth interviews were based on 12 semi-structured questions put to 16 selected Korean experts in the anti-drone and counter-terrorism fields. The interviews were recorded with the prior consent of the participants, transcribed into Korean text, and coded to derive problems and improvement measures. Threats and attack types were analyzed from cases of drone terrorism abroad, the possibility of drone terrorism in the Republic of Korea was assessed, and measures to establish an anti-drone system were examined from a legal and institutional perspective. Result: The study identified several problems that must be addressed before drone terrorism at critical national facilities in the Republic of Korea can be countered effectively. First, terminology related to critical national facilities and drone terrorism should be clearly defined and reflected in the Integrated Defense Act and the Terrorism Prevention Act. Second, the concept of protecting critical national facilities should evolve from the current ground-oriented protection to a three-dimensional protection concept that also considers aerial threats, and the Integrated Defense Act should include provisions for effectively installing anti-drone systems that realize this concept. Third, a special law restricting flight over critical national facilities should be enacted; legislation should expand the facilities subject to flight restrictions while minimizing the extent of no-fly zones, so that "drone industry development" and "protection of critical national facilities" can develop in a balanced manner. Fourth, the response system for illegal flights and related institutions should be improved and re-established; for example, a unified manual should be prepared for general matters but tailored to the characteristics of each facility, professional manpower should be expanded, and response training should be strengthened. Conclusion: The focus of this study is to present directions for policy and technology development to establish an anti-drone system that can effectively respond to drone terrorism and illegal drones at critical national facilities.

The Accuracy Evaluation of Digital Elevation Models for Forest Areas Produced Under Different Filtering Conditions of Airborne LiDAR Raw Data (항공 LiDAR 원자료 필터링 조건에 따른 산림지역 수치표고모형 정확도 평가)

  • Cho, Seungwan;Choi, Hyung Tae;Park, Joowon
    • Journal of agriculture & life science
    • /
    • v.50 no.3
    • /
    • pp.1-11
    • /
    • 2016
  • With growing interest in three-dimensional topographic information, LiDAR (Light Detection And Ranging)-based DEMs (Digital Elevation Models) have been studied extensively. For producing an accurate LiDAR DEM, the filtering process is crucial: only ground-reflected LiDAR points should be kept to construct the DEM, while non-ground points must be removed from the raw LiDAR data. In particular, different input values for the parameters of the filtering algorithm are expected to produce different products. This study therefore aims to improve understanding of how the level of the GroundFilter algorithm's Mean parameter (GFmn) in the FUSION software affects the accuracy of LiDAR DEM products, using LiDAR data collected for the Hwacheon, Yangju, Gyeongsan and Jangheung experimental watersheds. The effect of the GFmn level on accuracy is estimated by measuring and comparing the residuals between field-surveyed elevations and the elevations of LiDAR DEMs produced at different GFmn levels at the same sample point locations. To test whether there are differences among the five GFmn levels (1, 3, 5, 7 and 9), a one-way ANOVA is conducted. The one-way ANOVA shows that the GFmn level significantly affects accuracy (F-value: 4.915, p<0.01). Given this significant effect, a Tukey HSD test is conducted as a post hoc test to group the levels by significant differences; the GFmn levels fall into two subsets ('7, 5, 9, 3' vs. '1'). From the residuals at each level, the LiDAR DEM is generated most accurately when GFmn is set to 7. Through this study, the most desirable parameter value can be suggested for producing filtered LiDAR DEM data that provide the most accurate elevation information.
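As a minimal illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA and a Tukey HSD post hoc test in Python with SciPy and statsmodels. The residual values are synthetic stand-ins; the real residuals are measured between field-surveyed elevations and the DEMs produced at each GFmn level.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical residuals (field elevation minus DEM elevation, in metres) for sample
# points at each GFmn level; real values come from the four experimental watersheds.
rng = np.random.default_rng(1)
residuals = {level: rng.normal(loc=bias, scale=0.3, size=50)
             for level, bias in zip([1, 3, 5, 7, 9], [0.45, 0.15, 0.12, 0.10, 0.13])}

# One-way ANOVA: does the GFmn level affect DEM accuracy?
f_value, p_value = f_oneway(*residuals.values())
print(f"F = {f_value:.3f}, p = {p_value:.4f}")

# Post hoc Tukey HSD: which GFmn levels differ significantly from which?
data = pd.DataFrame([(lvl, r) for lvl, arr in residuals.items() for r in arr],
                    columns=["GFmn", "residual"])
print(pairwise_tukeyhsd(endog=data["residual"], groups=data["GFmn"], alpha=0.05))
```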

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. Here, word vectors generally refer to vector representations of the words obtained by splitting a sentence on space characters. There are several ways to derive word vectors; one is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data, which have been widely used in sentiment analysis studies of reviews from fields such as restaurants, movies, laptops and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be reached when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. For the training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in three respects. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they differ in the form of input fed to the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and including morphemes of all POS tags, even the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme do not appear to have any definite influence on classification accuracy.
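Since the abstract only names the components (CBOW morpheme vectors of dimension 300 with window 5, fed to a non-static CNN), the following Python sketch shows one possible wiring using gensim 4.x and PyTorch. The two toy morpheme-tokenised sentences and all hyperparameters other than the vector dimension and window are illustrative; in the study the tokens come from a Korean morphological analysis of Naver Shopping reviews and Naver News.

```python
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Morpheme-tokenised review sentences (toy examples; real input is produced by a
# Korean morphological analyzer applied to product reviews or news text).
corpus = [["색상", "이", "예쁘", "고", "배송", "빠르", "다"],
          ["향", "이", "너무", "강하", "다"]]

# CBOW morpheme vectors: 300 dimensions, context window 5, as in the abstract.
w2v = Word2Vec(corpus, vector_size=300, window=5, sg=0, min_count=1)
vocab = w2v.wv.key_to_index                        # morpheme -> row of the embedding matrix
weights = torch.tensor(w2v.wv.vectors, dtype=torch.float)

class NonStaticCNN(nn.Module):
    """Kim-style text CNN; 'non-static' means the pretrained embeddings are fine-tuned."""
    def __init__(self, pretrained, n_classes=2, kernel_sizes=(3, 4, 5), n_filters=100):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)
        dim = pretrained.size(1)
        self.convs = nn.ModuleList([nn.Conv1d(dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, ids):                        # ids: (batch, seq_len)
        x = self.embed(ids).transpose(1, 2)        # (batch, dim, seq_len)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))

model = NonStaticCNN(weights)
ids = torch.tensor([[vocab[m] for m in corpus[0]]])
print(model(ids).shape)                            # (1, 2) sentiment logits
```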

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintaining ICT infrastructure and preventing failures through anomaly detection is becoming increasingly important. System monitoring data are multidimensional time series, which makes it difficult to account simultaneously for the characteristics of multidimensional data and those of time series data. For multidimensional data, the correlations between variables must be considered; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. For time series data, preprocessing such as sliding windows and time series decomposition is applied for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-established research field in which statistical methods and regression analysis were used early on, and machine learning and artificial neural network techniques are now being actively applied. Statistically based methods are difficult to apply when the data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression model based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance deteriorates when the model is not solid or when the data contain noise or outliers, and they require training data free of noise and outliers. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy distributional or linearity assumptions, and it can be trained without labeled data. However, it is limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase the data dimension. In this study we propose a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the limited local outlier identification of multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and images; the modalities share the autoencoder bottleneck and thereby learn their correlations. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing the data dimension. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance for 41 variables was examined for the proposed and comparison models. Reconstruction performance differs by variable; the Memory, Disk, and Network modalities are reconstructed well, with small loss values, in all three autoencoder models.
The Process modality showed no significant difference among the three models, while the CPU modality showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance for the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the performance ranked CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has a further advantage beyond improved performance: techniques such as time series decomposition and sliding windows require managing additional procedures, and the resulting dimensional increase can slow inference. The proposed model is easy to apply in practice with respect to inference speed and model management.
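The abstract describes the CMAE architecture only in words. The PyTorch sketch below shows one way such a model could be structured: one encoder and decoder per modality sharing a bottleneck, with a time-of-day condition concatenated at the bottleneck to capture daily periodicity. The modality split of the 41 variables, layer sizes, and sinusoidal time encoding are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    """Conditional multimodal autoencoder sketch: per-modality encoders/decoders,
    a shared bottleneck, and a time condition concatenated at the bottleneck."""
    def __init__(self, modal_dims, cond_dim=2, hidden=16, bottleneck=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modal_dims])
        self.to_z = nn.Linear(hidden * len(modal_dims) + cond_dim, bottleneck)
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(bottleneck + cond_dim, hidden), nn.ReLU(),
                           nn.Linear(hidden, d)) for d in modal_dims])

    def forward(self, modals, cond):
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = torch.relu(self.to_z(torch.cat([h, cond], dim=1)))
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# Illustrative split of 41 monitoring variables into CPU/memory/disk/network/process modals.
modal_dims = [8, 8, 9, 8, 8]
model = CMAE(modal_dims)

batch = 32
modals = [torch.randn(batch, d) for d in modal_dims]            # stand-in monitoring data
hour = torch.randint(0, 24, (batch, 1)).float()
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),          # time condition encoding
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)  # daily periodicity

recons = model(modals, cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recons, modals))
loss.backward()                                    # anomaly score = reconstruction error
print(float(loss))
```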

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, and so on, and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the model building from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether the two periods differ. The second is the prediction horizon: to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years later. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method that builds trees to label or categorize cases into a set of known classes; in contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. Well-known decision tree induction algorithms include CHAID, CART, QUEST and C5.0; among them we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. Each financial analysis record consists of 89 variables: 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as input variables for each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models with the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The experimental results show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis the reliability of financial analysis indices has increased and firms' intentions to conduct rights issues have become more apparent. The results also show that stability-related indices have a major impact on rights issues in short-term prediction, whereas long-term prediction of rights issues is affected by indices of profitability, stability, activity and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues.
We conclude that stakeholders should consider stability-related indices for short-term prediction and a broader set of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression and SVM. Second, new prediction models should be developed and evaluated that include variables which capital structure theory has identified as relevant to rights issues.
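C5.0, as used in PASW Modeler, has no standard open-source Python implementation, so the sketch below uses scikit-learn's entropy-based decision tree as a stand-in to illustrate the same workflow: a multivariate feature table, a 60/40 build/test split, tree induction, and a rule-style view of the result. The data here are random placeholders, not the TS2000 records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in data: rows are firm-years, columns are financial analysis indices
# (growth, profitability, stability, activity, productivity) plus an industry code.
rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 84))
y = (X[:, 0] + 0.5 * X[:, 10] + rng.normal(scale=1.5, size=1000) > 1).astype(int)  # issued or not

# 60% of the data for model building, 40% for model testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.6, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy",   # information-gain splits, C4.5/C5.0-like
                              max_depth=5, min_samples_leaf=20)
tree.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print(export_text(tree, max_depth=2))                # rule-set style summary of the tree
```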

A Study on 3D Scan Technology for Find Archetype of Youngbeokji in Seongnagwon Garden (성락원 영벽지의 원형 파악을 위한 3D 스캔기술 연구)

  • Lee, Won-Ho;Kim, Dong-Hyun;Kim, Jae-Ung;Park, Dong-Jin
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.31 no.3
    • /
    • pp.95-105
    • /
    • 2013
  • This study was performed to identify the original form of the Youngbeokji area of Seongnagwon (Scenic Site No. 35) through high-precision 3D data acquisition of the pond and its surrounding terrain. The results are summarized as follows. First, investigating the stone structures within Youngbeokji provides important clues to the original form of the site in earlier eras, and the 3D scanning method makes it possible to survey the whole structure, including the surrounding terrain, with relative ease. Second, the measurement results are as follows: the surveyed bedrock area measured 7,665 mm from south to north and 7,326 mm from east to west; the stone structure measures 1,665 mm × 1,721 mm, roughly square in plan, with a hemispherical interior 1,664 mm in diameter; and in the lower portion of the rock mass, running from south to north, traces of a fallen-out section measuring 1,006 mm × 328 mm were discovered. Third, the internal terrain of Youngbeokji was recorded with a multiresolution approach: wide-area and fine scans were acquired and merged, the merged scan data were converted into polygon (mesh) data, and high-resolution photographs were overlaid on the 3D terrain data to produce the final result. Fourth, through this work a waterway running out to the west of the stone structure was confirmed, and the stone structure is presumed to be a seokji, a water feature of bangjiwondo form carved artificially into the bedrock. The resulting data also made it possible to compare the present condition of the site with its state in the 1960s. By acquiring high-precision 3D data, this study allows the original form of the excavated features to be preserved in perpetuity as digital data, which can be used by a wide variety of professionals to establish a foundation for conservation and management measures; it is also expected to show how readily 3D scanning can be applied in future research on traditional gardens.
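The multiresolution workflow described above (merging wide-area and fine scans, then converting the merged point cloud into polygon data) can be sketched with the open-source Open3D library as follows. The file names, ICP distance threshold, voxel size and Poisson depth are placeholders; the sketch also assumes the two scans are already roughly aligned, since ICP only refines an approximate registration.

```python
import open3d as o3d

# Hypothetical input files: a wide-area scan of the surrounding terrain and a fine
# scan of the stone structure inside Youngbeokji.
wide = o3d.io.read_point_cloud("youngbeokji_wide.ply")
fine = o3d.io.read_point_cloud("youngbeokji_fine.ply")

# Refine the alignment of the fine scan to the wide scan (point-to-point ICP),
# then merge the two clouds and thin out the overlapping regions.
icp = o3d.pipelines.registration.registration_icp(fine, wide, 0.05)
fine.transform(icp.transformation)
merged = (wide + fine).voxel_down_sample(voxel_size=0.005)

# Convert the merged point cloud into polygon (mesh) data via Poisson reconstruction.
merged.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=10)
o3d.io.write_triangle_mesh("youngbeokji_mesh.ply", mesh)
```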

Development of an Automatic 3D Coregistration Technique of Brain PET and MR Images (뇌 PET과 MR 영상의 자동화된 3차원적 합성기법 개발)

  • Lee, Jae-Sung;Kwark, Cheol-Eun;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Park, Kwang-Suk
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.5
    • /
    • pp.414-424
    • /
    • 1998
  • Purpose: Cross-modality coregistration of positron emission tomography (PET) and magnetic resonance (MR) images can enhance clinical information. In this study we propose a refined technique that improves the robustness of registration and implements more realistic visualization of the coregistered images. Materials and Methods: Using the sinogram of the PET emission scan, we extracted a robust head boundary and used the boundary-enhanced PET to coregister PET with MR. Pixels with 10% of the maximum pixel value were taken as the sinogram boundary, and the boundary pixel values were replaced with the maximum value of the sinogram. From each slice of the MR images, 180 boundary points were extracted at intervals of about 2 degrees using a simple threshold method. The best affine transformation between the two point sets was obtained by least-squares fitting, minimizing the sum of Euclidean distances between the point sets, and calculation time was reduced using a pre-defined distance map. Using this boundary detection and surface matching technique, we developed an automatic coregistration program, and we designed a new weighted normalization technique to display the coregistered PET and MR images simultaneously. Results: With the newly developed method, robust extraction of the head boundary was possible and spatial registration was performed successfully; the mean displacement error was less than 2.0 mm. In the visualization of coregistered images using the weighted normalization method, structures shown in the MR image were realistically represented. Conclusion: The refined technique can practically enhance the performance of automated three-dimensional coregistration.
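The surface matching step (least-squares fitting of boundary point sets) can be illustrated with a simplified Python sketch. The original work fitted an affine transform and used a precomputed distance map for speed; the version below uses a rigid transform estimated by SVD and a KD-tree for nearest-boundary-point lookups, with synthetic elliptical "head boundary" points standing in for the thresholded PET and MR boundaries.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping paired points src -> dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # exclude reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def match_surfaces(pet_pts, mr_pts, n_iter=50):
    """Iteratively pair each PET boundary point with its nearest MR boundary point and
    re-fit the transform; the KD-tree stands in for the paper's precomputed distance map."""
    tree = cKDTree(mr_pts)
    moved = pet_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(moved)
        R, t = best_rigid_transform(moved, mr_pts[idx])
        moved = moved @ R.T + t
    return moved

# Toy boundary point sets: 180 points per slice at ~2 degree intervals, as in the study.
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
mr_pts = np.c_[90 * np.cos(theta), 110 * np.sin(theta)]          # elliptical head boundary (mm)
angle = np.deg2rad(10)
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
pet_pts = mr_pts @ rot.T + np.array([5.0, -3.0])                  # misaligned PET boundary

aligned = match_surfaces(pet_pts, mr_pts)
print("mean residual (mm):", np.linalg.norm(aligned - mr_pts, axis=1).mean())
```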


A Computed Tomography-Based Anatomic Comparison of Three Different Types of C7 Posterior Fixation Techniques : Pedicle, Intralaminar, and Lateral Mass Screws

  • Jang, Woo-Young;Kim, Il-Sup;Lee, Ho-Jin;Sung, Jae-Hoon;Lee, Sang-Won;Hong, Jae-Taek
    • Journal of Korean Neurosurgical Society
    • /
    • v.50 no.3
    • /
    • pp.166-172
    • /
    • 2011
  • Objective : The intralaminar screw (ILS) fixation technique offers an alternative to pedicle screw (PS) and lateral mass screw (LMS) fixation at C7. Although cadaveric studies have described the anatomy of the pedicles, laminae, and lateral masses at C7, 3-dimensional computed tomography (CT) imaging is the modality of choice for pre-surgical planning. The goal of this study was to determine the anatomical parameters and optimal screw trajectory for ILS placement at C7 and to compare this information with PS and LMS placement as determined by CT evaluation. Methods : A total of 120 patients (60 men and 60 women) with an average age of 51.7 ± 13.6 years were selected by retrospective review of a trauma registry database over a 2-year period. Patients were included if they were older than 15 years of age, had standardized axial bone-window CT imaging at C7, and had no evidence of spinal trauma. For each lamina and pedicle, the width (outer cortical and inner cancellous), maximal screw length, and optimal screw trajectory were measured, and the maximal screw length of the lateral mass was measured, using m-view 5.4 software. Statistical analysis was performed using Student's t-test. Results : At C7, the maximal PS length was significantly greater than the ILS and LMS lengths (PS, 33.9 ± 3.1 mm; ILS, 30.8 ± 3.1 mm; LMS, 10.6 ± 1.3 mm; p<0.01). Comparing outer cortical and inner cancellous widths between pedicle and lamina, the mean pedicle outer cortical width at C7 was wider than the lamina by an average of 0.6 mm (pedicle, 6.8 ± 1.2 mm; lamina, 6.2 ± 1.2 mm; p<0.01). At C7, 95.8% of the measured laminae accepted a 4.0-mm screw with 1.0 mm of clearance, compared with 99.2% of pedicles, and 99.2% of the laminae accepted a 3.5-mm screw with 1.0 mm of clearance, compared with 100% of pedicles. Comparing outer cortical and inner cancellous heights between pedicle and lamina, the mean lamina outer cortical height at C7 was greater than the pedicle by an average of 9.9 mm (lamina, 18.6 ± 2.0 mm; pedicle, 8.7 ± 1.3 mm; p<0.01). The ideal screw trajectory at C7 was 47.8 ± 4.8° for ILS and 35.1 ± 8.1° for PS. Conclusion : Although pedicle screw fixation is the ideal instrumentation method for C7 with respect to length and cortical diameter, the anatomy of the C7 lamina is also adequate for screw placement. The C7 intralaminar screw can therefore be an alternative fixation technique with few anatomical limitations when C7 pedicle screw fixation is not favorable. However, anatomical variations in length and width must be considered when placing an intralaminar or pedicle screw at C7.
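As a small illustration of the statistics reported above, the sketch below reproduces the Student's t-test comparison of pedicle and lamina widths and the screw-clearance calculation in Python. The per-patient values are simulated from the reported means and standard deviations, so the printed percentages are only indicative.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Simulated per-patient outer cortical widths (mm), drawn from the reported
# pedicle 6.8 +/- 1.2 mm and lamina 6.2 +/- 1.2 mm for 120 patients.
pedicle_width = rng.normal(6.8, 1.2, 120)
lamina_width = rng.normal(6.2, 1.2, 120)

t_stat, p_value = ttest_ind(pedicle_width, lamina_width)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Proportion of laminae that accept a given screw diameter with 1.0 mm of clearance.
for screw in (4.0, 3.5):
    ok = np.mean(lamina_width >= screw + 1.0) * 100
    print(f"{screw} mm screw with 1.0 mm clearance: {ok:.1f}% of laminae")
```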

Quantitative Micro-CT Evaluation of Microleakage in Composite Resin Restorations (Micro-CT를 이용한 복합 레진 수복물 미세 누출도의 정량 분석)

  • Lee, Sang-Ik;Hyun, Hong-Keun;Kim, Young-Jae;Kim, Jung-Wook;Lee, Sang-Hoon;Kim, Chong-Chul;Hahn, Se-Hyun;Jang, Ki-Taeg
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.34 no.2
    • /
    • pp.222-233
    • /
    • 2007
  • One of the most important and basic tests of dental restorative materials is the evaluation of microleakage at the tooth-restoration interface. Many techniques exist for testing microleakage, but most have several disadvantages. Recently developed micro-computed tomography (micro-CT) can provide three-dimensional images and information about internal structures in a non-destructive way, so micro-CT makes it possible to evaluate microleakage precisely and quantitatively. The purpose of this study was to establish a new quantitative, non-destructive method of evaluating microleakage in composite resin restorations using micro-CT and to compare it with the conventional dye penetration method. Microleakage of two dentin bonding systems was evaluated with both methods. Forty extracted sound human premolars were randomly divided into two groups of 20 samples and restored accordingly. Group 1: Class V resin restorations with Adper™ Single Bond; Group 2: Class V resin restorations with Adper™ Prompt™ L-Pop. Filtek™ Supreme was applied to the Class V cavities of all teeth. Ten teeth from each group were then evaluated for microleakage using micro-CT, and the other ten teeth from each group were evaluated using the conventional dye penetration method. The conclusions of this study were as follows. 1. Using micro-CT, Group 1 showed significantly less microleakage than Group 2 (p<0.01). 2. Using the conventional dye penetration method, Group 1 leaked less than Group 2 (p<0.01). 3. The difference between the two groups was more evident with the micro-CT method. 4. With both methods, microleakage into the cavities was greater at dentinal margins than at enamel margins.
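The abstract does not detail how leakage was quantified from the micro-CT volumes; one common approach is to threshold the radiopaque tracer signal within a region of interest along the tooth-restoration interface and convert the voxel count into a volume. The Python sketch below illustrates that idea on a synthetic volume; the threshold, voxel size and region of interest are assumptions for illustration only.

```python
import numpy as np

# Stand-in micro-CT volume (grey values) and a mask of the tooth-restoration interface;
# in practice these come from the reconstructed scan of each restored premolar.
rng = np.random.default_rng(5)
ct = rng.normal(100, 10, size=(200, 200, 300))
interface_roi = np.zeros(ct.shape, dtype=bool)
interface_roi[95:105, :, 100:250] = True           # hypothetical interface slab

# Simulated radiopaque tracer penetrating part of the interface.
ct[98:102, 50:150, 100:160] += 400

voxel_mm = 0.02                                    # assumed isotropic voxel size (20 um)
leak_voxels = (ct > 300) & interface_roi           # threshold chosen for the tracer signal
leak_volume_mm3 = leak_voxels.sum() * voxel_mm**3

leak_depth_mm = 0.0
if leak_voxels.any():
    # penetration depth along the cavity axis (here the last array axis)
    leak_depth_mm = (leak_voxels.any(axis=(0, 1)).nonzero()[0].ptp() + 1) * voxel_mm

print(f"leakage volume: {leak_volume_mm3:.3f} mm^3, depth: {leak_depth_mm:.2f} mm")
```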
