• Title/Summary/Keyword: Step-One Parallel System


INFLUENCES OF APICOECTOMY AND RETROGRADE CAVITY PREPARATION METHODS ON THE APICAL LEAKAGE (치근단절제 및 역충전와동 형성방법이 치근단누출에 미치는 영향)

  • Yang, Jeong-Ok; Kim, Sung-Kyo; Kwon, Tae-Kyung
    • Restorative Dentistry and Endodontics, v.23 no.2, pp.537-549, 1998
  • The purpose of this study was to evaluate the influence of root resection and retrograde cavity preparation methods on apical leakage in endodontic surgery. To investigate the effect of the various root resection and retrograde cavity preparation methods, 71 roots of extracted human maxillary anterior teeth and 44 mesiobuccal roots of extracted human maxillary first molars were used. Root canals of all the specimens were prepared with the step-back technique and filled with gutta-percha by the lateral condensation method. Three millimeters of each root was resected either at a 45-degree angle or perpendicular to the long axis of the tooth, according to the group. Retrograde cavities were prepared with ultrasonic instruments or a slow-speed round bur, and occlusal access cavities were filled with zinc oxide eugenol cement. Three coats of clear nail polish were placed on the lateral and coronal surfaces of the specimens, except for the apical 1 mm of the cut surface. All the specimens were immersed in 2% methylene blue solution for 7 days in an incubator at 37°C. The teeth were then dissolved in 14 ml of 35% nitric acid solution, and the dye present within the root canal system was returned to solution. The leakage of dye was measured quantitatively by a spectrophotometric method. The data were analysed statistically using two-way ANOVA and Duncan's Multiple Range Test. The results were as follows: 1. No statistically significant difference was observed between the ultrasonic retrograde cavity preparation method and the slow-speed round bur technique without an apical bevel (p>0.05). 2. The ultrasonic retrograde preparation method showed significantly less apical leakage than the slow-speed round bur technique with a bevel (p<0.0001). 3. No statistically significant difference was found between the beveled and non-beveled resected root surfaces with the ultrasonic technique (p>0.05). 4. The non-beveled resected root surface showed significantly less apical leakage than the beveled resected root surface with the slow-speed round bur technique (p<0.0001). 5. No statistically significant difference in apical leakage was found between the group whose retrograde cavities were prepared parallel to the long axis of the tooth and the group prepared perpendicular to it (p>0.05). 6. Regarding isthmus preparation, the ultrasonic retrograde preparation method showed significantly less apical leakage than the slow-speed round bur technique in the mesiobuccal root of the maxillary molar without a bevel (p<0.0001).
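
For concreteness, the factorial comparison the abstract describes (resection angle × retrograde preparation method, with dye absorbance as the outcome) has the shape of a two-way ANOVA. The Python sketch below is illustrative only: the absorbance values are hypothetical placeholders, and pandas/statsmodels are assumed merely as convenient tooling, not as what the study used (Duncan's post-hoc test is omitted).

    # Hedged sketch: two-way ANOVA of dye leakage by resection angle and
    # retrograde preparation method. All data values are made up for illustration.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "resection":   ["bevel"] * 6 + ["perpendicular"] * 6,
        "preparation": (["ultrasonic"] * 3 + ["round_bur"] * 3) * 2,
        "leakage":     [0.12, 0.10, 0.14, 0.35, 0.40, 0.38,   # hypothetical absorbance
                        0.11, 0.13, 0.12, 0.15, 0.14, 0.16],
    })

    model = smf.ols("leakage ~ C(resection) * C(preparation)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term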


A STUDY ON THE TEMPERATURE CHANGES OF BONE TISSUES DURING IMPLANT SITE PREPARATION (임플랜트 식립부위 형성시 골조직의 온도변화에 관한 연구)

  • Kim Pyung-Il; Kim Yung-Soo; Jang Kyung-Soo; Kim Chang-Whe
    • The Journal of Korean Academy of Prosthodontics, v.40 no.1, pp.1-17, 2002
  • The purpose of this study is to examine the possibility of thermal injury to bone tissues during implant site preparation under the same conditions as typical clinical practice with the Brånemark implant system. All the burs for the Brånemark implant system were studied except the round bur. The experiments involved 880 drilling cases: 50 cases for each of the 5 steps of NP, 5 steps of RP, and 7 steps of WP, all including the screw tap, and 30 cases of the 2 mm twist drill. For precision drilling, a precision handpiece restraining system was developed (Eungyong Machinery Co., Korea). The system kept the drill parallel to the drilling path and allowed horizontal adjustment of the drill in increments as small as 1 μm. The thermocouple insertion hole, 0.9 mm in diameter and 8 mm in depth, was prepared 0.2 mm away from the hole made by the tapping bur, the last drilling step. The temperatures due to the countersink, pilot drill, and other drills were measured at the surface of the bone and at depths of 4 mm and 8 mm, respectively. Countersink drilling temperature was measured by attaching the tip of a thermocouple to the rim of the countersink. To assure temperature measurement at the desired depths, 'bent thermocouples' with their tips bent at 120° at 4 and 8 mm were used. The profiles of temperature variation were recorded continuously at one-second intervals using a thermometer with a memory function (Fluke Co., U.S.A.) and 0.7 mm thermocouples (Omega Co., U.S.A.). To simulate typical clinical conditions, 35 mm square samples of bovine scapular bone were used. The samples were approximately 20 mm thick, with the cortical thickness on the drilling side ranging from 1 to 2 mm. A sample was placed in a container of saline solution so that its lower half was submerged in the solution and its upper half exposed to the room air, which averaged 24.9°C. The temperature of the saline solution was maintained at 36.5°C using an electric heater (J. O Tech Co., Korea). This experimental condition was similar to that of a patient's open mouth. The study revealed that the 2 mm twist drill required the greatest attention. As a guide drill, the twist drill must bore through 'virgin bone' rather than merely enlarging an already drilled hole, as is the case with the other drills; this typically generates a greater amount of heat. Furthermore, one tends to apply greater pressure to overcome drilling difficulty, producing an even greater amount of heat. 150 experiments were conducted with the 2 mm twist drill. For 140 cases, a drilling pressure of 750 g was sufficient, while 10 cases required an additional 500 or 1,000 g of drilling pressure. In the former group, 3 of the 140 cases produced temperatures greater than 47°C, the threshold temperature for degeneration of bone tissue (Eriksson et al., 1983), which is also the reference temperature of this study. In each of the 10 cases requiring extra pressure, the temperature exceeded the reference temperature; more significantly, a surge of heat was observed in each of these cases. These observations led to 20 additional drilling experiments on dense bone. In 10 of these cases a pressure of 1,250 g was applied, and in the other 10, 1,750 g. In each of these cases the temperature rose abruptly, far above the threshold temperature of 47°C, sometimes even to 70 or 80°C.
It was also observed that the increased drilling pressure shortened the drilling time more than it raised the drilling temperature, which suggests that the clinical use of extra pressure should be reconsidered to prevent possible injury to bone tissues. An analysis of the two extra-pressure groups (1,250 g and 1,750 g) gave t-statistics of 10.80 for the reduction in drilling time due to extra pressure and 2.08 for the corresponding increase in peak temperature, confirming that drilling time was influenced more than temperature. None of the drillings subsequent to the 2 mm twist drill produced excessive heat; the heat generated was at or below body temperature. Some of the screw tap, pilot, and countersink drillings showed negative correlation coefficients between the generated heat and the drilling time, indicating that the longer the drilling time, the lower the temperature. The study also revealed that drilling time increased with the number of times a drill had been used. Under a drilling pressure of 750 g, the drilling time of an old twist drill that had already drilled 40 times was 4.5 times longer than that of a new drill; the measurement compared the first 10 drillings of a new drill with 10 drillings of an old drill that had already been used for 40 drillings. The test statistic of the small-sample t-test was 3.49, confirming that used twist drills require a longer drilling time than new ones. On the other hand, there was no significant difference in drilling temperature between the new and the used twist drill. Finally, the following conclusions were reached from this study: 1. A used drilling bur causes almost no change in drilling temperature but an increase in drilling time over 50 drillings, under the manufacturer-recommended cooling conditions and a drilling pressure of 750 g. 2. The heat generated during drilling matters only for the 2 mm twist drill, the first drill used in the bone drilling process; for all the other drills there is no significant problem. 3. If the drilling pressure is increased when a 2 mm twist drill reaches dense bone, the temperature rises abruptly even under the manufacturer-recommended cooling conditions. 4. Drilling heat was highest at the final moment of the drilling process.
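
The new-versus-used drill comparison above reduces to a two-sample t-test on ten drilling times per group (the abstract reports t = 3.49). The Python sketch below shows only the form of that test; the timing values are hypothetical placeholders, and scipy is assumed as a convenient implementation rather than the tooling used in the study.

    # Hedged sketch: small-sample t-test on drilling times, new vs. used 2 mm twist drill.
    # The values are made-up placeholders; only the procedure is illustrated.
    from scipy import stats

    new_drill_s  = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.1, 2.2, 2.0, 2.4]    # seconds, hypothetical
    used_drill_s = [8.9, 9.5, 10.2, 9.1, 8.7, 9.8, 10.5, 9.3, 9.0, 9.9]  # seconds, hypothetical

    t_stat, p_value = stats.ttest_ind(used_drill_s, new_drill_s)  # two-sample t-test, n = 10 per group
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")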

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.975-976, 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment, and high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, namely min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed the measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: an increase in inference speed of as much as 2.5 times could be achieved if the R3000 had min and max instructions, and such instructions would also speed up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs only a single program or a small set of programs, so adapting an embedded processor into an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, of a fictitious MIPS R3000 with min and max instructions, and of the UNC/MCNC ASIC fuzzy inference chip; the software run on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, the two approaches represent two extremes, an ASIC being extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME WITH 51 RULES
                        MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6,000 inferences    125 s                  49 s                        0.0038 s
    1 inference         20.8 ms                8.2 ms                      6.4 μs
    FLIPS               48                     122                         156,250
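
As an illustration of the inference these chips implement (a hedged sketch, not the chips' actual datapath or the authors' code), the Python fragment below evaluates Mamdani-style rules of the simpler format "IF A and B THEN Do E" over fuzzy sets represented, as on the UNC/MCNC chip, by 64-element arrays: min implements the fuzzy AND of the antecedents, the rule strength clips the consequent, the rule outputs are combined with max, and a centroid gives the defuzzified result. It is exactly these min/max inner loops that dedicated min and max instructions on a RISC core would accelerate. All membership functions and input values are hypothetical.

    # Hedged sketch: max-min (Mamdani) compositional inference over 64-point fuzzy sets,
    # using the rule format "IF A and B THEN Do E". Illustrative values only.
    import numpy as np

    N = 64  # each fuzzy set is a 64-element membership array, as on the UNC/MCNC chip
    x = np.linspace(0.0, 1.0, N)  # common universe of discourse

    def triangle(center, width):
        """Triangular membership function sampled on the N-point universe."""
        return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

    # Two hypothetical rules: IF A_i and B_i THEN Do E_i
    rules = [
        (triangle(0.2, 0.2), triangle(0.3, 0.2), triangle(0.7, 0.2)),
        (triangle(0.6, 0.2), triangle(0.7, 0.2), triangle(0.3, 0.2)),
    ]

    def infer(a_obs, b_obs):
        """Crisp inputs -> crisp output via max-min inference and centroid defuzzification."""
        out = np.zeros(N)
        for A, B, E in rules:
            mu_a = np.interp(a_obs, x, A)       # degree of match of input a against A
            mu_b = np.interp(b_obs, x, B)       # degree of match of input b against B
            strength = min(mu_a, mu_b)          # fuzzy AND = min
            out = np.maximum(out, np.minimum(strength, E))  # clip consequent, combine with max
        return float(np.sum(x * out) / np.sum(out)) if out.sum() > 0 else 0.0  # centroid

    print(infer(0.25, 0.35))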


Production of Medium-chain Fatty Acids in Brassica napus by Biotechnology (유채에서의 중쇄지방산 생산)

  • Roh, Kyung-Hee; Lee, Ki-Jong; Park, Jong-Sug; Kim, Hyun-Uk; Lee, Kyeong-Ryeol; Kim, Jong-Bum
    • Journal of Applied Biological Chemistry, v.53 no.2, pp.65-70, 2010
  • Medium-chain fatty acids (MCFA) are composed of 8-12 carbon atoms and are found in coconut, cuphea, and palm kernel oil. MCFA were introduced into clinical nutrition in the 1950s for the dietary treatment of malabsorption syndromes because of their rapid absorption and solubility. Recently, MCFA have been applied in Gastrointestinal Permeation Enhancement Technology (GIPET), one of the most important components of drug delivery systems in therapeutics. Therefore, much effort has been devoted, through classical and molecular breeding, to accumulating MCFA in the seed oil of rapeseed. Laurate can be accumulated up to 60 mol% in the seed oil of rapeseed by expression of the bay thioesterase (Uc FatB1) alone, or by crossing with a line over-expressing the coconut lysophosphatidic acid acyltransferase (LPAAT), under the control of a napin seed-storage protein promoter. Caprylate and caprate were obtained at 7 mol% and 29 mol%, respectively, from plants over-expressing the medium-chain-specific thioesterase (Ch FatB2) alone or together with the chain-length-specific condensing enzyme (Ch KASIV). Despite the success of some research utilizing classical and molecular breeding in parallel to produce MCFA, commercially available seed oils have, for the most part, not been realized. Recent research on developing MCFA-enriched transgenic plants has established that there is no single rate-limiting step in the production of the target fatty acids. The purpose of this article is to review recent progress in understanding the mechanism and regulation of MCFA production in the seed oil of rapeseed.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.205-225, 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains it is difficult, and requires a lot of effort, to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first is using the ConvNet as a fixed feature extractor; the second is fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) is used to compute feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, since this carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they are extracted from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% accuracy of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% accuracy of the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
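
A minimal sketch of the three-step pipeline described above, under the assumption that torchvision's pre-trained AlexNet and scikit-learn's PCA stand in for the authors' setup; the hook indices, PCA component count, and classifier choice here are illustrative assumptions, not the paper's exact configuration.

    # Hedged sketch: multi-layer AlexNet features (FC6, FC7, FC8) -> concatenation
    # -> PCA -> linear classifier. Illustrative only; not the authors' exact code.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

    # Capture the three fully connected layer activations via forward hooks.
    # In torchvision's AlexNet, classifier[1], classifier[4], classifier[6] are the
    # 4096-, 4096-, and 1000-dimensional linear layers (FC6, FC7, FC8).
    feats = {}
    for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
        model.classifier[idx].register_forward_hook(
            lambda module, inputs, output, name=name: feats.__setitem__(name, output.detach())
        )

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def extract(images):
        """List of PIL images -> (n, 9192) concatenated FC6+FC7+FC8 feature matrix."""
        rows = []
        with torch.no_grad():
            for img in images:
                model(preprocess(img).unsqueeze(0))
                rows.append(torch.cat([feats["fc6"], feats["fc7"], feats["fc8"]], dim=1))
        return torch.cat(rows).numpy()

    # Placeholders: train_images/test_images are lists of PIL images with labels y_train/y_test.
    # F_train, F_test = extract(train_images), extract(test_images)
    # pca = PCA(n_components=512).fit(F_train)            # component count is an assumption
    # clf = LogisticRegression(max_iter=1000).fit(pca.transform(F_train), y_train)
    # print(clf.score(pca.transform(F_test), y_test))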