• Title/Summary/Keyword: automatic test


Study on Overcoming Interference Factor by Automatic Synthesizer in Endotoxin Test (내독소 검사에서 자동합성장치에 따른 간섭요인 극복에 대한 연구)

  • Kim, Dong Il;Kim, Si Hwal;Chi, Yong Gi;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.2
    • /
    • pp.3-6
    • /
    • 2012
  • Purpose: Samsung Medical Center sought to identify the cause of the interference factor and to suggest a solution for it. Materials and Methods: Samples of $^{18}F$-FDG, a radiopharmaceutical, were produced by the TRACERlab MX and FASTlab synthesizers. The gel-clot method used a positive control tube and a single test tube; the kinetic chromogenic method used the ENDOSAFE-PTS produced by Charles River. Results: In the gel-clot endotoxin test of FASTlab samples, both turbidity and viscosity increased at 40-fold dilution and a gel clot was detected. In the case of TRACERlab MX, a gel clot was detected in most samples but was intermittently absent in a few of them. When using the ENDOSAFE-PTS, the sample CV (coefficient of variation) of FASTlab was 0% at all dilution rates, whereas the spike CV was 0% at 1-fold dilution, 0~35% at 10-fold, 3.6~12.9% at 20-fold, 5.2~7.1% at 30-fold, and 1.1~17.4% at 40-fold; spike recovery was 0% at 1-fold, 25~58% at 10-fold, 50~86% at 20-fold, 70~92% at 30-fold, and 75~120% at 40-fold. The sample CV of TRACERlab MX was 0% at all dilution rates, whereas the spike CV was 1.4~4.8% at 1-fold dilution and 0.6~19.9% at 10-fold; spike recovery was 35~72% at 1-fold dilution and 77~107% at 10-fold. Conclusion: The gel clot appears not to occur because of H3PO4, which binds the Mg2+ ion that contributes to gelation inside the PCT. Dilution, which is equivalent to reducing the amount of H3PO4, could accordingly remove the interfering effects. Spike recovery within 70~150% - the range recommended by the supplier - was obtained at 40-fold dilution even in the kinetic chromogenic method.
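For readers unfamiliar with the quantities reported above, CV and spike recovery are computed as sketched below; the replicate readings are hypothetical values for illustration (not the paper's data), with the supplier's recommended recovery window of 70~150% in mind:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def spike_recovery(measured_spiked, measured_unspiked, spike_added):
    """Recovery (%) of a known endotoxin spike added to the sample."""
    return (measured_spiked - measured_unspiked) / spike_added * 100

# Hypothetical replicate readings (EU/mL) at 40-fold dilution
replicates = [0.052, 0.049, 0.051]
print(round(coefficient_of_variation(replicates), 1))   # 3.0
print(round(spike_recovery(0.55, 0.05, 0.5), 1))        # 100.0
```

A recovery far below 100%, as seen at low dilutions here, signals that something in the sample (e.g. H3PO4) is interfering with the assay.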


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With the partial derivatives, software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding ranks in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on the length of the code; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And if you are learning deep learning models, it also matters whether there are enough examples and references.
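The chain-rule mechanics over a computational graph described above can be sketched with a toy reverse-mode automatic differentiation routine; this is a simplified illustration of the idea, not the implementation used by any of the three frameworks:

```python
class Node:
    """A node in a computational graph holding a value and a gradient."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_derivative) pairs
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(output):
    """Propagate d(output)/d(node) along each edge via the chain rule."""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad
            stack.append(parent)

# f(x, y) = x * y + x  ->  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = add(mul(x, y), x)
backward(f)
print(x.grad, y.grad)  # 5.0 3.0
```

Each edge stores only a local derivative; the full gradient of any node with respect to any variable falls out of multiplying and summing along paths, which is exactly what the frameworks automate at scale.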

Upper Body Surface Change Analysis using 3-D Body Scanner (3차원 인체 측정기를 이용한 체표변화 분석)

  • Lee Jeongran;Ashdoon Susan P.
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.29 no.12 s.148
    • /
    • pp.1595-1607
    • /
    • 2005
  • Three-dimensional (3-D) body scanners used to capture anthropometric measurements are now becoming a common research tool for apparel. This study had two goals: to test the accuracy and reliability of 3-D measurements of dynamic postures, and to analyze the change in upper body surface measurements between the standard anthropometric position and various dynamic positions. Differences between body surface measurements from two measuring methods, 3-D scan measurements using virtual tools on the computer screen and traditional manual measurements, for a standard anthropometric posture and for a posture with shoulder flexion were $-2{\sim}20$ mm. Girth items showed some disagreement of values between the two methods. None of the measurements were significantly different, except for the neckbase girth, for any of the measuring methods or postures. Scan measurements of the upper body items showed significant linear surface change in the dynamic postures. Shoulder length, interscye front and back, and biacromion length were the items most affected in the dynamic postures. Changes of linear body surface were very similar for the two measuring methods within the same posture. The repeatability of data taken from the 3-D scans using virtual tools showed satisfactory results. Scan measurements repeated three times for the scapula protraction and scapula elevation postures proved statistically the same for all measurement items. Measurements from automatic measuring software that measured the 3-D scan with no manual intervention were compared with the measurements using virtual tools. Many measurements from the automatic program were larger and showed quite different values.

Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge (지식 간 상호참조적 네비게이션이 가능한 온톨로지 기반 프로세스 중심 지식지도)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.61-83
    • /
    • 2012
  • A knowledge map describes the network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorizing and archiving by defining the relationship of referential navigation between knowledge. Referential navigation between knowledge means the relationship of cross-referencing exhibited when a piece of knowledge is utilized by a user. To understand the contents of the knowledge, a user usually requires additional information or knowledge related to it in a relation of cause and effect. This relation can expand as the effective connections between knowledge increase, and finally forms a network of knowledge. A network display of knowledge using nodes and links to arrange and represent the relationships between concepts can provide a more complex knowledge structure than a hierarchical display. Moreover, it can help a user infer through the links shown on the network. For this reason, building a knowledge map based on ontology technology has been emphasized as a way to describe knowledge and its relationships formally as well as objectively. As the necessity of building a knowledge map on the structure of an ontology has been emphasized, quite a few studies have been proposed to fulfill this need. However, most of those studies applying the ontology to build a knowledge map focused only on formally expressing knowledge and its relationships with other knowledge to promote the possibility of knowledge reuse. Although many types of knowledge maps based on the structure of an ontology have been proposed, no study has tried to design and implement a referential navigation-enabled knowledge map. This paper addresses a methodology to build an ontology-based knowledge map enabling referential navigation between knowledge.
The ontology-based knowledge map resulting from the proposed methodology can not only express the referential navigation between knowledge but also infer additional relationships among knowledge based on the referential relationships. The most highlighted benefits delivered by applying ontology technology to the knowledge map include: formal expression of knowledge and its relationships with others; automatic identification of the knowledge network based on self-inference over the referential relationships; and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between knowledge included in the knowledge map, and therefore to form the knowledge map in the format of a network, the ontology must describe knowledge according to its relation with process and task. A process is composed of component tasks, while a task is activated after any required knowledge is inputted. Since the relation of cause and effect between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be indirectly implemented if the knowledge is modeled as an input or output of each task. To describe the knowledge with respect to related processes and tasks, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology but entailed by the semantics. Therefore a knowledge network can be automatically formulated based on the defined relationships, and referential navigation between knowledge is enabled.
To verify the validity of the proposed concepts, two real business process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. By applying the 'DL-Query' plug-in module provided by Protege-OWL, the performance of the implemented ontology-based knowledge map was examined. Two kinds of queries were tested: one to check whether the knowledge is networked with respect to the referential relations, and one to check whether the ontology-based knowledge network can infer further facts that are not literally described. The test results show that not only was the referential navigation between knowledge correctly realized, but the additional inference was also accurately performed.
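The task input/output modeling described above can be mimicked in plain code; the process, task, and knowledge names below are hypothetical, and this is a sketch of the idea rather than the Protege-OWL/DL-Query implementation:

```python
# Each task maps to (input knowledge, output knowledge). A referential link
# runs from every input of a task to every output of the same task, so the
# knowledge network falls out of the process model automatically.
tasks = {
    "fill_trip_form": (["travel_policy"], ["trip_application"]),
    "approve_trip":   (["trip_application"], ["approved_trip"]),
    "book_ticket":    (["approved_trip"], ["ticket"]),
}

def knowledge_network(tasks):
    links = set()
    for inputs, outputs in tasks.values():
        for k_in in inputs:
            for k_out in outputs:
                links.add((k_in, k_out))
    return links

def reachable(links, start):
    """Transitive closure from one knowledge item (the 'inference' step)."""
    seen, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        for a, b in links:
            if a == node and b not in seen:
                seen.add(b)
                frontier.add(b)
    return seen

net = knowledge_network(tasks)
print(sorted(reachable(net, "travel_policy")))
# ['approved_trip', 'ticket', 'trip_application']
```

The `reachable` step plays the role of the ontology's self-inference: 'ticket' is never directly linked to 'travel_policy', yet the network entails the connection.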

Consideration of Normal Variation of Perfusion Measurements in the Quantitative Analysis of Myocardial Perfusion SPECT: Usefulness in Assessment of Viable Myocardium (심근관류 SPECT의 정량적 분석에서 관류정량값 정상변이의 고려: 생존심근 평가에서의 유용성)

  • Paeng, Jin-Chul;Lim, Il-Han;Kim, Ki-Bong;Lee, Dong-Soo
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.4
    • /
    • pp.285-291
    • /
    • 2008
  • Purpose: Although automatic quantification software for myocardial perfusion SPECT provides highly objective and reproducible quantitative measurements, there are still some limitations in the direct use of these measurements. In this study we derived parameters using the normal variation of perfusion measurements and tested the usefulness of these parameters. Materials and Methods: To calculate the normal variation of perfusion measurements on myocardial perfusion SPECT, 55 patients (M:F = 28:27) with a low likelihood of coronary artery disease were enrolled, and $^{201}Tl$ rest/$^{99m}Tc$-MIBI stress SPECT studies were performed. Using a 20-segment model, the mean (m) and standard deviation (SD) of perfusion were calculated for each segment. As a myocardial viability assessment group, another 48 patients with known coronary artery disease who underwent coronary artery bypass graft surgery (CABG) were enrolled. $^{201}Tl$ rest/$^{99m}Tc$-MIBI stress/$^{201}Tl$ 24-hr delayed SPECT was performed before CABG, and SPECT was followed up 3 months after CABG. From the preoperative 24-hr delayed SPECT, $Q_{delay}$ (perfusion measurement), ${\Delta}_{delay}$ ($Q_{delay}$ - m) and $Z_{delay}$ (($Q_{delay}$ - m)/SD) were defined, and their diagnostic performances for myocardial viability were evaluated using the area under the curve (AUC) on receiver operating characteristic (ROC) curve analysis. Results: Segmental perfusion measurements showed considerable normal variation among segments. In men, the lowest segmental perfusion measurement was $51.8{\pm}6.5$ and the highest was $87.0{\pm}5.9$; in women they were $58.7{\pm}8.1$ and $87.3{\pm}6.0$, respectively. In the viability assessment, $Q_{delay}$ showed an AUC of 0.633, while those for ${\Delta}_{delay}$ and $Z_{delay}$ were 0.735 and 0.716, respectively. The AUCs of ${\Delta}_{delay}$ and $Z_{delay}$ were significantly higher than that of $Q_{delay}$ (p = 0.001 and 0.018, respectively).
The diagnostic performance of ${\Delta}_{delay}$, which showed the highest AUC, was a sensitivity of 85% and a specificity of 53% at the optimal cutoff of -24.7. Conclusion: On automatic quantification of myocardial perfusion SPECT, the normal variation of perfusion measurements among segments was considerable. In the viability assessment, the parameters considering normal variation showed better diagnostic performance than the direct perfusion measurement. This study suggests that consideration of normal variation is important in the analysis of measurements on quantitative myocardial perfusion SPECT.
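The normalized parameters defined above reduce to simple arithmetic; a sketch with a hypothetical segment value of 45 against the 58.7 ± 8.1 normal values quoted for the lowest female segment (the measured value is invented for illustration):

```python
def delta(q, mean):
    """Delta = Q - m: deviation of a segment's perfusion from the normal mean."""
    return q - mean

def z_score(q, mean, sd):
    """Z = (Q - m)/SD: the same deviation in units of normal variation."""
    return (q - mean) / sd

# Hypothetical segment measured at 45 where normals run 58.7 +/- 8.1
print(round(delta(45.0, 58.7), 1))         # -13.7
print(round(z_score(45.0, 58.7, 8.1), 2))  # -1.69
```

The point of the normalization is that a raw value of 45 means something very different in a segment whose normal mean is 51.8 than in one whose mean is 87.3, which is why Delta and Z outperform the raw measurement.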

Development of remote control automatic fire extinguishing system for fire suppression in double-deck tunnel (복층터널 화재대응을 위한 원격 자동소화 시스템 개발 연구)

  • Park, Jinouk;Yoo, Yongho;Kim, Yangkyun;Park, Byoungjik;Kim, Whiseong;Park, Sangheon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.1
    • /
    • pp.167-175
    • /
    • 2019
  • To deal effectively with a tunnel fire, which is mostly a vehicle fire, it is most important to suppress the fire at an early stage. In an urban tunnel, however, fire-fighter access to the scene of a fire is very limited due to severe traffic congestion, which hinders timely firefighting activity, and this problem is further worsened in underground roads (double-deck tunnels), which are increasingly extended and deepened. In preparation for such disasters in Korea, the range of life safety facilities to be installed is defined based on categories of extension and fire protection, referring to a risk hazard index determined by tunnel length and conditions; to deal directly with a tunnel fire, fire extinguishers, indoor hydrants, and sprinklers are designated as mandatory facilities depending on the category. But such fire extinguishing installations have been found functionally and technically inadequate, and thus measures to improve the system need to be taken. Particularly in a double-deck tunnel, which accommodates traffic in both directions within a single tunnel whose section is divided by an intermediate slab, a facility or system that functions more rapidly and effectively is all the more important. This study is thus intended to remedy the problems with the existing tunnel life safety (fire extinguishing) system and to develop a remote-controlled automatic fire extinguishing system optimized for a double-deck tunnel. Consequently, a system considering the low floor height and extended length, as well as an indoor hydrant for a wide range of uses, has been developed, together with performance verification, and the process of commercialization before application to the tunnel is now underway.

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen;Zhuang, Li;Yeom, Sun;Shin, Hyu-Soung
    • Tunnel and Underground Space
    • /
    • v.31 no.5
    • /
    • pp.374-384
    • /
    • 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. This approach is built on top of the two-stage object detection deep learning algorithm called Faster R-CNN, with the major modification of using a rotated bounding box. The use of a rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation relating to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in size and shape of fractures. Compared to the commonly used bounding box (i.e., the axis-aligned bounding box), the rotated bounding box shows greater adaptability in fitting the elongated shape of a fracture, thereby minimizing the ratio of background within the bounding box. Besides, an additional benefit of the rotated bounding box is that it can provide relative information on the orientation and length of a fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we train and test it with a number of CT image sets of fractured granite specimens with highly heterogeneous backgrounds and of other rocks such as sandstone and shale. The results demonstrate that our approach achieves encouraging fracture detection results, with a mean average precision (mAP) of up to 0.89, and also outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
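The background-reduction argument can be quantified with simple geometry (assumed dimensions, not the paper's measurements): a fracture approximated by an L x W rectangle rotated by theta fills a rotated bounding box exactly, while its axis-aligned box grows to (L cos(theta) + W sin(theta)) x (L sin(theta) + W cos(theta)):

```python
import math

def background_fraction(length, width, theta_deg):
    """Fraction of the axis-aligned bounding box NOT covered by an
    elongated length x width object rotated by theta degrees."""
    t = math.radians(theta_deg)
    aabb_area = ((length * math.cos(t) + width * math.sin(t))
                 * (length * math.sin(t) + width * math.cos(t)))
    return 1 - (length * width) / aabb_area

# A thin 100 x 5 "fracture" at 45 degrees: the axis-aligned box is mostly
# background, whereas a rotated box would keep the background share near 0.
print(round(background_fraction(100, 5, 45), 2))  # 0.91
```

For thin, steeply inclined fractures the axis-aligned box is overwhelmingly background, which is the regression-target noise the rotated box removes.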

Performance Evaluation of Chest X-ray Image Deep Learning Classification Model according to Application of Optimization Algorithm and Learning Rate (최적화 알고리즘과 학습률 적용에 따른 흉부 X선 영상 딥러닝 분류 모델 성능평가)

  • Ji-Yul Kim;Bong-Jae Jeong
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.5
    • /
    • pp.531-540
    • /
    • 2024
  • Recently, research and development on automatic diagnosis solutions in the medical imaging field using deep learning have been actively underway. In this study, we sought a fast and accurate deep learning model for classifying pneumonia in chest images using Inception V3, a deep learning model based on a convolutional neural network. To this end, the optimization algorithms AdaGrad, RMSProp, and Adam were applied to the deep learning modeling, with learning rates of 0.01 and 0.001 applied selectively, and the pneumonia classification performance on chest X-ray images was then compared and evaluated. As a result, in validation modeling, which can evaluate the performance of the classification model and the learning state of the neural network, the performance of deep learning modeling for classifying the presence or absence of pneumonia in chest X-ray images was best when Adam was applied as the optimization algorithm with a learning rate of 0.001. Adam, which is the optimization algorithm most commonly applied when designing deep learning models, showed excellent performance and excellent metric results with both learning rates of 0.01 and 0.001. In the metric evaluation of test modeling, AdaGrad with a learning rate of 0.01 showed the best results. Based on these results, when designing deep learning modeling for binary medical image classification, in order to obtain fast and accurate performance, we recommend preferentially applying a learning rate of 0.001 when using Adam as the optimization algorithm, and a learning rate of 0.01 when using AdaGrad.
In addition, it is expected that the results of this study will serve as basic data for similar research in the future and will be useful in the health and bio industries for the automatic diagnosis of medical images using deep learning.
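AdaGrad and Adam, two of the optimizers compared above, differ only in how they scale the gradient step. A minimal single-parameter sketch of the standard textbook update rules (this is not the paper's Inception V3 code, and the quadratic objective is purely illustrative):

```python
import math

def adagrad_step(w, grad, state, lr=0.01, eps=1e-8):
    """AdaGrad: divide the step by the root of the summed squared gradients."""
    state["g2"] = state.get("g2", 0.0) + grad ** 2
    return w - lr * grad / (math.sqrt(state["g2"]) + eps)

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected running moments of the gradient."""
    t = state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * grad
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** t)
    v_hat = state["v"] / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 (gradient 2w) with each optimizer and its
# recommended learning rate from the study
w_a, w_m, state_a, state_m = 1.0, 1.0, {}, {}
for _ in range(500):
    w_a = adagrad_step(w_a, 2 * w_a, state_a, lr=0.01)
    w_m = adam_step(w_m, 2 * w_m, state_m, lr=0.001)
print(abs(w_a) < 1.0, abs(w_m) < 1.0)  # True True
```

AdaGrad's accumulated denominator only grows, so its effective step shrinks over training, while Adam's moving averages keep the step roughly at the learning rate; this is one reason the two favor different learning rates.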

The Consideration of the Region of Interest on $^{99m}Tc$-DMSA Renal Scan in Pediatric Hydronephrosis Patients (수신증을 진단 받은 소아 환자의 DMSA 신장 검사에서 정확한 관심영역 설정에 대한 고찰)

  • NamKoong, Hyuk;Lee, Dong-Hyuk;Oh, Shin-Hyun;Cho, Seok-Won;Park, Hoon-Hee;Kim, Jung-Yul;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.27-33
    • /
    • 2012
  • Purpose: Diagnosis in pediatric hydronephrosis patients is mostly performed with a $^{99m}Tc$-DMSA renal scan. A region of interest (ROI) is set for comparative analysis of the left-right kidney uptake ratio after acquiring the image. But if the equipment sets an automatic ROI, the ROI may include the renal pelvis expanded due to hydronephrosis, and the left-right kidney uptake ratio will then be incorrect. Therefore, this study compared ROIs including and excluding the expanded renal pelvis through experiments using a normal kidney phantom and an expanded renal pelvis phantom, and suggested an improved ROI setting method. In addition, this study was assisted by a reading physician in distinguishing radiopharmaceutical uptake in the renal cortex from urine remaining in the expanded renal pelvis. Materials and Methods: Both renal phantoms were filled with water and shaken with 111 MBq of $^{99m}TcO_4$. To model the expanded renal pelvis, five latex balloons were each filled with 10 mL of water and mixed with 18.5, 37, 55.5, 74, and 92.5 MBq of $^{99m}TcO_4$, respectively. We also made phantoms with a fixed $^{99m}TcO_4$ activity of 37 MBq mixed with 5, 10, 15, 20, and 25 mL of water in each balloon. The left kidney phantom was kept in its fixed shape, and the right kidney phantom was modified into a hydronephrotic kidney by attaching the latex balloons. The acquired counts were 2 million. After acquisition, we compared the image with an ROI including the expanded renal pelvis and the image with an ROI excluding the renal pelvis, analyzing the difference in the left-right kidney uptake ratio; for reproducibility, the ROI was set 5 times on the same images. Patients were injected with 1.5~1.9 MBq/kg of $^{99m}Tc$-DMSA and scanned 3 to 4 hours after injection. Three skilled radiologic technologists each performed the comparative estimation by setting the ROI. To determine statistical significance between the two data sets, the Wilcoxon Signed Ranks Test (SPSS ver. 17) was used.
Results: In the renal phantom experiment, we compared the average count-to-background (BKG) ratios between the ROI including the expanded renal pelvis and the ROI excluding it, and obtained changed counts and changed ratios; the same results were obtained for patients. In addition, with the help of the reading physician, the radiopharmaceutical uptake in the expanded renal pelvis was identified as residual urine that could not descend to the ureter. Conclusion: From the above results, in the analysis of the left-right kidney uptake ratio in hydronephrosis, an ROI including the expanded renal pelvis abnormally increased the uptake ratio compared with an ROI excluding it. The automatic ROI is generally used because of its convenience and prompt analysis, but in hydronephrosis studies a manual ROI excluding the expanded renal pelvis should be set for accurate observation of the left-right kidney uptake ratio, since the radiopharmaceutical uptake in the expanded renal pelvis is residual urine.
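The left-right uptake ratio at stake above is a simple count ratio after background subtraction; a short sketch with hypothetical counts (not the study's data):

```python
def uptake_ratio(left_counts, right_counts, bkg_left=0, bkg_right=0):
    """Percent relative uptake of each kidney after background subtraction."""
    l = left_counts - bkg_left
    r = right_counts - bkg_right
    total = l + r
    return 100 * l / total, 100 * r / total

# Hypothetical: an ROI including the dilated renal pelvis inflates the
# right-kidney count versus an ROI drawn around the cortex only.
print(uptake_ratio(48000, 52000))   # (48.0, 52.0) - pelvis included
print(tuple(round(x, 1) for x in uptake_ratio(48000, 46000)))
# (51.1, 48.9) - pelvis excluded
```

Because residual urine in the pelvis contributes counts without reflecting cortical function, the two ROI choices can flip which kidney appears dominant, which is the error the manual ROI avoids.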


A Study on the Creep Deformation Characteristic of AZ31 Mg Alloy at High Temperature (AZ3l 마그네슘 합금의 고온 크리이프 변형특성에 관한 연구)

  • An Jungo;Kang Daemi;Koo Yang;Sim Sungbo
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.13 no.3
    • /
    • pp.186-192
    • /
    • 2005
  • The apparent activation energy $Q_c$, the applied stress exponent n, and the rupture life were determined from creep test results of AZ31 Mg alloy over the temperature range of $200^{\circ}C$ to $300^{\circ}C$ and the stress range of 23.42 MPa to 93.59 MPa, in order to investigate the creep behavior. Constant-load creep tests were carried out in equipment including an automatic temperature controller with a data acquisition computer. At temperatures of $200^{\circ}C{\sim}220^{\circ}C$ under stress levels of 62.43~93.59 MPa, and at around $280^{\circ}C{\sim}300^{\circ}C$ under stress levels of 23.42~39.00 MPa, the creep behavior obeyed a simple power law relating the steady-state creep rate to the applied stress, and the activation energy for the creep deformation was nearly equal to that of self-diffusion in the aluminum-containing Mg alloy. From the above results, at $200^{\circ}C{\sim}220^{\circ}C$ the creep deformation of AZ31 Mg alloy appeared to be controlled by dislocation climb, but at $280^{\circ}C{\sim}300^{\circ}C$ by dislocation glide. The relationships between rupture time and stress, at around $200^{\circ}C{\sim}220^{\circ}C$ under stress levels of 62.43~93.59 MPa and at around $280^{\circ}C{\sim}300^{\circ}C$ under stress levels of 23.42~39.00 MPa respectively, appeared as follows: $\log\sigma = -0.18(T+460)(\log t_r+21)+5.92$ and $\log\sigma = -0.25(T+460)(\log t_r+21)+8.02$. The relationship between rupture time and steady-state creep rate appeared as: $\ln\dot{\varepsilon} = -0.88\ln t_r - 2.45$
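The rupture time versus steady-state creep rate relation reported above (ln of the creep rate against ln of the rupture time, with fitted coefficients -0.88 and -2.45) can be inverted to estimate rupture life from a measured creep rate; a sketch assuming the units used in the paper's fit:

```python
import math

def rupture_time_from_creep_rate(eps_dot):
    """Invert ln(eps_dot) = -0.88 * ln(t_r) - 2.45 for the rupture time t_r."""
    return math.exp((-2.45 - math.log(eps_dot)) / 0.88)

# A slower steady-state creep rate implies a longer rupture life
print(rupture_time_from_creep_rate(1e-4) > rupture_time_from_creep_rate(1e-3))  # True
```

This Monkman-Grant type inversion is only as good as the fitted range of the original data (200~300 degrees C, 23.42~93.59 MPa) and should not be extrapolated beyond it.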