• Title/Summary/Keyword: 기술신뢰 (technology trust)


Analysis of Variation for Parallel Test between Reagent Lots in in-vitro Laboratory of Nuclear Medicine Department (핵의학 체외검사실에서 시약 lot간 parallel test 시 변이 분석)

  • Chae, Hong Joo;Cheon, Jun Hong;Lee, Sun Ho;Yoo, So Yeon;Yoo, Seon Hee;Park, Ji Hye;Lim, Soo Yeon
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.2 / pp.51-58 / 2019
  • Purpose: In nuclear medicine in-vitro laboratories, a comparability test (parallel test) is performed whenever the reagent lot changes, to determine whether results are reliable across lots. The criterion most commonly used in domestic laboratories is the %difference between the results obtained with the two reagent lots; many laboratories set the acceptance limit at less than 20% for low concentrations and less than 10% for middle and high concentrations. If a result falls outside this range, the test is considered failed and is repeated until the result is within the acceptance range. In this study, several tests performed in nuclear medicine in-vitro laboratories were selected to analyze parallel-test results and to establish customized %difference criteria for each test. Materials and Methods: From January to November 2018, parallel-test results for reagent lot changes were analyzed for 7 items: thyroid-stimulating hormone (TSH), free thyroxine (FT4), carcinoembryonic antigen (CEA), CA-125, prostate-specific antigen (PSA), HBs-Ab, and insulin. The RIA-MAT 280 system, based on the immunoradiometric assay (IRMA) principle, was used for TSH, FT4, CEA, CA-125, and PSA. TECAN automated dispensing equipment and a GAMMA-10 counter were used for the insulin test; for HBs-Ab, HAMILTON automated dispensing equipment and a Cobra gamma counter were used. Separate reagents, customized calibrators, and quality control materials were used in this experiment. Results (%difference reported as Max / Mean / Median / Min; p-value by t-test > 0.05 for every item):
    1. TSH: C-1 (low concentration) 14.8 / 4.4 / 3.7 / 0.0; C-2 (middle concentration) 10.1 / 4.2 / 3.7 / 0.0
    2. FT4: C-1 (low concentration) 10.0 / 4.2 / 3.9 / 0.0; C-2 (high concentration) 9.6 / 3.3 / 3.1 / 0.0
    3. CA-125: C-1 (middle concentration) 9.6 / 4.3 / 4.3 / 0.3; C-2 (high concentration) 6.5 / 3.5 / 4.3 / 0.4
    4. CEA: C-1 (low concentration) 9.8 / 4.2 / 3.0 / 0.0; C-2 (middle concentration) 8.7 / 3.7 / 2.3 / 0.3
    5. PSA: C-1 (low concentration) 15.4 / 7.6 / 8.2 / 0.0; C-2 (middle concentration) 8.8 / 4.5 / 4.8 / 0.9
    6. HBs-Ab: C-1 (middle concentration) 9.6 / 3.7 / 2.7 / 0.2; C-2 (high concentration) 8.9 / 4.1 / 3.6 / 0.3
    7. Insulin: C-1 (middle concentration) 8.7 / 3.1 / 2.4 / 0.9; C-2 (high concentration) 8.3 / 3.2 / 1.5 / 0.1
    In some low-concentration measurements, the %difference rose above 10% and approached 15% when the target value was calculated at a lower concentration. In addition, when a sample was measured immediately after Standard level 6, the highest standard in the dispensing sequence, the result may have been affected by a hook effect. Overall, there was no significant difference between lots of quality control material (p-value > 0.05). Conclusion: Variation between reagent lots is small in immunoradiometric assays, likely because the selected items have relatively high detection rates in the immunoradiometric method and because out-of-range results were remeasured several times. In most tests the difference was less than 10%, within the standard range. TSH control level 1 and PSA control level 1, which have low-concentration target values, exceeded 10% more than twice but never approached 20%. Accordingly, a longer observation period is required to obtain more homogeneous averages and laboratory-specific acceptance criteria for each item, and further studies that account for various other variables are advised. A minimal computation of this acceptance check is sketched below.
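The %difference acceptance rule above reduces to a small calculation. Below is a minimal sketch in Python, assuming the %difference is taken relative to the current (old) lot's result; the abstract only states that it is derived from the difference between the two lots' results, so the exact denominator and the control values shown are assumptions for illustration.

```python
# Minimal sketch of the parallel-test acceptance check for a reagent lot change.
# Assumption: %difference = |new - old| / old * 100; some laboratories divide
# by the mean of the two results instead.

def percent_difference(old_lot: float, new_lot: float) -> float:
    """%difference between results of the same control material on two reagent lots."""
    return abs(new_lot - old_lot) / old_lot * 100.0

def passes_parallel_test(old_lot: float, new_lot: float, low_concentration: bool) -> bool:
    """Common domestic criterion: <20% at low, <10% at middle/high concentration."""
    limit = 20.0 if low_concentration else 10.0
    return percent_difference(old_lot, new_lot) < limit

if __name__ == "__main__":
    # Hypothetical TSH control level 1 (low concentration) results, uIU/mL.
    old, new = 0.54, 0.61
    diff = percent_difference(old, new)
    verdict = "pass" if passes_parallel_test(old, new, low_concentration=True) else "repeat test"
    print(f"%difference = {diff:.1f} -> {verdict}")
```

A failed check would trigger the remeasurement loop the abstract describes: the test is repeated until the result falls within the acceptance range.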

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log processing system is needed to gather, store, categorize, and analyze them. However, existing computing environments make it difficult both to realize the flexible storage expansion required for massive unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. In this study, we therefore use cloud computing technology to realize a cloud-based system for processing unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure.

    The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage needs or a rapid increase in log data. To overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the system can automatically restore itself and continue operating after a malfunction.

    Finally, a distributed database built on the NoSQL database MongoDB provides an effective way to process unstructured log data. Relational databases such as MySQL have complex, fixed schemas that are inappropriate for unstructured log data, and their strict schemas cannot expand across nodes when rapidly growing data must be distributed. NoSQL databases do not offer the complex computations that relational databases provide, but they can easily expand through node dispersion as data grow rapidly; they are non-relational databases whose structure suits unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented; of these, the proposed system uses MongoDB, a representative document-oriented database with a free schema structure. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion when data grow rapidly, and its Auto-Sharding function automatically expands storage, as sketched below.
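To make the flexible-schema storage concrete, here is a minimal pymongo sketch. The database and collection names (bank_logs, logs), the document fields, and the connection URI are hypothetical, and the Auto-Sharding commands shown in the comments would only run against the mongos router of a sharded cluster.

```python
# Minimal sketch: storing heterogeneous (schema-free) bank log documents in MongoDB.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical server
db = client["bank_logs"]

# Documents of different shapes coexist in one collection: no fixed schema is
# required, which is what makes unstructured log data easy to ingest.
db.logs.insert_many([
    {"type": "transaction", "ts": datetime.now(timezone.utc),
     "branch": "HQ", "amount": 150000, "status": "ok"},
    {"type": "login", "ts": datetime.now(timezone.utc),
     "user": "teller-042", "ip": "10.0.0.7"},
])

# Auto-Sharding is enabled per database/collection. On a sharded cluster
# (connected through mongos), the equivalent admin commands would be:
#   client.admin.command("enableSharding", "bank_logs")
#   client.admin.command("shardCollection", "bank_logs.logs", key={"ts": "hashed"})
```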
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects them, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module produces the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module; log data aggregated per unit time are stored in the MongoDB module and plotted according to the user's various analysis conditions (a sketch of such a per-unit-time aggregation follows below). The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance evaluations over various chunk sizes.
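As an illustration of the per-unit-time aggregation mentioned above, here is a hedged sketch of a query the log graph generator module might run against the MongoDB module. The field names follow the hypothetical documents in the previous sketch; the actual system's schema and bucketing interval are not specified in the abstract.

```python
# Hedged sketch: counting log records per hour, the kind of per-unit-time
# aggregation the log graph generator module would plot for the user.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical server
db = client["bank_logs"]

pipeline = [
    {"$match": {"type": "transaction"}},            # one log type at a time
    {"$group": {                                    # bucket timestamps by hour
        "_id": {"$dateToString": {"format": "%Y-%m-%d %H:00", "date": "$ts"}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},                          # chronological order
]

for row in db.logs.aggregate(pipeline):
    print(row["_id"], row["count"])  # data points for the web-interface graph
```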