• Title/Summary/Keyword: Workflow component


Three-Dimensional Approaches in Histopathological Tissue Clearing System (조직투명화 기술을 통한 3차원적 접근)

  • Lee, Tae Bok;Lee, Jaewang;Jun, Jin Hyun
    • Korean Journal of Clinical Laboratory Science / v.52 no.1 / pp.1-17 / 2020
  • Three-dimensional microscopic approaches in histopathology capture multiplexed properties of specimens as comprehensive volumetric information. This information includes the spatial distribution of molecules, three-dimensional co-localization, and structural formation, a whole data set that cannot be obtained from two-dimensional section slides because of the inevitable loss of spatial information. Advances in optical instruments such as two-photon microscopy and high-performance objectives with motorized correction collars have narrowed the gap between optical theory and the reality of deep-tissue imaging. However, the benefits gained from a prolonged working distance, a two-photon laser, and optimized beam alignment are inevitably diminished by light scattering, which is closely related to the refractive index mismatch between each cellular component and the surrounding medium. From the first crude refractive index matching techniques to the recent cutting-edge integrated tissue clearing methods, achieving transparency without morphological denaturation and removing natural and fixation-induced nonspecific autofluorescence from the real signal are the key factors that determine the quality of tissue clearing and of immunofluorescent staining for high-contrast images. Within an integrated laboratory workflow for processing frozen and formalin-fixed tissues, clear lipid-exchanged acrylamide-hybridized rigid imaging/immunostaining/in situ hybridization-compatible tissue hydrogel (CLARITY), an equipment-based tissue clearing method, is compatible with routine procedures in a histopathology laboratory.
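
The physical crux above is the refractive index mismatch between cellular components and the surrounding medium. As a minimal illustration (not taken from the paper), the Python sketch below applies Snell's law with assumed, typical refractive index values for membrane lipid and for an aqueous versus an index-matched medium, showing how matching suppresses the ray deviation that drives scattering and blur in deep-tissue imaging.

```python
import math

# Minimal sketch, assuming typical literature refractive index (RI) values;
# the numbers are illustrative, not measurements from the paper.

def refraction_angle_deg(n_in, n_out, incidence_deg):
    """Refracted-ray angle from Snell's law: n_in * sin(i) = n_out * sin(r)."""
    s = n_in * math.sin(math.radians(incidence_deg)) / n_out
    if abs(s) > 1.0:
        return None  # total internal reflection; no transmitted ray
    return math.degrees(math.asin(s))

INCIDENCE = 30.0   # example angle of incidence at a lipid/medium boundary
N_LIPID = 1.45     # assumed RI of membrane lipid
MEDIA = {"aqueous medium (n=1.33)": 1.33, "index-matched medium (n=1.45)": 1.45}

for label, n_medium in MEDIA.items():
    r = refraction_angle_deg(N_LIPID, n_medium, INCIDENCE)
    print(f"{label}: refracted {r:.2f} deg, deviation {abs(r - INCIDENCE):.2f} deg")
```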

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous-City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT) and includes many networked video cameras. The networked video cameras support many U-City services as one of the main input sources together with sensors, and they continuously generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. The accumulated video data must also often be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a long time. Current research tries to reduce the processing time of such big video data, and cloud computing can be a good solution. Among the many cloud computing methodologies that can address the matter, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, which leads to exponential growth of the data produced by the networked cameras; we are coping with real big data when we deal with video image data produced by high-quality cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Because video data are unstructured, it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analyzing system for video surveillance, a cloud-computing based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The "video monitor" consists of the "video translator" and the "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming-IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages the network bottleneck to smooth the data stream. The "storage client" receives the video data from the "streaming-IN" component, stores them in the storage, and helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol: the "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image, and the "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce: the workflow of video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and we found that the proposed system worked well. The performance evaluation results are presented in this paper with analysis. With our cluster system, we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as video storage. We measured the processing time according to the number of frames per mapper. By tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
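
To make the frames-per-mapper idea concrete, the following Python sketch shows a Hadoop Streaming-style mapper/reducer pair. It is not the paper's implementation: the tab-separated input format, the detect_event stub, and the per-video aggregation are assumptions made for the example.

```python
#!/usr/bin/env python3
# Illustrative Hadoop Streaming mapper/reducer: each input line names one frame
# (or frame chunk) of a video; the mapper applies a stub detector and the
# reducer aggregates per-video event counts.
import sys

def detect_event(frame_ref):
    # Placeholder for real image analysis (e.g., decoding stored H.264 frames
    # and running a detector); this sketch always reports "no event".
    return False

def run_mapper():
    for line in sys.stdin:
        video_id, frame_ref = line.rstrip("\n").split("\t", 1)
        if detect_event(frame_ref):
            print(f"{video_id}\t1")

def run_reducer():
    counts = {}
    for line in sys.stdin:
        video_id, n = line.rstrip("\n").split("\t")
        counts[video_id] = counts.get(video_id, 0) + int(n)
    for video_id, total in counts.items():
        print(f"{video_id}\t{total}")

if __name__ == "__main__":
    # Invoked by Hadoop Streaming as the -mapper ("map") or -reducer stage.
    run_mapper() if sys.argv[1:] == ["map"] else run_reducer()
```

The split size per mapper (how many such lines each map task receives) is exactly the tuning knob whose effect on processing time the paper measures.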

Clinical Implementation of 3D Printing in the Construction of Patient Specific Bolus for Photon Beam Radiotherapy for Mycosis Fungoides

  • Kim, Sung-woo;Kwak, Jungwon;Cho, Byungchul;Song, Si Yeol;Lee, Sang-wook;Jeong, Chiyoung
    • Progress in Medical Physics / v.28 no.1 / pp.33-38 / 2017
  • Creating individualized build-up material for superficial photon beam radiation therapy on an irregular surface is difficult with rice or the commonly used flat bolus. In this study, we implemented a workflow using a 3D printed patient-specific bolus and describe our clinical experience. To provide a build-up better fitted to the irregular surface, the 3D printing technique was used, with polylactic acid (PLA), which is processed from a nontoxic plant component, as the printer filament material for clinical use. The 3D printed bolus was designed from a virtual bolus structure delineated on the patient's CT images. Dose distributions were generated from treatment plans in which the bolus was assigned either a uniform relative electron density or the relative electron density taken from the CT image, and the two were compared to evaluate the inhomogeneity effect of the bolus material. Pretreatment QA was performed to verify, by gamma analysis, the relative electron density applied to the bolus structure, and optically stimulated luminescent dosimeters (OSLDs) were used for in-vivo dosimetry to measure the skin dose. The plan comparison shows that the discrepancies between the virtual bolus plan and the printed bolus plan are negligible (0.3% maximum dose difference and 0.2% mean dose difference). The dose distribution was evaluated with the gamma method (2%, 2 mm) at the center of the GTV, and the passing rate was 99.6%. The OSLD measurements were 0.3% to 2.1% higher than the expected dose at the treated lesion. In this study, a mycosis fungoides patient was treated with a patient-specific bolus made by 3D printing; the accuracy of the treatment plan was verified by pretreatment QA and in-vivo dosimetry, and the QA results together with the 4-month follow-up show that radiation treatment using a 3D printed bolus is feasible for irregular patient skin.
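
The 2%/2 mm gamma analysis mentioned above can be sketched in a few lines. The following Python code is a simplified one-dimensional, globally normalized gamma computation on toy dose profiles; it illustrates the method only and is not the clinical QA software used in the study.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions_mm, dose_crit=0.02, dta_mm=2.0):
    """1D gamma analysis: for each evaluated point, gamma is the minimum over all
    reference points of sqrt((dose diff / dose tolerance)^2 + (distance / DTA)^2);
    a point passes when gamma <= 1."""
    ref_dose, eval_dose = np.asarray(ref_dose, float), np.asarray(eval_dose, float)
    positions_mm = np.asarray(positions_mm, float)
    dose_tol = dose_crit * ref_dose.max()   # 2% of maximum dose (global normalization)
    gammas = []
    for x_e, d_e in zip(positions_mm, eval_dose):
        dose_term = (d_e - ref_dose) / dose_tol
        dist_term = (x_e - positions_mm) / dta_mm
        gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Toy profiles on a 1 mm grid: the "evaluated" profile is 1% hotter everywhere.
x = np.arange(0, 50, 1.0)
reference = 100.0 * np.exp(-((x - 25.0) / 15.0) ** 2)
evaluated = 1.01 * reference
print(f"gamma (2%, 2 mm) pass rate: {gamma_pass_rate(reference, evaluated, x):.1f}%")
```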

A Specification Technique for Product Line Core Assets using MDA / PIM (MDA / PIM을 이용한 제품계열 핵심자산의 명세 기법)

  • Min, Hyun-Gi;Han, Man-Jib;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications / v.32 no.9 / pp.835-846 / 2005
  • A Product Line (PL) is a set of products (applications) that share common assets in a domain. Product Line Engineering (PLE) is a set of principles, techniques, mechanisms, and processes that enables the instantiation of product lines; core assets, the common assets, are created and then instantiated to make products. Model Driven Architecture (MDA) is a software development paradigm that emphasizes automatically developing products from models. Therefore, we can obtain the advantages of both paradigms, PLE and MDA, if core assets are represented as a Platform Independent Model (PIM) in MDA with a predefined automation mechanism. A PLE framework at the PIM level has to be interpretable by MDA tools; however, there is no standard UML profile for representing core assets, and research on representing the PLE framework is not yet sufficient to generate core assets and products automatically. We represent core assets at the PIM level in terms of a structural view and a semantic view, and we suggest a method for representing the architecture, components, workflow, algorithms, and decision model. This method of representing the framework with PLE and MDA improves the productivity, applicability, maintainability, and quality of products.
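
Since the paper's UML profile for core assets is not reproduced here, the following Python sketch is only a hypothetical, code-level analogue of the decision-model idea: a core asset exposes variation points, and a product-specific set of decisions resolves each point when the asset is instantiated.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VariationPoint:
    """One point of variability in a core asset, with its allowed variants."""
    name: str
    options: List[str]
    default: Optional[str] = None

@dataclass
class CoreAsset:
    """A reusable component whose variation points are bound by a decision set."""
    name: str
    variation_points: List[VariationPoint] = field(default_factory=list)

    def instantiate(self, decisions: Dict[str, str]) -> Dict[str, object]:
        resolved = {}
        for vp in self.variation_points:
            choice = decisions.get(vp.name, vp.default)
            if choice not in vp.options:
                raise ValueError(f"no valid decision for {vp.name!r}: {choice!r}")
            resolved[vp.name] = choice
        return {"component": self.name, "bindings": resolved}

# Hypothetical core asset with two variation points; one decision is supplied
# explicitly and the other falls back to its default.
payment = CoreAsset("PaymentComponent", [
    VariationPoint("workflow", ["single-step", "two-step"], default="single-step"),
    VariationPoint("currency", ["KRW", "USD"]),
])
print(payment.instantiate({"currency": "KRW"}))
```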

INTEGRATION OF SSM AND IDEF TECHNIQUES FOR ANALYZING DOCUMENT MANAGEMENT PROCESSES

  • Vachara Peansupap;Udtaporn Theingkuen
    • International conference on construction engineering and project management / 2009.05a / pp.725-731 / 2009
  • Construction documents are recognized as an essential component for making decisions and supporting construction processes. In construction, the management of project documents is a complex process because of factors such as document types, stakeholder involvement, and document flow processes; inappropriate management of project documents can therefore cause delays or poor quality of work. Several information and communication technologies (ICT) have been proposed to overcome problems in document management practice in construction projects. However, the adoption of ICT may be limited by incompatibility with specific document workflows, and a lack of understanding when designing the document system may cause many problems during implementation and use. Thus, this paper proposes a framework that integrates the Soft Systems Methodology (SSM) concept and the Integrated Definition (IDEF) modeling technique for analyzing document management systems in construction projects. The research methodology is a case study of five main building construction projects. Qualitative data on problems and processes were collected by interviewing construction project participants such as main contractors, owners, consultants, and designers. The findings show the benefits of using SSM and IDEF: SSM helps identify the problems of managing construction documents in a rich-picture view, whereas IDEF illustrates the document flow in a construction project in detail. In addition, integrating the two concepts can be used to identify the root causes of process problems at the information level. As a result, this idea can be applied to analyze and design web-based document management systems in the future.
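
IDEF0 describes each activity by its inputs, controls, outputs, and mechanisms (ICOM). The following Python sketch encodes one hypothetical document-workflow activity in that form; the activity content is invented for illustration and is not taken from the case-study projects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Idef0Activity:
    """An IDEF0 activity box: inputs are transformed into outputs,
    constrained by controls and performed by mechanisms."""
    name: str
    inputs: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    mechanisms: List[str] = field(default_factory=list)

# Hypothetical activity from a shop-drawing approval flow.
review_shop_drawing = Idef0Activity(
    name="A2: Review shop drawing",
    inputs=["submitted shop drawing"],
    controls=["contract specification", "submittal schedule"],
    outputs=["approved drawing", "comment sheet"],
    mechanisms=["consultant engineer", "document management system"],
)
print(review_shop_drawing)

# Tracing a document flow then amounts to chaining activities whose outputs
# feed the inputs of the next activity, which is what an IDEF diagram depicts.
```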

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.81-90 / 2013
  • During the past decade, many changes have been attempted and new technologies continue to be developed in the computing area. The brick wall in computing, especially the power wall, has shifted the computing paradigm from hardware, including processors and system architecture, toward the programming environment and application usage. The high performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and Korea is considered one of the leading countries in the world, but not in the supercomputing area. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and a client file system on top of SSD and MAID storage servers. The MAHA system software is designed for user-friendliness and ease of use, based on integrated system management components such as Bio Workflow management, Integrated Cluster management, and Heterogeneous Resource management. The MAHA supercomputing system was first installed in December 2011 with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes; it will be upgraded to 100 TeraFLOPS in January 2013.
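
The installation figures quoted above allow a quick sanity check. The following Python sketch uses only the numbers given in the abstract to derive the sustained efficiency of the first installation and the node counts implied by the 100 and 300 TeraFLOPS targets, under the simplifying assumption that per-node peak performance stays the same (the planned accelerator upgrades would of course change this).

```python
# Figures quoted in the abstract (first installation, Dec. 2011).
theoretical_tflops = 50.0
measured_tflops = 30.3
nodes = 32

efficiency = measured_tflops / theoretical_tflops      # sustained / peak
per_node_peak = theoretical_tflops / nodes              # TFLOPS per node

print(f"sustained efficiency: {efficiency:.1%}")        # about 60.6%
print(f"peak per node: {per_node_peak:.2f} TFLOPS")     # about 1.56 TFLOPS

for target in (100.0, 300.0):
    # Naive scale-out estimate with identical nodes (an assumption, not the
    # paper's upgrade plan, which relies on heterogeneous accelerators).
    print(f"{target:.0f} TFLOPS target ~ {target / per_node_peak:.0f} such nodes")
```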