• Title/Summary/Keyword: Workflow System


Deep Learning in Radiation Oncology

  • Cheon, Wonjoong;Kim, Haksoo;Kim, Jinsung
    • Progress in Medical Physics
    • /
    • v.31 no.3
    • /
    • pp.111-123
    • /
    • 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, whose structure resembles the human neural system, trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming. It has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. The application areas of DL in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL in radiation oncology and what more can be achieved by utilizing it. With the advances in DL, various studies contributing to the development of radiation oncology were investigated comprehensively. In this article, the radiation treatment workflow was divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized according to each stage of the radiation treatment process. State-of-the-art studies were identified, and their clinical utility was examined. DL models can provide faster and more accurate solutions to problems faced by oncologists. While the effect of a data-driven approach on improving the quality of care for cancer patients is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for both clinicians and medical physicists on issues that need to be addressed in time.

Effect of Patient Safety Training Program of Nurses in Operating Room

  • Zhang, Peijia;Liao, Xin;Luo, Jie
    • Journal of Korean Academy of Nursing
    • /
    • v.52 no.4
    • /
    • pp.378-390
    • /
    • 2022
  • Purpose: This study developed an in-service patient safety training program and aimed to evaluate its impact on nurses in the operating room (OR). Methods: A pretest-posttest self-controlled survey was conducted on OR nurses from May 6 to June 14, 2020. An in-service training program for patient safety was developed on the basis of the knowledge-attitude-practice (KAP) theory using various teaching methods. The levels of safety attitude, cognition, and attitudes toward adverse event reporting were compared to evaluate the effect of the program. Nurses who attended the training were surveyed one week before the training (pretest) and two weeks after the training (posttest). Results: A total of 84 nurses participated in the study. After the training, nurses' scores for safety attitude, cognition, and attitudes toward adverse event reporting increased significantly relative to their scores before the training (p < .001). The effects of safety training on the total score and on each dimension of safety attitude, cognition, and attitudes toward adverse event reporting were above the moderate level. Conclusion: The proposed patient safety training program based on KAP theory improves the safety attitude of OR nurses. Further studies are required to develop an interprofessional patient safety training program. In addition to strengthening training, hospital managers need to focus on workflow, the management system, department culture, and other means of promoting a safety culture.

A Context-Aware System for Reliable RFID-based Logistics Management (RFID 기반 물류관리의 신뢰성 향상을 위한 상황인지 시스템 개발)

  • Jin, Hee-Ju;Kim, Hoontae;Lee, Yong-Han
    • The Journal of Society for e-Business Studies
    • /
    • v.18 no.2
    • /
    • pp.223-240
    • /
    • 2013
  • RFID (Radio Frequency Identification) is the use of an RFID tag applied to an object for the purpose of identification and tracking using radio waves. Recently, it has been actively researched and introduced in logistics and manufacturing. RFID portals in supply chains are meant to identify all the tags within a given interrogation zone. Hence, the hardware and software mechanisms for RFID tag identification mostly focus on successfully reading multiple tags simultaneously. Such mechanisms, however, are inefficient for determining the moving direction of tags, the sequence of consecutive tags, and the validity of tag reads from the viewpoint of workflow. These problems usually cause many difficulties in RFID portal implementation in manufacturing environments, thereby causing RFID-system developers to waste a considerable amount of time. In this research, we designed an RFID portal system with SDO (Sequence, Direction, and Object-flow)-perception capability using the fundamental data supplied by ordinary RFID readers. Using our work, RFID system developers can save a great amount of time in building RFID data-capturing applications in manufacturing environments.
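The SDO-perception idea above, deciding a tag's moving direction and the sequence of consecutive tags from ordinary reader data, can be sketched from timestamped reads alone. This is an illustration, not the paper's implementation; the two antenna labels, the `TagRead` record, and the function names are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class TagRead:
    tag_id: str
    antenna: str     # "A" = outer side of the portal, "B" = inner side
    timestamp: float

def first_read_times(reads, tag_id):
    """Earliest read time per antenna for one tag."""
    times = {}
    for r in reads:
        if r.tag_id == tag_id:
            times[r.antenna] = min(times.get(r.antenna, r.timestamp), r.timestamp)
    return times

def infer_direction(reads, tag_id):
    """Direction = which antenna saw the tag first."""
    t = first_read_times(reads, tag_id)
    if "A" in t and "B" in t:
        return "inbound" if t["A"] < t["B"] else "outbound"
    return "unknown"   # seen by one antenna only: invalid from a workflow viewpoint

def tag_sequence(reads):
    """Sequence of consecutive tags, ordered by first appearance."""
    first = {}
    for r in reads:
        first[r.tag_id] = min(first.get(r.tag_id, r.timestamp), r.timestamp)
    return sorted(first, key=first.get)
```

A workflow check can then compare `tag_sequence` against the expected process order and flag out-of-order or single-antenna reads as invalid.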

A Java-based Dynamic Management System for Heterogeneous Agents (이질적 에이전트를 위한 자바 기반의 동적 관리 시스템)

  • Jang, Ji-Hun;Choe, Jung-Min
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.7
    • /
    • pp.778-787
    • /
    • 1999
  • It has been assumed that all application agents in a multi-agent system are pre-invoked and remain active regardless of whether they are actually used. Although this kind of static agent invocation simplifies the management of agents, it causes several problems, such as system overload and a waste of resources, especially in areas such as workflow management and electronic commerce that involve tens or even hundreds of application agents. A solution to these problems is dynamic agent management, which selectively invokes only the agents that are actually requested and terminates them when they are no longer needed. This method prevents a waste of system resources and alleviates the complexity of agent communications. This paper proposes an agent management system implemented in Java that supports interactions between application agents developed in different languages. Dynamic agent invocation is accomplished by the Java Native Interface (JNI), which links heterogeneous methods, and by a KQML language interface that facilitates communication between heterogeneous agents. This scheme of dynamic agent management provides efficient resource usage, easy extensibility, dynamic adaptability to changes in the environment, and improved cooperation.
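The invoke-on-demand, terminate-when-done scheme can be sketched as a small registry. This is a minimal illustration under stated assumptions, not the paper's Java/JNI system: the registry, the `achieve-and-stop` performative, and the toy agent are all hypothetical:

```python
class EchoAgent:
    """Toy application agent; in the paper's setting, agents written in
    other languages would be wrapped via JNI and addressed with KQML."""
    def handle(self, message):
        return {"performative": "reply", "content": message["content"]}

class AgentRegistry:
    """Start an agent only when a request for it arrives, and shut it
    down when its task is finished, instead of pre-invoking everything."""
    def __init__(self):
        self.factories = {}   # agent name -> constructor
        self.active = {}      # currently running agent instances

    def register(self, name, factory):
        self.factories[name] = factory

    def dispatch(self, name, message):
        if name not in self.active:            # dynamic invocation
            self.active[name] = self.factories[name]()
        reply = self.active[name].handle(message)
        if message.get("performative") == "achieve-and-stop":
            del self.active[name]              # terminate when no longer needed
        return reply
```

Only agents with pending work occupy resources, which is the point of the dynamic scheme.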

Patent data analysis using clique analysis in a keyword network (키워드 네트워크의 클릭 분석을 이용한 특허 데이터 분석)

  • Kim, Hyon Hee;Kim, Donggeon;Jo, Jinnam
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.5
    • /
    • pp.1273-1284
    • /
    • 2016
  • In this paper, we analyzed patents on machine learning using keyword network analysis and clique analysis. To construct a keyword network, important keywords were extracted based on TF-IDF weights and their associations, and network structure analysis and clique analysis were performed. The density and clustering coefficient of the patent keyword network are low, which shows that patent keywords on machine learning are weakly connected with each other. This is because important patents on machine learning are mainly registered for applications of machine learning rather than for machine learning techniques themselves. Our clique analysis also showed that the keywords found in cliques of 2005 patents concern subjects such as newsmaker verification, product forecasting, virus detection, biomarkers, and workflow management, while those in 2015 patents contain subjects such as digital imaging, payment cards, calling systems, mammogram systems, and price prediction. Clique analysis can be used not only for identifying specialized subjects, but also for selecting search keywords in patent search systems.
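The core of this analysis, building a keyword network and enumerating its cliques, can be sketched in a few lines. This is an illustration, not the paper's code: it substitutes raw co-occurrence counts for the TF-IDF-based association actually used, and all names are hypothetical:

```python
from itertools import combinations
from collections import Counter

def keyword_network(docs, threshold=2):
    """Adjacency sets: an edge joins two keywords that co-occur in at
    least `threshold` documents (a stand-in for TF-IDF association)."""
    pair_counts = Counter()
    for doc in docs:
        for a, b in combinations(sorted(set(doc)), 2):
            pair_counts[(a, b)] += 1
    adj = {}
    for (a, b), c in pair_counts.items():
        if c >= threshold:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))   # r is maximal
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(adj), set())
    return cliques
```

Each maximal clique is a tightly connected keyword group, i.e. a candidate specialized subject.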

Lesion Detection in Chest X-ray Images based on Coreset of Patch Features (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin;Chun, Jun-Chul
    • Journal of Internet Computing and Services
    • /
    • v.23 no.3
    • /
    • pp.35-45
    • /
    • 2022
  • Even in recent years, treatment of emergency patients is still often delayed due to a shortage of medical resources in marginalized areas. Research on automating the analysis of medical data, to address the inaccessibility of medical services and the shortage of medical personnel, is ongoing. Computer vision-based automation of medical inspection requires substantial cost for data collection and labeling for training. These problems stand out in work on classifying lesions that are rare, or pathological features and pathogeneses that are difficult to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can significantly reduce the cost of data collection by adopting an unsupervised learning strategy. In this paper, we propose a method for detecting abnormal chest X-ray images, based on existing anomaly detection techniques, as follows. (1) Normalize the brightness range of medical images resampled to an optimal resolution. (2) Select feature vectors with high representative power from the set of intermediate-level patch features extracted from lesion-free images. (3) Measure the difference from the selected lesion-free feature vectors using a nearest neighbor search algorithm. The proposed system can simultaneously perform anomaly classification and localization for each image. In this paper, the anomaly detection performance of the proposed system on chest X-ray images of the PA projection is measured and presented under detailed conditions. We demonstrate the effectiveness of anomaly detection for medical images by showing a classification AUROC of 0.705 on a random subset extracted from the PadChest dataset. The proposed system can be used to improve the clinical diagnosis workflow of medical institutions and can effectively support early diagnosis in medically underserved areas.
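Steps (2) and (3) above can be sketched abstractly. This is a minimal illustration, not the paper's method: greedy k-center selection is used here as one common way to pick representative patch features (the paper's exact selection rule is not stated in the abstract), and all names are assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_coreset(features, m):
    """k-center greedy selection: keep m vectors that cover the set of
    lesion-free patch features (high 'representative power')."""
    selected = [0]                        # start from an arbitrary vector
    d = [dist(f, features[0]) for f in features]
    while len(selected) < m:
        i = max(range(len(features)), key=lambda j: d[j])   # farthest point
        selected.append(i)
        d = [min(d[j], dist(features[j], features[i])) for j in range(len(features))]
    return [features[i] for i in selected]

def anomaly_score(patch_features, coreset):
    """Image-level score: the worst patch's distance to its nearest
    lesion-free representative (the per-patch minima give localization)."""
    return max(min(dist(p, c) for c in coreset) for p in patch_features)
```

An image whose patches all lie near the coreset scores low; a single far-off patch pushes the image-level score up, which is what enables joint classification and localization.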

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, many changes have occurred and new technologies have continued to be developed in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from computing hardware, including processors and system architecture, to programming environments and application usage. The high performance computing (HPC) area, in particular, has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result, systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and Korea is considered one of the leading countries in the world, but not in the supercomputing area. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS with 32 computing nodes. The MAHA system will be upgraded to 100 TeraFLOPS performance in January 2013.
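The theoretical-versus-measured figures quoted above imply a simple efficiency calculation. The per-node figure below assumes the measured number is a whole-system benchmark spread evenly over the 32 nodes, which the abstract does not state:

```python
theoretical_tflops = 50.0   # initial MAHA configuration (from the text)
measured_tflops = 30.3      # measured with 32 computing nodes (from the text)
nodes = 32

efficiency = measured_tflops / theoretical_tflops   # fraction of peak achieved
per_node_tflops = measured_tflops / nodes           # assumes an even spread

print(f"efficiency: {efficiency:.1%}, per node: {per_node_tflops:.3f} TFLOPS")
```

That is roughly 60% of peak, a plausible range for accelerator-heavy clusters on dense linear algebra workloads.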

Improvement in facies discrimination using multiple seismic attributes for permeability modelling of the Athabasca Oil Sands, Canada (캐나다 Athabasca 오일샌드의 투수도 모델링을 위한 다양한 탄성파 속성들을 이용한 상 구분 향상)

  • Kashihara, Koji;Tsuji, Takashi
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.1
    • /
    • pp.80-87
    • /
    • 2010
  • This study was conducted to develop a reservoir modelling workflow to reproduce the heterogeneous distribution of effective permeability that impacts the performance of SAGD (Steam Assisted Gravity Drainage), the in-situ bitumen recovery technique used in the Athabasca Oil Sands. Lithologic facies distribution is the main cause of heterogeneity in bitumen reservoirs in the study area. The target formation consists of sand with mudstone facies in a fluvial-to-estuary channel system, where the mudstone interrupts fluid flow and reduces effective permeability. In this study, the lithologic facies are classified into three classes with different effective-permeability characteristics, depending on the shapes of the mudstones. The reservoir modelling workflow of this study consists of two main modules: facies modelling and permeability modelling. Facies modelling identifies, using a stochastic approach, the three lithologic facies that mainly control effective permeability. Permeability modelling first populates the mudstone volume fraction and then transforms it into effective permeability. A series of flow simulations applied to mini-models of the lithologic facies yields the transformation functions from mudstone volume fraction to effective permeability. Seismic data contribute to the facies modelling by providing prior probabilities of the facies, which are incorporated into the facies models by geostatistical techniques. In particular, this study employs a probabilistic neural network utilising multiple seismic attributes in facies prediction, which improves the prior probability of the facies. The result of using the improved prior probability in facies modelling is compared to the conventional method using a single seismic attribute to demonstrate the improvement in facies discrimination. Using P-wave velocity in combination with density among the multiple seismic attributes is the essence of the improved facies discrimination. This paper also discusses the sand matrix porosity that makes P-wave velocity differ between facies in the study area, where the sand matrix porosity is uniquely evaluated using log-derived porosity, P-wave velocity, and photographically predicted mudstone volume.
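The two-module split, a seismic-derived facies prior refined by further evidence and a mudstone-fraction-to-permeability transform, can be illustrated abstractly. Both functions below are stand-ins: the paper derives its transform from flow simulations on facies mini-models and its probabilities from a probabilistic neural network, so the power-law decay and all names here are assumptions:

```python
def update_facies_probability(prior, likelihood):
    """Combine a seismic-derived facies prior with likelihoods from an
    additional attribute via Bayes' rule (normalised product)."""
    post = {f: prior[f] * likelihood[f] for f in prior}
    z = sum(post.values())
    return {f: p / z for f, p in post.items()}

def effective_permeability(k_sand, mud_fraction, exponent=3.0):
    """Illustrative transform: effective permeability decays as mudstone
    volume fraction grows. The paper's actual transforms come from flow
    simulations on facies mini-models, one per lithologic facies."""
    return k_sand * (1.0 - mud_fraction) ** exponent
```

Adding a second attribute sharpens the posterior only where the likelihoods actually discriminate between facies, which mirrors why the velocity-plus-density combination improves facies discrimination.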

Bioinformatics services for analyzing massive genomic datasets

  • Ko, Gunhwan;Kim, Pan-Gyu;Cho, Youngbum;Jeong, Seongmun;Kim, Jae-Yoon;Kim, Kyoung Hyoun;Lee, Ho-Yeon;Han, Jiyeon;Yu, Namhee;Ham, Seokjin;Jang, Insoon;Kang, Byunghee;Shin, Sunguk;Kim, Lian;Lee, Seung-Won;Nam, Dougu;Kim, Jihyun F.;Kim, Namshin;Kim, Seon-Young;Lee, Sanghyuk;Roh, Tae-Young;Lee, Byungwook
    • Genomics & Informatics
    • /
    • v.18 no.1
    • /
    • pp.8.1-8.10
    • /
    • 2020
  • The explosive growth of next-generation sequencing data has resulted in ultra-large-scale datasets and ensuing computational problems. In Korea, the amount of genomic data has been increasing rapidly in recent years. Leveraging these big data requires researchers to use large-scale computational resources and analysis pipelines. A promising solution to this computational challenge is cloud computing, where CPUs, memory, storage, and programs are accessible in the form of virtual machines. Here, we present a cloud computing-based system, Bio-Express, that provides user-friendly, cost-effective analysis of massive genomic datasets. Bio-Express is loaded with predefined multi-omics data analysis pipelines, which are divided into genome, transcriptome, epigenome, and metagenome pipelines. Users can employ the predefined pipelines or create a new pipeline for analyzing their own omics data. We also developed several web-based services for facilitating downstream analysis of genome data. The Bio-Express web service is freely available at https://www.bioexpress.re.kr/.

Exception based Dynamic Service Coordination Framework for Web Services (웹 서비스를 위한 예외 상황 기반 동적 서비스 연결 프레임워크)

  • Han Dong-Soo;Lee Sung-Doke;Jung Jong-Ha
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.8
    • /
    • pp.668-680
    • /
    • 2006
  • Web services on the Internet are not always reliable in terms of service availability and performance. Dynamic service coordination capability in a system or an application invoking Web services is essential to cope with such unreliable situations. In dynamic service coordination, if a Web service does not respond within a specific time constraint, it is replaced with another Web service at run time for reliable invocation. In this paper, we develop an exception-based dynamic service coordination framework for Web services. In the framework, all information necessary for dynamic service coordination is explicitly specified and summarized as a set of attributes. Classes and workflows that support dynamic service coordination and invoke Web services are then automatically generated based on these attributes. Developers of Web service client programs can make invocations of Web services reliable by calling the methods of these classes. Some performance loss has been observed in the indirect invocation of a Web service; however, considering the flexibility and reliability gained from the method, the performance loss should be acceptable in many cases.
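The replace-on-timeout behaviour described above can be sketched as a fallback wrapper. This illustrates the coordination idea only, not the framework's generated classes; the callable services, the time limit, and the function name are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def coordinated_call(services, request, time_limit=1.0):
    """Invoke equivalent services in order; on timeout or any other
    exception, fall back to the next candidate at run time."""
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        for service in services:
            future = pool.submit(service, request)
            try:
                return future.result(timeout=time_limit)
            except Exception:
                future.cancel()   # abandon this service, try the next one
    raise RuntimeError("all candidate services failed")
```

The indirection through the wrapper is where the abstract's noted performance loss comes from: each call pays thread-dispatch overhead in exchange for the ability to swap services at run time.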