• Title/Summary/Keyword: workflow system


Workcase based Very Large Scale Workflow System Architecture (워크케이스 기반의 초대형 워크플로우 시스템 아키텍쳐)

  • 심성수;김광훈
    • Proceedings of the Korea Database Society Conference
    • /
    • 2002.10a
    • /
    • pp.403-416
    • /
    • 2002
  • A workflow management system improves work efficiency and reduces costs by automating, on a computer basis, the business processes that handle the work of organizations such as governments and enterprises. Organizations using such workflow systems are steadily growing in size, and with the advance of networks and the emergence of the Internet, the number of tasks a workflow system must handle and the numbers of customers and workers are increasing rapidly. Under this trend, workflow systems require an architecture suited to very large organizational environments. Accordingly, this paper designs and implements the architecture of a workcase-based very large scale workflow system as a workflow management system capable of managing such environments. It also classifies and analyzes existing workflow system architectures, identifies their strengths and weaknesses, and on that basis predicts their performance, showing that the workcase-based architecture is the appropriate architecture for the very large scale workflow system proposed here. In addition, EJB (Enterprise Java Beans) is adopted as the underlying infrastructure of the system, and the reasons for this choice are described. The design and implementation proceed in three stages: a conceptual stage, a design stage, and an implementation stage. The conceptual stage describes the workcase-based workflow system architecture in detail; the design stage defines the overall functions and designs the structure of the very large scale workflow system; and the implementation stage selects the environment for actually implementing the architecture and describes the problems encountered during implementation and their solutions. (A minimal illustrative Java sketch of the workcase idea appears after this entry.)

  • PDF
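
The abstract above adopts EJB as the infrastructure for a workcase-based architecture, in which each participant owns a "workcase" that collects the work items routed to them. The following is a minimal sketch of that idea only; the paper does not publish its interfaces, so the class and method names (Workcase, WorkItem, WorkcaseManager, assign) are hypothetical, and in the paper's architecture such components would be deployed as EJB session/entity beans rather than plain classes.

    // Hypothetical sketch of a workcase-based work-item assignment, not the paper's code.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class WorkItem {
        final String processId;
        final String activityName;
        WorkItem(String processId, String activityName) {
            this.processId = processId;
            this.activityName = activityName;
        }
        @Override public String toString() { return processId + ":" + activityName; }
    }

    // One workcase per participant; in the paper's architecture this role would be
    // played by an EJB component hosted in the application server, not a plain class.
    final class Workcase {
        private final String participantId;
        private final List<WorkItem> items = new ArrayList<>();
        Workcase(String participantId) { this.participantId = participantId; }
        void add(WorkItem item) { items.add(item); }
        List<WorkItem> pending() { return List.copyOf(items); }
    }

    final class WorkcaseManager {
        private final Map<String, Workcase> workcases = new HashMap<>();
        // Route a work item to a participant's workcase, creating the workcase on first use.
        void assign(String participantId, WorkItem item) {
            workcases.computeIfAbsent(participantId, Workcase::new).add(item);
        }
        Workcase of(String participantId) {
            return workcases.computeIfAbsent(participantId, Workcase::new);
        }
    }

    public class WorkcaseDemo {
        public static void main(String[] args) {
            WorkcaseManager manager = new WorkcaseManager();
            manager.assign("alice", new WorkItem("order-001", "approve-payment"));
            manager.assign("alice", new WorkItem("order-002", "review-contract"));
            // Prints both pending work items grouped under alice's workcase.
            System.out.println(manager.of("alice").pending());
        }
    }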

Study on image quality improvement using Non-Linear Look-Up Table (비선형 Look-Up Table을 통한 영상 화질 개선에 관한 연구)

  • Kim, Sun-Chil;Lee, Jun-Il
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.5 no.1
    • /
    • pp.32-44
    • /
    • 2002
  • The role of the radiology department has grown greatly in recent years as medical imaging technology has improved, and the introduction of PACS (Picture Archiving and Communications System) into the conventional film-based diagnostic structure is a remarkable milestone in medical history. The value of digital information in medical imaging is expected to grow further as computer and network technology advances. However, current PACS-based practice is somewhat limited compared with conventional film-based practice because of poor image quality. Image quality is the most important factor in the PACS environment and a necessary step toward wider adoption of digital imaging. Existing image quality control tools are limited in adjusting images produced by medical modalities because they cannot display the actual effect of an adjustment as it is made; image quality is therefore distorted and diagnostic ability is hindered compared with film-based practice. In addition, the radiologist's workload increases greatly, since every physician must perform his or her own image quality control each time images are viewed. To resolve these problems and enhance PACS-based practice, we developed a program that displays better image quality by using the ROI optical density of the existing gray-level values. When the LUT is applied properly, small detailed regions that cannot be seen with the existing image quality controls are easily displayed, greatly improving digital medical practice. The purpose of this study is to make practice easier for physicians by transferring the H-D curve concept of analog film-screen systems to digital imaging and by presetting image quality control values for each exposed body part, modality, and physician group. We asked five experienced physicians to compare the image quality of the same set of examinations using two methods: the existing image quality controls and the LUT technique. The LUT technique was strongly favored; all physicians noted its superiority over the existing controls and praised its ability to display small detailed regions that the existing tools cannot show. Two physicians noted the need to preset LUT values for each exposed body part. Overall, the LUT technique attracted great interest among the physicians and was praised for its ability to overcome problems embedded in current PACS use. We believe the LUT technique can enhance current medical practice and open a new chapter in medical imaging. (A minimal illustrative Java sketch of applying a non-linear LUT follows this entry.)

  • PDF
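
The abstract above describes remapping gray levels through a non-linear look-up table derived from film H-D curves. Below is a minimal, hedged sketch of the general technique only, not the authors' program: the LUT here is built from a simple power-law (gamma) curve and applied to 8-bit values, both of which are assumptions; a clinical LUT would be derived from measured characteristic curves and typically operates on 12- or 16-bit data.

    // Illustrative non-linear LUT sketch; curve shape and 8-bit depth are assumptions.
    public class NonLinearLut {
        // Build a 256-entry LUT from a simple power-law (gamma) curve.
        static int[] buildLut(double gamma) {
            int[] lut = new int[256];
            for (int v = 0; v < 256; v++) {
                double normalized = v / 255.0;
                lut[v] = (int) Math.round(255.0 * Math.pow(normalized, gamma));
            }
            return lut;
        }

        // Remap every pixel through the LUT: one array lookup per pixel,
        // which is why LUT-based adjustment is cheap enough for interactive viewing.
        static int[] apply(int[] pixels, int[] lut) {
            int[] out = new int[pixels.length];
            for (int i = 0; i < pixels.length; i++) {
                out[i] = lut[pixels[i]];
            }
            return out;
        }

        public static void main(String[] args) {
            int[] lut = buildLut(0.5);              // gamma < 1 stretches dark, low-contrast regions
            int[] image = {10, 60, 128, 200, 250};  // toy 8-bit gray values
            int[] enhanced = apply(image, lut);
            for (int v : enhanced) System.out.print(v + " ");
        }
    }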

An Adaptive Business Process Mining Algorithm based on Modified FP-Tree (변형된 FP-트리 기반의 적응형 비즈니스 프로세스 마이닝 알고리즘)

  • Kim, Gun-Woo;Lee, Seung-Hoon;Kim, Jae-Hyung;Seo, Hye-Myung;Son, Jin-Hyun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.3
    • /
    • pp.301-315
    • /
    • 2010
  • Recently, competition between companies has intensified, and the need to create new business value has grown with it. Many business organizations are beginning to realize the importance of business process management. Processes, however, often do not run the way they were initially designed, or an inefficient process model may have been designed in the first place, frequently because of a lack of cooperation and understanding between business analysts and system developers. To address this problem, business process mining, which can serve as the basis for business process re-engineering, has come to be recognized as an important concept. Current process mining research has focused only on extracting workflow-based process models from completed process logs, which limits its ability to express the various forms business processes can take. A further disadvantage is that process discovery and log scanning themselves take a considerable amount of time, because the process logs must be re-scanned with each new update. In this paper, we present a business process mining algorithm based on a modified FP-Tree, the structure used for association analysis in data mining. Our modified algorithm supports discovery of a process model at the level of detail the user needs, without re-scanning the entire process log when it is updated. (A minimal illustrative Java sketch of the underlying FP-Tree structure follows this entry.)
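
The abstract above builds on the FP-Tree, the prefix-tree structure used for association analysis. The sketch below shows only the standard FP-Tree idea applied to process traces (shared activity prefixes merged with counts), not the authors' modified variant; the class names and the trace encoding are assumptions for illustration.

    // Illustrative FP-Tree over process traces; this is the standard structure,
    // not the paper's modified algorithm.
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    final class FpNode {
        final String activity;
        int count = 0;
        final Map<String, FpNode> children = new LinkedHashMap<>();
        FpNode(String activity) { this.activity = activity; }
    }

    public class ProcessFpTree {
        private final FpNode root = new FpNode("ROOT");

        // Insert one process trace (ordered list of activities) into the tree,
        // incrementing counts along the shared prefix so repeated log scans are avoided.
        public void insert(List<String> trace) {
            FpNode node = root;
            for (String activity : trace) {
                node = node.children.computeIfAbsent(activity, FpNode::new);
                node.count++;
            }
        }

        // Print the tree so the merged prefixes and their frequencies are visible.
        public void print() { print(root, 0); }
        private void print(FpNode node, int depth) {
            if (depth > 0) {
                System.out.println("  ".repeat(depth) + node.activity + " (" + node.count + ")");
            }
            for (FpNode child : node.children.values()) print(child, depth + 1);
        }

        public static void main(String[] args) {
            ProcessFpTree tree = new ProcessFpTree();
            tree.insert(List.of("A", "B", "C"));
            tree.insert(List.of("A", "B", "D"));  // shares the A-B prefix with the first trace
            tree.insert(List.of("A", "E"));
            tree.print();
        }
    }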

Integrated Data Safe Zone Prototype for Efficient Processing and Utilization of Pseudonymous Information in the Transportation Sector (교통분야 가명정보의 효율적 처리 및 활용을 위한 통합데이터안심구역 프로토타입)

  • Hyoungkun Lee;Keedong Yoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.3
    • /
    • pp.48-66
    • /
    • 2024
  • Under Korea's three amended data-economy laws and the Data Industry Act, systems for pseudonymous data integration and Data Safe Zones have been operated separately by designated agencies, placing a burden on SMEs, startups, and general users because of complicated and ineffective procedures. An over-stringent pseudonymization policy intended to prevent data breaches has also compromised data quality. These practices should be improved to ensure both convenience of use and data quality. This paper proposes a prototype of an Integrated Data Safe Zone based on redesigned and optimized pseudonymization workflows. Conventional pseudonymization workflows were redesigned by applying the amended guidelines and selectively revising existing guidelines for business process redesign. The proposed prototype is shown quantitatively to outperform the conventional approach: a 6-fold increase in time efficiency, a 1.28-fold cost reduction, and a 1.3-fold improvement in data quality. (A minimal illustrative Java sketch of a common pseudonymization primitive follows this entry.)
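
The abstract above concerns pseudonymization workflows and their institutional redesign rather than a specific algorithm, but a small example may clarify what pseudonymization does in practice. The sketch below shows one common primitive, a salted SHA-256 hash of a direct identifier, which keeps records linkable without exposing the raw value; the field names and salt handling are assumptions and are not taken from the paper.

    // Illustrative pseudonymization primitive; not the prototype described in the paper.
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class PseudonymizeDemo {
        // Replace a direct identifier with a salted SHA-256 hash.
        static String pseudonymize(String identifier, String secretSalt) {
            try {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                byte[] hash = digest.digest((secretSalt + identifier).getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : hash) hex.append(String.format("%02x", b));
                return hex.toString();
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("SHA-256 not available", e);
            }
        }

        public static void main(String[] args) {
            // Same identifier + same salt -> same pseudonym, so pseudonymized
            // transport records can still be linked without the raw ID.
            String salt = "example-salt-kept-by-the-trusted-party";
            System.out.println(pseudonymize("driver-license-1234", salt));
            System.out.println(pseudonymize("driver-license-1234", salt));
        }
    }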