• Title/Summary/Keyword: Reliability Evaluation Process (신뢰성 평가 프로세스)


A Study on Calculation of Sectional Travel Speeds of the Interrupted Traffic Flow with the Consideration of the Characteristics of Probe Data (프로브 자료의 특성을 고려한 단속류의 구간 통행속도 산출에 관한 연구)

  • Jeong, Yeon Tak;Jung, Hun Young
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.34 no.6
    • /
    • pp.1851-1861
    • /
    • 2014
  • This study aims to calculate reliable sectional travel speeds with consideration of the characteristics of the probe data collected in interrupted traffic flow. First, in order to analyze the characteristics of the probe data, we examined the distribution of the sectional travel times of each probe vehicle and compared the differences in the sectional travel speeds of each probe vehicle collected by DSRC. The results show that outliers should be removed from the distribution of sectional travel times, whereas the sectional travel speeds from the different DSRC probe vehicles are not significantly different from one another. Finally, based on the distribution characteristics of the sectional travel speeds of each probe vehicle and the representative values counted during a collection period, we derived an optimal outlier removal procedure and evaluated the estimation errors. The evaluation showed that the DSRC sectional travel speeds were similar to the values observed from actually running vehicles. By contrast, the sectional travel speeds of intra-city buses were found to be less accurate than the DSRC sectional travel speeds. In the future, it will be necessary to improve the BIS process and make use of the travel information on intra-city buses collected in real time, finding various ways of applying it as traffic information.
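
As an illustration of the kind of outlier removal the abstract refers to, the sketch below filters probe travel times with a median/MAD rule before converting them to speeds. This is a common robust-filtering approach offered under stated assumptions (section length, threshold k); the paper derives its own optimal procedure, which is not reproduced here.

```python
import numpy as np

def filter_travel_times(times_sec: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Keep travel times within k robust deviations of the median (MAD rule)."""
    med = np.median(times_sec)
    mad = np.median(np.abs(times_sec - med)) * 1.4826  # ~sigma under normality
    return times_sec[np.abs(times_sec - med) <= k * mad]

# One probe that stopped mid-section produces an outlying travel time.
times = np.array([112.0, 118.0, 121.0, 115.0, 410.0])
section_km = 1.2  # assumed section length for illustration
speeds_kmh = section_km / (filter_travel_times(times) / 3600.0)
print(speeds_kmh.round(1))
```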

Evaluation of the Effectiveness of Surveillance on Improving the Detection of Healthcare Associated Infections (의료관련감염에서 감시 개선을 위한 평가)

  • Park, Chang-Eun
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.51 no.1
    • /
    • pp.15-25
    • /
    • 2019
  • The development of reliable and objective definitions as well as automated processes for the detection of healthcare-associated infections (HAIs) is crucial; however, the transformation to an automated surveillance system remains a challenge. Early outbreak identification usually requires clinicians who can recognize abnormal events as well as ongoing disease surveillance to determine the baseline rate of cases. The system screens the laboratory information system (LIS) data daily to detect candidates for healthcare-associated bloodstream infection (HABSI) according to well-defined detection rules. The system detects candidates while preserving professional autonomy by requiring further confirmation. In addition, web-based HABSI surveillance and classification systems use discrete data elements obtained from the LIS, and the LIS-provided data correlate strongly with the conventional infection-control personnel surveillance system. The system was timely, acceptable, useful, and sensitive according to the prevention guidelines. The surveillance system is useful because it can help healthcare professionals better understand when and where the transmission of a wide range of potential pathogens may be occurring in a hospital. A national plan is needed to strengthen the main structures in HAI prevention, namely the Healthcare Associated Prevention and Control Committee (HAIPCC), sterilization service (SS), microbiology laboratories, and hand hygiene resources, considering their impact on HAI prevention.
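
The rule-based daily screening described above can be pictured with a minimal sketch. The specific rule used here (positive blood culture drawn more than 48 hours after admission) is a commonly used HABSI screening criterion and an assumption for illustration, not the paper's exact rule set; flagged cases would still go to clinicians for confirmation.

```python
from datetime import datetime, timedelta

def habsi_candidate(admitted: datetime, culture_drawn: datetime,
                    culture_positive: bool) -> bool:
    """Flag a case for infection-control review; confirmation stays manual."""
    return culture_positive and (culture_drawn - admitted) > timedelta(hours=48)

# Example: culture drawn three days after admission, positive result.
print(habsi_candidate(datetime(2019, 1, 1, 8), datetime(2019, 1, 4, 9), True))  # True
```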

A Study for Security-Based Medical Information Software Architecture Design Methodology (의료정보보안 기반 소프트웨어 아키텍처 설계방법)

  • Kim, Jeom Goo;Noh, SiChoon
    • Convergence Security Journal
    • /
    • v.13 no.6
    • /
    • pp.35-41
    • /
    • 2013
  • To keep medical information secure against various types of security threats, medical information security must be addressed starting from the software design stage. Medical information systems scattered across institutions must be able to integrate and exchange medical information in real time, which requires reliable data communication. In the software architecture design of a medical information system, the user's requirements for medical information sharing and for security at the communication phase are identified and reflected in the software design. The architecture is established through software framework design, message standard design, design of web-based inter-process communication procedures, access control algorithm design, the writing of architecture descriptions, and evaluation of the various procedures. The initial software architecture decisions have an ongoing impact on development, testing, and maintenance, and the detailed decisions of the project are based on them. A medical information security method grounded in software architecture design can provide a framework for what has become an important task in today's medical information security.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze it, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, their strict schemas make it difficult to expand nodes when rapidly growing data must be distributed across various nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a flexible schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB over various chunk sizes.
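
A rough sketch of the log collector's routing role, assuming pymongo; the database, collection, and field names are hypothetical, and the MySQL path for real-time records is elided for brevity. It is an illustration of the classify-and-distribute idea, not the paper's implementation.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]  # hypothetical database/collection names

def route_log(record: dict) -> str:
    """Classify a log record and store it; return where it was routed."""
    if record.get("realtime"):
        # Records needing real-time analysis would be inserted into the
        # MySQL module here (omitted in this sketch).
        return "mysql"
    logs.insert_one(record)  # unstructured records go to the MongoDB module
    return "mongodb"

route_log({"type": "transaction", "branch": "A01", "realtime": False})
```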

CIA-Level Driven Secure SDLC Framework for Integrating Security into SDLC Process (CIA-Level 기반 보안내재화 개발 프레임워크)

  • Kang, Sooyoung;Kim, Seungjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.5
    • /
    • pp.909-928
    • /
    • 2020
  • From the early 1970s, the US government began to recognize that penetration testing could not assure the security quality of products. The results of penetration testing, such as identified vulnerabilities and faults, can vary depending on the capabilities of the team; in other words, no penetration team can give such assurance, because "no vulnerabilities were found" is not the same as "the product has no vulnerabilities". The U.S. government therefore realized that, in order to improve the security quality of products, the development process itself should be managed systematically and strictly, and from the 1980s it began to publish various standards for development methodology and evaluation/procurement systems embedding the "security-by-design" concept. Security-by-design means reducing a product's complexity by considering security from the initial phases of the development lifecycle, such as requirements analysis and design, to ultimately achieve trustworthiness of the product. Since 2002 the security-by-design concept has spread to the private sector under the name Secure SDLC, led by Microsoft and IBM, and it is currently used in various fields such as automotive and advanced weapon systems. The problem, however, is that the standards and guidelines related to Secure SDLC contain only abstract and declarative content, so they are not easy to implement in the field. Therefore, in this paper, we present a new framework for specifying the level of Secure SDLC desired by an enterprise. Our proposed CIA (functional Correctness, safety Integrity, security Assurance)-level-based security-by-design framework combines an evidence-based security approach with the existing Secure SDLC. Using our methodology, one can first quantitatively show the gap in Secure SDLC process level between a company and its competitors. Second, it is very useful when building a Secure SDLC in the field, because the detailed activities and documents needed to reach the desired level of Secure SDLC can easily be derived.
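
The quantitative gap comparison can be pictured with a minimal sketch. The three dimensions come from the paper's CIA acronym; the 0-5 scoring scale and the example values are hypothetical assumptions, not the framework's actual metric.

```python
def sdlc_gap(company: dict, competitor: dict) -> dict:
    """Per-dimension process-level gap; positive values mark areas to improve."""
    return {dim: competitor[dim] - company[dim] for dim in company}

company    = {"correctness": 3, "integrity": 2, "assurance": 2}  # assumed scores, 0-5
competitor = {"correctness": 4, "integrity": 3, "assurance": 4}
print(sdlc_gap(company, competitor))  # {'correctness': 1, 'integrity': 1, 'assurance': 2}
```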

A Successful Implementation Strategy Of ERP System For University Information System (대학 정보화를 위한 ERP 시스템 구축전략)

  • 김현준;정정회;김영렬;이상식
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2003.11a
    • /
    • pp.577-588
    • /
    • 2003
  • In an environment where competition keeps intensifying due to the rapid development of information technology, enterprises continue their unceasing efforts to gain competitive advantage. At this juncture, ERP (Enterprise Resource Planning) systems serve as a means of enhancing corporate competitiveness and of anticipating the changing business environment so as to respond quickly and wisely, and interest in them has recently spread not only among enterprises but also among universities. Many universities abroad have already adopted ERP systems; in Korea, as part of its informatization project, Yonsei University introduced SAP R/3 to streamline general administrative work and is exploring future integration with its academic administration system. The government is also moving in earnest to adopt ERP systems as the next-generation administrative systems of national and public universities in order to improve their administrative efficiency. This paper therefore proposes the adoption of ERP systems by domestic universities pursuing informatization, and summarizes the critical success factors of corporate ERP implementations by comparing and analyzing prior studies. Domestic universities should complete their implementations more effectively by considering these critical success factors and carrying out the project with sustained attention; because domestic cases are still scarce, the domestic and foreign cases presented in this paper should help readers quickly understand ERP adoption in universities. Finally, this paper suggests the direction universities should take in adopting ERP systems, which is still at an early stage in Korea; as ERP adoption by domestic universities increases, research on it should continue.


Development of the Approximate Cost Estimating Model for PSC Box Girder Bridge based on the Breakdown of Standard Work (대표공종 기반의 PSC Box 교량 상부공사 개략공사비 산정모델에 관한 연구)

  • Kim, Sang-Bum;Cho, Ji-Hoon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.2
    • /
    • pp.791-800
    • /
    • 2013
  • The need for a better cost estimating process for public construction projects has been widely recognized, mainly in the early phases of a project, because of its importance to cost control across the construction life cycle. In contrast to the traditional estimating method based on unit-price references, this research proceeded as follows. The first step was analyzing real cost data from actual construction activities (2000~2010) in the cost statements of P.S.C. (Prestressed Concrete) Box Girder Bridge projects. The collected data were broken into four categories by construction method: I.L.M. (Incremental Launching Method), M.S.S. (Movable Scaffolding System), F.S.M. (Full Staging Method), and F.C.M. (Free Cantilever Method). Second, actual design documents, including cost estimating documents, drawings, and specifications, were carefully reviewed to cluster the itemized cost statements within the four categories, and a breakdown of standard work items accounting for more than 95 percent of the cost in each category was sought. Third, the research derives indices for standard unit materials and the unit prices of the standard work items, develops an approximate estimating model that works from the specifications the user supplies (length and breadth of the bridge) per square area, and suggests a practical application plan within the originally allotted time.
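
The area-based approximation can be illustrated with a minimal sketch: a method-specific unit cost applied to the deck area implied by the user-supplied length and breadth. The unit costs below are illustrative placeholders, not the paper's calibrated indices.

```python
# Illustrative unit costs per square meter of deck by construction method (KRW/m^2, assumed).
UNIT_COST_PER_M2 = {"ILM": 950_000, "MSS": 880_000, "FSM": 760_000, "FCM": 1_100_000}

def approximate_cost(method: str, length_m: float, breadth_m: float) -> float:
    """Rough superstructure cost from deck area and a method-specific rate."""
    return UNIT_COST_PER_M2[method] * length_m * breadth_m

print(f"{approximate_cost('FCM', 400.0, 12.5):,.0f} KRW")
```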

Generation of Sea Surface Temperature Products Considering Cloud Effects Using NOAA/AVHRR Data in the TeraScan System: Case Study for May Data (TeraScan시스템에서 NOAA/AVHRR 해수면온도 산출시 구름 영향에 따른 신뢰도 부여 기법: 5월 자료 적용)

  • Yang, Sung-Soo;Yang, Chan-Su;Park, Kwang-Soon
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.13 no.3
    • /
    • pp.165-173
    • /
    • 2010
  • A cloud detection method is introduced to improve the reliability of NOAA/AVHRR Sea Surface Temperature (SST) data processed during the daytime and nighttime in the TeraScan System. In the daytime, channels 2 and 4 are used to detect clouds via three tests: spatial uniformity tests of brightness temperature (infrared channel 4) and of channel 2 albedo, and a reflectivity threshold test for visible channel 2. The nighttime cloud detection tests are performed using channels 3 and 4, because channel 2 data are not available at night; this process includes a dual-channel brightness temperature difference test (ch3 - ch4) and an infrared channel brightness temperature threshold test. For a comparison of daytime and nighttime SST images, two datasets obtained at 0:28 (UTC) and 21:00 (UTC) on May 13, 2009 were used. Six parameters were tested to understand the factors that affect cloud masking in and around the Korean Peninsula. In the daytime, the thresholds for ch2_max cover a range of 3 through 8, while ch4_delta and ch2_delta are fixed at 5 and 2, respectively. In the nighttime, the threshold range of ch3_minus_ch4 is from -1 to 0, while ch4_delta and min_ch4_temp are fixed at 3.5 and 0, respectively. The resulting images acceptably represent the reliability of the SST according to the change in cloud-masked area at each level. In the future, the accuracy of the SST will be verified, and an assimilation method for SST data should be tested to improve reliability, considering the atmospheric characteristics of the research area around the Korean Peninsula.
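
A per-pixel sketch of the daytime and nighttime threshold tests described above, using NumPy/SciPy. The default thresholds follow the ranges quoted in the abstract; the array names, units, the 3x3 uniformity window, and the sign convention of the ch3 - ch4 test are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_range(a: np.ndarray) -> np.ndarray:
    """Spatial uniformity measure: max minus min over a 3x3 neighborhood."""
    return maximum_filter(a, size=3) - minimum_filter(a, size=3)

def daytime_cloud_mask(ch2_albedo, ch4_bt, ch2_max=5.0, ch2_delta=2.0, ch4_delta=5.0):
    # Cloudy if visible reflectivity is too high, or either channel is
    # spatially non-uniform (clouds break up smooth sea-surface fields).
    return ((ch2_albedo > ch2_max)
            | (local_range(ch2_albedo) > ch2_delta)
            | (local_range(ch4_bt) > ch4_delta))

def nighttime_cloud_mask(ch3_bt, ch4_bt, ch3_minus_ch4=-0.5, min_ch4_temp=0.0):
    # Dual-channel difference test plus a ch4 brightness-temperature floor.
    return ((ch3_bt - ch4_bt) < ch3_minus_ch4) | (ch4_bt < min_ch4_temp)
```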

Methods of Incorporating Design for Production Considerations into Concept Design Investigations (개념설계 단계에서 총 건조비를 최소로 하는 생산지향적 설계 적용 방법)

  • H.S.,Bong
    • Bulletin of the Society of Naval Architects of Korea
    • /
    • v.27 no.3
    • /
    • pp.131-136
    • /
    • 1990
  • One important outcome of many years of steadily improving the recording and maintenance of ship production records and productivity-related data is that production information can now be provided in a condensed and reliable form, fully usable in the various stages of ship design review. Such data can include the work content at each stage of the production plan and estimates of material and labour costs. Improving the overall design methodology for ships and offshore structures requires bringing together broad knowledge of the build strategy, purchasing policy, and production technology that underpin "Design for Production". Recently, the introduction of management, design, and production support systems, as seen in parts of CIMS, has made it possible to pursue such a design process. In parallel, advances in computing to support design, particularly interactive graphics, have greatly enhanced the designer's ability to vary a ship's geometry and structural arrangement and to confirm the results visually at once. The ability to generate alternative design arrangements quickly and to review and evaluate them immediately at the early design stage is clearly a great advantage, and being able to consider production-related factors at that initial stage is an even more striking advance. By reviewing, within a short time, alternative designs that accurately reflect production methods and the associated production costs, and by computing and comparing those costs, it becomes possible at the contract stage to select the optimal design that minimizes the total production cost on the basis of actual production methods and reliable production records. Providing such a new design tool has made it possible to reflect production-related information, knowledge, and actual records in early design. This paper presents the results of research at the University of Newcastle upon Tyne in the UK, which developed a new ship structural design method reflecting the features above. The design study is divided into five stages: (1) defining the structural geometry and calculating/deciding the scantlings using computer graphics linked to a production information database; (2) providing information on production technology and building methods to fix the block division and panel arrangement; (3) using (1) and (2), assessing the work content at each production stage: a) preparation, b) fabrication/assembly, c) erection; (4) providing shipyard facility and work-measurement information to compute the material cost, labour cost, and overhead cost of each alternative design; (5) computing the total production cost and comparing the alternatives. The method was applied to the design of a bulk carrier, and sensitivity studies were carried out on the effects of changes in structural geometry, levels of standardisation, and structural topology. VAX graphics facilities were used to give the designer interactive access and to present the structural geometry, work content assessment, and production cost status of each alternative. In conclusion, this research is believed to be the first attempt to couple a detailed production cost model with interactive graphics at the early design stage, allowing alternative designs to be generated and compared quickly and production records to be reflected in early design; it is hoped that it will contribute to the development of optimal Design for Production. For reference, a summary of the results of applying the system is given in the appendix; for details, see references [4] or [7].
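
Stages (4) and (5) of the study reduce to a simple comparison of total production cost across alternatives, which the following sketch illustrates. The labour rate, overhead fraction, and work-content figures are assumed placeholders, not shipyard data.

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    material_cost: float        # from standard unit materials
    work_hours: float           # from the work content assessment
    labour_rate: float = 45.0   # cost per work hour, assumed
    overhead_frac: float = 0.3  # overhead as a fraction of labour, assumed

    def total_cost(self) -> float:
        labour = self.work_hours * self.labour_rate
        return self.material_cost + labour * (1 + self.overhead_frac)

designs = [Alternative("A", 2.1e6, 38_000), Alternative("B", 2.3e6, 31_000)]
best = min(designs, key=Alternative.total_cost)
print(best.name, f"{best.total_cost():,.0f}")
```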
