• Title/Summary/Keyword: Computing System

A Study on Construction Site of Virtual Desktop Infrastructure (VDI) System Model for Cloud Computing BIM Service

  • Lee, K.H.;Kwon, S.W.;Shin, J.H.;Choi, G.S.;Moon, D.Y.
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.665-666
    • /
    • 2015
  • Recently, BIM technology has seen expanded use in construction projects. However, its spread has been slower than initially expected because of the high cost of BIM infrastructure development, the lack of regulations, the lack of established processes, and so forth. For the construction site phase in particular, an analysis of current research trends in IT, virtualization, and BIM services identifies the exchange of data such as drawings, 3D models, object data, and properties through cloud computing and virtual server systems as the most promising solution. The purpose of this study is to enable a cloud computing BIM server to provide key functions such as model editing, 3D model viewing and checking, mark-up, and snapshots at high performance through proper design of a VDI system. Concurrent client connection performance is the main technical index of a VDI, so the developed VDI system's multi-connection control is evaluated on a test-bed of servers and clients (a minimal load-test sketch follows this entry). The performance-test results of the BIM server VDI will guide the development direction of a commercial cloud computing BIM service.
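
The key metric named here is concurrent client connection performance. As a minimal sketch of how such a multi-connection test could be driven (the endpoint URL, client count, and timing criterion are assumptions for illustration, not details from the paper):

```python
# Hypothetical concurrent-connection load test against a VDI/BIM service endpoint.
# The URL and the number of clients are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "http://vdi-testbed.example.com/model/viewer"  # assumed test-bed URL
NUM_CLIENTS = 50                                          # assumed concurrency level

def one_session(client_id: int) -> float:
    """Open one client session and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(ENDPOINT, timeout=30) as resp:
        resp.read()                        # pull the full response body
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=NUM_CLIENTS) as pool:
        latencies = list(pool.map(one_session, range(NUM_CLIENTS)))
    print(f"clients={NUM_CLIENTS} "
          f"mean={sum(latencies)/len(latencies):.3f}s max={max(latencies):.3f}s")
```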

Analysis of Component Performance using Open Source for Guarantee SLA of Cloud Education System (클라우드 교육 시스템의 SLA 보장을 위한 오픈소스기반 요소 성능 분석)

  • Yoon, JunWeon;Song, Ui-Sung
    • Journal of Digital Contents Society
    • /
    • v.18 no.1
    • /
    • pp.167-173
    • /
    • 2017
  • As the use of cloud computing increases, virtualization technology has been combined with it and applied to a variety of requirements. Cloud computing has the advantage of supplying computing resources to users flexibly and scalably, on demand, and it is used in many kinds of distributed computing. For this, ensuring the stability of the cloud platform is especially important. In this paper, we analyze a variety of component measurements taken with open-source tools to ensure the performance of an education system built on a cloud test-bed environment. We extract the performance factors that can affect the virtualization environment, such as processor, memory, cache, and network, on both the host machine and the virtual machine (a minimal measurement sketch follows this entry). Using these results, the state of the system can be grasped clearly and problems diagnosed quickly, so that the cloud education system can guarantee its SLA (Service Level Agreement).
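
The paper benchmarks processor, memory, cache, and network components with open-source tools on both the host and the guest. As a minimal sketch of the kind of per-component sampling involved (psutil is an assumed stand-in here, not necessarily one of the tools the authors used):

```python
# Sample basic CPU, memory, and network counters on a host or VM.
# psutil is an illustrative open-source choice; the paper's exact tools may differ.
import time
import psutil

def sample_components(interval: float = 1.0) -> dict:
    """Return one snapshot of component-level metrics over `interval` seconds."""
    net_before = psutil.net_io_counters()
    cpu_pct = psutil.cpu_percent(interval=interval)   # blocks for `interval`
    net_after = psutil.net_io_counters()
    mem = psutil.virtual_memory()
    return {
        "cpu_percent": cpu_pct,
        "mem_used_mb": mem.used / 2**20,
        "mem_percent": mem.percent,
        "net_tx_bps": (net_after.bytes_sent - net_before.bytes_sent) / interval,
        "net_rx_bps": (net_after.bytes_recv - net_before.bytes_recv) / interval,
    }

if __name__ == "__main__":
    for _ in range(3):                    # take a few snapshots
        print(sample_components())
        time.sleep(1.0)
```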

Handwritten One-time Password Authentication System Based On Deep Learning (심층 학습 기반의 수기 일회성 암호 인증 시스템)

  • Li, Zhun;Lee, HyeYoung;Lee, Youngjun;Yoon, Sooji;Bae, Byeongil;Choi, Ho-Jin
    • Journal of Internet Computing and Services
    • /
    • v.20 no.1
    • /
    • pp.25-37
    • /
    • 2019
  • Inspired by the rapid development of deep learning and online biometrics-based authentication, we propose a handwritten one-time password authentication system that employs deep learning-based handwriting recognition and writer verification techniques. We design a convolutional neural network to recognize handwritten digits and a Siamese network to compute the similarity between the input handwriting and the genuine user's handwriting (a minimal sketch of such a network follows this entry). We propose the first application of the second edition of NIST Special Database 19 to a writer verification task. Our system achieves 98.58% accuracy in the handwriting recognition task and about 93% accuracy in the writer verification task based on four input images. We believe the proposed handwriting-based biometric technique has potential for use in a variety of online authentication services under the FIDO framework.
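
The system pairs a CNN digit recognizer with a Siamese network for writer verification. Below is a minimal PyTorch sketch of a Siamese similarity model of the general kind described; the layer sizes, 28x28 input resolution, and cosine-similarity scoring are illustrative assumptions, not the authors' architecture.

```python
# Minimal Siamese network sketch: embed two handwriting images and compare them.
# Layer sizes and the 28x28 input are illustrative; the paper's network differs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared convolutional encoder mapping an image to an embedding vector."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

class Siamese(nn.Module):
    """Score how likely two images were written by the same person."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        ea, eb = self.encoder(a), self.encoder(b)
        # Cosine similarity in [-1, 1]; higher means "same writer".
        return F.cosine_similarity(ea, eb)

if __name__ == "__main__":
    model = Siamese()
    a = torch.randn(4, 1, 28, 28)   # four candidate handwriting crops
    b = torch.randn(4, 1, 28, 28)   # four genuine-user reference crops
    print(model(a, b))              # one similarity score per pair
```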

A Study on a 4-Stage Phased Defense Method to Defend Cloud Computing Service Intrusion (Cloud Computing 서비스 침해방어를 위한 단계별 4-Stage 방어기법에 관한 연구)

  • Seo, Woo-Seok;Park, Dea-Woo;Jun, Moon-Seog
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.5
    • /
    • pp.1041-1051
    • /
    • 2012
  • Attacks on cloud computing, an intensive service solution built on network infrastructure, cause service breakdowns or intrusion incidents that incapacitate development platforms, web-based software, or resource services. Research is therefore needed on securing the operational information of the three kinds of services (IaaS, PaaS, SaaS) supported by a cloud computing system, as well as the data generated by illegal service-blocking attacks. This paper aims to build a system that provides optimal service through a 4-stage defense method, validated by tests of attacks on and defenses of cloud computing services. The defense policy applies orderly, phased access control in four stages: controlling initial access to the network, controlling virtualization services, classifying the services to be supported, and selecting among multiple routes (a minimal staged-filter sketch follows this entry). By dispersing attacks and by monitoring and analyzing access control at each stage, the study implements and analyzes the defense policy and tests the defenses by attack type. The findings are offered as practical foundational data for realizing a defense policy for cloud computing services.
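
The four stages act as a sequential filter chain over each incoming request. As a minimal sketch of such phased access control (the stage predicates below are invented placeholders, not the paper's actual rules):

```python
# Minimal phased (4-stage) access-control chain: a request must pass every stage.
# The individual checks are placeholders; the paper defines its own criteria.
from dataclasses import dataclass, field

@dataclass
class Request:
    src_ip: str
    service: str                      # e.g. "IaaS", "PaaS", "SaaS"
    route: str
    tags: dict = field(default_factory=dict)

def stage1_network_access(req: Request) -> bool:
    """Stage 1: control initial access to the network (placeholder block rule)."""
    return not req.src_ip.startswith("10.66.")

def stage2_virtualization(req: Request) -> bool:
    """Stage 2: control access to virtualization services."""
    return req.tags.get("vm_quota_ok", True)

def stage3_service_classification(req: Request) -> bool:
    """Stage 3: classify and admit only supported service types."""
    return req.service in {"IaaS", "PaaS", "SaaS"}

def stage4_route_selection(req: Request) -> bool:
    """Stage 4: select among multiple routes; reject if none is healthy."""
    healthy_routes = {"route-a", "route-b"}          # placeholder route table
    return req.route in healthy_routes

STAGES = [stage1_network_access, stage2_virtualization,
          stage3_service_classification, stage4_route_selection]

def admit(req: Request) -> bool:
    """Admit a request only if every stage in order approves it."""
    return all(stage(req) for stage in STAGES)

if __name__ == "__main__":
    req = Request(src_ip="192.0.2.7", service="SaaS", route="route-a")
    print("admitted" if admit(req) else "blocked")
```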

An Efficient VM-Level Scaling Scheme in an IaaS Cloud Computing System: A Queueing Theory Approach

  • Lee, Doo Ho
    • International Journal of Contents
    • /
    • v.13 no.2
    • /
    • pp.29-34
    • /
    • 2017
  • Cloud computing is becoming an effective and efficient way of integrating computing resources and computing services. Through centralized management of resources and services, cloud computing delivers hosted services over the internet, such that access to shared hardware, software, applications, information, and all resources is elastically provided to the consumer on demand. The main enabling technology for cloud computing is virtualization. Virtualization software creates a temporarily simulated or extended version of computing and network resources. The objectives of virtualization are as follows: first, to fully utilize the shared resources by applying partitioning and time-sharing; second, to centralize resource management; third, to enhance cloud data center agility and provide the required scalability and elasticity for on-demand capabilities; fourth, to improve testing and running software diagnostics on different operating platforms; and fifth, to improve the portability of applications and workload migration capabilities. One of the key features of cloud computing is elasticity. It enables users to create and remove virtual computing resources dynamically according to changing demand, but deciding on the right amount of resources is not easy. Indeed, proper provisioning of resources to applications is an important issue in IaaS cloud computing. Most web applications encounter large and fluctuating volumes of task requests. In predictable situations, the resources can be provisioned in advance through capacity-planning techniques. But for unplanned spikes in requests, it is desirable to scale the resources automatically (auto-scaling), adjusting the resources allocated to an application based on its need at any given time. This frees the user from the burden of deciding how many resources are necessary each time. In this work, we propose an analytical and efficient VM-level scaling scheme by modeling each VM in a data center as an M/M/1 processor-sharing queue (a minimal sketch of such a scaling rule follows this entry). The proposed VM-level scaling scheme is validated via a numerical experiment.
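
For an M/M/1 processor-sharing queue with arrival rate λ and service rate μ, the mean sojourn time is 1/(μ − λ) when λ < μ, the same as for M/M/1-FCFS. Below is a minimal sketch of a scaling rule built on that formula; the assumption that arrivals split evenly across identical VMs and the numeric targets are illustrative, not the paper's exact model.

```python
# Minimal VM-level scaling sketch using the M/M/1 processor-sharing mean sojourn time.
# Assumes arrivals split evenly over n identical VMs; rates and targets are illustrative.
import math

def mm1_ps_mean_sojourn(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1-PS queue: E[T] = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return math.inf                        # unstable: queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def vms_needed(total_arrival_rate: float, service_rate: float,
               target_sojourn: float) -> int:
    """Smallest VM count keeping per-VM mean sojourn time under the target."""
    n = max(1, math.ceil(total_arrival_rate / service_rate))   # stability first
    while mm1_ps_mean_sojourn(total_arrival_rate / n, service_rate) > target_sojourn:
        n += 1
    return n

if __name__ == "__main__":
    lam, mu, target = 180.0, 50.0, 0.05        # req/s total, req/s per VM, seconds
    n = vms_needed(lam, mu, target)
    print(f"{n} VMs -> E[T] = {mm1_ps_mean_sojourn(lam / n, mu):.4f} s")
```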

Ubiquitous Computing Technology Based Environmental Monitoring and Diagnosis System : Architecture and Case Study (유비쿼터스 컴퓨팅 기술 기반 환경 모니터링/진단 시스템의 아키텍처 및 사례 연구)

  • Yoon, Joo-Sung;Hwang, Jung-Min;Suh, Suk-Hwan;Lee, Chang-Min
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.36 no.4
    • /
    • pp.230-242
    • /
    • 2010
  • In this paper, an environmental monitoring and diagnosis system based on ubiquitous computing technology, the u-Eco Monitoring System for short, is proposed. The u-Eco Monitoring System is designed to: 1) collect information from the manufacturing processes via ubiquitous computing technology, 2) analyze the current status, 3) identify the cause of any detected problem through rule-based and case-based reasoning, and 4) provide the results to the operator for proper decision making (a minimal rule-based diagnosis sketch follows this entry). Based on functional modeling, a generic architecture is derived and then applied to a manufacturing system in the iron- and steel-making industry. Finally, to show the validity of the proposed method, a prototype is developed and tested. The developed methods can serve as a conceptual framework for designing environmental monitoring and diagnosis systems in industrial practice, significantly improving monitoring accuracy and the response time to abnormal status while relieving operators of manual monitoring and error-prone decision making.
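
Step 3 relies on rule-based reasoning over the collected data. A minimal sketch of that idea follows; the sensor names, thresholds, and diagnoses are invented placeholders, not rules from the paper.

```python
# Minimal rule-based diagnosis sketch: map sensor readings to a suspected cause.
# Sensor names, thresholds, and causes are illustrative placeholders.
from typing import Callable, Optional

# Each rule: (condition over the readings, diagnosis text). Checked in order.
RULES: list[tuple[Callable[[dict], bool], str]] = [
    (lambda r: r.get("dust_mg_m3", 0) > 10 and r.get("fan_rpm", 1e9) < 500,
     "dust above limit with low fan speed: suspect extraction fan failure"),
    (lambda r: r.get("dust_mg_m3", 0) > 10,
     "dust above limit: suspect filter clogging"),
    (lambda r: r.get("water_ph", 7) < 5.5,
     "acidic effluent: suspect neutralization dosing fault"),
]

def diagnose(readings: dict) -> Optional[str]:
    """Return the first matching diagnosis, or None if status is normal."""
    for condition, cause in RULES:
        if condition(readings):
            return cause
    return None

if __name__ == "__main__":
    sample = {"dust_mg_m3": 14.2, "fan_rpm": 420, "water_ph": 6.8}
    print(diagnose(sample) or "status normal")
```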

The development of the high effective and stoppageless file system for high performance computing (High Performance Computing 환경을 위한 고성능, 무정지 파일시스템 구현)

  • Park, Yeong-Bae;Choe, Seung-Hwan;Lee, Sang-Ho;Kim, Gyeong-Su;Gong, Yong-Jun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.11a
    • /
    • pp.395-401
    • /
    • 2004
  • In today's highly network-centralized computing and enterprise environment, it has become essential to transmit data reliably at very high rates. Client/server file systems such as NFS (Network File System) and AFS (Andrew File System) have met many demands so far, but they can no longer satisfy those of today's scalable high-performance computing environments. Not only performance but also the redundancy of the data-sharing service has become a serious problem. With NFS, locking and caching issues can force the file system to reboot and cause problems when it is used simply with IP takeover for high-availability service. AFS provides file-sharing redundancy, but only once storage and equipment supporting redundancy are in place. Lustre is an open-source cluster file system developed to meet both demands. Lustre consists of three types of subsystems: the MDS (Meta-Data Server), which provides metadata services; OSTs (Object Storage Targets), which provide file I/O; and Lustre clients, which interact with the OSTs and the MDS. These subsystems exchange messages to deliver a scalable, high-performance file system service. In this paper, we compare the transfer speed of gigabyte-scale files between Lustre and NFS as the number of concurrent users varies, and we also demonstrate the high availability of the file system by removing one or more OSTs during operation (a minimal throughput-comparison sketch follows this entry).
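
The comparison comes down to timing large sequential writes against two mount points as the number of concurrent clients grows. Below is a minimal sketch of that measurement; the mount paths, file size, and writer counts are assumptions for illustration, not the paper's test configuration.

```python
# Minimal file-transfer throughput sketch: time large writes on two mount points
# (e.g. one NFS mount, one Lustre mount) under N concurrent writers.
# Mount paths, file size, and writer counts are illustrative assumptions.
import os
import time
from concurrent.futures import ThreadPoolExecutor

MOUNTS = {"nfs": "/mnt/nfs/bench", "lustre": "/mnt/lustre/bench"}  # assumed paths
FILE_MB = 1024             # 1 GiB written per writer
CHUNK = b"\0" * (1 << 20)  # 1 MiB write unit

def write_file(path: str) -> None:
    """Write FILE_MB mebibytes and force them out to the server."""
    with open(path, "wb") as f:
        for _ in range(FILE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())

def bench(mount: str, writers: int) -> float:
    """Return aggregate throughput in MiB/s for `writers` concurrent writers."""
    os.makedirs(mount, exist_ok=True)
    paths = [os.path.join(mount, f"w{i}.bin") for i in range(writers)]
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=writers) as pool:
        list(pool.map(write_file, paths))
    return writers * FILE_MB / (time.perf_counter() - start)

if __name__ == "__main__":
    for name, mount in MOUNTS.items():
        for writers in (1, 4, 8):
            print(f"{name:7s} writers={writers} {bench(mount, writers):8.1f} MiB/s")
```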

New Paradigm of e-Logistics System Management - An Proactive u-Logistics System Based on Ubiquitous Technology -

  • Hwang Heung-Suk;Cho Gyu-Sung
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2006.05a
    • /
    • pp.153-158
    • /
    • 2006
  • The emergence of ubiquitous autonomic computing and network environments will change the service architecture of information systems, opening a new application area in SCM/logistics systems. In this study we survey the technical trend map of ubiquitous computing and its application to the design of SCM/logistics support systems. We describe an evolutionary model of the ubiquitous computing community for SCM/logistics systems. It consists of three viewpoints, self-growing, autonomic, and context-aware, which allow decision makers to benefit from web and mobile technology and are useful for a proactive SCM/logistics support system. Finally, we suggest a cooperative research plan for developing ubiquitous systems spanning government research centers, universities, and industry.

Performance Improvement of Data Replication in Cloud Computing (Cloud Computing에서의 데이터 복제 성능 개선)

  • Lee, Joon-Kyu;Lee, Bong-Hwan
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.53-56
    • /
    • 2008
  • Recently, distributed systems have been evolving into a new paradigm, cloud computing, which provides users with efficient computing resources and services from data centers. By building centralized data centers, cloud computing can reduce the potential risks of grid computing, which relies on resource sharing. In this paper, a new data replication scheme is proposed for the Hadoop Distributed File System by changing the 1:1 data transmission to 1:N (a minimal sketch of the two patterns follows this entry). The proposed scheme considerably reduces the data transmission delay compared with the current mechanism.
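
The change is from sending each replica copy over a 1:1 transfer in sequence to fanning the same block out to N replica nodes at once. The toy sketch below contrasts the two patterns to show why the 1:N fan-out can cut end-to-end delay; the replica count and per-transfer time are illustrative, and this is not the authors' implementation or the actual HDFS pipeline code.

```python
# Toy comparison of 1:1 sequential replication vs 1:N parallel fan-out.
# Transfer times and replica counts are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

REPLICAS = 3
TRANSFER_SECONDS = 0.5          # pretend each block transfer takes this long

def send_block(target: int) -> None:
    """Stand-in for shipping one block copy to one replica node."""
    time.sleep(TRANSFER_SECONDS)

def replicate_1_to_1() -> float:
    """Send to each replica one after another (sequential 1:1 transfers)."""
    start = time.perf_counter()
    for node in range(REPLICAS):
        send_block(node)
    return time.perf_counter() - start

def replicate_1_to_n() -> float:
    """Send to all replicas concurrently (1:N fan-out)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=REPLICAS) as pool:
        list(pool.map(send_block, range(REPLICAS)))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"1:1 sequential: {replicate_1_to_1():.2f} s")   # ~REPLICAS * 0.5 s
    print(f"1:N fan-out:    {replicate_1_to_n():.2f} s")   # ~0.5 s, network permitting
```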
