• Title/Summary/Keyword: Computing amount

A Novel Framework for Defining and Submitting Workflows to Service-Oriented Systems

  • Bendoukha, Hayat; Slimani, Yahya; Benyettou, Abdelkader
    • Journal of Information Processing Systems, v.10 no.3, pp.365-383, 2014
  • Service-oriented computing offers efficient solutions for executing complex applications in an acceptable amount of time. These solutions provide important computing and storage resources, but they are too difficult for individual users to handle. In fact, service-oriented architectures are usually sophisticated in terms of design, specification, and deployment. On the other hand, workflow management systems provide frameworks that help users manage cooperative and interdependent processes in a convenient manner. In this paper, we propose a workflow-based approach that takes full advantage of new service-oriented architectures while accounting for users' skills and the internal complexity of their applications. To this end, we define a novel framework named JASMIN, which is responsible for managing service-oriented workflows on distributed systems. JASMIN has two main components: the Unified Modeling Language (UML) to specify workflow models and the Business Process Execution Language (BPEL) to generate and compose Web services. In order to cover both workflow and service concepts, we describe a refinement of UML activity diagrams and present a set of rules for mapping UML activity diagrams into BPEL specifications.
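
As a rough, hypothetical illustration of the UML-to-BPEL idea (not the paper's actual mapping rules), a linear chain of activity nodes can be rendered as a BPEL sequence of invoke elements; the sketch below assumes a simplistic one-to-one node-to-operation mapping:

```python
import xml.etree.ElementTree as ET

def activities_to_bpel(process_name, activities):
    """Render a linear chain of UML activity nodes as a minimal BPEL
    skeleton: the chain becomes a <sequence>, each action node an <invoke>.
    A simplified illustration; the paper defines richer mapping rules
    (decisions, forks, joins) that are not reproduced here."""
    ns = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"
    process = ET.Element("process", {"name": process_name, "xmlns": ns})
    seq = ET.SubElement(process, "sequence")
    for action in activities:
        # Hypothetical one-to-one mapping of action node to service operation.
        ET.SubElement(seq, "invoke", {"name": action, "operation": action})
    return ET.tostring(process, encoding="unicode")

print(activities_to_bpel("OrderFlow", ["checkStock", "charge", "ship"]))
```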

A Pattern-Based Prediction Model for Dynamic Resource Provisioning in Cloud Environment

  • Kim, Hyuk-Ho; Kim, Woong-Sup; Kim, Yang-Woo
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.10, pp.1712-1732, 2011
  • Cloud computing provides dynamically scalable virtualized computing resources as a service over the Internet. To achieve higher resource utilization over virtualization technology, an optimized strategy for deploying virtual machines on physical machines is needed. That is, the total number of active physical host nodes should be changed dynamically to correspond to their resource usage rate, thereby maintaining optimum utilization of physical machines. In this paper, we propose a pattern-based prediction model for resource provisioning that facilitates the best possible resource preparation by analyzing resource utilization and deriving resource usage patterns. The focus of our work is on predicting future resource requests through an optimized dynamic resource management strategy applied to a virtualized data center in a Cloud computing environment. To this end, we build a prediction model based on user request patterns and predict system behavior for the near future. As a result, the model saves time in predicting the required amount of resources and reduces the possibility of resource overuse. In addition, we studied the performance of the proposed model compared with conventional resource provisioning models under various Cloud execution conditions. The experimental results showed that our pattern-based prediction model gives significant benefits over conventional models.
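
The paper's own model is not reproduced here; as a minimal sketch of the general pattern-matching idea, assuming hourly usage samples and a nearest-neighbor window search, one might write:

```python
import numpy as np

def predict_next(usage_history, pattern_len=24):
    """Nearest-neighbor pattern matching: compare the most recent window
    of usage samples against all earlier windows and return the value
    that followed the closest historical match."""
    history = np.asarray(usage_history, dtype=float)
    recent = history[-pattern_len:]
    best_dist, best_next = np.inf, history[-1]
    for start in range(len(history) - pattern_len):
        window = history[start:start + pattern_len]
        dist = np.linalg.norm(window - recent)
        if dist < best_dist:
            best_dist, best_next = dist, history[start + pattern_len]
    return best_next

# Example: two weeks of hourly load with a daily cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 14)
load = 50 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)
print(round(predict_next(load), 1))   # predicted load for the next hour
```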

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin; Cui, Yun; Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.8, pp.3182-3202, 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome such limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated by banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the proposed system, we conducted three performance tests on a local test bed comprising twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk-size option. The experiments showed that our system performs better in processing unstructured log data.
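
A minimal sketch of the MongoDB side of such a pipeline (database, collection, and field names below are hypothetical, and the Hadoop analysis module is omitted) might look like:

```python
from pymongo import ASCENDING, MongoClient

# Assumes a MongoDB server on localhost; the schema here is illustrative,
# not the paper's.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["raw_logs"]

# Unstructured log lines are stored as flexible documents.
logs.insert_one({
    "source": "atm-042",
    "level": "ERROR",
    "raw": "2015-08-01 09:13:22 txn timeout",
    "ts": "2015-08-01T09:13:22",
})

# A simple categorization pass: index by level, then aggregate counts.
logs.create_index([("level", ASCENDING)])
for row in logs.aggregate([{"$group": {"_id": "$level", "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])
```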

A Study on Efficient Test Data Compression Method for Test-per-clock Scan

  • Park, Jae-Heung; Yang, Sun-Woong; Chang, Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD, v.39 no.9, pp.45-54, 2002
  • This paper proposes serial test data compression, a novel DFT scheme for embedded cores in an SOC. To reduce the amount of test data, shared-bit compression and undetectable-fault pattern compression techniques are used. Circuits using the serial test data compression method are derived from a scan DFT method based on a test-per-clock technique. To evaluate the proposed compression method, full-scan versions of the ISCAS85 and ISCAS89 benchmark circuits were used, with ATALANTA serving as the ATPG and fault simulation tool. The amount of test data was reduced by up to 98% compared with the original data.
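
The paper's shared-bit and undetectable-fault-pattern techniques are not detailed in the abstract; a minimal sketch of a related idea, greedily merging test cubes whose specified bits never conflict (don't-cares marked 'X'), could look like:

```python
def compatible(a, b):
    """Two test cubes are compatible if no position assigns opposite values."""
    return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

def merge(a, b):
    """Merge compatible cubes, letting specified bits override don't-cares."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def compact(cubes):
    """Greedily fold each cube into the first compatible cube already kept."""
    kept = []
    for cube in cubes:
        for i, k in enumerate(kept):
            if compatible(cube, k):
                kept[i] = merge(k, cube)
                break
        else:
            kept.append(cube)
    return kept

patterns = ["1X0X", "110X", "0XX1", "X011"]
print(compact(patterns))   # ['110X', '0011']: 4 cubes folded into 2
```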

Computing-Inexpensive Matrix Model for Estimating the Threshold Voltage Variation by Workfunction Variation in High-κ/Metal-gate MOSFETs

  • Lee, Gyo Sub; Shin, Changhwan
    • JSTS: Journal of Semiconductor Technology and Science, v.14 no.1, pp.96-99, 2014
  • In high-κ/metal-gate (HK/MG) metal-oxide-semiconductor field-effect transistors (MOSFETs) at 45 nm and below, the metal-gate material consists of a number of grains with different orientations. Thus, a Monte Carlo (MC) simulation of the threshold voltage (V_TH) variation caused by the workfunction variation (WFV) using a limited number of samples (i.e., approximately a few hundred) would be misleading. Ideally, the MC simulation should use a statistically significant number of samples (>10^6); however, this is computationally too expensive for reasonably estimating the WFV-induced V_TH variation in HK/MG MOSFETs. In this work, a simple matrix model is suggested as a computing-inexpensive approach to estimating the WFV-induced V_TH variation. The suggested model has been verified against experimental data, and both the amount of WFV-induced V_TH variation and the V_TH lowering are revealed.
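
For contrast with the paper's matrix model, the expensive brute-force MC estimate can be sketched as follows, using commonly cited TiN grain statistics rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Commonly cited TiN grain statistics (illustrative; not the paper's data):
# <200> orientation: workfunction 4.6 eV, probability 0.6
# <111> orientation: workfunction 4.4 eV, probability 0.4
wf_values = np.array([4.6, 4.4])
wf_probs = np.array([0.6, 0.4])

def effective_wf_sigma(n_grains, n_samples=100_000):
    """Monte Carlo estimate of the standard deviation of the effective gate
    workfunction when the gate is tiled by n_grains equal-area grains; the
    spread in effective workfunction maps directly onto V_TH variation."""
    grains = rng.choice(wf_values, size=(n_samples, n_grains), p=wf_probs)
    return grains.mean(axis=1).std()

# Fewer grains (smaller devices) -> larger WFV-induced V_TH variation.
for n in (4, 16, 64):
    print(f"{n:3d} grains: sigma ~ {effective_wf_sigma(n) * 1e3:.1f} mV")
```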

Estimating Economic Service Life of Assets by Using National Wealth Statistic

  • Cho, Jin-Hyung; Oh, Hyun-Seung; Lee, Sae-Jae; Suh, Jung-Yul
    • Journal of Korean Society of Industrial and Systems Engineering, v.30 no.4, pp.170-181, 2007
  • The purpose of computing economic depreciation values is to value assets closely in line with market prices. The valuation of industrial assets is called engineering valuation. The two representative techniques for such valuation are the Hulten-Wykoff method, which estimates real value using regression equations, and the T-factor method devised at Iowa State University. Both are empirical methods for computing service life (duration period). In this paper, we derived the service life by empirical methods using national wealth statistics, and also by more conventional methods such as the original-group method and the retirement method. The results from each method are compared with one another, and we computed the economic service life from these results. In South Korea, where asset value statistics are still insufficient, the most effective way to empirically compute economic service life turns out to be the one using national wealth statistics. In addition, we present the economic relationship between the depreciation values computed by the Hulten-Wykoff method and by the T-factor method.
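
As a minimal sketch of the age-price regression idea underlying Hulten-Wykoff-style estimation (with made-up observations, assuming a geometric depreciation model), one might write:

```python
import numpy as np

# Illustrative age-price observations (made-up numbers, not the paper's data).
age = np.array([1, 2, 3, 5, 8, 10, 12], dtype=float)
price = np.array([9200, 8400, 7700, 6500, 4900, 4100, 3400], dtype=float)

# Geometric model: p = p0 * (1 - d)^age, i.e. ln p = ln p0 + age * ln(1 - d).
slope, intercept = np.polyfit(age, np.log(price), 1)
d = 1 - np.exp(slope)   # implied annual depreciation rate
print(f"annual depreciation ~ {d:.1%}, new-asset price ~ {np.exp(intercept):,.0f}")
```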

Efficient Top-k Join Processing over Encrypted Data in a Cloud Environment

  • Kim, Jong Wook
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.10, pp.5153-5170, 2016
  • The scalability and flexibility inherent in cloud computing motivate clients to upload data and computation to public cloud servers. Because the data are placed on public clouds, which are very likely to reside outside the trusted domain of clients, this strategy raises concerns regarding the security of sensitive client data. Thus, to provide sufficient security for data stored in the cloud, it is essential to encrypt sensitive data before uploading it onto cloud servers. Although data encryption is considered the most effective solution for protecting sensitive data from unauthorized users, it imposes significant overhead during the query processing phase, due to the limitations of directly executing operations against encrypted data. Recently, substantial research has addressed the execution of SQL queries against encrypted data. However, there has been little research on top-k join query processing over encrypted data in cloud computing environments. In this paper, we develop an efficient algorithm that processes a top-k join query against encrypted cloud data. The proposed algorithm is able, at an early phase, to prune unpromising data sets that are guaranteed not to produce the top-k highest scores. The experimental results show that the proposed approach provides significant performance gains over the naive solution.
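
The paper's pruning algorithm is not given in the abstract; a minimal sketch of threshold-based early pruning in the style of a classic rank-join (over plaintext data, ignoring the encryption layer) might look like:

```python
import heapq

def topk_join(left, right, k):
    """Rank-join sketch: inputs are (key, score) lists sorted by score
    descending; matching keys join with combined score = sum. The loop
    stops as soon as no unseen pair can beat the current k-th best
    combined score, mirroring the early-pruning idea."""
    seen_l, seen_r = {}, {}
    results, i, j = [], 0, 0            # min-heap keeps the k best so far
    while i < len(left) or j < len(right):
        if j >= len(right) or (i < len(left) and i <= j):
            key, s = left[i]; i += 1
            seen_l.setdefault(key, s)
            if key in seen_r:
                heapq.heappush(results, (s + seen_r[key], key))
        else:
            key, s = right[j]; j += 1
            seen_r.setdefault(key, s)
            if key in seen_l:
                heapq.heappush(results, (seen_l[key] + s, key))
        while len(results) > k:
            heapq.heappop(results)
        # Best combined score any pair with an unseen tuple could reach.
        top_l = left[i][1] if i < len(left) else float('-inf')
        top_r = right[j][1] if j < len(right) else float('-inf')
        threshold = max(left[0][1] + top_r, top_l + right[0][1])
        if len(results) == k and results[0][0] >= threshold:
            break                        # prune all remaining input
    return sorted(results, reverse=True)

L = [("a", 9), ("b", 8), ("c", 5), ("d", 1)]
R = [("b", 9), ("a", 7), ("d", 6), ("c", 2)]
print(topk_join(L, R, 2))                # [(17, 'b'), (16, 'a')]
```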

Hilbert-curve based Multi-dimensional Indexing Key Generation Scheme and Query Processing Algorithm for Encrypted Databases

  • Kim, Taehoon; Jang, Miyoung; Chang, Jae-Woo
    • Journal of Korea Multimedia Society, v.17 no.10, pp.1182-1188, 2014
  • Recently, research on database outsourcing has been actively conducted with the popularity of cloud computing. However, because users' data may contain sensitive personal information, such as health, financial, and location information, data encryption methods have attracted much interest. Existing data encryption schemes process queries without decrypting the encrypted databases in order to protect user privacy. On the other hand, to efficiently handle the large amount of data in cloud computing, it is necessary to study distributed index structures. However, existing index structures and query processing algorithms are limited to single-column query processing. In this paper, we propose a grid-based multi-column indexing scheme and an encrypted query processing algorithm. To support multi-column query processing, multi-dimensional index keys are generated by using a space decomposition method, i.e., a grid index. To support query processing over encrypted data, we adopt the Hilbert curve when generating an index key. Finally, we show that the proposed scheme is more efficient than the existing scheme for processing exact and range queries.
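
A minimal sketch of the Hilbert-curve key generation step (the classic 2-D bit-twiddling conversion; the grid decomposition and encryption layers are omitted) could look like:

```python
def hilbert_key(x, y, order):
    """Map grid cell (x, y) to its distance along a Hilbert curve of the
    given order (grid is 2^order x 2^order), using the classic
    bit-twiddling conversion; locality in 2-D is largely preserved in 1-D."""
    n = 1 << order
    d, s = 0, n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect so every sub-square is traversed in the same
        # canonical orientation.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Order-2 curve over a 4x4 grid: prints 0, 2, 8, 10.
for cell in [(0, 0), (1, 1), (2, 2), (3, 3)]:
    print(cell, "->", hilbert_key(*cell, order=2))
```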

A Study on Data Storage and Recovery in Hadoop Environment

  • Kim, Su-Hyun; Lee, Im-Yeong
    • KIPS Transactions on Computer and Communication Systems, v.2 no.12, pp.569-576, 2013
  • Cloud computing has been receiving increasing attention recently. Despite this attention, security remains the main problem that needs to be addressed for cloud computing. In general, a cloud computing environment protects data by using distributed servers for data storage. When the amount of data is too large, however, pieces of a secret key (if used) may be divided among hundreds of distributed servers. Thus, managing the distributed servers may become very difficult simply in terms of authentication, encryption, and decryption, each of which incurs substantial overhead. In this paper, we propose an efficient data storage and recovery scheme using XOR and RAID in a Hadoop environment.
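
A minimal sketch of XOR-parity storage and recovery (RAID-4/5 style, with the Hadoop distribution layer omitted) might look like:

```python
def xor_blocks(*blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytes(blocks[0])
    for b in blocks[1:]:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# Split data into three data blocks plus one XOR parity block.
data = b"secret-cloud"                         # 12 bytes, three 4-byte blocks
blocks = [data[i:i + 4] for i in range(0, len(data), 4)]
parity = xor_blocks(*blocks)

# Simulate losing block 1; rebuild it from the survivors and the parity.
recovered = xor_blocks(parity, blocks[0], blocks[2])
assert recovered == blocks[1]
print(recovered)                               # b'et-c'
```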

CLIAM: Cloud Infrastructure Abnormal Monitoring using Machine Learning

  • Choi, Sang-Yong
    • Journal of the Korea Society of Computer and Information, v.25 no.4, pp.105-112, 2020
  • In the fourth industrial revolution, characterized by hyper-connectivity and intelligence, cloud computing is drawing attention as a technology for realizing big data and artificial intelligence. The proliferation of cloud computing, however, has also increased the number of threats. In this paper, we propose a way for an IaaS service provider to effectively monitor the resources assigned to clients. The proposed method models the use of resources allocated to cloud systems with the ARIMA algorithm and identifies abnormal situations through usage and trend analysis. Through experiments, we have verified that the service provider can effectively monitor client systems using the proposed method with minimal access to them.
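
A minimal sketch of ARIMA-based anomaly flagging (synthetic data and statsmodels, not the paper's setup) might look like:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Synthetic resource-usage series: an AR(1) process around 50% utilization.
noise = rng.normal(0, 2, 24 * 30)
usage = np.empty(noise.size)
usage[0] = 50
for i in range(1, noise.size):
    usage[i] = 50 + 0.7 * (usage[i - 1] - 50) + noise[i]

train, actual = usage[:-24], usage[-24:].copy()
actual[5] += 25                         # inject an anomalous spike

# Fit ARIMA, then flag readings outside the 99% forecast interval.
model = ARIMA(train, order=(1, 0, 0)).fit()
lo, hi = model.get_forecast(steps=24).conf_int(alpha=0.01).T
print(np.where((actual < lo) | (actual > hi))[0])   # should flag index 5
```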